Nov 20 2022
 

Today I want to talk about Memory Deduplication on ESXi with Transparent Page Sharing (TPS). This is a technology that isn’t widely known, even amongst IT professionals with significant VMware experience.

And you may ask “Memory Deduplication, why aren’t we using this?!?” as it sounds like a pretty cool piece of technology… Well, I’m about to tell you why you’re not (Inter-VM), and share a few examples of where you would want to enable this.

I also want to show you how to enable TPS globally (Inter-VM), and also discuss TPS being used with VMware Horizon and VDI.

What is Transparent Page Sharing (TPS)?

Transparent Page Sharing is the process by which ESXi can provide memory deduplication by storing duplicate memory pages as a single page in the physical memory of the host. This process stops the system from storing redundant memory pages, and thus frees up physical memory for other uses.

If my memory serves me right, this was originally enabled by default (Inter-VM) up through the ESX/ESXi 5.x releases, but was later disabled by default via patches (and in all subsequent releases) due to security vulnerabilities and concerns.

Note that TPS is still enabled by default within the same VM, even today with ESXi 8.

Security Concerns

I recall two potential scenarios and security concerns which led to VMware changing the original default behavior for TPS.

  • Scenario 1 included a concern about an attacker gaining access to a VM, and then having the ability to modify the memory contents of another VM.
  • Scenario 2 included a concern where an attacker may be able to get access to encryption keys used on another system.

A quick search led to a KB titled “Security considerations and disallowing inter-Virtual Machine Transparent Page Sharing (2080735)”, which outlines the details of scenario 2, along with stating “This technique works only in a highly controlled system configured in a non-standard way that VMware believes would not be recreated in a production environment”.

With that being said, it sounds like this would be an extremely difficult attack that requires systems to be configured in a non-standard way.

Current status of TPS

Believe it or not, TPS and memory deduplication are still enabled; however, ESXi only deduplicates pages from within the same VM. TPS does not deduplicate pages across multiple VMs.

Additionally, VMware has given us controls to configure TPS to allow it amongst multiple VMs, or even enable it globally across the ESXi host.

See below for the settings to configure TPS on ESXi via “Advanced Settings”:

A table providing configurable options for Transparent Page Sharing (TPS) on VMware vSphere ESXi
Transparent Page Sharing (TPS) Settings for ESXi Host

The above table was provided by VMware’s “Additional Transparent Page Sharing management capabilities and new default settings (2097593)” KB.

In short, you can enable TPS globally (Inter-VM) by setting “Mem.ShareForceSalting” to a value of “0” in “Advanced Settings”. You can also use the salting values to configure groups of VMs that are allowed to share memory pages.
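If you manage hosts with PowerCLI, you can also script this. Here’s a minimal sketch, assuming an existing Connect-VIServer session (“esxi01.lab.local” is a placeholder hostname):

# Set Mem.ShareForceSalting to 0 to enable Inter-VM TPS on the host
Get-VMHost -Name "esxi01.lab.local" |
    Get-AdvancedSetting -Name "Mem.ShareForceSalting" |
    Set-AdvancedSetting -Value 0 -Confirm:$false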

Additionally, you can tweak the behavior of TPS by modifying some of the settings shown below:

TPS Memory Sharing Settings

As you can see, you can configure settings like the scanning occurrence (Mem.ShareScanTime), which controls how often the system will check for memory pages that can be shared/deduplicated, among other settings.
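If you want to review the current values of these settings on a host, a quick PowerCLI sketch like the following should do it (again, the hostname is a placeholder):

# List the TPS-related memory sharing settings and their current values
Get-VMHost -Name "esxi01.lab.local" |
    Get-AdvancedSetting -Name "Mem.Share*" |
    Select-Object Name, Value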

TPS is enabled, but not working

So, you may have decided to enable TPS in your environment, but you’re noticing that few, if any, memory pages are being marked as shared.

ESXi Memory Graph showing Memory Deduplication from TPS
TPS Memory Deduplication – Amount of host physical memory that backs shared guest physical memory

In the example above, you’ll notice that on a loaded host with TPS enabled globally (Inter-VM, amongst all VMs), the host is only deduplicating 1,052KB of memory.

This is because you will most often only see TPS being heavily utilized on an ESXi host that has over-committed memory. There’s also a chance that you simply don’t have enough duplicate memory pages to share.
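To check how much memory a host is actually sharing, you can query the standard vSphere performance counters with PowerCLI. A rough sketch (values are reported in KB; the hostname is a placeholder):

# mem.shared.average = guest memory being shared;
# mem.sharedcommon.average = host physical memory backing those shared pages
Get-Stat -Entity (Get-VMHost "esxi01.lab.local") `
    -Stat "mem.shared.average","mem.sharedcommon.average" -Realtime -MaxSamples 1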

Memory Deduplication, TPS, and VMware Horizon VDI

Because VMware Horizon utilizes “vmfork” with “Just-in-Time” desktop delivery, non-persistent VDI using Instant Clones will benefit from some level of memory deduplication by default. This is because non-persistent VDI guests are spawned from a running base image.

Additionally, you can further enable and configure TPS via the Transparent Page Sharing options inside of the VMware Horizon Administration console.

When creating a Desktop Pool, you can set the “Transparent Page Sharing” option to “Virtual Machine” (Memory dedupe inside of the VM only), “Pool” (Memory dedupe across the Desktop Pool), “Pod” (Dedupe across the pod), or “Global” (Full Inter-VM memory deduplication across the ESXi host).

If you have enabled TPS globally on the ESXi host, these settings are superseded and not used.

TPS Use Cases

So you might be asking: when is a good time to use TPS?

  • The Homelab – When is a homelab not a good reason to try something? Looking to save some memory and overcommit memory resources? Implement TPS.
  • VDI Environments – On highly dense hosts, you may consider implementing TPS at some level to maximize the utilization of resources, however you must be aware of the security consequences and factor this in when configuring TPS.
  • Environments with no Sensitive Information – It’s hard to imagine, but if you have an environment that doesn’t contain any sensitive information and doesn’t use any encryption keys, it would be suitable to enable TPS.

I’m sure there’s a number of other use cases, so leave a comment if you can think of one.

Conclusion

In my opinion, Transparent Page Sharing is a technology that should not be forgotten and discarded. VMware admins should be aware of it, how to configure it, and what the implications are of using it.

If you are considering enabling TPS in your environment, you must factor in the potential security consequences of doing so.

Sep 04 2022
 

When either directly passing through a GPU, or attaching an NVIDIA vGPU to a Virtual Machine on VMware ESXi that has more than 16GB of Video Memory, you may run into a situation where the VM fails to boot with the error “Module ‘DevicePowerOn’ power on failed.”. Special considerations are required when performing GPU or vGPU Passthrough with 16GB+ of video memory.

This issue is specifically caused by memory mapping a GPU or vGPU device that has 16GB of memory or more, and can involve the host system (the ESXi host), the Virtual Machine configuration, or both.

In this post, I’ll address the considerations and requirements to passthrough these devices to virtual machines in your environment.

Most often, the issue is VM configuration related; however, if the recommendations in the “VM Configuration Considerations” section do not resolve the issue, please proceed to the “ESXi Host Considerations” section.

Please note that if the issue is host related, other errors may be present, or the device may not even be visible to ESXi.

VM GPU and vGPU Configuration Considerations

First and foremost, all new VMs should be created using the “EFI” Firmware type. EFI provides numerous advantages in device access and memory mapping versus the older style “BIOS” firmware types.

VM Firmware type EFI

To do this, create a new virtual machine, navigate to “VM Options”, expand “Boot Options”, and confirm/change the Firmware to “EFI”. I recommend this for all new VMs, not only for VMs accessing GPUs or vGPUs with over 16GB of memory. Please note that you shouldn’t change the firmware type on an existing VM; do this on a fresh new VM.

When performing GPU or vGPU Passthrough with 16GB+ of video memory, you’ll need to create a couple of entries under “Advanced” settings to properly configure access to these PCIe devices and provide the proper environment for memory mapping. The lack of these settings is specifically what causes the “Module ‘DevicePowerOn’ power on failed.” error.

Under the VM settings, head over to “VM Options”, expand “Advanced” and click on “Edit Configuration”, click on “Add Configuration Params”, and add the following entries:

pciPassthru.use64bitMMIO="TRUE"
pciPassthru.64bitMMIOSizeGB=32

Example below:

VM GPU and vGPU Memory Settings for 16GB or higher memory mapping

You’ll notice that while our GPU or vGPU profile may have 16GB of memory, we need to double that value, and set it for the “pciPassthru.64bitMMIOSizeGB” variable. If your card or vGPU profile had 32GB, you’d set it to “64”.

Additionally, if you are passing through multiple GPUs or vGPU devices, you need to factor in all the memory being mapped and double the combined amount.
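If you’d rather script these entries than click through the UI, here’s a minimal PowerCLI sketch (the VM name is a placeholder, the VM must be powered off, and an existing Connect-VIServer session is assumed):

# Add the 64-bit MMIO entries to the VM's advanced configuration
$vm = Get-VM -Name "GPU-VM01"
New-AdvancedSetting -Entity $vm -Name "pciPassthru.use64bitMMIO" -Value "TRUE" -Confirm:$false
New-AdvancedSetting -Entity $vm -Name "pciPassthru.64bitMMIOSizeGB" -Value "32" -Confirm:$false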

ESXi GPU and vGPU Host Considerations

On most new and modern servers, the host level doesn’t require any special configuration, as they are already designed to pass through such devices to the hypervisor properly. However, in some special cases and/or when using older servers, you may need to modify configuration and settings in the UEFI or BIOS.

If setting the VM Configuration above still results in the same error (or possibly other errors), then you most likely need to make modifications to the ESXi host’s BIOS/UEFI/RBSU to allow the proper memory mapping of the PCIe device, in our case the GPU.

This is where things get a bit tricky because every server manufacturer has different settings that will need to be configured.

Look for the following settings, or settings with similar terminology:

  • “Memory Mapping Above 4G”
  • “Above 4G Decoding”
  • “PCI Express 64-Bit BAR Support”
  • “64-Bit IOMMU Mapping”

Once you find the correct setting or settings, enable them.

Every vendor could be using different terminology, and there may be other settings that need to be configured that I don’t have listed above. In my case, I had to go into a secret “SERVICE OPTIONS” menu on my HPE Proliant DL360p Gen8, as documented here.

After performing the recommendations in this guide, you should now be able to pass through devices with over 16GB of memory.

Aug 14 2022
 
HP Printer on VDI

When it comes to troubleshooting login times with non-persistent VDI (VMware Horizon Instant Clones), I often find delays associated with printer drivers not being included in the golden image. In this post, I’m going to show you how to add a printer driver to an Instant Clone golden image!

Printing with non-persistent VDI and Instant Clones

In most environments, printers will be mapped for users during logon. If a printer is mapped or added and the driver is not present in the golden image, it will usually be retrieved from the print server and installed, adding to the login process and ultimately causing a delay.

Due to the nature of non-persistent VDI and Instant Clones, every time the user logs in and gets a new VM, the driver will be downloaded and installed again, creating a redundant process that wastes time and network bandwidth.

To avoid this, we need to inject the required printer drivers into the golden image. You can add numerous drivers, and should include all the drivers your users are expected to use.

An important consideration: use Universal Print Drivers as much as possible. A Universal Print Driver supports numerous different printers from the same vendor, which allows you to install one driver to cover many devices.

How to add a printer driver to an instant clone golden image

Below, I’ll show you how to inject a driver into the Instant Clone golden image. Note that this doesn’t actually add a printer; it only installs the printer driver into the Windows operating system so it is available for a printer to be configured and/or mapped.

Let’s get started! In this example we’ll add the HP Universal Print Driver. These instructions work on both Windows 10 and Windows 11 (as well as Windows Server operating systems):

  1. Click Start, type in “Print Management”, and open “Print Management”. You can also click Start, Run, and type “printmanagement.msc”.
    Launch Print Management
  2. On the left hand side, expand “Print Servers”, then expand your computer name, and select “Drivers”.
    Print Management Drivers
  3. Right click on “Drivers” and select “Add Driver”.
    Print Management Add Driver
  4. When the “Welcome to the Add Printer Driver Wizard” opens, click Next.
    Add Printer Driver Wizard
  5. Leave the default for the architecture. It should default to the architecture of the golden image.
  6. When you are at the “Printer Driver Selection” stage, click on “Have Disk”.
    Print Management Add Printer Driver Location
  7. Browse to the location of your printer driver. In this example, we navigate to the extracted HP Universal Print Driver.
    Browse Printer Driver Location
  8. Select the driver you want to install.
    VDI Select Printer Driver to Install
  9. Click on Finish to complete the driver installation.
    Finish installing Instant Clone Printer Driver

The driver you installed should now appear in the list, as it has been installed into the operating system and is now available should a user add a printer or have a printer automatically mapped.

Screenshot of Printer Driver installed on non-persistent VDI Instant Clone golden image
Printer Driver installed on Non-Persistent Instant Clone Golden Image
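If you’re automating your golden image build, the same driver injection can be scripted with PowerShell. A sketch, assuming the extracted HP Universal Print Driver sits in a hypothetical C:\Drivers\HP-UPD folder (the driver name passed to Add-PrinterDriver must match the name inside the driver’s INF):

# Stage the driver package in the Windows driver store
pnputil.exe /add-driver "C:\Drivers\HP-UPD\*.inf"
# Register the staged driver with the print subsystem
Add-PrinterDriver -Name "HP Universal Printing PCL 6"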

Now seal, snap, and deploy your image, and you’re good to go!

Jul 17 2022
 
VMware vSphere ESXi with vTPM from NKP

It’s been coming for a while: The requirement to deploy VMs with a TPM module… Today I’ll be showing you the easiest and quickest way to create and deploy Virtual Machines with vTPM on VMware vSphere ESXi!

As most of you know, Windows 11 has a requirement for Secure Boot as well as a TPM module. We’ll quite possibly see this requirement with future Microsoft Windows Server operating systems as well.

While users struggle to deploy TPM modules on their own workstations to be eligible for the Windows 11 upgrade, ESXi administrators are also struggling with deploying Virtual TPM modules, or vTPM modules on their virtualized infrastructure.

What is a TPM Module?

TPM stands for Trusted Platform Module. A Trusted Platform Module is a piece of hardware (or chip) inside or outside of your computer that provides secured computing features to the computer, system, or server that it’s attached to.

The TPM module provides things like a random number generator, storage of encryption keys and cryptographic information, as well as aiding in secure authentication of the host system.

In a virtualization environment, we need to emulate this physical device with a Virtual TPM module, or vTPM.

What is a Virtual TPM (vTPM) Module?

A vTPM module is a virtualized software instance of a traditional physical TPM module. A vTPM can be attached to Virtual Machines and provide the same features and functionality that a physical TPM module would provide to a physical system.

vTPM modules can be deployed with VMware vSphere ESXi, and can be used to deploy Windows 11 on ESXi.

Deploying vTPM modules requires a Key Provider on the vCenter Server.

For more information on vTPM modules, see VMware’s “Virtual Trusted Platform Module Overview” documentation.

Deploying vTPM (Virtual TPM Modules) on VMware vSphere ESXi

In order to deploy vTPM modules (and VM encryption, vSAN Encryption) on VMware vSphere ESXi, you need to configure a Key Provider on your vCenter Server.

Traditionally, this would be accomplished with a Standard Key Provider utilizing a Key Management Server (KMS); however, this requires a 3rd party KMS server and is what I would consider a complex deployment.

VMware has made this easy as of vSphere 7 Update 2 (7U2), with the Native Key Provider (NKP) on the vCenter Server.

The Native Key Provider allows you to easily deploy technologies such as vTPM modules, VM encryption, and vSAN encryption, and the best part is, it’s all built into the vCenter Server.

Enabling VMware Native Key Provider (NKP)

To enable NKP across your vSphere infrastructure:

  1. Log on to your vCenter Server
  2. Select your vCenter Server from the Inventory List
  3. Select “Key Providers”
  4. Click on “Add”, and select “Add Native Key Provider”
  5. Give the new NKP a friendly name
  6. De-select “Use key provider only with TPM protected ESXi hosts” to allow your ESXi hosts without a TPM to be able to use the native key provider.

In order to activate your new Native Key Provider, you need to click on “Backup” to make sure you have it backed up. Keep this backup in a safe place. After the backup is complete, your NKP will be active and usable by your ESXi hosts.

Screenshot of VMware vCenter Server with Native Key Provider (NKP) Configured
VMware vCenter with Native Key Provider (NKP) Configured

There’s a few additional things to note:

  • Your ESXi hosts do NOT require a physical TPM module in order to use the Native Key Provider
    • Just make sure you disable the checkbox “Use key provider only with TPM protected ESXi hosts”
  • NKP can be used to enable vTPM modules on all editions of vSphere
  • If your ESXi hosts have a TPM module, using the Native Key Provider with your hosts’ TPM modules can provide enhanced security
    • An onboard TPM module allows keys to be stored and used even if the vCenter Server goes offline
  • If you delete the Native Key Provider, you are also deleting all the keys stored with it.
    • Make sure you have it backed up
    • Make sure you don’t have any hosts/VMs using the NKP before deleting

You can now deploy vTPM modules to virtual machines in your VMware environment.
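As a quick sketch of what this can look like from PowerCLI (recent PowerCLI releases include vTPM cmdlets; the VM name below is a placeholder, and the VM must be powered off, use EFI firmware, and have a Key Provider available):

# Attach a virtual TPM module to the VM
$vm = Get-VM -Name "Win11-VM01"
New-VTpm -VM $vm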

Jun 18 2022
 
Nvidia GRID Logo

When performing a VMware vMotion on a Virtual Machine with an NVIDIA vGPU attached to it, the VM may freeze during migration. Additionally, when performing a vMotion on a VM without a vGPU, the VM does not freeze during migration.

So why is it that adding a vGPU to a VM causes it to become frozen during vMotion? This is referred to as the VM Stun Time.

I’m going to explain why this happens, and what you can do to reduce these STUN times.

VMware vMotion

First, let’s start with traditional vMotion without a vGPU attached.

VMware vMotion with vSphere and ESXi
VMware vMotion with vSphere

vMotion allows us to live migrate a Virtual Machine instance from one ESXi host to another, with (visibly) no downtime. You’ll notice that I put “visibly” in brackets…

When performing a vMotion, vSphere will migrate the VM’s memory from the source to the destination host and create checkpoints. It will then continue to copy memory deltas, including changed blocks, after the initial copy.

Essentially, vMotion copies the memory of the instance, then performs additional copy passes to transfer the changes made since the previous pass, until everything is copied and the instance is running on the destination host.

VMware vMotion with vGPU

For some time, we have had the ability to perform a vMotion with a VM that has a vGPU attached to it.

VMware vSphere with NVIDIA vGPU
VMware VMs with vGPU

However, in this situation things work slightly differently. When performing a vMotion, it’s not only the system RAM that needs to be transferred, but the GPU’s memory (VRAM) as well.

Unfortunately, the checkpoint/delta transfer technology that’s used with the system RAM isn’t available to transfer the GPU’s memory, which means that the VM has to be stunned (frozen) so that the video RAM can be transferred and the instance can then be initialized on the destination host.

STUN Time

The STUN time is essentially the time it takes to transfer the video RAM (framebuffer) from one host to another.

When researching this, you may find examples of the time it takes to transfer various sizes of VRAM. An example would be from VMware’s documentation “Using vMotion to Migrate vGPU Virtual Machines”:

NVIDIA vGPU Estimated STUN Times
Expected STUN Times for vMotion with vGPU on 10Gig vMotion NIC
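As rough back-of-the-envelope arithmetic (a theoretical floor that ignores protocol overhead and assumes an otherwise idle link):

16GB framebuffer / ~1.25GB/s (10Gig link) ≈ 13 seconds of STUN time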

However, it will always vary depending on a number of factors. These factors include:

  • vMotion Network Speed
  • vMotion Network Optimization
    • Multi-NIC vMotion to utilize multiple NICs
    • Multi-vmk vMotion to optimize and saturate single NICs
  • Server Load
  • Network Throughput
  • The number of VMs that are currently being migrated with vMotion

As you can see, there’s a number of things that play into this. If you have a single 10Gig link for vMotion and you’re migrating many VMs with a vGPU, it’s obviously going to take longer than if you were just migrating a single VM with a vGPU.

Optimizing and Minimizing vGPU STUN Time

There’s a number of things we can look at to minimize the vGPU STUN times. This includes:

  • Upgrading networking throughput with faster NICs
  • Optimizing vMotion (Configure multiple vMotion VMK adapters to saturate a NIC)
  • Configure Multi-NIC vMotion (Utilize multiple physical NICs to increase vMotion throughput)
  • Reduce DRS aggressiveness
  • Migrate fewer VMs at the same time

All of the above can be implemented together, which I would actually recommend.
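As one example from the list above, here’s a rough PowerCLI sketch of adding a second vMotion-enabled VMkernel adapter (the host, switch, port group, and IP addressing are all placeholders for your environment):

# Create an additional VMkernel adapter with vMotion enabled
New-VMHostNetworkAdapter -VMHost (Get-VMHost "esxi01.lab.local") `
    -VirtualSwitch "vSwitch0" -PortGroup "vMotion-2" `
    -IP "10.0.50.11" -SubnetMask "255.255.255.0" -VMotionEnabled $true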

In short, the faster we migrate the VM, the less the STUN Time will be. Check out my blog post on Optimizing VMware vMotion which includes how to perform the above recommendations.

Hope this helps!

Mar 06 2022
 
Azure AD SSO with Horizon

Whether deploying VDI for the first time or troubleshooting Azure AD SSO issues in an existing environment, special consideration must be made for Microsoft Azure AD SSO and VDI.

When you implement and use Microsoft 365 and Office 365 in a VDI environment, you should have your environment configured to handle Azure AD SSO for a seamless user experience, and to avoid numerous login prompts when accessing these services.

Microsoft Azure Active Directory has two different methods for handling SSO (Single Sign-On): SSO via a Primary Refresh Token (PRT), and Azure AD Seamless SSO. In this post, I’ll explain the differences, and when to use which one.

Microsoft Azure AD SSO and VDI

What does Azure AD SSO do?

Azure AD SSO allows your domain-joined Windows workstations (and Windows Servers) to provide a single sign-on experience, so that users have an integrated sign-on experience when accessing Microsoft 365 and/or Office 365.

When Azure AD SSO is enabled and functioning, your users will not be prompted to log on to Microsoft 365 or Office 365 applications or services (including web services), as all of this will be handled transparently in the background by Azure AD SSO.

For VDI environments, especially non-persistent VDI (VMware Instant Clones), this is an important function so that users are not prompted to login every time they launch an Office 365 application.

Persistent VDI doesn’t have any special considerations for Azure AD SSO, as it functions the same way as traditional workstations; non-persistent VDI, however, requires special planning.

Please Note: Organizations often attribute the Office 365 login prompts to activation issues when in fact activation is functioning fine; rather, Azure AD SSO is either not enabled, incorrectly configured, or not functioning, which is why the users are being prompted for login credentials every time they establish a new session with non-persistent VDI. This guide should help you resolve the issue of Office 365 login prompts on non-persistent VDI and Instant Clone VMs.

Azure AD SSO methods

There are two different ways to perform Azure AD SSO in an environment that is not using ADFS. These are:

  • Azure AD SSO via Primary Refresh Token
  • Azure AD Seamless SSO

Both accomplish the same task, but were created at different times, have different purposes, and are used for different scenarios. We’ll explore this below so that you can understand how each works.

Screenshot of a Hybrid Azure AD Joined login
Hybrid Azure AD Joined Login

Fun fact: You can have both Azure AD SSO via PRT and Azure AD Seamless SSO configured at the same time to service your Active Directory domain, devices, and users.

Azure SSO via Primary Refresh Token

When using Azure SSO via Primary Refresh Token, SSO requests are performed by Windows workstations (or Windows Servers) that are Hybrid Azure AD Joined. When a device is Hybrid Azure AD Joined, it is joined both to your on-premise Active Directory domain and registered to your Azure Active Directory.

Azure SSO via Primary Refresh Token requires the Windows instance to be running Windows 10 (or later) and/or Windows Server 2016 (or later), and the Windows instance has to be Hybrid Azure AD joined. If you meet these requirements, SSO with PRT will be performed transparently in the background.
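A quick way to verify both the Hybrid Azure AD join state and the PRT status of a Windows instance is the built-in dsregcmd tool, run from inside the guest:

# Look for "AzureAdJoined : YES" and "AzureAdPrt : YES" in the output
dsregcmd /status | Select-String "AzureAdJoined","AzureAdPrt"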

If you require your non-persistent VDI VMs to be Hybrid Azure AD joined and require Azure AD SSO with PRT, special considerations and steps are required. These include:

  • Scripts to automatically unjoin non-persistent (Instant Clone) VDI VMs from Azure AD on logoff.
  • Scripts to clean up old entries in Azure AD

If you properly deploy this, it should function. If you don’t require your non-persistent VDI VMs to be Hybrid Azure AD joined, then Azure AD Seamless SSO may be better for your environment.

Azure AD Seamless SSO

Microsoft Azure AD Seamless SSO, once configured and implemented, handles Azure AD SSO requests without requiring the device to be Hybrid Azure AD joined.

Seamless SSO works on Windows instances running Windows 7 (or later, including Windows 10 and Windows 11), and does NOT require the device to be Hybrid joined.

Seamless SSO allows your Windows instances to access Azure related services (such as Microsoft 365 and Office 365) and provides a single sign-on experience.

This may be the easier method to use when deploying non-persistent VDI (VMware Instant Clones), if you want to implement SSO with Azure, but do not have the requirement of Hybrid AD joining your devices.

Additionally, by using Seamless SSO, you do not need to implement the required log-off and maintenance scripts mentioned in the section above (for Azure AD SSO via PRT).

To use Azure AD Seamless SSO with non-persistent VDI, you must configure and implement Seamless SSO, as well as perform one of the following to make sure your devices do not attempt to Hybrid AD join:

  • Exclude the non-persistent VDI computer OU containers from Azure AD Connect synchronization to Azure AD
  • Implement a registry key in your non-persistent (Instant Clone) golden image to disable Hybrid Azure AD joining.

To disable Hybrid Azure AD join on Windows, create the registry value below in your Windows golden image:

HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin: "BlockAADWorkplaceJoin"=dword:00000001
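If you’d rather set this with PowerShell while building the golden image (run elevated), a minimal equivalent sketch:

# Create the WorkplaceJoin policy key and the BlockAADWorkplaceJoin value
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin" -Force | Out-Null
New-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin" `
    -Name "BlockAADWorkplaceJoin" -PropertyType DWord -Value 1 -Force | Out-Null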

Conclusion

Different methods can be used to implement SSO with Active Directory and Azure AD as stated above. Use the method that will be the easiest to maintain and provide support for the applications and services you need to access. And remember, you can also implement and use both methods in your environment!

After configuring Azure AD SSO, you’ll still be required to implement the relevant GPOs to configure Microsoft 365 and Office 365 behavior in your environment.


Jan 16 2022
 

Welcome to Episode 04 of The Tech Journal Vlog at www.StephenWagner.com

The Tech Journal Vlog Episode 04

In this episode

Updates

  • VMware Horizon
    • Apache Log4j Mitigation with VMware Products
  • Homelab Update
    • HPE MSA 2040 vs Synology DS1621+
    • Migrating from MSA 2040 to a Synology DS1621+
    • Synology Benchmarking NVME Cache
  • DST Root CA X3 Expiration
    • End of Life Operating Systems


Life Update/Fun Stuff

  • Work
  • Travel
  • Move

Current Projects

  • Synology DS1621+

Don’t forget to like and subscribe!
Leave a comment, feedback, or suggestions!

Dec 02 2021
 

In a VMware Horizon environment with DUO MFA configured via RADIUS on the VMware Horizon Connection Server, you may notice authentication issues when logging in through a UAG (Unified Access Gateway) after upgrading to VMware Horizon 8 Version 2111.

During this condition, you can still log in and use the Connection Server directly with MFA working; however, all UAG connections will get stuck on authenticating.

Horizon 8 Version 2111 UAG Stuck on Authenticating using DUO MFA (RADIUS)

Disabling MFA and/or RADIUS on the Connection Server will allow the UAG to function, however MFA will be disabled. This occurs on upgrades to version 2111 of the UAG both when configuring fresh and when importing the JSON configuration backup.

Temporary Fix

Update January 26 2022: VMware has now released version 2111.2 of the Unified Access Gateway which resolves this issue. You can download it here, or view the release notes here.

Update January 12 2022: It appears VMware now has a KB on this issue at: https://kb.vmware.com/s/article/87253.

Temporary workaround/fix: To fix this issue, log on to the UAG and under “Horizon Edge Settings”, configure “Client Encryption Mode” to “Disabled”.

“Client Encryption Mode” is a new setting on UAG 2111 (and UAG 2111.1) that enables new functionality. Disabling this reverts the UAG to the previous behavior of older Unified Access Gateway versions.

More information on “Client Encryption Mode” can be found at https://docs.vmware.com/en/Unified-Access-Gateway/2111/uag-deploy-config/GUID-1B8665A2-485E-4471-954E-56DB9BA540E9.html.

Another workaround is to deploy an older version of the UAG, version 2106. After downgrading, the UAG functions with DUO and RADIUS even though the Connection Server is at version 2111.

If you use an older version of the UAG, please make sure that you mitigate against the Apache log4j vulnerabilities on the UAG using information from the following post: https://kb.vmware.com/s/article/87092.

Sep 20 2021
 

Welcome to Episode 03.1 of The Tech Journal Vlog (Special Episode on VMware Horizon 8 Version 2106)

In this episode – VMware Horizon 8 Version 2106

This is a special episode dedicated to the release of VMware Horizon View 8, version 2106.

What’s new

In the video, I cover what’s new in the 2106 release.

My Favorite Changes & Enhancements:

  • Audio recording support for 48kHz audio via RTAV, defaults to 16kHz
    • Persistence on Audio quality recording settings across sessions
    • Sample Rate can be configured via GPO
  • VMware Horizon Linux Client supports Microsoft Teams Optimization
    • Linux Based Zero Clients should add functionality shortly (10ZiG already has!)
  • Raspberry Pi 4 Support!!!!
    • Also supports RTAV

Other interesting changes and enhancements:

  • UI Change on VMware Horizon Client
  • Instant Clones now support SysPrep: Instant Clones with Parent
    • No duplicate SIDs when using SysPrep
  • Ability to use 6 x 4K Displays
  • No Longer have to re-install VMware Horizon Agent after VMware Tools Upgrade
  • Forgot to mention: Support added for USB Redirection with Xbox Gaming Controllers

Additional Items:

  • VMware OSOT Optimization tool Versioning now matches Horizon
    • Removal of Custom Templates
  • VMware VDI Base Image Creation Guide has been updated
    • New guide on automating the VMware VDI Base Image Creation added


Don’t forget to like and subscribe!

Leave a comment, feedback, or suggestions!

Sep 18 2021
 

Welcome to Episode 03 of The Tech Journal Vlog at StephenWagner.com

In this episode

Fun Stuff

  • Homelab Video Demo (https://youtu.be/oaZv2hpQKac)
  • Telus Fiber 1G Internet (for Business)
    • Sophos UTM Dual WAN Balancing
  • Synology
    • Synology Diskstation DS1621+
    • DSM 7.0
    • Synology C2 Cloud Backup

Work Update

  • VDI Consulting
    • VDI Golden Images for Non-Persistent VDI
  • DUO MFA/2FA
    • Implementations of DUO with Horizon
  • Exchange Projects
  • IT Director as a Service 🙂

Life Update

  • Back at the Gym
  • Travel is Back (Regina, Vancouver)


Current Projects

  • Synology DS1621+
  • AMD S7150 x2 MxGPU
  • NVME Storage Server Project
  • 10ZiG Thin Clients

Don’t forget to like and subscribe!
Leave a comment, feedback, or suggestions!