May 09, 2024
 
NVIDIA vGPU Network Licensing Token

When deploying NVIDIA vGPU across a VDI environment, I often see IT teams deploy the licensing token directly on persistent VMs, or on the non-persistent base golden image. This becomes a nightmare when the client configuration token must be updated.

I highly recommend considering network placement of the NVIDIA vGPU Licensing Client Configuration token file for your deployments.

In this post we’ll review the Client Configuration Token File, why you’d want to place it on the network, and how to do so.

What is the Client Configuration Token File

The Client Configuration Token File tells the NVIDIA vGPU driver on your VM where to find the licensing server. The token points the driver to either a CLS or DLS licensing server, and the driver then requests the applicable license to be issued.

By default, the vGPU driver will check the following location for the token:

C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\

While this is common, there’s a much better (and easier) method to deploy the Client Configuration Tokens: using a network share, which eases management of these files.

Placing the NVIDIA vGPU Licensing client configuration token on a network share

Using the Windows Registry, along with a GPO (Group Policy Object), you can configure a network location for the NVIDIA Client Configuration Token, so that your systems, whether persistent or non-persistent, will use this location.

In the event of a token change, you can simply remove the old token, place the new configuration token, and all systems will have immediate access to it, without manually updating individual systems.

Here we’ll use the registry and a GPO to configure the token location:

  1. Using an administrative account, create a folder called “vGPU-Licensing” on your domain SYSVOL share.
    • Example: \\Domain.com\SYSVOL\Domain.com\vGPU-Licensing\
  2. Place your NVIDIA Licensing Client Configuration Token in this folder.
  3. Open “Group Policy Management” and create a new GPO called “VDI-NVIDIA-LicensingToken”
  4. Navigate to: Computer Configuration -> Preferences -> Windows Settings -> Registry
  5. Right Click and select New -> Registry Item
  6. Under the New Registry Window Enter the following:
    • Action: Update
    • Hive: HKEY_LOCAL_MACHINE
    • Key Path: SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing
    • Value Name: ClientConfigTokenPath
    • Value Type: REG_SZ
    • Value Data: \\Domain.com\SYSVOL\Domain.com\vGPU-Licensing
    • Change the network location to match your environment and your setup
  7. After populating the fields, it should be similar to the following example: NVIDIA GPO Registry Client Configuration Token
  8. Hit Apply, then OK, then link the newly created GPO to the OU containing your NVIDIA vGPU-enabled VDI VM guests.

That’s it! All we did was create a GPO that configures the registry value “ClientConfigTokenPath” inside of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing\ and sets it to a network share that holds the configuration tokens.
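
If you want to verify or test this registry value on a single VM before building out the GPO, you can set it manually with PowerShell. Below is a minimal sketch (run from an elevated prompt), using the example share path from above; adjust the path for your environment:

# Create the GridLicensing key if it doesn't exist yet
$regPath = "HKLM:\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing"
New-Item -Path $regPath -Force | Out-Null

# Set the token path (REG_SZ) to the network share holding the token
New-ItemProperty -Path $regPath -Name "ClientConfigTokenPath" -Value "\\Domain.com\SYSVOL\Domain.com\vGPU-Licensing" -PropertyType String -Force | Out-Null

# Confirm the value was written
Get-ItemProperty -Path $regPath -Name "ClientConfigTokenPath"

After a reboot, the driver should pick up the token from the share.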

Please note, the NVIDIA licensing service accesses the network location using the service’s security context (not the user’s context), which is why I chose the SYSVOL share, as computer accounts have read access to this location (for example, when reading GPOs at boot and user logon).

Additionally, note that the registry key and location may vary if you’re using older versions of the NVIDIA vGPU Driver. The key used in this post is for versions 16.x and 17.x.

May 09, 2024
 
VMware App Volumes Logo

In this post, I’ll go over the process of migrating a VMware App Volumes SQL Database to a new server (or location), and also go over the reasons why you may want to do this.

VMware App Volumes stores all of its configuration data inside of a Microsoft SQL database. This database is used and shared by all the App Volumes Managers in an environment.

Please make sure before any modification of your deployment that you have the proper backups in place.

Why move the database?

There’s a number of reasons why you may want to move your VMware App Volumes SQL Database. These include (but are not limited to):

  • Migrating from Standard SQL Server Deployment to a highly available Microsoft SQL Always On Availability Group
  • Deploying a new Microsoft SQL Server and decommissioning your old SQL Server

In any case, we need the flexibility and ability to be able to move and migrate the SQL database to a new server and/or location.

Considerations

When moving the VMware App Volumes SQL Database, you’ll need to shut down all of your VMware App Volumes Manager Servers.

Note that while this may result in the inability to attach App Volumes VMDKs to new VDI sessions, if your environment is properly configured, you shouldn’t have any interruption to App Volumes apps already attached to existing sessions. If you’re in a zero-downtime environment, make sure any users who may require apps log on and attach their apps before starting your migration and maintenance.

ODBC Configuration will be updated/changed during this process.

Always make a backup of your App Volumes Manager servers and SQL database before making any changes.

Migrating the App Volumes Database to a new SQL Server

To migrate the database, we’ll essentially need to shut down all the App Volumes services, migrate the database, modify a configuration file, bring up a single App Volumes Manager server, confirm everything is working, and then update and bring online any additional App Volumes Manager servers.

Perform the following steps to migrate the database:

  1. Perform Backups
    1. Snapshot App Volumes Manager Servers
    2. Backup SQL Database
    3. Backup the “database.yml” file in C:\Program Files (x86)\CloudVolumes\Manager\config
  2. RDP or Console Access all VMware App Volumes Manager Servers
  3. Stop all the App Volumes Services on ALL App Volumes Manager Servers
  4. Migrate SQL Database to a new Microsoft SQL Server (Standard deployment, or High Availability SQL Always-On)
  5. Update your ODBC Configuration on ALL your App Volumes Manager Servers
    1. Open “ODBC Data Source Administrator (64-Bit)” from the Windows Control Panel. Identify your App Volumes ODBC Connection, after selecting it, click on “Configure”. Walk through the wizard and update it to the new location of the SQL Database and server. Make sure you test and confirm the connection is working.
    2. Open “ODBC Data Source Administrator (32-Bit)” from the Windows Control Panel. Identify your App Volumes ODBC Connection, after selecting it, click on “Configure”. Walk through the wizard and update it to the new location of the SQL Database and server. Make sure you test and confirm the connection is working.
  6. If you’re using SQL Authentication, you’ll need to update your database.yml file. You’ll need to do this on all of your App Volumes Manager Servers (see the example fragment after this list).
    1. Open C:\Program Files (x86)\CloudVolumes\Manager\config\database.yml
    2. Under “production:” add and/or modify the following two entries:
      • username: <SQL Username>
      • password: <SQL password>
    3. Replace both <SQL Username> and <SQL password> with the App Volumes SQL service account that the App Volumes Manager uses to access the SQL database. Please note: after starting the services, the password will be removed from the configuration file.
  7. You can now start the App Volumes Manager services on ONE of your App Volumes Managers. Please make sure you start only one, as this will allow you to test the configuration; it will also perform a discovery on the environment to determine active sessions and update the database.
  8. Monitor the logs, and activity. You’ll want to confirm that everything is working.
  9. After you have confirmed the success of the migration and functionality of one of the App Volumes Servers, and after the activity of that server has become idle, you can now start the services on your other App Volumes Managers.
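
As an example of the change in step 6, a hypothetical database.yml fragment is shown below. Only the two entries from that step are added; the account name and password are placeholders for your own App Volumes SQL service account, and the rest of your existing “production:” section should be left untouched:

production:
  # ...existing settings remain unchanged...
  username: svc_appvolumes    # placeholder SQL service account
  password: YourPasswordHere  # removed from the file after the services start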

You have now successfully migrated your App Volumes SQL DB to a new server.

May 09, 2024
 
NVIDIA vGPU

You may notice a frozen session or frozen screen with NVIDIA vGPU, Windows 11, and VMware Horizon in your VDI environment.

While I’ve mostly observed this issue using non-persistent Instant Clones with vGPU on Windows 11 23H2, I have also noticed issues and anomalies with persistent VMs.

I’ve noticed this issue across multiple customer environments, and was able to replicate it in my own environment. I’ll go over the problem and solution below.

The Problem

This issue occurs due to the combination of hardware being used, the VMware SVGA driver, a secondary “Virtual Display”, and the resolution being set during logon and initialization of the VMware Horizon VDI session.

When a user logs on, the resolutions are set across all virtual displays. There is an issue where, due to a timeout (observed in log files), the resolution cannot be set, resulting in a session that either appears to be frozen, or, if active, has an interactive cursor that is offset from the visible display (your mouse is somewhere other than where it’s being displayed).

The Solution

In my troubleshooting, I’ve identified the following solutions:

Solution #1

To resolve this issue, disable the “VMware SVGA 3D” Display Adapter in Windows Device Manager (as shown below). Simply right-click on “VMware SVGA 3D” and select “Disable device”.

After disabling this Display Adapter, you’ll notice the issue is resolved, and you’ll also notice your VDI sessions are established very quickly (including initializing the resolutions with vGPU).

If you’re using non-persistent VDI (VMware Horizon Instant Clones), you’ll need to perform this on your base image.
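
If you’d prefer to script this on your golden image, the disable can also be done with the PnpDevice cmdlets in PowerShell. Below is a minimal sketch (run elevated), assuming the adapter’s friendly name is exactly “VMware SVGA 3D” on your Windows build:

# Find the VMware SVGA 3D display adapter and disable it
Get-PnpDevice -Class Display -FriendlyName "VMware SVGA 3D" | Disable-PnpDevice -Confirm:$false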

Note: By disabling this adapter, you will lose the ability to use the VMware Console on VMware vSphere vCenter. To gain console access, you’ll either need to enable the VMware SVGA 3D adapter in a VDI session, or remove the vGPU adapter.

Solution #2

Another solution is to force the VDI session to use the VMware Horizon Indirect Display Driver.

  1. Open Windows Registry and navigate to the following location: HKLM\Software\Policies\VMware, Inc.\VMware Blast\Config
  2. Create a new Registry String (REG_SZ) called “PixelProviderForceViddCapture” and set it to: 1
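
If you’d rather script this than use regedit or a GPO, a quick PowerShell sketch (run elevated) could look like the following. Note that the value is a REG_SZ string of "1", not a DWORD:

# Create the VMware Blast policy key if it doesn't exist, then set the value
$blastPath = "HKLM:\SOFTWARE\Policies\VMware, Inc.\VMware Blast\Config"
New-Item -Path $blastPath -Force | Out-Null
New-ItemProperty -Path $blastPath -Name "PixelProviderForceViddCapture" -Value "1" -PropertyType String -Force | Out-Null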

Note: If you force the use of the VMware Horizon Indirect Display Driver as your primary display driver, you may run into issues where the capabilities of your NVIDIA vGPU are not detected by applications that require the features and capabilities of an NVIDIA GPU.

Jan 11, 2024
 
ESXi-Host-Decommission

While most of us frequently deploy new ESXi hosts, a question and task not often discussed is how to properly decommission a VMware ESXi host.

Some might be surprised to learn that you cannot simply power down and remove the host from the vCenter server, as there are a number of steps that must be taken beforehand to ensure a proper, successful decommission. Properly decommissioning the ESXi host avoids orphaned objects in the vCenter database, which can sometimes cause problems in the future.

Today we’ll go over how to properly decommission a VMware ESXi host in an environment with VMware vCenter Server.

The Process – How to decommission ESXi

We will detail the process and considerations to decommission an ESXi host. We will assume that you have already migrated all your VMs, templates, and files from the host, and that it contains no data that requires backup or migration.

VMware ESXi Host Decommission Procedures

Process in Short:

  1. Enter Maintenance Mode
  2. Remove Host from vDS Switches
  3. Unmount and Detach iSCSI LUNs
  4. Move host from cluster to datacenter as standalone host
  5. Remove Host from Inventory

Please read further for extended procedures and more information.
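
For those who prefer PowerCLI, below is a rough sketch of the non-storage steps (enter maintenance mode, move the host out of the cluster, and remove it from inventory). The host and datacenter names are examples, it assumes an active Connect-VIServer session, and your vDS and iSCSI work from the sections below still needs to be handled first:

# Example host to decommission
$esx = Get-VMHost -Name "esxi01.example.com"

# Enter maintenance mode
Set-VMHost -VMHost $esx -State Maintenance | Out-Null

# Move the host out of its cluster to the datacenter level (standalone)
Move-VMHost -VMHost $esx -Destination (Get-Datacenter -Name "Datacenter01") | Out-Null

# Remove the host from the vCenter inventory
Remove-VMHost -VMHost $esx -Confirm:$false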

Enter Maintenance Mode

We enter maintenance mode to confirm that no VMs are running on the host. You can simply right click the host, and enter maintenance mode.

Remove Host from vDS Switches

You must gracefully remove the host from any vDS switches (VMware Distributed Switches) before removing the host from vCenter Server.

You can create a standard vSwitch and migrate vmk (VMware Kernel) adapters from the vDS switch to standard vSwitch, to maintain communication with the vCenter server and other networks.

Please Note: If you are using vDS switches for iSCSI connectivity, you must plan for this beforehand, either by unmounting/detaching the iSCSI LUNs before removing the host from the vDS, or by gracefully migrating the vmk adapters to a standard vSwitch, using MPIO to avoid losing connectivity during the process.

Unmount and Detach iSCSI LUNs

You can now proceed to unmount and detach iSCSI LUNs from the selected system:

  1. Unmount the iSCSI LUN(s) from the host
  2. Detach the iSCSI LUN(s) from the host

You will unmount only on the selected host to be decommissioned, and then detach the LUNs (again only on the host you are decommissioning).

Move Host from Cluster to Datacenter as standalone host

While this may not be required, I usually do this to let vSphere Cluster Services (HA/DRS) adjust for the host removal, and also deal with reconfiguration of the HA agent on the ESXi Host. You can simply move the host from the cluster to the parent datacenter level.

Remove Host from Inventory

Once the host has been moved and a moment or two have elapsed, you can now proceed to remove the host from inventory.

While the host is powered on and still connected to vCenter, right click on the host and choose “Remove from Inventory”. This will gracefully remove objects from vCenter, and also uninstall the HA agent from the ESXi host.

Host Repurposing

At this point, you can now log directly on to the ESXi host using the local root password, and shut down the host.

Jan 07, 2024
 
VMware Horizon View Logo

This guide will outline the instructions to Disable the VMware Horizon Session Bar. These instructions can be used to disable the Horizon Session Bar (also known as the Horizon Client Menu Bar or Shade Bar) for full screen Horizon VDI sessions.

Horizon Client Menu Bar (Shade)

The Horizon Client Menu Bar, or “Shade”, is the Session bar at the top of full screen VMware Horizon VDI Sessions.

This Menu Bar provides information on the connection, ability to send key sequences, connect USB devices, restart a VDI guest VM and more.

In some cases, users or administrators may want to disable the Shade.

Disable the Horizon Client Menu Bar (Shade)

There are multiple ways that you can disable the shade including using GPOs as well as the registry on client systems. Please note that if you are setting up clients in Kiosk mode, the shade will be automatically disabled and these instructions aren’t required.

Disable Horizon Shade using GPO

To disable the Shade with GPOs, create a Group Policy Object (or edit the local group policy on the client system), and navigate to the following location:

User Configuration -> Policies -> Administrative Templates -> VMware Horizon Client Configuration

Here, we will set “Enable the shade” to “Disabled”, as shown below:

Disable VMware Horizon Shade using GPO to set “Enable the Shade” to Disabled

Disable Horizon Shade using Registry

To disable the Shade using registry on the client system, navigate to the following registry key:

HKEY_LOCAL_MACHINE\Software\VMware, Inc.\VMware VDM\Client\

Here, we can create a String (REG_SZ) value called EnableShade and set it to False which will disable the Shade.
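
For reference, a small PowerShell sketch (run elevated on the client system) that creates the key and value described above:

# Create the VMware VDM Client key if needed, then disable the Shade
$vdmPath = "HKLM:\SOFTWARE\VMware, Inc.\VMware VDM\Client"
New-Item -Path $vdmPath -Force | Out-Null
New-ItemProperty -Path $vdmPath -Name "EnableShade" -Value "False" -PropertyType String -Force | Out-Null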


Jan 06, 2024
 
vMotion with vGPU

Normally, any VMs that are NVIDIA vGPU enabled have to be migrated with a manual vMotion when a host is placed into maintenance mode to evacuate the host. While we may have grown accustomed to this, there is a better way: vGPU-enabled VM DRS evacuation during maintenance mode!

A new feature introduced with vSphere 7.0 U3f was the ability to configure and allow automatic vMotion of VMs with vGPUs, meaning that DRS can now migrate your VDI and AI/ML vGPU-enabled workloads when hosts are placed into maintenance mode. This also allows you to streamline remediation with vLCM when updating vGPU-enabled hosts running vGPU-enabled VMs.

Additionally, as of vSphere 8.0 U2, DRS can now estimate the STUN times required for vMotion of vGPU-enabled VMs, and control whether automatic DRS vMotions are allowed. This STUN time limit can be set by an administrator.

Enable automatic vMotion evacuation of vGPU enabled VMs

To enable the automatic vMotion of vGPU enabled VMs on your vSphere Cluster:

  1. Navigate to your vSphere Cluster.
  2. Click on the “Configure” Tab, and then select “vSphere DRS”, and click “Edit”.
  3. Navigate to the “Advanced Options” tab.
  4. Add “VgpuMMAutomationTimeoutSecs” and set it to “-1”.

After performing the above, when you place a host with vGPU-enabled virtual machines into Maintenance Mode, vSphere DRS will evacuate and migrate the VMs to other hosts in the cluster that have the required hardware.
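
If you’d rather script the cluster setting, a rough PowerCLI sketch using the vSphere API is below (the cluster name is an example, and it assumes an active Connect-VIServer session). Be aware that writing the DrsConfig option list this way may replace other DRS advanced options you have set, so the UI route above is the safer path if you’re unsure:

# Build a cluster reconfiguration spec containing the DRS advanced option
$clusterView = Get-Cluster -Name "VDI-Cluster" | Get-View
$spec = New-Object VMware.Vim.ClusterConfigSpecEx
$spec.DrsConfig = New-Object VMware.Vim.ClusterDrsConfigInfo

$option = New-Object VMware.Vim.OptionValue
$option.Key = "VgpuMMAutomationTimeoutSecs"
$option.Value = "-1"
$spec.DrsConfig.Option = @($option)

# Apply the change to the cluster ($true = modify the existing configuration)
$clusterView.ReconfigureComputeResource_Task($spec, $true)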

If you attempt to place a host into Maintenance Mode without enabling automatic vMotion of vGPU-enabled VMs, it will fail with the error: “DRS failed to generate a vMotion recommendation for a virtual machine on a host entering Maintenance Mode”.

Enable and Configure vGPU STUN Time Estimate and Limits

If you are running vSphere 8U2 or higher, you can enable vGPU STUN time estimation and limits for DRS on the vGPU enabled cluster. Similar to the instructions above, we can add and configure two variables to the vSphere DRS cluster “Advanced Options”.

To enable STUN time estimation, add PassthroughDrsAutomation and set to “1”.

To override the default vMotion STUN time limit of 100 seconds, add VmDevicesStunTimeTolerated and set it to your preferred maximum number of seconds. Alternatively, you can set this limit Per VM by navigating to the VM in vSphere and adding this variable under the “VM Options” “Advanced Settings” section.


Jan 05, 2024
 
NVIDIA vGPU Installed in VMware ESXi Host

You may experience GPU issues with the VMware Horizon Indirect Display Driver in your environment when using 3rd party applications that query and use the wrong display adapter. This results in the inability to use and/or run GPU-accelerated workloads, including VDI, AI, and ML.

This issue affects NVIDIA vGPU (both vGPU and vDGA passthrough), AMD MxGPU, and Intel Data Center GPU Flex GPUs using SR-IOV, in any deployment where the VMware Indirect Display Driver is installed.

When this issue occurs, the application incorrectly queries the capabilities of the VMware Indirect Display Adapter instead of the GPU that is presented to the VM. The application is then unaware of the capabilities of the GPU you are utilizing, and fails to use the GPU and its hardware acceleration, such as hardware encoding (NVENC) and hardware decoding.

What is the VMware Horizon Indirect Display Driver

The VMware Horizon Indirect Display Driver, also known as the VMware Indirect Display Driver, is a “virtual” display driver that isn’t bound to a specific hypervisor, and works with many deployments because of the lack of that limitation.

GPU Issues with the VMware Horizon Indirect Display Driver Enabled

This driver is installed with the VMware Horizon agent, and can work in conjunction with hardware acceleration, including GPUs (such as NVIDIA vGPU, AMD MxGPU, and Intel Data Center GPUs using SR-IOV).

Under normal circumstances, the VMware Horizon Indirect Display Driver acts as a fallback driver for remoting protocols, except in environments where no hypervisor or GPU display drivers are available (like Horizon Cloud on Azure), in which case it becomes the primary display driver.

The Problem

Applications designed to use a GPU, may not be able to correctly identify which display adapter to use on the VM. While you may have a GPU, vGPU, or 3D acceleration in your environment, the application may be unaware of the device and/or its capabilities.

This is caused by the application either not correctly using the preferred primary display adapter (GPU and/or vGPU), or not being designed to handle multiple display adapters (and drivers).

Example Scenario:

When using CyberLink PowerDirector 360 in a VMware Horizon environment with an NVIDIA vGPU, the application queries the VM’s Windows instance for hardware acceleration capabilities, specifically hardware encoding, hardware decoding, and use of APIs like NVIDIA’s NVENC encoder. In this scenario, while the VM does have an NVIDIA vGPU workstation profile attached with a valid NVIDIA RTX Virtual Workstation (vWS) license, the application is only aware of the VMware Indirect Display Driver and its capabilities. This results in all hardware-accelerated encoding and decoding capabilities being disabled.

Example Symptoms

  • 3D Acceleration not detected by application
  • CUDA Cores not available for application
  • OpenCL not available
  • DirectX and Direct3D usage unavailable

In all scenarios, the VM will appear to have 3D acceleration, however one or multiple applications won’t have access.

The Solution

Thanks to the design of the VMware Indirect Display Driver, it should be prioritized in such a way that it’s used only when other display drivers (including NVIDIA vGPU) aren’t available, or when system resources aren’t available; however, some 3rd party applications may not respect this prioritization, or may not support multiple GPUs (multiple display drivers), resulting in the incorrect display adapter being used.

As a workaround, you can remove the VMware Indirect Display Driver from the Windows instance running in the VM.

NVIDIA vGPU with VMware Horizon Indirect Display Driver Removed

Please note that simply disabling the “VMware Horizon Indirect Display Driver” will not suffice. A full removal (right-click, “Uninstall device”) is required to work around this issue. Additionally, upgrading or re-installing the VMware Horizon Agent will re-install the VMware Indirect Display Driver.
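
If you need to do this at scale (for example, on a golden image as part of a build sequence), a hedged PowerShell sketch is below. It assumes the device’s friendly name contains “Indirect Display” and that you’re on a Windows 10 2004 or newer build where pnputil supports /remove-device; run it elevated and test it before relying on it:

# Find the indirect display device(s) and remove them with pnputil
Get-PnpDevice -Class Display | Where-Object { $_.FriendlyName -like "*Indirect Display*" } | ForEach-Object { pnputil /remove-device $_.InstanceId }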

Dec 08, 2023
 
vCenter-Root-CA-Missing

Today we’ll go over how to install the vSphere vCenter Root Certificate on your client system.

Certificates are designed to verify the identity of the systems, software, and/or resources we are accessing. If we aren’t able to verify and authenticate what we are accessing, how do we know that the resource we are sending information to, is really who they are?

Installing the vSphere vCenter Root Certificate on your client system, allows you to verify the identity of your VMware vCenter server, VMware ESXi hosts, and other resources, all while getting rid of those pesky certificate errors.

Certificate warning when connecting to vCenter vCSA

I see too many VMware vSphere administrators simply dismiss the certificate warnings, when instead they (and you) should be installing the Root CA on your system.

Why install the vCenter Server Root CA

Installing the vCenter Server’s Root CA, allows your computer to trust, verify, and validate any certificates issued by the vSphere Root Certification authority running on your vCenter appliance (vCSA). Essentially this translates to the following:

  • Your system will trust the Root CA and all certificates issued by the Root CA
    • This includes: VMware vCenter, vCSA VAMI, and ESXi hosts
  • When connecting to your vCenter server or ESXi hosts, you will not be presented with certificate issues
  • You will no longer have vCenter OVF Import and Datastore File Access Issues
    • This includes errors when deploying OVF templates
    • This includes errors when uploading files directly to a datastore
File Upload in vCenter to ESXi host operation failed

In addition to all of the above, you will start to take advantage of certificate-based validation. Your system will verify and validate that when you connect to your vCenter server or ESXi hosts, you are indeed connecting to the intended system. When things are working, you won’t be prompted with certificate errors, whereas if something is wrong, you will be notified of a possible security issue.

How to install the vCenter Root CA

To install the vCenter Root CA on your system, perform the following:

  1. Navigate to your VMware vCenter “Getting Started” page.
    • This is the IP or FQDN of your vCenter server without the “ui” after the address. We only want to access the base domain.
    • Do not click on “Launch vSphere Client”.
  2. Right click on “Download trusted root CA certificates”, and click on save link as.
    Link to download vCenter trusted root CA Certificates
  3. Save this ZIP file to your computer, and extract the archive file
    • You must extract the ZIP file, do not open it by double-clicking on the ZIP file.
  4. Open and navigate through the extracted folders (certs/win in my case) and locate the certificates.
    VMware vCenter Root Certificates
  5. For each file that has the type of “Security Certificate”, right click on it and choose “Install Certificate”.
  6. Change “Store Location” to “Local Machine”
    • This makes your system trust the certificate, not just your user profile
  7. Choose “Place all certificates in the following store”, click Browse, and select “Trusted Root Certification Authorities”.
    Screenshot to Place in Trusted Root Certification Authorities
  8. Complete the wizard. If successful, you’ll see: “The import was successful.”.
  9. Repeat this for each file in that folder with the type of “Security Certificate”.

Alternatively, you can use a GPO with Active Directory or other workstation management techniques to deploy the Root CAs to multiple systems or all the systems in your domain.
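
You can also script the import for a single machine with PowerShell. A minimal sketch is below (run elevated); it assumes the ZIP was extracted to C:\Temp\certs\win and that the certificate files use the .crt extension, so adjust the path and filter to match what you extracted:

# Import every certificate file into the Local Machine Trusted Root store
Get-ChildItem -Path "C:\Temp\certs\win" -Filter *.crt | ForEach-Object {
    Import-Certificate -FilePath $_.FullName -CertStoreLocation Cert:\LocalMachine\Root
}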

Oct 07, 2023
 
Installing VDI optimized New Teams client application on Windows VDI

In this guide we will deploy and install the new Microsoft Teams for VDI (Virtual Desktop Infrastructure) client, and enable Microsoft Teams Media Optimization on VMware Horizon.

This guide replaces and supersedes my old guide “Microsoft (Classic) Teams VDI Optimization for VMware Horizon”, which covered the old Classic Teams client and VDI optimizations. The new Microsoft Teams app requires the same special considerations on VDI, and requires special installation instructions to function on VMware Horizon and other VDI environments.

You can run the old and new Teams applications side by side in your environment as you transition users.

New Teams client with toggle for old version running on VMware Horizon VDI with optimization
Switch between New Teams and old Teams on VDI

Let’s cover what the new Microsoft Teams app is about, and how to install it in your VDI deployment.

Please note: VDI (Virtual Desktop Infrastructure) support for the new Teams client went GA (Generally Available) on December 05, 2023. Additionally, Classic Teams will go end of support on June 30, 2024.


The New Microsoft Teams App

On October 05, 2023, Microsoft announced the availability of the new Microsoft Teams application for Windows and Mac computers. This application is a complete rebuild from the old client, and provides numerous enhancements with performance, resource utilization, and memory management.

New Microsoft Teams app VDI optimized with Toggle for new/old version

Ultimately, it’s way faster, and consumes way less memory. And fortunately for us, it supports media optimizations for VDI environments.

My close friend and colleague, mobile jon, did a fantastic in-depth Deep Dive into the New Microsoft Teams and its inner workings that I highly recommend reading.

Interestingly enough, it uses the same media optimization channels for VDI as the old client used, so enablement and/or migrating from the old version is very simple if you’re running VMware Horizon, Citrix, AVD, and/or Windows 365.

Install New Microsoft Teams for VDI

While installing the new Teams is fairly simple for non-VDI environments (by simply enabling the new version in the Teams Admin portal, or using your application manager to deploy the installer), a special method is required to deploy it on your VDI images, whether persistent or non-persistent.

Do not include and bundle the Microsoft Teams install with your Microsoft 365 (Office 365) deployment as these need to be installed separately.

Please Note: If you have deployed non-persistent VDI (Instant Clones), you’ll want to make sure you disable auto-updates, as these should be performed manually on the base image. For persistent VDI, you will want auto-updates enabled. See below for more information on configuring auto-updates.

You will also need to enable Microsoft Teams Media Optimization for the VDI platform you are using (in my case and example, VMware Horizon).

Considerations for New Teams on VDI

  • Auto-updates can be disabled via a registry key
  • New Teams client app uses the same VDI media optimization channels as the old teams (for VMware Horizon, Citrix, AVD, and W365)
    • If you have already enabled Media Optimization for Teams on VDI for the old version, you can simply install the client using the special bulk installer for all users as shown below, as the new client uses the existing media optimizations.
  • While it is recommended to uninstall the old client and install the new client, you can choose to run both versions side by side together, providing an option to your users as to which version they would like to use.

Enable Media Optimization for Microsoft Teams on VDI

If you haven’t previously for the old client, you’ll need to enable the Teams Media Optimizations for VDI for your VDI platform.

For VMware Horizon, we’ll create a GPO and set both “Enable HTML5 Features” and “Enable Media Optimization for Microsoft Teams” to “Enabled”. If you have done this for the old Teams app, you can skip this.

Please see below for the GPO setting locations:

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> Enable VMware HTML5 Features
Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> VMware WebRTC Redirection Features -> Enable Media Optimization for Microsoft Teams

When installing the VMware Horizon client on Windows computers, you’ll need to make sure you check and enable the “Media Optimization for Microsoft Teams” option on the installer if prompted. Your install may automatically include Teams Optimization and not prompt.

Screenshot of VMware View Client Install with Microsoft Teams Optimization
VMware Horizon Client Install with Media Optimization for Microsoft Teams

If you are using a thin client or zero client, you’ll need to make sure you have the required firmware version installed, and any applicable vendor plugins installed and/or configurables enabled.

Install New Microsoft Teams client on VDI

We will now install the new Teams app on both non-persistent images and persistent VDI VM guests. This method performs a live download and provisions the app as Administrator. If run un-elevated, an elevation prompt will appear:

  1. Download the new Microsoft Teams Bootstrapper: https://go.microsoft.com/fwlink/?linkid=2243204&clcid=0x409
  2. On your persistent or non-persistent VM, run the following command as an administrator: teamsbootstrapper.exe -p
  3. Restart the VM (and/or seal your image for deployment)
Installing
Install the new Teams for VDI (Virtual Desktop Infrastructure) with teamsbootstrapper.exe

See below for an example of the deployment:

C:\Users\Administrator.DOMAIN\Downloads>teamsbootstrapper.exe -p
{
  "success": true
}

You’ll note that running the command returns success equals true, and Teams is now installed for all users on this machine.

Install New Microsoft Teams client on VDI (Offline Installer using MSIX package)

Additionally, you can perform an offline installation by downloading the MSIX package and running the following command:

teamsbootstrapper.exe -p -o "C:\LOCATION\MSTeams-x64.msix"
New Teams admin provisioned offline install for VDI

For the offline installation, you’ll need to download the appropriate MSIX file in addition to the bootstrapper above.

Disable New Microsoft Teams Client Auto Updates

For non-persistent environments, you’ll want to disable the auto update feature and install updates manually on your base image.

To disable auto-updates for the new Teams client, configure the registry key below on your base image:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams

Create a DWORD value called “disableAutoUpdate”, and set to value of “1”.
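
For reference, a small PowerShell sketch (run elevated on the base image) that creates the key and value described above:

# Create the Teams key if needed, then disable auto-updates (DWORD = 1)
$teamsPath = "HKLM:\SOFTWARE\Microsoft\Teams"
New-Item -Path $teamsPath -Force | Out-Null
New-ItemProperty -Path $teamsPath -Name "disableAutoUpdate" -Value 1 -PropertyType DWord -Force | Out-Null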

New Teams app disappears after Optimization with OSOT

If you are using the VMware Operating System Optimization Tool (OSOT), you may notice that after installing New Teams in your base or golden image, that it disappears when publishing and pushing the image to your desktop pool.

The New Teams application is a Windows Store app, and organizations commonly choose to remove all Windows Store apps inside the golden image using the OSOT tool when optimizing the image. Doing this will remove New Teams from your image.

To workaround this issue, you’ll need to choose “Keep all Windows Store Applications” in the OSOT common options, which won’t remove Teams.

Using New Microsoft Teams with FSLogix Profile Containers

When using the new Teams client with FSLogix Profile Containers on non-persistent VDI, you must upgrade to FSLogix version 2.9.8716.30241 to support the new Teams client.

Confirm New Microsoft Teams VDI Optimization is working

To confirm that VDI Optimization is working on New Teams, open New Teams, click the “…” in the top right next to your user icon, click “Settings”, then click on “About Teams” on the far bottom of the Settings menu.

New Teams showing “VMware Media Optimized”

You’ll notice “VMware Media Optimized” which indicates VDI Optimization for VMware Horizon is functioning. The text will reflect for other platforms as well.

Uninstall New Microsoft Teams on VDI

The Teams bootstrapper utility can also remove Teams for all users on the machine by using the “-x” flag. Please see below for all the options for “teamsbootstrapper.exe”:

C:\Users\Administrator.DOMAIN\Downloads>teamsbootstrapper.exe --help
Provisioning program for Microsoft Teams.

Usage: teamsbootstrapper.exe [OPTIONS]

Options:
  -p, --provision-admin    Provision Teams for all users on this machine.
  -x, --deprovision-admin  Remove Teams for all users on this machine.
  -h, --help               Print help

Install New Microsoft Teams on VMware App Volumes / Citrix App Layering

As of April 9th, 2024, you can now deploy the New Teams (Teams 2.0) via VMware App Volumes, using the workflow provided at Capturing new teams as a package in App Volumes 4.x (97141) (vmware.com).

Previously, it appeared that the New Teams bootstrapper didn’t work with app packaging and app-attach technologies such as VMware App Volumes and Citrix App Layering; however, following the instructions in KB97141 will work.

The New Teams bootstrapper downloads and installs an MSIX app package to the computer running the bootstrapper.

Conclusion

It’s great news that we finally have a better performing Microsoft Teams client that supports VDI optimizations. With new Teams support for VDI reaching GA, and with the extensive testing I’ve performed in my own environment, I’d highly recommend switching over at your convenience!

Jul 28, 2023
 
NVIDIA GPU Manager

In May of 2023, NVIDIA released the NVIDIA GPU Manager for VMware vCenter. This appliance allows you to manage your NVIDIA vGPU Drivers for your VMware vSphere environment.

Since the release, I’ve had a chance to deploy it, test it, and use it, and want to share my findings.

In this post, I’ll cover the following (click to skip ahead):

  1. What is the NVIDIA GPU Manager for VMware vCenter
  2. How to deploy and configure the NVIDIA GPU Manager for VMware vCenter
    • Deployment of OVA
    • Configuration of Appliance
  3. Using the NVIDIA GPU Manager to manage, update, and deploy vGPU drivers to ESXi hosts

Let’s get to it!

What is the NVIDIA GPU Manager for VMware vCenter

The NVIDIA GPU Manager is an (OVA) appliance that you can deploy in your VMware vSphere infrastructure (using vCenter and ESXi) to act as a driver (and update) repository for vLCM (vSphere Lifecycle Manager).

In addition to acting as a repo for vLCM, it also installs a plugin on your vCenter that provides a GUI for browsing, selecting, and downloading NVIDIA vGPU host drivers to the local repo running on the appliance. These updates can then be deployed using LCM to your hosts.

In short, this allows you to easily select, download, and deploy specific NVIDIA vGPU drivers to your ESXi hosts using vLCM baselines or images, simplifying the entire process.

Supported vSphere Versions

The NVIDIA GPU Manager supports the following vSphere releases (vCenter and ESXi):

  • VMware vSphere 8.0 (and later)
  • VMware vSphere 7.0U2 (and later)

The NVIDIA GPU Manager supports vGPU driver releases 15.1 and later, including the new vGPU 16 release version.

How to deploy and configure the NVIDIA GPU Manager for VMware vCenter

To deploy the NVIDIA GPU Manager Appliance, we have to download an OVA (from NVIDIA’s website), then deploy and configure it.

See below for the step by step instructions:

Download the NVIDIA GPU Manager

  1. Log on to the NVIDIA Application Hub, and navigate to the “NVIDIA Licensing Portal” (https://nvid.nvidia.com).
  2. Navigate to “Software Downloads” and select “Non-Driver Downloads”
  3. Change the Filter to “VMware vCenter” (there is both VMware vSphere and VMware vCenter, so pay attention to select the correct one).
  4. To the right of “NVIDIA GPU Manager Plug-in 1.0.0 for VMware vCenter”, click “Download” (see below screenshot).
Screenshot of download link for NVIDIA GPU Manager for VMware vCenter
NVIDIA GPU Manager Download Page

After downloading the package and extracting, you should be left with the OVA, along with Release Notes, and the User Guide. I highly recommend reviewing the documentation at your leisure.

Deploy and Configure the NVIDIA GPU Manager

We will now deploy the NVIDIA GPU Manager OVA appliance:

  1. Deploy the OVA to either a cluster with DRS, or a specific ESXi host. In vCenter, either right click a cluster or host, and select “Deploy OVF Template”. Choose the GPU Manager OVA file, and continue with the wizard.
  2. Configure Networking for the Appliance
    • You’ll need to assign an IP Address, and relevant networking information.
    • I always recommend creating DNS entries (forward and reverse) for the IP.
  3. Finally, power on the appliance.

We must now create a role and service account that the GPU Manager will use to connect to the vCenter server.

While the vCenter Administrator account will work, I highly recommend creating a service account specifically for the GPU Manager that only has the required permissions that are necessary for it to function.

  1. Log on to your vCenter Server
  2. Click on the hamburger menu item on the top left, and open “Administration”.
  3. Under “Access Control” select Roles. vCenter-Roles
  4. Select New to create a new role. We can call it “NVIDIA Update Services”.
  5. Assign the following permissions:
    • Extension Privileges
      • Register Extension
      • Unregister Extension
      • Update Extension
    • VMware vSphere Lifecycle Manager Configuration Privileges
      • Configure Service
    • VMware vSphere Lifecycle Manager Settings Privileges
      • Read
    • Certificate Management Privileges
      • Create/Delete (Admins priv)
      • Create/Delete (below Admins priv)
    • ***PLEASE NOTE: The above permissions were provided in the documentation and did not work for me (resulted in an insufficient privileges error). To resolve this, I chose “Select All” for “VMware vSphere Lifecycle Manager”, which resolved the issue.***
  6. Save the Role
  7. On the left hand side, navigate to “Users and Groups” under “Single Sign On”
  8. Change the domain to your local vSphere SSO domain (vsphere.local by default)
  9. Create a new user account for the NVIDIA appliance, as an example you could use “nvidia-svc”, and choose a secure password.
  10. Navigate to “Global Permissions” on the left hand side, and click “Add” to create a new permission.
  11. Set the domain, and choose the new “nvidia-svc” service account we created, and set the role to “NVIDIA Update Services”, and check “Propagate to Children”.
  12. You have now configured the service account.

Now, we will perform the initial configuration of the appliance. To configure the appliance, we must do the following:

  1. Access the appliance using your browser and the IP (or FQDN) you configured above. GPU Manager Account Creation
  2. Create a new password for the administrative “vcp_admin” account. This account will be used to manage the appliance.
    • A secret key will be generated that will allow the password to be reset, if required. Save this key somewhere safe.
  3. We must now register the appliance (and plugin) with our vCenter Server. Click on “REGISTER”. NVIDIA GPU Manager Register
  4. Enter the FQDN or IP of your vCenter server, the NVIDIA Service account (“nvidia-svc” from example), and password.
  5. Once the GPU Manager is registered with your vCenter server, the remainder of the configuration will be completed from the vCenter GUI.
    • The registration process will install the GPU Manager Plugin in to VMware vCenter
    • The registration process will also configure a repository in LCM (this repo is being hosted on the GPU manager appliance).

We must now configure an API key on the NVIDIA Licensing portal, to allow your GPU Manager to download updates on your behalf.

  1. Open your browser and navigate to https://nvid.nvidia.com. Then select “NVIDIA LICENSING PORTAL”. Login using your credentials.
  2. On the left hand side, select “API Keys”.
  3. On the upper right hand, select “CREATE API KEY”.
  4. Give the key a name, and for access type choose “Software Downloads”. I would recommend extending the key validation time, or disabling key expiration. NVIDIA Download API Create Key
  5. The key should now be created.
  6. Click on “view api key”, and record the key. You’ll need to enter this later into the vCenter GPU Manager plugin.

And now we can finally log on to the vCenter interface, and perform the final configuration for the appliance.

  1. Log on to the vCenter client, click on the hamburger menu, and select “NVIDIA GPU Manager”.
  2. Enter the API key you created above in to the “NVIDIA Licensing Portal API Key” field, and select “Apply”.
  3. The appliance should now be fully configured and activated. GPU Manager Activated API Key
  4. Configuration is complete.

We have now fully deployed and completed the base configuration for the NVIDIA GPU Manager.

Using the NVIDIA GPU Manager to manage, update, and deploy vGPU drivers to ESXi hosts

In this section, I’ll provide an overview of how to use the NVIDIA GPU Manager to manage, update, and deploy vGPU drivers to ESXi hosts. But first, let’s go over the workflow…

The workflow is a simple one:

  1. Using the vCenter client plugin, you choose the drivers you want to deploy. These get downloaded to the repo on the GPU Manager appliance, and are made available to Lifecycle Manager.
  2. You then use Lifecycle Manager to deploy the vGPU Host Drivers to the applicable hosts, using baselines or images.

As you can see, there’s not much to it, despite all the configuration we had to do above. While it is very simple, it simplifies management quite a bit, especially if you’re using images with Lifecycle Manager.

To choose and download the drivers, load up the plugin, use the filters to filter the list, and select your driver to download.

GPU Manager downloading vGPU Driver
NVIDIA GPU Manager downloading vGPU Driver

As you can see in the example, I chose to download the vGPU 15.3 host driver. Once completed, it’ll be made available in the repo being hosted on the appliance.

Once LCM has had a chance to sync with the updated repo, the driver is then made available to be deployed. You can then deploy it using baselines or host images.

LCM Image Update with NVIDIA vGPU Driver from NVIDIA GPU Manager

In the example above, I added the vGPU 16 (535.54.06) host driver to my clusters update image, which I will then remediate and deploy to all the hosts in that cluster. The vGPU driver was made available from the download using GPU Manager.
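
After remediation completes, you may want to confirm which NVIDIA host driver actually landed on each host. A hedged PowerCLI sketch is below (the host name is an example, it assumes an active Connect-VIServer session, and VIB naming varies between vGPU releases, so adjust the filter as needed):

# List NVIDIA VIBs installed on a host to confirm the vGPU host driver version
$esxcli = Get-EsxCli -VMHost (Get-VMHost -Name "esxi01.example.com") -V2
$esxcli.software.vib.list.Invoke() | Where-Object { $_.Vendor -like "*NVIDIA*" } | Select-Object Name, Version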