May 26, 2024
 
NVIDIA vGPU

When using Omnissa Horizon (formerly VMware Horizon), you may note that NVENC offload is disabled when using RDSH with NVIDIA vGPU. This may also affect other VDI and Application Delivery platforms that use RDSH (Remote Desktop Session Hosts) and NVIDIA vGPU (Virtual GPU).

One of the key benefits of deploying NVIDIA vGPU with Omnissa Horizon is being able to use NVIDIA NVENC (the NVIDIA hardware encoder) to hardware encode your VDI session. This is also known as H.264/HEVC (H.265)/AV1 offload.

This means that the encoding and compression of the remoted video session is handled by the GPU, instead of the CPU, freeing up resources on the VM guest and host, reducing latency with encoding, and also providing a much better user experience.

The Observation

When deploying NVIDIA vGPU with vApps and Horizon Apps, you’ll note the following in the VMware Horizon Performance Tracker:

VMware Horizon Performance Tracker on RDSH showing software encoder

You can see above that the “Encoder Name” is “h264 4:2:0”. This means that the CPU software-based encoder is handling the encoding of the H.264 Blast session. While the environment is 3D accelerated, the remoting protocol encoding is not hardware offloaded.

You’ll also note the following:

  • VMware Horizon Agent High CPU Usage
  • “nvidia-smi” on the host and VM does not report the encoder being used

This behavior is expected: RDS session hosts cannot utilize NVENC, because RDSH hosts use a software framebuffer for user environment and desktop delivery, and that framebuffer cannot be encoded with NVENC.
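As a quick sanity check, you can watch the encoder directly with “nvidia-smi” from the host; on an RDSH host you should see no NVENC sessions or encoder utilization, while a properly offloaded session will appear. A minimal sketch, assuming a reasonably recent driver (subcommand availability and output format vary by version):

# list active NVENC sessions (empty when encoding falls back to the CPU)
nvidia-smi encodersessions

# stream utilization counters; the "enc" column should be non-zero during an offloaded Blast session
nvidia-smi dmon -s u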

Solution and/or Workaround

To work around this limitation, you have the option of using VDI desktops (preferably non-persistent Instant Clones) to deploy an “Application Pool” of vGPU-enabled VMs.

Note that this is a major change to your solution architecture, because pushing applications (and desktops) from Windows 10 or Windows 11 guest VMs is a one-to-one relationship (one user per VM), versus RDSH, which supports many users per VM.

Using Horizon, you could then push applications (not desktops) from these vGPU enabled Instant Clones, which would support NVENC and hardware offload, as shown in the example below:

VMware Horizon Performance Tracker showing NVIDIA NvEnc Hardware encoder on instant clone

In the image above, you’ll note that the “Encoder Name” is “NVIDIA NvEnc HEVC 4:2:0” showing us that NvEnc hardware offload and encoding is functioning and being used.

Note that using this method to deploy Horizon Apps will require more total framebuffer; however, this may be offset since a smaller framebuffer profile can be assigned to each individual VM, versus the large framebuffer profile typically assigned and attached to an RDSH host.

May 25, 2024
 
VDI Gaming Demo with NVIDIA vGPU and Omnissa Horizon

Here’s a fun quick VDI Gaming Demo with NVIDIA vGPU and Omnissa Horizon 8, using an NVIDIA L4 GPU and the L4-12Q Profile.

This video is just for fun, and shows some of the capabilities of the technology, hardware, and software, in this case with cloud gaming.

The NVIDIA vGPU solution provides the ability to “slice” and create multiple Virtual GPU (vGPU) devices for your Virtual Machines and Virtual workloads.

In this video:

  • Quick Introduction to NVIDIA vGPU with Omnissa Horizon 8
  • Validating NVIDIA vGPU functionality (with DirectX Diagnostics, Horizon Performance Tracker)
  • MechWarrior 5 Cloud Gaming
  • Heaven Benchmark

Environment Details:

  • 2 x HPE DL360p Gen8 Servers (2 x 10 Core Procs, 384GB of RAM)
    • 1 Server with NVIDIA A2
    • 1 Server with NVIDIA L4
  • VMware vSphere 8U2
  • Omnissa Horizon 8

Hope you enjoy the video and demo!

May 22, 2024
 
Default New User Registry Hive

Today we’re going to dive into how to modify or add to the default user registry hive on Windows. This is the registry hive that is provisioned to new users when they log on to Windows for the first time.

These steps are required to make modifications to the default user registry, whether to configure the user’s environment and/or to configure registry settings required by applications installed on the Windows system, so they provide a seamless user experience.

I regularly use this method to modify the default user registry on non-persistent VDI golden images for use with Omnissa Horizon (formerly VMware Horizon), however this can be used on traditional Windows systems (non-VDI), and/or other VDI platforms such as Citrix, AVD, and more!

Load the Default User Registry Hive

Let’s go ahead and get started! We’ll need to open “regedit” with administrative credentials (either logon as an admin, or “Run As” administrator). Then we’ll expand “HKEY_USERS”.

Next, we’ll go to “File” and then “Load Hive”. This will open a Windows File Explorer window, where we’ll navigate to and select the following file:

C:\Users\Default\NTUSER.DAT

Once we select the “NTUSER.DAT” file, we’ll be prompted to load the hive and give it a key name. You can call it whatever you’d like (as long as it doesn’t conflict with an existing key), but for this example I’ll call it “Default-User”.

You’ll now notice that the default user’s hive (what becomes a new user’s “HKEY_CURRENT_USER”) is loaded under the key you specified above; in our case it’s loaded as “Default-User”.

You can now make any modifications to the default user’s registry, including importing keys. If you’re using a “.reg” file, make sure you update it to reflect the registry hive location you’ve loaded, as shown in the example below.
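For example, a vendor-provided “.reg” file will typically target HKEY_CURRENT_USER, so you’d repoint it at the loaded hive before importing it. A minimal sketch, using the “Default-User” key name from above and a made-up application key and value:

Windows Registry Editor Version 5.00

; originally targeted [HKEY_CURRENT_USER\Software\ExampleApp]
; repointed at the loaded default user hive:
[HKEY_USERS\Default-User\Software\ExampleApp]
"SkipFirstRunWizard"=dword:00000001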

Unload the Default User Registry Hive

Once you’ve made the modifications to the default user registry hive, whenever new users log on, they will be provisioned this hive.

We can now go ahead and unload the registry hive.

We’ll select the “Default-User” key (or whatever you called it), and select “Unload Hive”.

This will properly and gracefully close the default users registry hive.
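If you’d rather script this against a golden image instead of using the regedit GUI, the same load/modify/unload flow can be done with the built-in “reg.exe” from an elevated command prompt. A minimal sketch (the key name and example value are placeholders):

:: load the default user hive under HKEY_USERS\Default-User
reg load "HKU\Default-User" "C:\Users\Default\NTUSER.DAT"

:: make your changes (example application key/value only)
reg add "HKU\Default-User\Software\ExampleApp" /v SkipFirstRunWizard /t REG_DWORD /d 1 /f

:: gracefully unload the hive when finished
reg unload "HKU\Default-User"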

May 09, 2024
 
NVIDIA vGPU Network Licensing Token

When deploying NVIDIA vGPU across a VDI environment, I often see IT teams deploy the licensing token directly on persistent VMs, or on the non-persistent base golden image. This often causes a nightmare when the client configuration token must be updated.

I highly recommend considering network placement of the NVIDIA vGPU Licensing Client Configuration token file for your deployments.

In this post we’ll review the Client Configuration Token File, why you’d want to place it on the network, and how to do so.

What is the Client Configuration Token File

The Client Configuration Token File tells the NVIDIA vGPU driver on your VM where to find the licensing server. The token points the driver to either your CLS (Cloud License Service) or DLS (Delegated License Service) instance, from which the applicable license is requested and issued.

By default, the vGPU driver will check the following location for the token:

C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\

While this is common, there’s a much better (and easier) method for deploying the Client Configuration Tokens: using a network share to ease management of these files.

Placing the NVIDIA vGPU Licensing client configuration token on a network share

Using the Windows Registry, along with a GPO (Group Policy Object), you can configure a network location for the NVIDIA Client Configuration Token, so that your systems, whether persistent or non-persistent, will use this location.

In the event of a token change, you can simply remove the old token and place the new configuration token in the share, and all systems will have immediate access to it without manually updating individual systems.

Here we’ll use the registry and a GPO to configure the token location:

  1. Using an administrative account, create a folder called “vGPU-Licensing” on your domain SYSVOL share.
    • Example: \\Domain.com\SYSVOL\Domain.com\vGPU-Licensing\
  2. Place your NVIDIA Licensing Client Configuration Token in this folder (screenshot: NVIDIA Licensing Token in SYSVOL)
  3. Open “Group Policy Management” and create a new GPO called “VDI-NVIDIA-LicensingToken”
  4. Navigate to: Computer Configuration -> Preferences -> Windows Settings -> Registry
  5. Right Click and select New -> Registry Item
  6. Under the New Registry Window Enter the following:
    • Action: Update
    • Hive: HKEY_LOCAL_MACHINE
    • Key Path: SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing
    • Value Name: ClientConfigTokenPath
    • Value Type: REG_SZ
    • Value Data: \\Domain.com\SYSVOL\Domain.com\vGPU-Licensing
    • Change the network location to match your environment and your setup
  7. After populating the fields, it should be similar to the following example (screenshot: NVIDIA GPO Registry Client Configuration Token)
  8. Hit Apply, then OK, then link the newly created GPO to the OU containing your NVIDIA vGPU-enabled VDI VM guests.

That’s it! All we did was create a GPO that configures the registry value “ClientConfigTokenPath” inside of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing\ and sets it to a network share that holds the configuration token.
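If you want to test this on a single VM first (or handle a machine that isn’t picking up the GPO), the same value from the steps above can be set directly from an elevated command prompt; a quick sketch using the example share path from earlier:

:: set the token path manually on one VM (same value the GPO applies)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing" /v ClientConfigTokenPath /t REG_SZ /d "\\Domain.com\SYSVOL\Domain.com\vGPU-Licensing" /f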

Please note: the NVIDIA licensing service accesses the network location using the service’s security context (not the user’s context), which is why I chose the SYSVOL share, as computer accounts have read access to this location (for example, for reading GPOs at boot and user logon).

Additionally, note that the registry key and location may vary if you’re using older versions of the NVIDIA vGPU Driver. The key used in this post is for versions 16.x and 17.x.
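Once the GPO has applied and the VM has rebooted, you can confirm from inside the guest that a license was acquired. On the vGPU guest drivers I’ve worked with, “nvidia-smi -q” includes a licensing section, so a quick check from a command prompt looks something like this (output wording varies by driver branch):

:: look for the "License Status" line in the full query output
nvidia-smi -q | findstr /i "license"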

May 09, 2024
 
Migrating the VMware App Volumes SQL Database to a New Server

In this post, I’ll go over the process of migrating a VMware App Volumes SQL database to a new server (or location), and also go over the reasons why you may want to do this.

VMware App Volumes stores all of its configuration data inside of a Microsoft SQL database. This database is used and shared by all the App Volumes Managers in an environment.

Please make sure before any modification of your deployment that you have the proper backups in place.

Why move the database?

There’s a number of reasons why you may want to move your VMware App Volumes SQL Database. These include (but are not limited to):

  • Migrating from Standard SQL Server Deployment to a highly available Microsoft SQL Always On Availability Group
  • Deploying a new Microsoft SQL Server and decommissioning your old SQL Server

In any case, we need the flexibility and ability to be able to move and migrate the SQL database to a new server and/or location.

Considerations

When moving the VMware App Volumes SQL Database, you’ll need to shut down all of your VMware App Volumes Manager Servers.

Note that while this may result in the inability to attach App Volumes VMDKs to new VDI sessions, if your environment is properly configured you shouldn’t have any interruption of App Volumes apps already attached to existing sessions. If you’re in a zero-downtime environment, make sure any users who may require apps log on and attach their apps before you start the migration and maintenance.

ODBC Configuration will be updated/changed during this process.

Always make a backup of your App Volumes Manager servers and SQL database before making any changes.

Migrating the App Volumes Database to a new SQL Server

To migrate the database, we’ll essentially need to shut down all the App Volumes services, migrate the database, modify a configuration file, bring up a single App Volumes Manager server and confirm everything is working, and then update and bring online any additional App Volumes Manager servers.

Perform the following steps to migrate the database:

  1. Perform Backups
    1. Snapshot App Volumes Manager Servers
    2. Backup SQL Database
    3. Backup the “database.yml” file in C:\Program Files (x86)\CloudVolumes\Manager\config
  2. RDP or Console Access all VMware App Volumes Manager Servers
  3. Stop all the App Volumes Services on ALL App Volumes Manager Servers
  4. Migrate SQL Database to a new Microsoft SQL Server (Standard deployment, or High Availability SQL Always-On)
  5. Update your ODBC Configuration on ALL your App Volumes Manager Servers
    1. Open “ODBC Data Source Administrator (64-Bit)” from the Windows Control Panel. Identify your App Volumes ODBC Connection, after selecting it, click on “Configure”. Walk through the wizard and update it to the new location of the SQL Database and server. Make sure you test and confirm the connection is working.
    2. Open “ODBC Data Source Administrator (32-Bit)” from the Windows Control Panel. Identify your App Volumes ODBC Connection, after selecting it, click on “Configure”. Walk through the wizard and update it to the new location of the SQL Database and server. Make sure you test and confirm the connection is working.
  6. If you’re using SQL Authentication, you’ll need to update the database.yml file on ALL of your App Volumes Manager Servers.
    1. Open C:\Program Files (x86)\CloudVolumes\Manager\config\database.yml
    2. Under “production:” add and/or modify the following two entries:
      • username: <SQL Username>
      • password: <SQL password>
    3. Replace both <SQL Username> and <SQL password> with the App Volumes SQL service account that the App Volumes Manager uses to access the SQL database (see the example snippet after these steps). Please note: after starting the services, the password will be removed from the configuration file.
  7. You can now start the App Volumes Manager services on ONE of your App Volumes Managers. Please make sure you start only one, as this will allow you to test the configuration, and it will also perform a discovery of the environment to determine active sessions and update the database.
  8. Monitor the logs, and activity. You’ll want to confirm that everything is working.
  9. After you have confirmed the success of the migration and functionality of one of the App Volumes Servers, and after the activity of that server has become idle, you can now start the services on your other App Volumes Managers.
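For reference, the relevant portion of “database.yml” with SQL Authentication would look something like the sketch below; the account name and password are placeholders, and any existing entries in your file (server, database name, etc.) stay as they are:

production:
  # ...existing entries from your environment remain unchanged...
  username: svc_appvolumes     # placeholder App Volumes SQL service account
  password: YourPasswordHere   # cleared from the file after the services start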

You have now successfully migrated your App Volumes SQL DB to a new server.