Jun 26, 2024
 
vSphere 8U3 vGPU Mixed-Size Profiles

I’m happy to announce that you can now deploy NVIDIA vGPU Mixed Size Virtual GPU types with VMware vSphere 8U3, a feature also known as “Heterogeneous Time-Slice Sizes” or “Heterogeneous vGPU types”.

VMware vSphere 8U3 was released yesterday (June 26th, 2024) and brought with it numerous new features and functionality. However, mixed vGPU types deserve their own blog post, as they’re a major game-changer for those who use NVIDIA vGPU for AI and VDI workloads, including Omnissa Horizon.

NVIDIA vGPU (Virtual GPU) Types

When deploying NVIDIA vGPU, you configure Virtual GPU types that provide RTX Virtual Workstation (vWS, Q-series), Virtual PC (vPC, B-series), or Virtual Apps (vApps, A-series) class capabilities to virtual machines.

On top of the classifications above, you also need to configure the framebuffer memory size (VRAM) allotted to the vGPU. For example, an L4-4Q profile provides vWS (Q-series) class capabilities with 4GB of framebuffer.

Historically, when you powered on the first VM, the physical GPU providing vGPU would then only be able to serve that Virtual GPU type (class and framebuffer size) to other VMs, locking all VMs running on that GPU to the same vGPU type. If you had multiple GPUs in a server, you could run different vGPU types on the different physical GPUs; however, each GPU would be locked to the vGPU type of the first VM started on it.

NVIDIA Mixed Size Virtual GPU Type functionality

Earlier this year, NVIDIA added the ability to deploy heterogeneous mixed vGPU types through the vGPU drivers, first with the ability to run different classifications (you could mix vWS and vPC), and later adding support for mixed-size framebuffers (for example, mixing 4Q and 8Q profiles on the same GPU).

While the NVIDIA vGPU solution supported this, VMware vSphere did not immediately add support, so you couldn’t take advantage of it until the release of VMware vSphere 8U3 (VMware vCenter 8U3 and VMware ESXi 8U3).

Mixing different classifications (vWS with vPC) requires no configuration other than using a host driver and guest driver that support it; however, mixing different-sized framebuffers needs to be enabled on the host.

To Enable vGPU Mixed Size Virtual GPU types:

  1. Log on to VMware vCenter
  2. Confirm all vGPU enabled Virtual Machines are powered off
  3. Select the host in your inventory
  4. Select the “Configure” tab on the selected host
  5. Navigate to “Graphics” under “Hardware”
  6. Select the GPU from the list, click “Edit”, and change the “vGPU Mode” to “Mixed Size”
Screenshot showing the "Graphics Properties" for GPU adapters on VMware ESXi 8U3 with the "vGPU Mode" set to "Mixed Size"

Once you configure this, you can now deploy mixed-size vGPU profiles.

When you SSH in to your host, you can run a query to confirm it’s configured:

[root@ESXi-HOST:~] nvidia-smi -q

    vGPU Device Capability
        Fractional Multi-vGPU             : Supported
        Heterogeneous Time-Slice Profiles : Supported
        Heterogeneous Time-Slice Sizes    : Supported
        vGPU Heterogeneous Mode           : Enabled

It’s supported, and enabled!
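
As a further check, nvidia-smi on the host also offers a “vgpu” subcommand. Assuming your vGPU Manager build includes it, the -c (creatable) flag lists which vGPU types can currently be created on each physical GPU, which is handy once mixed sizes are in play and VMs begin consuming the GPU:

[root@ESXi-HOST:~] nvidia-smi vgpu -c

The output varies with what’s already running on the GPU; the -s flag similarly lists all supported vGPU types.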

Additional Notes

Please note the following:

  • When restarting your hosts, resetting the GPU, and/or restarting the vGPU Manager daemon, the ESXi host will change back to its default “Same Size” mode. You will need to manually change it back to “Mixed Size”.
  • When enabling mixed-size vGPU types, the maximum number of some vGPU profile types may be reduced compared to running the GPU in equal-size mode (to allow for other profile types). Please see the “Virtual GPU Types for Supported GPUs” link in the additional links for information on mixed-size vGPU types.
  • Only “Best Effort” and “Equal Share” schedulers are supported with mixed-mode vGPU; the “Fixed Share” scheduler is not supported (see the check below).
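
If you’re unsure which scheduler a host is using, NVIDIA documents controlling the vGPU scheduling policy on ESXi via the nvidia module’s RmPVMRL registry value (0x00 is the Best Effort default, 0x01 is Equal Share). A minimal sketch from the ESXi shell, based on those documented values; a host reboot is required after changing the parameter:

[root@ESXi-HOST:~] esxcli system module parameters list -m nvidia | grep NVreg_RegistryDwords
[root@ESXi-HOST:~] esxcli system module parameters set -m nvidia -p "NVreg_RegistryDwords=RmPVMRL=0x01"
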
Jun 22, 2024
 
Join me at VMware Explore 2024
I hope that you’ll have a chance to join me at VMware Explore 2024 this year!

VMware Explore 2024 is being held at the Venetian and Palazzo in Las Vegas, from August 26 to 29, 2024.

Image showcasing that Registration is open for VMware Explore 2024

Register here for VMware Explore 2024!

VMware Explore is one of my favorite and most important annual conferences, for a number of reasons.

Networking

Through technology we make friends and connections in our community. Chances are, you’ll see all your favorite people, and that community, at VMware Explore.

You’ll have the chance to catch up with communities like VMUG (VMware User Group), the vExpert community, the vCommunity, and I’m sure even some folks from World of EUC (like myself).

Additionally, you’ll get to network with like-minded people who are passionate about the technology, experts in the field, and a diverse group of individuals from all over the world.

Learning

I can’t overstate how important the technical sessions are…

The sessions at VMware Explore help you learn in so many ways:

  • Learn about products you’re interested in but have no experience with
  • Learn more about the products you’re familiar with, and become an expert!
  • Chat with Product Managers, presenters, and staff about the solutions you work with, or are curious about
  • Catch up with and connect with experts (like vExperts)!

You can also save on certification by taking exams at VMware Explore, where they’re offered at a 50% discount!

Business

As the President and Owner of Digitally Accurate Inc. (a VMware and Broadcom Partner), I find attending this conference crucial, as it allows me to connect with customers, vendors, and VMware/Broadcom staff.

Deals get done, laughs are had, and these interactions really help advance and move business forward.

You get to have fun

And don’t forget, this event is FUN! There are numerous events and parties held by vendors and community programs (such as the vCommunity, VMUG, and more).

I highly recommend you keep your eyes glued to Discord, Slack, e-mail, and the web to find the invite links to all the parties. Ask around!

Join me at VMware Explore 2024

With all that said, I hope to see you there!

Follow these hashtags to stay up to date

  • #VMwareExplore
  • #VMwareExploreHOL
  • #VMwareExploreSelfie
  • #VMwareExploreParty

Follow the official VMware Explore Social Media pages

May 25, 2024
 
VDI Gaming Demo with NVIDIA vGPU and Omnissa Horizon

Here’s a fun quick VDI Gaming Demo with NVIDIA vGPU and Omnissa Horizon 8, using an NVIDIA L4 GPU and the L4-12Q Profile.

This video is just for fun, showing some of the capabilities of the technology, hardware, and software, in this case with cloud gaming.

The NVIDIA vGPU solution provides the ability to “slice” a physical GPU and create multiple Virtual GPU (vGPU) devices for your Virtual Machines and virtual workloads. For example, the 24GB NVIDIA L4 used in this video can host two of the 12Q profiles shown.

In this video:

  • Quick Introduction to NVIDIA vGPU with Omnissa Horizon 8
  • Validating NVIDIA vGPU functionality (with DirectX Diagnostics and Horizon Performance Tracker)
  • MechWarrior 5 Cloud Gaming
  • Heaven Benchmark

Environment Details:

  • 2 x HPE DL360p Gen8 Servers (2 x 10 Core Procs, 384GB of RAM)
    • 1 Server with NVIDIA A2
    • 1 Server with NVIDIA L4
  • VMware vSphere 8U2
  • Omnissa Horizon 8

Hope you enjoy the video and demo!

May 09, 2024
 
NVIDIA vGPU Network Licensing Token

When deploying NVIDIA vGPU across a VDI environment, I often see IT teams deploy the licensing token directly on persistent VMs, or on the non-persistent base golden image. This often causes a nightmare when the client configuration token must be updated.

I highly recommend considering network placement of the NVIDIA vGPU Licensing Client Configuration token file for your deployments.

In this post we’ll review the Client Configuration Token File, why you’d want to place it on the network, and how to do so.

What is the Client Configuration Token File

The Client Configuration Token File tells the NVIDIA vGPU driver in your VM where to find the licensing server. This token points the driver to either a CLS (Cloud License Service) or DLS (Delegated License Service) instance, from which the applicable license is requested and issued.

By default, the vGPU driver will check the following location for the token:

C:\Program Files\NVIDIA Corporation\vGPU Licensing\ClientConfigToken\

While this is the common approach, there’s a much better (and easier) method you can use to deploy the Client Configuration Tokens: a network share, which eases management of these files.

Placing the NVIDIA vGPU Licensing client configuration token on a network share

Using the Windows Registry, along with a GPO (Group Policy Object), you can configure a network location for the NVIDIA Client Configuration Token, so that your systems, whether persistent or non-persistent, will use this location.

In the event of a token change, you can simply remove the old token and place the new configuration token on the share, and all systems will have immediate access to it, without you having to update individual systems manually.

Here we’ll use the registry and a GPO to configure the token location:

  1. Using an administrative account, create a folder called “vGPU-Licensing” on your domain SYSVOL share.
    • Example: \\Domain.com\SYSVOL\Domain.com\vGPU-Licensing\
  2. Place your NVIDIA Licensing Client Configuration Token in this folder
Screenshot showing the NVIDIA Licensing Client Configuration Token placed in the SYSVOL folder
  3. Open “Group Policy Management” and create a new GPO called “VDI-NVIDIA-LicensingToken”
  4. Navigate to: Computer Configuration -> Preferences -> Windows Settings -> Registry
  5. Right Click and select New -> Registry Item
  6. Under the New Registry Window Enter the following:
    • Action: Update
    • Hive: HKEY_LOCAL_MACHINE
    • Key Path: SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing
    • Value Name: ClientConfigTokenPath
    • Value Type: REG_SZ
    • Value Data: \\Domain.com\SYSVOL\Domain.com\vGPU-Licensing
    • Change the network location to match your environment and your setup
  7. After populating the fields, it should be similar to the following example:
Screenshot showing the GPO Registry item for the NVIDIA Client Configuration Token
  8. Hit Apply, then OK, then link the newly created GPO to the OU where your NVIDIA vGPU-enabled VDI guests are located.

That’s it! All we did was create a GPO which configures the registry value “ClientConfigTokenPath” inside of HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing\ and set it to a network share that holds the configuration tokens.
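
If you’d like to test on a single VM before rolling out the GPO, you can set the same value by hand. A minimal sketch using the built-in reg.exe from an elevated command prompt (adjust the UNC path to match your environment):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\nvlddmkm\Global\GridLicensing" /v ClientConfigTokenPath /t REG_SZ /d "\\Domain.com\SYSVOL\Domain.com\vGPU-Licensing"

After setting it, restart the VM so the driver re-reads the licensing configuration.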

Please note, the NVIDIA licensing service accesses the network location using the service’s security context (not the user’s context), which is why I chose the SYSVOL share, as computer accounts have read access to this location (for example, for reading GPOs on boot and at user logon).
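
If you’re using a different share and want to confirm your computer accounts can read it, you can inspect the share’s permissions with the built-in icacls tool (the path below is the example share from this post):

icacls \\Domain.com\SYSVOL\Domain.com\vGPU-Licensing

Look for an entry granting read access to a group containing your VDI computer accounts (such as Domain Computers).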

Additionally, note that the registry key and location may vary if you’re using older versions of the NVIDIA vGPU driver. The key used in this post applies to versions 16.x and 17.x.

May 09, 2024
 
Migrating a VMware App Volumes SQL Database to a New Server

In this post, I’ll go over the process of migrating a VMware App Volumes SQL database to a new server (or location), and also go over the reasons why you may want to do this.

VMware App Volumes stores all of its configuration data inside a Microsoft SQL database. This database is used and shared by all the App Volumes Managers in an environment.

Please make sure before any modification of your deployment that you have the proper backups in place.

Why move the database?

There are a number of reasons why you may want to move your VMware App Volumes SQL database. These include (but are not limited to):

  • Migrating from Standard SQL Server Deployment to a highly available Microsoft SQL Always On Availability Group
  • Deploying a new Microsoft SQL Server and decommissioning your old SQL Server

In any case, we need the flexibility to move and migrate the SQL database to a new server and/or location.

Considerations

When moving the VMware App Volumes SQL Database, you’ll need to shut down all of your VMware App Volumes Manager Servers.

Note that while this may result in the inability to attach App Volumes VMDKs to new VDI sessions, if your environment is properly configured you shouldn’t have any interruption to App Volumes apps already attached to existing sessions. If you’re in a zero-downtime environment, make sure any users who may require apps log on and attach their apps before you start the migration and maintenance.

ODBC Configuration will be updated/changed during this process.

Always make a backup of your App Volumes Manager servers and SQL database before making any changes.

Migrating the App Volumes Database to a new SQL Server

To migrate the database, we’ll essentially shut down all the App Volumes services, migrate the database, modify a configuration file, bring up one single App Volumes Manager server and confirm everything is working, and then update and bring online any additional App Volumes Manager servers.

Perform the following steps to migrate the database:

  1. Perform Backups
    1. Snapshot App Volumes Manager Servers
    2. Backup SQL Database
    3. Backup the “database.yml” file in C:\Program Files (x86)\CloudVolumes\Manager\config
  2. RDP or Console Access all VMware App Volumes Manager Servers
  3. Stop all the App Volumes Services on ALL App Volumes Manager Servers
  4. Migrate SQL Database to a new Microsoft SQL Server (Standard deployment, or High Availability SQL Always-On)
  5. Update your ODBC Configuration on ALL your App Volumes Manager Servers
    1. Open “ODBC Data Source Administrator (64-Bit)” from the Windows Control Panel. Identify your App Volumes ODBC Connection, after selecting it, click on “Configure”. Walk through the wizard and update it to the new location of the SQL Database and server. Make sure you test and confirm the connection is working.
    2. Open “ODBC Data Source Administrator (32-Bit)” from the Windows Control Panel. Identify your App Volumes ODBC Connection, after selecting it, click on “Configure”. Walk through the wizard and update it to the new location of the SQL Database and server. Make sure you test and confirm the connection is working.
  6. If you’re using SQL Authentication, you’ll need to update the database.yml file on all of your App Volumes Manager servers (see the example after this list).
    1. Open C:\Program Files (x86)\CloudVolumes\Manager\config\database.yml
    2. Under “production:” add and/or modify the following two entries:
      • username: <SQL Username>
      • password: <SQL password>
    3. Replace both <SQL Username> and <SQL password> with the App Volumes SQL service account that the App Volumes Manager uses to access the SQL database. Please note: after the services start, the password will be removed from the configuration file.
  7. You can now start the App Volumes Manager services on ONE of your App Volumes Managers. Please make sure you start only one, as this will allow you to test the configuration; it will also perform a discovery of the environment to determine active sessions and update the database.
  8. Monitor the logs and activity. You’ll want to confirm that everything is working.
  9. After you’ve confirmed the success of the migration and the functionality of that first App Volumes server, and after its activity has become idle, you can start the services on your other App Volumes Managers.
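
For reference, here’s a minimal sketch of the relevant portion of database.yml with the SQL Authentication entries added. The account name and password below are placeholders for illustration; the rest of the file should be left as-is:

production:
  # ...existing adapter/server/database settings left unchanged...
  username: svc_appvolumes
  password: MyServicePassword

Remember that App Volumes removes the password from this file once the services start, so you’ll need to re-add it any time you repeat this process.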

You have now successfully migrated your App Volumes SQL DB to a new server.