Sep 04 2022
 

With VMware ESXi 6.5 and 6.7 going End of Life on October 15th, 2022, many of you are looking for options to update the hosts in your homelab. In my case, that means putting ESXi 7.0 on HP Proliant DL360p Gen8 servers.

As far as support goes, HPE last provided a custom ESXi installer for version 6.5 U3, released in December 2019. This was the “last Pre-Gen9 custom image” released, as ESXi 7.0 on the DL360p Gen8 is completely unsupported.

Update: Check out my post covering ESXi 8.0 on HPE Proliant DL360p Gen8 servers!

ESXi 6.7 or higher on the Gen8 Servers

The jump from 6.5 to 6.7 was a little easier, as you could use the 6.5 custom installer, and then upgrade to 6.7. For the most part, as long as the hardware itself was supported, you were in pretty good shape.

Additionally, with the HPE vibsdepot loaded into VMware Update Manager (now known as Lifecycle Manager), you could also keep all the HPE drivers and agents up to date.

ESXi 7.0 on the Gen8 Servers

Some were lucky enough to upgrade their existing installs to 7.0 with few or no problems, however the general consensus online was to expect issues. There were some major driver changes, which I believe at one point led to an advisory to perform a fresh install when upgrading from older versions, even on fully supported configurations with newer-generation servers such as the Proliant Gen9 and Gen10.

In my setup, I have the following:

  • 2 x HPE Proliant DL360p Gen8 Servers
    • Dual Intel Xeon E5-2660v2 Processors in each server
    • USB and/or SD for booting ESXi
    • No other internal storage
  • External SAN iSCSI Storage

Concerns and Considerations

My main concern is not only to have a stable and functioning ESXi 7 instance, but also, if possible, to keep the HPE drivers, agents, and iLO integrations.

While this is completely unsupported, you do need to make sure the individual components of your configuration, such as the processor and PCIe cards, are supported, even if the server as a whole is not.

Make sure you reference your hardware on the VMware Compatibility Guide (HCL).

In my case, my processors were supported, however my RAID controller was not. So theoretically, since I’m not using my RAID controllers, I should be fine.

The process – Installing ESXi 7.0

I was able to install ESXi 7.0 on my HPE Proliant Gen8 servers by performing the following steps.

  1. Download the Generic ESXi installer from VMware directly.
    1. Link: Download VMware vSphere
  2. Download the “HPE Custom Addon for ESXi 7.0”.
    1. Link: HPE Custom Addon for ESXi 7.0 U3 for July 2022
  3. Boot server, install using the Generic Installer downloaded above.
  4. Mount NFS or iSCSI datastore.
  5. Copy HPE Custom Addon for ESXi zip file to datastore.
  6. Enable SSH on host (or use console).
  7. Place host into maintenance mode.
  8. Run “esxcli software vib install -d /vmfs/volumes/datastore-name/folder-name/HPE-703.0.0.10.9.1.5-Jul2022-Addon-depot.zip” from the command line.
  9. The install will run and complete successfully.
  10. Restart your server as needed. You’ll now notice that not only were the HPE drivers installed, but also agents like the Agentless Management agent, as well as the iLO integrations.
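For reference, here’s roughly what steps 7 through 9 look like from the ESXi shell. This is only a sketch; the datastore and folder names are placeholders from my environment, so adjust the path to wherever you copied the zip file:

# Enter maintenance mode, install the HPE addon depot, then reboot
esxcli system maintenanceMode set --enable true
esxcli software vib install -d /vmfs/volumes/datastore-name/folder-name/HPE-703.0.0.10.9.1.5-Jul2022-Addon-depot.zip
esxcli system shutdown reboot --reason "HPE Addon for ESXi install"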

You’ll now have a functioning instance.

HP Proliant DL360p Gen8 running ESXi 7.0

In my case everything was working, except for the “Smart Array P420i” RAID Controller, which I don’t use anyway.

Additionally, if you have a vCenter instance running, make sure that you add the HPE vibsdepot repo to your Lifecycle Manager. After you add the repo, create a baseline and attach it to the host, then scan, stage, and remediate the host, which will further update all the HPE-specific drivers and software.
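If you want to confirm which HPE components actually landed on the host (either right after installing the addon, or after remediation), a quick check from the ESXi shell might look like this. It’s just a sketch, and the grep pattern assumes the vendor tags its VIBs with “HPE”:

# List installed VIBs and filter for the HPE-provided ones
esxcli software vib list | grep -i hpe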

Aug 14 2022
 
HP Printer on VDI

When it comes to troubleshooting login times with non-persistent VDI (VMware Horizon Instant Clones), I often find delays associated with printer drivers not being included in the golden image. In this post, I’m going to show you how to add a printer driver to an Instant Clone golden image!

Printing with non-persistent VDI and Instant Clones

In most environments, printers will be mapped for users during logon. If a printer is mapped or added and the driver is not added to the golden image, it will usually be retrieved from the print server and installed, adding to the login process and ultimately leading to a delay.

Due to the nature of non-persistent VDI and Instant Clones, every time a user logs in and gets a new VM, the driver is downloaded and installed again, creating a redundant process that wastes time and network bandwidth.

To avoid this, we need to inject the required printer drivers into the golden image. You can add numerous drivers, and should include all the drivers your users are expected to need.

An important consideration: use Universal Print Drivers as much as possible. A Universal Print Driver supports numerous different printers from the same vendor, which allows you to install one driver to cover many devices.

How to add a printer driver to an instant clone golden image

Below, I’ll show you how to inject a driver into the Instant Clone golden image. Note that this doesn’t actually add a printer; it only installs the printer driver into the Windows operating system so it is available for a printer to be configured and/or mapped.

Let’s get started! In this example we’ll add the HP Universal Driver. These instructions work on both Windows 10 and Windows 11 (as well as Windows Server operating systems):

  1. Click Start, type in “Print Management”, and open “Print Management”. You can also click Start, Run, and type “printmanagement.msc”.
    Launch Print Management
  2. On the left hand side, expand “Print Servers”, then expand your computer name, and select “Drivers”.
    Print Management Drivers
  3. Right click on “Drivers” and select “Add Driver”.
    Print Management Add Driver
  4. When the “Welcome to the Add Printer Driver Wizard” opens, click Next.
    Add Printer Driver Wizard
  5. Leave the default for the architecture. It should default to the architecture of the golden image.
  6. When you are at the “Printer Driver Selection” stage, click on “Have Disk”.
    Print Management Add Printer Driver Location
  7. Browse to the location of your printer driver. In this example, we navigate to the extracted HP Universal Print Driver.
    Browse Printer Driver Location
  8. Select the driver you want to install.
    VDI Select Printer Driver to Install
  9. Click on Finish to complete the driver installation.
    Finish installing Instant Clone Printer Driver

The driver you installed should now appear in the list, as it has been installed into the operating system and is available should a user add a printer or have a printer automatically mapped.
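If you’d rather script the driver injection as part of your golden image build, staging and installing the driver from an elevated command prompt should achieve the same result. Treat this as a sketch only: the folder path, INF file name, and driver model string below are placeholders and need to match your extracted HP Universal Print Driver package:

rem Stage the driver package in the Windows driver store
pnputil.exe /add-driver "C:\Drivers\HP-UPD\*.inf" /install
rem Install the printer driver itself (INF name and model string are examples only)
rundll32 printui.dll,PrintUIEntry /ia /f "C:\Drivers\HP-UPD\hpcu270u.inf" /m "HP Universal Printing PCL 6"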

Screenshot of Printer Driver installed on non-persistent VDI Instant Clone golden image
Printer Driver installed on Non-Persistent Instant Clone Golden Image

Now seal, snap, and deploy your image, and you’re good to go!

Aug 10 2022
 

As we approach the date, I wanted to write a post sharing why I’m looking forward to VMware Explore 2022 and what I hope to get out of the conference and the experience.

As most of you know, VMware Explore 2022 (formerly known as VMworld) is taking place this month at the Moscone Center in San Francisco, from August 29th to September 1st, 2022.

VMware Explore 2022 Conference Logo

If you haven’t gotten your ticket, you can sign up here: https://www.vmware.com/explore/us.html

As some of you know, I regularly write about virtualization technologies, in particular VMware. VMware products are not only involved in the work that I do, but part of a personal hobby and passion. I was an early adopter of Virtualization, and on top of that, VDI (Virtual Desktop Infrastructure) has become a personal obsession of mine.

Because of the content I’ve written online, I’ve had the pleasure of helping others with these technologies. Over the years this has brought me new friendships and business customers, and given me a sense of participation in the larger community, ultimately leading to me achieving VMware vExpert status, as well as being part of the VMware vExpert EUC sub-program.

Even though I’ve been in tech for my entire adult life, I’ve never had the opportunity to attend a large-scale conference in person. VMware Explore 2022 will be my first in-person tech conference!

I'm going to VMware Explore Conference

So why am I going? What do I hope to get from it? What are my reasons for attending?

Essentially, there are three big reasons why I’m going to be attending:

  • Community
  • Knowledge
  • Business

Let’s dive into each one…

Community

As mentioned above, I’ve had the pleasure of being part of the VMware vExpert program for the past couple of years. During this time, it has helped my content reach new audiences, given me the chance to converse with top industry experts, allowed me to learn more about the technologies I love, and provided a sense of belonging and of participating in something “big”.

VMware vExpert Badge

Blogging has been a passion of mine for as long as I can remember, with the first post on this blog going back to April 11th, 2010. Blogging has allowed me to not only share my knowledge, but also participate and contribute to the community. This has helped me meet new people, network, learn even more, and also help others pursue their passions and goals with technology.

Attending VMware Explore 2022 will help me take this a step further and actually meet some of those in the community face to face. I love meeting new people, and this will allow me to engage with those who have stumbled across my blog, as well as meet leaders within the community and hopefully learn some new things from them.

I’ve already started working on my list of people to meet up with!

Knowledge

In addition to the knowledge I hope to gain from others in the community, VMware Explore 2022 has over 600 technical sessions (some even hosted by fellow vExperts) where you can learn more about the technologies you use every day, as well as technologies you’re considering or planning to use in the future.

VMware Explore Content Catalog

The full content catalog for VMware Explore 2022 can be found here: https://event.vmware.com/flow/vmware/explore2022us/content/page/catalog

In particular, a few products and solutions I want to increase my knowledge with are:

  • VMware Workspace One
  • VMware Horizon Cloud Service
  • VMware vSphere+

In addition to the above, I’m sure I’ll be expanding my knowledge on things I wasn’t even planning on… You could say the point of the conference is to “Explore”!

Business

VMware products and solutions have been an important part of the solutions and offerings my business provides. In addition, those products and solutions are also the foundation of many businesses’ and organizations’ key IT infrastructure.

These conferences are great to network, discuss business, find new potential clients and vendors, and also connect with those that you already do business with!

In the last 4 years, the amount of international consulting I’ve been providing has increased exponentially year over year. While it’s been an amazing experience and I’ve had the chance to help many organizations with their VMware infrastructure, the only complaint I have is that I can’t meet face-to-face and shake hands with those customers as much as I’d like to. We have Zoom and Teams, but it’s not the same thing…

One thing I’m really looking forward to is finally meeting quite a few of those customers face-to-face for the first time. I’m sure we’ll even have a few stories after attending a few (or many) of the VMware Explore parties that happen during that week.

Additionally, many major vendors sponsor VMware Explore and will have booths at the event, so I’m looking forward to meeting and shaking hands with some of my favorite vendors!

All in all, I think it’s going to be a great time and I’m really excited to attend. I hope to see you there!

Jul 17 2022
 
VMware vSphere ESXi with vTPM from NKP

It’s been coming for a while: The requirement to deploy VMs with a TPM module… Today I’ll be showing you the easiest and quickest way to create and deploy Virtual Machines with vTPM on VMware vSphere ESXi!

As most of you know, Windows 11 requires Secure Boot as well as a TPM module. No doubt we may also see this requirement in future Microsoft Windows Server operating systems.

While users struggle to deploy TPM modules on their own workstations to be eligible for the Windows 11 upgrade, ESXi administrators are also struggling with deploying Virtual TPM modules, or vTPM modules on their virtualized infrastructure.

What is a TPM Module?

TPM stands for Trusted Platform Module. A Trusted Platform Module is a piece of hardware (a chip) inside or attached to your computer that provides secure computing features to the computer, system, or server it’s attached to.

The TPM module provides things like a random number generator, storage of encryption keys and cryptographic information, and assistance with secure authentication of the host system.

In a virtualization environment, we need to emulate this physical device with a Virtual TPM module, or vTPM.

What is a Virtual TPM (vTPM) Module?

A vTPM module is a virtualized software instance of a traditional physical TPM module. A vTPM can be attached to Virtual Machines and provide the same features and functionality that a physical TPM module would provide to a physical system.

vTPM modules can be deployed with VMware vSphere ESXi, and can be used to deploy Windows 11 on ESXi.

Deploying vTPM modules requires a Key Provider on the vCenter Server.

For more information on vTPM modules, see VMware’s “Virtual Trusted Platform Module Overview” documentation.

Deploying vTPM (Virtual TPM Modules) on VMware vSphere ESXi

In order to deploy vTPM modules (and VM encryption, vSAN Encryption) on VMware vSphere ESXi, you need to configure a Key Provider on your vCenter Server.

Traditionally, this would be accomplished with a Standard Key Provider utilizing a Key Management Server (KMS), however this requires a 3rd-party KMS server and is what I would consider a complex deployment.

VMware has made this easy as of vSphere 7 Update 2 (7U2), with the Native Key Provider (NKP) on the vCenter Server.

The Native Key Provider allows you to easily deploy technologies such as vTPM modules, VM encryption, and vSAN encryption, and the best part is that it’s all built into vCenter Server.

Enabling VMware Native Key Provider (NKP)

To enable NKP across your vSphere infrastructure:

  1. Log on to your vCenter Server
  2. Select your vCenter Server from the Inventory List
  3. Select “Key Providers”
  4. Click on “Add”, and select “Add Native Key Provider”
  5. Give the new NKP a friendly name
  6. De-select “Use key provider only with TPM protected ESXi hosts” to allow your ESXi hosts without a TPM to be able to use the native key provider.

In order to activate your new Native Key Provider, you need to click on “Backup” to make sure you have it backed up. Keep this backup in a safe place. After the backup is complete, your NKP will be active and usable by your ESXi hosts.

Screenshot of VMware vCenter Server with Native Key Provider (NKP) Configured
VMware vCenter with Native Key Provider (NKP) Configured

There are a few additional things to note:

  • Your ESXi hosts do NOT require a physical TPM module in order to use the Native Key Provider
    • Just make sure you disable the checkbox “Use key provider only with TPM protected ESXi hosts”
  • NKP can be used to enable vTPM modules on all editions of vSphere
  • If your ESXi hosts have a TPM module, using the Native Key Provider with your hosts TPM modules can provide enhanced security
    • Onboard TPM module allows keys to be stored and used if the vCenter server goes offline
  • If you delete the Native Key Provider, you are also deleting all the keys stored with it.
    • Make sure you have it backed up
    • Make sure you don’t have any hosts/VMs using the NKP before deleting

You can now deploy vTPM modules to virtual machines in your VMware environment.

Jun 19 2022
 
VMware vSphere 7 Logo

We all know that vMotion is awesome, but what is even more awesome? Optimizing VMware vMotion to make it redundant and faster!

vMotion allows us to migrate live Virtual Machines from one ESXi host to another without any downtime. This allows us to perform physical maintenance on the ESXi hosts, update and restart the hosts, and also load balance VMs across the hosts. We can even take this a step further and use DRS (Distributed Resource Scheduler) automation to intelligently place VMs at power-on and dynamically load balance them as they run.

Picture of VMware vMotion diagram
VMware vMotion

In this post, I’m hoping to provide information on how to fully optimize vMotion and use it to its full potential.

VMware vMotion

Most of you are probably running vMotion in your environment, whether it’s a homelab, dev environment, or production environment.

In smaller environments, I typically see vMotion deployed on the existing data network; in larger environments, on its own dedicated network; and in very highly configured environments, I see it being used with the vMotion TCP/IP stack.

While you can perform a vMotion over 1Gb networking, you almost always want at least 10Gb networking for the vMotion network to avoid long-running migrations. Typically, most IT admins are happy with live migrations that complete in seconds, not minutes.

VMware vMotion Optimization

So you might ask: if vMotion is working and you’re satisfied, what is there to optimize? There are actually a few things, but first let’s talk about what we can improve on.

We’re aiming for improvements with:

  • Throughput/Speed
    • Faster vMotion
      • Faster Speed
      • Less Time
    • Migrate more VMs
      • Evacuate hosts faster
      • Enable more aggressive DRS
      • Migrate many VMs at once very quickly
  • Redundancy
    • Redundant vMotion Interfaces (NICs and Uplinks)
  • More Complex vMotion Configurations
    • vMotion over different subnets and VLANs
      • vMotion routed over Layer 3 networks

To achieve the above, we can focus on the following optimizations:

  1. Enable Jumbo Frames
  2. Saturation of NIC/Uplink for vMotion
  3. Multi-NIC/Uplink vMotion
  4. Use of the vMotion TCP Stack

Let’s get to it!

Enable Jumbo Frames

I can’t stress enough how important it is to use Jumbo Frames for specialized network traffic on high speed network links. I highly recommend you enable Jumbo Frames on your vMotion network.

Note that you’ll need physical switches and NICs that support Jumbo Frames.

In my own high throughput testing on a 10Gb link, without using Jumbo frames I was only able to achieve transfer speeds of ~6.7Gbps, whereas enabling Jumbo Frames allowed me to achieve speeds of ~9.8Gbps.

When enabling this inside of vSphere and/or ESXi, you’ll need to make sure you change and update the applicable vmk adapters, vSwitch/vDS switches, and port groups. Additionally, as mentioned above, you’ll need to enable it on your physical switches.
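As a rough sketch, on a standard vSwitch this can be done from the ESXi shell as shown below (the vSwitch and vmk names are assumptions, substitute your own; on a vDS, set the MTU in the switch settings in the vSphere Client instead). The vmkping at the end sends a don’t-fragment packet sized for a 9000-byte MTU to verify Jumbo Frames work end to end:

# Set a 9000 MTU on the vSwitch and the vMotion vmk adapter
esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
# Test with a don't-fragment ping to another host's vMotion vmk IP
vmkping -d -s 8972 <other host vMotion vmk IP>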

Saturation of NIC/Uplink for vMotion

You may assume that once you configure a vMotion-enabled NIC, you will be able to fully saturate it when performing migrations. This is not necessarily the case!

When performing a vMotion, the vmk adapter is bound to a single thread (or CPU core). Depending on the power of your processor and the speed of the NIC, you may not actually be able to fully saturate a single 10Gb uplink.

In my own testing in my homelab, I needed to have a total of 2 VMK adapters to saturate a single 10Gb link.

If you’re running 40Gb or even 100Gb, you definitely want to look at adding multiple VMK adapters to your vMotion network to be able to fully saturate a single NIC or Uplink.

You can do this by simply configuring multiple VMK adapters per host with different IP addresses on the same subnet.
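A sketch of adding a second vMotion-enabled vmk adapter from the ESXi shell is below; the portgroup name and IP addressing are assumptions for illustration:

# Create an additional vmk on the vMotion portgroup, assign an IP, and tag it for vMotion
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-PG
esxcli network ip interface ipv4 set --interface-name=vmk2 --type=static --ipv4=192.168.50.12 --netmask=255.255.255.0
esxcli network ip interface tag add --interface-name=vmk2 --tagname=VMotion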

One important thing to mention is that if you have multiple physical NICs and Uplinks connected to your vMotion switch, this change will not help you utilize multiple physical interfaces (NICs/Uplinks). See “Multi-NIC/Uplink vMotion”.

Please note: As of VMware vSphere 7 Update 2, the above is not required as vMotion has been optimized to use multiple streams to fully saturate the interface. See VMware’s blog post “Faster vMotion Makes Balancing Workloads Invisible” for more information.

Multi-NIC/Uplink vMotion

Another situation is where we may want to utilize multiple NICs and Uplinks for vMotion. When implemented correctly, this can provide load balancing (additional throughput) as well as redundancy on the vMotion network.

If you were to simply add additional NIC interfaces as Uplinks to your vMotion network, this would add redundancy in the event of a failure, but it wouldn’t actually result in increased speed or throughput, as special configuration is required.

To take advantage of the additional bandwidth made available by additional Uplinks, we need to specially configure multiple portgroups on the switch (vSwitch or vDS Distributed Switch), and configure each portgroup to only use one of the Uplinks as the “Active Uplink” with the rest of the uplinks under “Standby Uplink”.

Example Configuration

  • vSwitch or vDS Switch
    • Portgroup 1
      • Active Uplink: Uplink 1
      • Standby Uplinks: Uplink 2, Uplink 3, Uplink 4
    • Portgroup 2
      • Active Uplink: Uplink 2
      • Standby Uplinks: Uplink 1, Uplink 3, Uplink 4
    • Portgroup 3
      • Active Uplink: Uplink 3
      • Standby Uplinks: Uplink 1, Uplink 2, Uplink 4
    • Portgroup 4
      • Active Uplink: Uplink 4
      • Standby Uplinks: Uplink 1, Uplink 2, Uplink 3

You would then place a single or multiple vmk adapters on each of the portgroups per host, which would result in essentially mapping the vmk(s) to the specific uplink. This will allow you to utilize multiple NICs for vMotion.

And remember, you may not be able to fully saturate a NIC interface (as stated above) with a single vmk adapter, so I highly recommend creating multiple vmk adapters on each of the Port groups above to make sure that you’re not only using multiple NICs, but that you can also fully saturate each of the NICs.
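On a standard vSwitch, the Active/Standby failover order per portgroup can be set from the ESXi shell roughly as follows (the portgroup and vmnic names are assumptions; on a vDS you’d configure the same thing in the portgroup’s Teaming and Failover policy in the vSphere Client):

# Portgroup 1 uses Uplink 1 as Active, the rest as Standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-PG-1 --active-uplinks=vmnic0 --standby-uplinks=vmnic1,vmnic2,vmnic3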

For more information, see VMware’s KB “Multiple-NIC vMotion in vSphere (2007467)“.

Use of the vMotion TCP Stack

VMware released the vMotion TCP/IP stack to provide added security for vMotion, as well as to introduce vMotion over multiple subnets (routed vMotion over Layer 3).

Using the vMotion TCP/IP stack, you can isolate vMotion traffic and give the vMotion network its own gateway, separate from the other vmk adapters using the default TCP/IP stack on the ESXi host.

This stack is optimized for vMotion.
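Creating a vmk adapter on the vMotion TCP/IP stack from the ESXi shell might look something like the sketch below. The portgroup name and IP addressing are assumptions, and keep in mind a vmk can only be placed on the vMotion stack at creation time (you can’t move an existing vmk to it):

# Create the vmotion netstack if it doesn't already exist on the host
esxcli network ip netstack add --netstack=vmotion
# Create a vmk on the vMotion TCP/IP stack and assign an IP
esxcli network ip interface add --interface-name=vmk3 --portgroup-name=vMotion-PG-1 --netstack=vmotion
esxcli network ip interface ipv4 set --interface-name=vmk3 --type=static --ipv4=192.168.50.13 --netmask=255.255.255.0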

Please note, that troubleshooting processes may be different when Troubleshooting vMotion using the vMotion TCP/IP Stack (click the link for my blog post on troubleshooting).

For more information, see VMware’s Documentation on “vMotion TCP/IP Stack“.

Additional resources:

VMware – How to Tune vMotion for Lower Migration Times?

Jun 18 2022
 
Nvidia GRID Logo

When performing a VMware vMotion on a Virtual Machine with an NVIDIA vGPU attached, the VM may freeze during migration, whereas a VM without a vGPU does not freeze during migration.

So why is it that adding a vGPU to a VM causes it to become frozen during vMotion? This is referred to as the VM Stun Time.

I’m going to explain why this happens, and what you can do to reduce these STUN times.

VMware vMotion

First, let’s start with traditional vMotion without a vGPU attached.

VMware vMotion with vSphere and ESXi
VMware vMotion with vSphere

vMotion allows us to live migrate a Virtual Machine instance from one ESXi host to another with (visibly) no downtime. You’ll notice that I put “visibly” in brackets…

When performing a vMotion, vSphere will migrate the VM’s memory from the source to the destination host and create checkpoints. It will then continue to copy memory deltas, including changed blocks, after the initial copy.

Essentially, vMotion copies the memory of the instance, then performs additional passes to copy any changes made after the previous transfer completed, until the point where it’s all copied and the instance is running on the destination host.

VMware vMotion with vGPU

For some time now, we have had the ability to perform a vMotion with a VM that has a vGPU attached to it.

VMware vSphere with NVIDIA vGPU
VMware VMs with vGPU

However, in this situation things work slightly differently. When performing a vMotion, it’s not only the system RAM that needs to be transferred, but the GPU’s memory (VRAM) as well.

Unfortunately, the checkpoint/delta transfer technology used for the system RAM isn’t available for the GPU’s memory, which means the VM has to be stunned (frozen) so that the video RAM can be transferred and the instance can then be initialized on the destination host.

STUN Time

The STUN time is essentially the time it takes to transfer the video RAM (framebuffer) from one host to another.

When researching this, you may find examples of the time it takes to transfer various sizes of VRAM. An example would be from VMware’s documentation “Using vMotion to Migrate vGPU Virtual Machines“:

NVIDIA vGPU Estimated STUN Times
Expected STUN Times for vMotion with vGPU on 10Gig vMotion NIC

However, it will always vary depending on a number of factors. These factors include:

  • vMotion Network Speed
  • vMotion Network Optimization
    • Multi-NIC vMotion to utilize multiple NICs
    • Multi-vmk vMotion to optimize and saturate single NICs
  • Server Load
  • Network Throughput
  • The number of VMs that are currently being migrated with vMotion

As you can see, there’s a number of things that play in to this. If you have a single 10Gig link for vMotion and you’re migrating many VMs with a vGPU, it’s obviously going to take longer than if you were just migrating a single VM with a vGPU.

Optimizing and Minimizing vGPU STUN Time

There’s a number of things we can look at to minimize the vGPU STUN times. This includes:

  • Upgrading networking throughput with faster NICs
  • Optimizing vMotion (Configure multiple vMotion VMK adapters to saturate a NIC)
  • Configure Multi-NIC vMotion (Utilize multiple physical NICs to increase vMotion throughput)
  • Reduce DRS aggressiveness
  • Migrate fewer VMs at the same time

All of the above can be implemented together, which I would actually recommend.

In short, the faster we migrate the VM, the less the STUN Time will be. Check out my blog post on Optimizing VMware vMotion which includes how to perform the above recommendations.

Hope this helps!

Mar 06 2022
 
Azure AD SSO with Horizon

Whether you’re deploying VDI for the first time or troubleshooting Azure AD SSO issues in an existing environment, special consideration must be made for Microsoft Azure AD SSO and VDI.

When you implement and use Microsoft 365 and Office 365 in a VDI environment, you should have your environment configured to handle Azure AD SSO for a seamless user experience, and to avoid numerous login prompts when accessing these services.

Microsoft Azure Active Directory has two different methods for handling SSO (Single Sign On): SSO via a Primary Refresh Token (PRT) and Azure AD Seamless SSO. In this post, I’ll explain the differences, and when to use which one.

Microsoft Azure AD SSO and VDI

What does Azure AD SSO do?

Azure AD SSO gives your domain-joined Windows workstations (and Windows Servers) an integrated single sign-on experience when accessing Microsoft 365 and/or Office 365.

When Azure AD SSO is enabled and functioning, your users will not be prompted nor have to log on to Microsoft 365 or Office 365 applications or services (including web services) as all this will be handled transparently in the background with Azure AD SSO.

For VDI environments, especially non-persistent VDI (VMware Instant Clones), this is an important function so that users are not prompted to login every time they launch an Office 365 application.

Persistent VDI is not complex and doesn’t have any special considerations for Azure AD SSO, as it functions the same way as traditional workstations; non-persistent VDI, however, requires special planning.

Please Note: Organizations often attribute the Office 365 login prompts to activation issues when in fact activation is functioning fine; rather, Azure AD SSO is either not enabled, incorrectly configured, or not functioning, which is why users are prompted for login credentials every time they establish a new session with non-persistent VDI. This guide should help you resolve the issue of Office 365 login prompts on non-persistent VDI and Instant Clone VMs.

Azure AD SSO methods

There are two different ways to perform Azure AD SSO in an environment that is not using ADFS. These are:

  • Azure AD SSO via Primary Refresh Token
  • Azure AD Seamless SSO

Both accomplish the same task, but were created at different times, have different purposes, and are used for different scenarios. We’ll explore this below so that you can understand how each works.

Screenshot of a Hybrid Azure AD Joined login
Hybrid Azure AD Joined Login

Fun fact: You can have both Azure AD SSO via PRT and Azure AD Seamless SSO configured at the same time to service your Active Directory domain, devices, and users.

Azure SSO via Primary Refresh Token

When using Azure SSO via Primary Refresh Token, SSO requests are performed by Windows workstations (or Windows Servers) that are Hybrid Azure AD Joined. When a device is Hybrid Azure AD Joined, it is joined both to your on-premise Active Directory domain and registered with your Azure Active Directory.

Azure SSO via Primary Refresh Token requires the Windows instance to be running Windows 10 (or later) or Windows Server 2016 (or later), and the instance must be Hybrid Azure AD joined. If you meet these requirements, SSO with PRT will be performed transparently in the background.

If you require your non-persistent VDI VMs to be Hybrid Azure AD joined and require Azure AD SSO with PRT, special considerations and steps are required. These include:

  • Scripts to automatically unjoin non-persistent (Instant Clone) VDI VMs from Azure AD on logoff.
  • Scripts to clean up old entries in Azure AD

If you properly deploy this, it should function. If you don’t require your non-persistent VDI VMs to be Hybrid Azure AD joined, then Azure AD Seamless SSO may be better for your environment.

VMware Horizon 8 2303 now supports Hybrid Azure AD joined non-persistent VDI, using Azure AD Connect, providing Azure AD SSO with PRT. Using Horizon 8 version 2303, no scripts are required to manage Azure AD Devices.

Azure AD Seamless SSO

Once configured and implemented, Microsoft Azure AD Seamless SSO handles Azure AD SSO requests without requiring the device to be Hybrid Azure AD joined.

Seamless SSO works on Windows instances running Windows 7 (or later, including Windows 10 and Windows 11), and does NOT require the device to be Hybrid joined.

Seamless SSO allows your Windows instances to access Azure related services (such as Microsoft 365 and Office 365) and provides a single sign-on experience.

This may be the easier method to use when deploying non-persistent VDI (VMware Instant Clones), if you want to implement SSO with Azure, but do not have the requirement of Hybrid AD joining your devices.

Additionally, by using Seamless SSO, you do not need to implement the log-off and maintenance scripts mentioned in the section above (for Azure AD SSO via PRT).

To use Azure AD Seamless SSO with non-persistent VDI, you must configure and implement Seamless SSO, as well as perform one of the following to make sure your devices do not attempt to Hybrid AD join:

  • Exclude the non-persistent VDI computer OU containers from Azure AD Connect synchronization to Azure AD
  • Implement a registry key on your non-persistent (Instant Clone) golden image, to disable Hybrid Azure AD joining.

To disable Hybrid Azure AD join on Windows, create the following registry value in your Windows image:

HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin: "BlockAADWorkplaceJoin"=dword:00000001
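If you’re scripting this as part of your golden image build, the same value can be set from an elevated command prompt, and dsregcmd will let you confirm the device’s join state afterwards. A quick sketch:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WorkplaceJoin" /v BlockAADWorkplaceJoin /t REG_DWORD /d 1 /f
dsregcmd /status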

Conclusion

Different methods can be used to implement SSO with Active Directory and Azure AD as stated above. Use the method that will be the easiest to maintain and provide support for the applications and services you need to access. And remember, you can also implement and use both methods in your environment!

After configuring Azure AD SSO, you’ll still be required to implement the relevant GPOs to configure Microsoft 365 and Office 365 behavior in your environment.

Additional Resources

Please see below for additional information and resources:

Jan 16 2022
 

Welcome to Episode 04 of The Tech Journal Vlog at www.StephenWagner.com

The Tech Journal Vlog Episode 04

In this episode

Updates

  • VMware Horizon
    • Apache Log4j Mitigation with VMware Products
  • Homelab Update
    • HPE MSA 2040 vs Synology DS1621+
    • Migrating from MSA 2040 to a Synology DS1621+
    • Synology Benchmarking NVME Cache
  • DST Root CA X3 Expiration
    • End of Life Operating Systems

New Blog/Video Posts

Life Update/Fun Stuff

  • Work
  • Travel
  • Move

Current Projects

  • Synology DS1621+

Don’t forget to like and subscribe!
Leave a comment, feedback, or suggestions!

Dec 02 2021
 

In a VMware Horizon environment with DUO MFA configured via RADIUS on the VMware Horizon Connection Server, you may notice authentication issues when logging in through a UAG (Unified Access Gateway) after upgrading to VMware Horizon 8 Version 2111.

In this condition, you can still log in to and use the Connection Server directly with MFA working; however, all UAG connections will get stuck on authenticating.

Horizon 8 Version 2111 UAG Stuck on Authenticating using DUO MFA (RADIUS)

Disabling MFA and/or RADIUS on the Connection Server will allow the UAG to function, however MFA will be disabled. This occurs on upgrades to version 2111 of the UAG both when configuring from scratch and when importing the JSON configuration backup.

Temporary Fix

Update January 26 2022: VMware has now released version 2111.2 of the Unified Access Gateway which resolves this issue. You can download it here, or view the release notes here.

Update January 12 2022: It appears VMware now has a KB on this issue at: https://kb.vmware.com/s/article/87253.

Temporary workaround/fix: To fix this issue, log on to the UAG and under “Horizon Edge Settings”, configure “Client Encryption Mode” to “Disabled”.

“Client Encryption Mode” is a new setting on UAG 2111 (and UAG 2111.1) that enables new functionality. Disabling this reverts the UAG to the previous behavior of older Unified Access Gateway versions.

More information on “Client Encryption Mode” can be found at https://docs.vmware.com/en/Unified-Access-Gateway/2111/uag-deploy-config/GUID-1B8665A2-485E-4471-954E-56DB9BA540E9.html.

Another workaround is to deploy an older version of the UAG, version 2106. After downgrading, the UAG functions with DUO and RADIUS even though the Connection Server is at version 2111.

If you use an older version of the UAG, please make sure that you mitigate against the Apache log4j vulnerabilities on the UAG using information from the following post: https://kb.vmware.com/s/article/87092.

Oct 10 2021
 
VMware vSphere 7 Logo

In this post, I wanted to go over some Backup and Restore tips and tricks when it comes to VMware vCSA Updates and Upgrades.

We’ve almost all been there: performing an update or upgrade of the VMware vCenter Server Appliance when it fails, and we must restore from a backup. There are also times when the update or upgrade is successful, but numerous issues occur afterwards that require a restore from backup.

In this post, I wanted to briefly go over the methods of backing up (and restoring) the vCSA, as well as some tips and tricks which might help you avoid failed updates or upgrades in the future!

We all want to avoid a failed update or upgrade! 🙂

vCSA Update Installation
vCSA Update Installation

VMware vCSA Update Tips and Tricks for Backup and Restore

Please enjoy this video version of the blog post:

vCSA Update and Upgrade – Tips and Tricks for Backup and Restore

vCSA Backup methods

There are essentially two backup methods for backing up the vCenter Server Appliance:

  1. vCSA Management Interface Backup
  2. vSphere/ESXi Virtual Machine Snapshot

vCSA Management Interface Backup

If you log in to the vCSA Management Interface, you can configure a scheduled backup that will perform a full backup of your vCSA (and vCenter Server) instance.

This backup can be automatically run and saved to an HTTP, HTTPS, FTP, FTPS, SFTP, NFS, or SMB destination. It’s a no-brainer if you have a Windows file server or an NFS datastore.

vCSA Backup Screenshot
vCSA Backup

In the event of a failed update/upgrade or a disaster, this backup can be restored to a new vCSA instance to recover from the failure.

For more information on backups from the vCSA Management Interface, please see https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vcenter.install.doc/GUID-8C9D5260-291C-44EB-A79C-BFFF506F2216.html.

For information on restoring a vCSA file based backup, please see https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vcenter.install.doc/GUID-F02AF073-7CFD-45B2-ACC8-DE3B6ED28022.html.

vSphere/ESXi Virtual Machine Snapshot

In addition to the scheduled automatic backups configured above, you should snapshot your vCSA appliance VM prior to initiating an update or upgrade. In the event of a failure, you can easily restore the vCSA VM snapshot to get back to a running state.

vCSA Snapshot Screenshot
vCSA Snapshot
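If you prefer to take that pre-upgrade snapshot from the shell of the ESXi host running the vCSA rather than from the vSphere Client, a rough sketch is below. The VM ID, snapshot name, and description are placeholders (and the trailing flags are includeMemory and quiesced); vim-cmd vmsvc/getallvms lists the actual VM IDs:

# Find the vCSA's VM ID (assuming the VM name contains "vcsa"), then snapshot it without memory
vim-cmd vmsvc/getallvms | grep -i vcsa
vim-cmd vmsvc/snapshot.create <vmid> "pre-upgrade" "Snapshot before vCSA update" 0 0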

Only after you test and confirm the upgrade or update was successful should you delete the snapshot.

You should also have your backup application or suite performing regular snapshot-based backups of your vCSA.

Additional Tips and Tricks

I have a few very important tips and tricks to share which may help you either avoid a failed update or upgrade, or increase the chances of a successful restore from backup.

  1. Gracefully Shutdown and Restart the vCSA Appliance before Upgrading
  2. Application Consistent Snapshot – Snapshot after graceful shutdown

Let’s dive in to these below.

Gracefully Shutdown and Restart the vCSA Appliance before Upgrading

I noticed that I significantly reduced the number of failed upgrades by simply gracefully shutting down and restarting the vCenter Server Appliance prior to an upgrade.

This allows you to clear out the memory, virtual memory, and restart all vCenter services prior to starting the upgrade.

Please Note: Make sure that you give the vCSA appliance enough time to boot, start services, and let some of the maintenance tasks run before initiating an upgrade.
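One way to confirm the appliance has fully come back up before kicking off the upgrade is to check the service state from the vCSA shell (SSH or console). A quick sketch:

service-control --status --all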

Application Consistent Snapshot – Snapshot after graceful shutdown

Most VMware system administrators I have talked to usually snapshot the running vCSA appliance and do not snapshot the memory. This creates a crash-consistent snapshot.

If you follow my advice above and gracefully shut down and restart the vCSA appliance, you can use this time to perform a VM snapshot after the graceful shutdown. This will provide you with an application-consistent snapshot instead of a crash-consistent snapshot.

If you perform an application-consistent snapshot by gracefully shutting down the VM prior to creating the snapshot, the virtual machine and the database inside of it will be in a cleaner state.

Conclusion

Some of the Tips and Tricks in this post definitely aren’t necessary, however they can help you increase the chance of a successful upgrade, and a successful restore in the event of a failed upgrade.

For more information on upgrading the vCenter Server Appliance, please visit https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vcenter.upgrade.doc/GUID-30485437-B107-42EC-A0A8-A03334CFC825.html.