Oct 03, 2022
 
NVIDIA A2 vGPU

When deploying automated desktop pools with NVIDIA vGPU on VMware Horizon with an NVIDIA A2 GPU, you may notice provisioning fails with an error.

Error during Provisioning Cloning of VM VM-NAME-01 has failed: Fault type is UNKNOWN_FAULT_FATAL - No GPU capable host available for provisioning VM-NAME-01 with profile nvidia_a2-4q. Please refer to VMware KB 59271 for more details.

Further, even after visiting VMware KB 59271 and performing the instructions, provisioning continues to fail.

Automated vGPU Desktop Pool fails to provision due to missing vGPU profiles

Essentially, at present there is no “supported” way to resolve this issue other than applying the fix listed in this post. Additionally, if you’re a VMware customer with an active support agreement, I recommend opening a ticket with VMware Support so the issue can be addressed in a future release.

The Problem

The NVIDIA A2 GPU is fairly new, as is its VMware vSphere support. Even newer is support for vGPU with VMware Horizon, which requires the latest drivers (vGPU driver version 14.2, released August 2022) to enable vGPU profiles for the card.

After troubleshooting this, I noted that the “graphic-profiles.properties” file in “C:\Program Files\VMware\VMware View\Server\broker\conf” did not contain any NVIDIA A2 vGPU Profiles. Additionally, the file available on the VMware KB was also missing these profiles.

The Fix

To fix this, I referenced the NVIDIA vGPU User Guide to note the vGPU profiles allowed on the card, and created my own entries for the configuration file.

After adding these entries and restarting the server (or service), I was able to provision NVIDIA A2 enabled vGPU desktop pools.

To resolve this issue, add the following entries to your “graphic-profiles.properties” file in “C:\Program Files\VMware\VMware View\Server\broker\conf” (note that the contents of the file are case-sensitive):

# NVIDIA A2 Profiles
# Q-Series Virtual GPU Types for NVIDIA A2
nvidia_a2-16q=1
nvidia_a2-8q=2
nvidia_a2-4q=4
nvidia_a2-2q=8
nvidia_a2-1q=16

# B-Series Virtual GPU Types for NVIDIA A2
nvidia_a2-2b=8
nvidia_a2-1b=16

# C-Series Virtual GPU Types for NVIDIA A2
nvidia_a2-16c=1
nvidia_a2-8c=2
nvidia_a2-4c=4

# A-Series Virtual GPU Types for NVIDIA A2
nvidia_a2-16a=1
nvidia_a2-8a=2
nvidia_a2-4a=4
nvidia_a2-2a=8
nvidia_a2-1a=16

After restarting the server or services, you should now be able to use the NVIDIA A2 vGPU profiles with VMware Horizon automated (vGPU) desktop pools.
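If you manage multiple Connection Servers, you can also script the change. Below is a minimal PowerShell sketch, assuming a default install path and the default Connection Server service name (“wsbroker”); verify both in your environment, and repeat on each Connection Server in the pod:

# Sketch: append the missing A2 profiles and restart the Connection Server.
# The path and service name ("wsbroker") are assumptions for a default install.
$conf = "C:\Program Files\VMware\VMware View\Server\broker\conf\graphic-profiles.properties"
Add-Content -Path $conf -Value "nvidia_a2-4q=4"   # repeat for each profile entry listed above
Restart-Service -Name "wsbroker"                  # or simply reboot the Connection Server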

You should be able to use this fix for other newly released vGPU cards whose profiles have not yet been configured for Horizon. VMware is likely to fix this in future releases of VMware Horizon.

Oct 02, 2022
 
Veeam-SQL

So, there’s a common problem where, when performing a backup, you’ll see it fail with the Veeam warning “Unable to Truncate Microsoft SQL Server transaction logs”.

This is usually due to permission problems, either with the account used for guest processing or with permissions inside your SQL database. In most cases this can be resolved by referencing the appropriate Veeam KB article, which outlines the permissions required for proper guest processing of Microsoft SQL servers.

However, in some rare cases you may have everything configured properly, yet the backup may continue to present these warnings that it’s unable to truncate Microsoft SQL Server transaction logs.

The Problem

I recently deployed an SQL Server in a domain, and of course made sure to set up the proper backup procedures as I’ve done a million times.

However, when performing a backup, the backup would present a warning with the following message:

Veeam Backup Warning – Unable to Truncate Microsoft SQL Server transaction logs.
Unable to truncate Microsoft SQL Server transaction logs. Details: Failed to call RPC function 'Vss.TruncateSqlLogs': Error code: 0x80004005. Failed to invoke func [TruncateSqlLogs]: Unspecified error. Failed to process 'TruncateSQLLog' command. Failed to logon user [ReallyLongDomainName\Admin-Account]. Win32 error:The user name or password is incorrect. Code: 1326.

This was very odd, as I had configured everything properly, and even confirmed it against the Veeam KB referenced above in this post.

So I decided to look at this as if it were something different: a credentials issue, or another problem entirely.

I noticed that in this specific customer environment, the FQDN for the domain was so long that the NETBIOS domain name did not match the FQDN domain name.

In this example, the following was observed:

FQDN: LongCompanyName.com
NETBIOS DOMAIN: LCNDOMAIN

Due to the length of the domain, they had shortened the NETBIOS domain name to an abbreviation.
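If you want to quickly check whether your NETBIOS domain name differs from your FQDN, the ActiveDirectory PowerShell module (part of RSAT) can show both values side by side:

# Shows the DNS (UPN-style) domain name next to the NETBIOS domain name
Import-Module ActiveDirectory
Get-ADDomain | Select-Object DNSRoot, NetBIOSName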

When configuring the Veeam credentials for guest processing, one would assume that the “AD Search” function would pull the “LCNDOMAIN\BackupAdminProcessing” account. However, when using the check feature, it actually created an entry for “LongCompanyName\BackupAdminProcessing”, which was technically incorrect as it didn’t match the SAM logon format for the account.

The Fix

Because of the observation noted above, I created a manual credential entry for “LCNDOMAIN\BackupAdminProcessing”, reconfigured the backup job to use those new credentials, and it worked!

The issue is that when using the AD search function in the credential manager, Veeam doesn’t translate and pull the NETBIOS domain name; it builds the SAM logon format assuming the UPN domain matches the NETBIOS domain name.

While this may hold true in most scenarios, there may be rare situations (like above) where the NETBIOS domain name does not match the domain used in the UPN suffix.

Sep 04, 2022
 

When directly passing through a GPU, or attaching an NVIDIA vGPU, to a Virtual Machine on VMware ESXi with more than 16GB of video memory, you may run into a situation where the VM fails to boot with the error “Module ‘DevicePowerOn’ power on failed.”. Special considerations are required when performing GPU or vGPU passthrough with 16GB+ of video memory.

This issue is specifically caused by memory mapping a GPU or vGPU device that has 16GB of memory or higher, and could involve both the host system (the ESXi host) and/or the Virtual Machine configuration.

In this post, I’ll address the considerations and requirements to passthrough these devices to virtual machines in your environment.

In order of likelihood, the issue is usually VM configuration related; however, if the recommendations in the “VM Configuration Considerations” section do not resolve it, please proceed to the “ESXi Host Considerations” section.

Please note that if the issue is host related, other errors may be present, or the device may not even be visible to ESXi.

VM GPU and vGPU Configuration Considerations

First and foremost, all new VMs should be created using the “EFI” Firmware type. EFI provides numerous advantages in device access and memory mapping versus the older style “BIOS” firmware types.

VM Firmware type EFI

To do this, create a new virtual machine, navigate to “VM Options”, expand “Boot Options”, and confirm/change the Firmware to “EFI”. I recommend this for all new VMs, not only for VMs accessing GPUs or vGPUs with over 16GB of memory. Please note that you shouldn’t change the firmware type on an existing VM; do this on a fresh new VM.

When performing GPU or vGPU passthrough with 16GB+ of video memory, you’ll need to create a couple of entries under “Advanced” settings to properly configure access to these PCIe devices and provide the proper environment for memory mapping. The lack of these settings is specifically what causes the “Module ‘DevicePowerOn’ power on failed.” error.

Under the VM settings, head over to “VM Options”, expand “Advanced” and click on “Edit Configuration”, click on “Add Configuration Params”, and add the following entries:

pciPassthru.use64bitMMIO="TRUE"
pciPassthru.64bitMMIOSizeGB=32

Example below:

VM GPU and vGPU Memory Settings for 16GB or higher memory mapping

You’ll notice that while our GPU or vGPU profile may have 16GB of memory, we need to double that value and set it as the “pciPassthru.64bitMMIOSizeGB” variable. If your card or vGPU profile had 32GB, you’d set it to “64”.

Additionally, if you are passing through multiple GPUs or vGPU devices, you need to total all the memory being mapped and double the combined amount.
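If you prefer to script these entries, a minimal PowerCLI sketch could look like the following. The vCenter address and VM name are placeholders, and the VM should be powered off when adding the settings:

# Sketch: add the 64-bit MMIO advanced settings to a VM with PowerCLI
Connect-VIServer -Server vcenter.lab.local
$vm = Get-VM -Name "VM-NAME-01"   # placeholder name; power the VM off first
New-AdvancedSetting -Entity $vm -Name "pciPassthru.use64bitMMIO" -Value "TRUE" -Confirm:$false
New-AdvancedSetting -Entity $vm -Name "pciPassthru.64bitMMIOSizeGB" -Value "32" -Confirm:$false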

ESXi GPU and vGPU Host Considerations

On most new and modern servers, the host level doesn’t require any special configuration as they are already designed to pass through such devices to the hypervisor properly. However in some special cases, and/or when using older servers, you may need to modify configuration and settings in the UEFI or BIOS.

If setting the VM configuration above still results in the same error (or possibly other errors), then you most likely need to make modifications to the ESXi host’s BIOS/UEFI/RBSU to allow the proper memory mapping of the PCIe device, in our case the GPU.

This is where things get a bit tricky because every server manufacturer has different settings that will need to be configured.

Look for the following settings, or settings with similar terminology:

  • “Memory Mapping Above 4G”
  • “Above 4G Decoding”
  • “PCI Express 64-Bit BAR Support”
  • “64-Bit IOMMU Mapping”

Once you find the correct setting or settings, enable them.

Every vendor may use different terminology, and there may be other settings that need to be configured that I don’t have listed above. In my case, I had to go into a secret “SERVICE OPTIONS” menu on my HPE Proliant DL360p Gen8, as documented here.

After performing the recommendations in this guide, you should now be able to passthrough devices with over 16GB of memory.


Sep 04, 2022
 

With VMware ESXi 6.5 and 6.7 going End of Life on October 15th, 2022, many of you are looking for options to update hosts in your homelab, especially in my case putting ESXi 7.0 on HP Proliant DL360p Gen8 servers.

As far as support goes, HPE last provided a custom ESXi installer for version 6.5 U3, released in December 2019. This was the last “Pre-Gen9 custom image” released, as ESXi 7.0 on the DL360p Gen8 is totally unsupported.

ESXi 6.7 or higher on the Gen8 Servers

The jump from 6.5 to 6.7 was a little easier, as you could use the 6.5 custom installer, and then upgrade to 6.7. For the most part, as long as the hardware itself was supported, you were in pretty good shape.

Additionally, with the HPE vibsdepot loaded into VMware Update Manager (now known as Lifecycle Manager), you could also keep all the HPE drivers and agents up to date.

ESXi 7.0 on the Gen8 Servers

Some were lucky enough to upgrade their existing installs to 7 with no or limited problems; however, the general consensus online was to expect problems. There were some major driver changes, which I believe at one point led to an advisory to perform a fresh install when upgrading from older versions, even on fully supported configurations with newer generation servers such as the Proliant Gen9 and Gen10.

In my setup, I have the following:

  • 2 x HPE Proliant DL360p Gen8 Servers
    • Dual Intel Xeon E5-2660v2 Processors in each server
    • USB and/or SD for booting ESXi
    • No other internal storage
  • External SAN iSCSI Storage

Concerns and Considerations

My main concern is not only to have a stable and functioning ESXi 7 instance, but also, if possible, to have the HPE drivers, agents, and iLO integrations.

You must consider that while this is completely unsupported, you still need to make sure the components of your current configuration are supported, such as the processor and PCIe cards, even if the server as a whole is not.

Make sure you reference your hardware on the VMware Compatibility Guide (HCL).

In my case, my processors were supported, however my RAID controller was not. So theoretically, since I’m not using my RAID controllers, I should be fine.

The process – Installing ESXi 7.0

I was able to install ESXi 7.0 on my HPE Proliant Gen8 servers by performing the following steps.

  1. Download the Generic ESXi installer from VMware directly.
    1. Link: Download VMware vSphere
  2. Download the “HPE Custom Addon for ESXi 7.0”.
    1. Link: HPE Custom Addon for ESXi 7.0 U3 for July 2022
  3. Boot server, install using the Generic Installer downloaded above.
  4. Mount NFS or iSCSI datastore.
  5. Copy HPE Custom Addon for ESXi zip file to datastore.
  6. Enable SSH on host (or use console).
  7. Place host in to maintenance mode.
  8. Run “esxcli software vib install -d /vmfs/volumes/datastore-name/folder-name/HPE-703.0.0.10.9.1.5-Jul2022-Addon-depot.zip” from the command line.
  9. The install will run and complete successfully.
  10. Restart your server as needed; you’ll now notice that not only were HPE drivers installed, but also agents like the Agentless Management agent, and iLO integrations.

You’ll now have a functioning instance.

HP Proliant DL360p Gen8 running ESXi 7.0

In my case everything was working, except for the “Smart Array P420i” RAID Controller, which I don’t use anyways.

Additionally, if you have a vCenter instance running, make sure that you add the HPE vibsdepot repo to your Lifecycle Manager. After you add the repo, create a baseline, attach the baseline to the host, and then proceed to scan, stage, and remediate the server, which will further update all the HPE-specific drivers and software.

Aug 14, 2022
 
HP Printer on VDI

When it comes to troubleshooting login times with non-persistent VDI (VMware Horizon Instant Clones), I often find delays associated with printer drivers not being included in the golden image. In this post, I’m going to show you how to add a printer driver to an Instant Clone golden image!

Printing with non-persistent VDI and Instant Clones

In most environments, printers will be mapped for users during logon. If a printer is mapped or added and the driver is not added to the golden image, it will usually be retrieved from the print server and installed, adding to the login process and ultimately leading to a delay.

Due to the nature of non-persistent VDI and Instant Clones, every time the user logs in and gets a new VM, the driver will be downloaded and installed again, creating a redundant process that wastes time and network bandwidth.

To avoid this, we need to inject the required printer drivers into the golden image. You can add numerous drivers, and should include all the drivers that any and all of your users are expected to use.

An important consideration: Try using Universal Print Drivers as much as possible. Universal Printer Drivers often support numerous different printers, which allows you to install one driver to support many different printers from the same vendor.

How to add a printer driver to an instant clone golden image

Below, I’ll show you how to inject a driver into the Instant Clone golden image. Note that this doesn’t actually add a printer; it only installs the printer driver into the Windows operating system so it is available for a printer to be configured and/or mapped.

Let’s get started! In this example we’ll add the HP Universal Driver. These instructions work on both Windows 10 and Windows 11 (as well as Windows Server operating systems):

  1. Click Start, type in “Print Management”, and open “Print Management”. You can also click Start, Run, and type “printmanagement.msc”.
    Launch Print Management
  2. On the left hand side, expand “Print Servers”, then expand your computer name, and select “Drivers”.
    Print Management Drivers
  3. Right click on “Drivers” and select “Add Driver”.
    Print Management Add Driver
  4. When the “Welcome to the Add Printer Driver Wizard” opens, click Next.
    Add Printer Driver Wizard
  5. Leave the default for the architecture. It should default to the architecture of the golden image.
  6. When you are at the “Printer Driver Selection” stage, click on “Have Disk”.
    Print Management Add Printer Driver Location
  7. Browse to the location of your printer driver. In this example, we navigate to the extracted HP Universal Print Driver.
    Browse Printer Driver Location
  8. Select the driver you want to install.
    VDI Select Printer Driver to Install
  9. Click on Finish to complete the driver installation.
    Finish installing Instant Clone Printer Driver

The driver you installed should now appear in the list as it has been installed in to the operating system and is now available should a user add a printer, or have a printer automatically mapped.

Printer Driver installed on Non-Persistent Instant Clone Golden Image
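If you’d rather script the driver injection while building the golden image, here’s a rough PowerShell sketch using the built-in PrintManagement module. The INF path and driver name are placeholders; the name passed to Add-PrinterDriver must exactly match the driver name inside the INF:

# Sketch: stage the driver package, then install the driver into Windows
pnputil /add-driver "C:\Drivers\HP-UPD\*.inf" /install
Add-PrinterDriver -Name "HP Universal Printing PCL 6"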

Now seal, snap, and deploy your image, and you’re good to go!

Aug 13, 2022
 
Azure AD

Many of you may not be aware of the Azure AD Connect 1.x end of life on August 31st, 2022. What this means is that as of August 31st, 2022 (later this month), you’ll no longer be able to use Azure AD Connect 1.4 or 1.6 to sync your on-premises Active Directory to Azure AD.

It’s time to plan your upgrade and/or migration!

This is catching a lot of System Administrators by surprise. In quite a few environments, Azure AD Connect was implemented on older servers that haven’t been touched (except for Windows Updates) in the years they’ve been running, because Azure AD Connect “just works”.

Azure AD Connect End of Life

Azure AD Connect has two major releases in use right now: 1.x and 2.x.

Windows Server 2022 Logo

Version 1.x, the release going end of life, is the first release, generally seen installed on older Windows Server 2012 R2 systems (or even earlier versions).

Version 2.x, which is the version you *should* be running, does not support Windows Server 2012. Azure AD Connect 2.x can only be deployed on Windows Server 2016 or higher.

Click here for more information on the Azure AD Connect: Version release history.

Azure AD Connect Upgrade and Migration

For a lot of you, there is no easy in-place upgrade unless you have 1.x installed on Windows Server 2016 or higher, in which case you can simply upgrade in place!

If you’re running Windows Server 2012 R2 or earlier, because 2.x requires Server 2016 or higher, you will need to migrate to another system running a newer version of Windows Server.

However, the process to migrate to a newer server is simpler and cleaner than most would suspect. I highly recommend reviewing all the Microsoft documentation (see below), but a simplified overview of the process is as follows:

  1. Deploy new Windows Server (version 2016 or higher)
  2. Export Configuration (JSON file) from old Azure AD Connect 1.x server
  3. Install the latest version of Azure AD Connect 2.x on new server, load configuration file and place in staging mode.
  4. Enable Staging mode on old server (this stops syncing of old server)
  5. Disable Staging mode on new server (this starts syncing of new server)
  6. Decommission old server (uninstall Azure AD Connect, unjoin from domain)

I highly recommend reviewing Microsoft’s Azure AD Connect: Upgrade from a previous version to the latest for the full process, as well as Microsoft’s Import and export Azure AD Connect configuration settings.
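For reference, the configuration export on the old 1.x server is script-driven, and the ADSync module lets you check scheduler and staging status on either server. A rough sketch follows; the script path ships with recent Azure AD Connect builds and may differ by version, so treat it as an assumption and verify against the Microsoft docs above:

# On the OLD server: export the current configuration (JSON)
& "C:\Program Files\Microsoft Azure Active Directory Connect\Tools\MigrateSettings.ps1"
# On either server: check the sync cycle and staging mode status
Get-ADSyncScheduler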

As always, I highly recommend having an “Alternative Admin” account on your Azure AD. If you lose the ability to sync or authenticate against Azure AD, you’ll need an admin account native to Azure AD to connect, manage, and re-establish the synchronization.

Aug 10, 2022
 

As we approach the date, I wanted to write a post sharing Why I’m looking forward to VMware Explore 2022 and share what I hope to get from the conference and experience.

As most of you know, VMware Explore 2022 (formerly known as VMworld) is taking place this month in San Francisco at the Moscone Center, August 29th to September 1st, 2022.

VMware Explore 2022 Conference Logo

If you haven’t gotten your ticket, you can sign up here: https://www.vmware.com/explore/us.html

As some of you know, I regularly write about virtualization technologies, in particular VMware. VMware products are not only involved in the work that I do, but part of a personal hobby and passion. I was an early adopter of Virtualization, and on top of that, VDI (Virtual Desktop Infrastructure) has become a personal obsession of mine.

Because of the content I’ve written online, I’ve had the pleasure of helping others with these technologies. Over the years this has brought me new friendships, business customers, and given me a sense of participation in the larger community, ultimately leading to me achieving my VMware vExpert status, as well as being a part of the VMware vExpert EUC sub program.

Even though I’ve been in tech since becoming an adult, I’ve actually never had the opportunity to attend a large-scale conference in person. VMware Explore 2022 will be my first in-person tech conference!

I'm going to VMware Explore Conference

So why am I going? What do I hope to get from it? What are my reasons for attending?

Essentially, there are three big reasons why I’m going to be attending:

  • Community
  • Knowledge
  • Business

Let’s dive into each one…

Community

As mentioned above, I’ve had the pleasure of being a part of the VMware vExpert program for the past couple of years. During this time, it has helped my content reach new audiences, I’ve had the chance to converse with top industry experts, I’ve been able to learn more about the technologies I love, and it’s given me a sense of belonging and of participating in something “big”.

VMware vExpert Badge

Blogging has been a passion of mine for as long as I can remember, with the first post on this blog going back to April 11th, 2010. Blogging has allowed me to not only share my knowledge, but also participate and contribute to the community. This has helped me meet new people, network, learn even more, and also help others pursue their passions and goals with technology.

Attending VMware Explore 2022 will help me take this a step further and actually meet some of those in the community face to face. I love meeting new people, and this will allow me to engage with those who have stumbled across my blog, as well as meet leaders in the community and hopefully even learn some new things from them.

I’ve already started working on my list of people to meetup with!

Knowledge

In addition to the knowledge I hope to gain from others in the community, VMware Explore 2022 has over 600 technical sessions (some even hosted by fellow vExperts) where you can learn more about the technologies you use every day, as well as technologies you’re considering or planning on using in the future.

VMware Explore Content Catalog

The full content catalog for VMware Explore 2022 can be found here: https://event.vmware.com/flow/vmware/explore2022us/content/page/catalog

In particular, a few products and solutions I want to increase my knowledge with are:

  • VMware Workspace One
  • VMware Horizon Cloud Service
  • VMware vSphere+

In addition to the above, I’m sure I’ll be expanding my knowledge on things I wasn’t even planning on… You could say the point of the conference is to “Explore”!

Business

VMware products and solutions have been an important part of the solutions and offerings my business provides. In addition, those products and solutions are the foundation of many businesses’ and organizations’ key IT infrastructure.

These conferences are great to network, discuss business, find new potential clients and vendors, and also connect with those that you already do business with!

In the last 4 years, the amount of international consulting I’ve been providing has increased exponentially year over year. And while it’s been an amazing experience and I’ve had the chance to help many organizations with their VMware infrastructure, the only complaint I have is that I can’t meet face-to-face and shake hands with those customers as much as I’d like to. We have Zoom and Teams, but it’s not the same thing…

One thing I’m really looking forward to, is finally meeting quite a few of those customers face-to-face for the first time. I’m sure we’ll even have a few stories after attending a few (or many) of the VMware Explore parties that happen during that week.

Additionally, many major vendors sponsor VMware Explore and will have booths at the event, so I’m looking forward to meeting and shaking hands with some of my favorite vendors!

All in all, I think it’s going to be a great time and I’m really excited to attend. I hope to see you there!

Aug 09, 2022
 
A Lenovo Thinkpad X13s on desk powered on with Red Lenovo logo

I purchased the new Lenovo X13s Windows on ARM laptop and wanted to share my first impressions of the device. I plan on creating a full review in a later post; however, I wanted to provide some insight into my initial impressions, as these can be a game changer or deal breaker for most people considering this laptop.

I’m going to break this blog post up into a few key sections that were the most important and most noticeable when first getting my hands on this device.

A Lenovo Thinkpad X13s laptop on counter with screen open.
Lenovo Thinkpad X13s

I’ll be limiting this post to the first impressions as much as possible, saving the rest for the full review.

Pre-purchase expectations and initial thoughts

With lots of travel approaching, and with an aging laptop (a 2013 Lenovo X1 Carbon), I needed to purchase a new laptop that would fit my requirements:

  • WWAN (Preferably 5G)
  • Good Battery Life
  • Good Performance
  • Stylish
  • Application Use
    • VDI – VMware Horizon Client
    • Microsoft Office
    • IT Applications (Putty, WinSCP, RDP)
    • Microsoft Teams
    • Zoom

You can see that my usage is similar to that of the business road warrior professional, with an IT add-on. I’m almost always connected to a VDI session, and also spend 50-100% of the day in Zoom or Microsoft Teams meetings.

With full knowledge of the ARM architecture, and the new laptops and devices that have been released, I decided to take a big risk and try one of the new Windows on ARM laptops, specifically the Lenovo X13s.

ARM laptops generally provide great performance, really good battery life, and an “always on” ready to go environment.

Specifications

I’ll be saving the tech spec deep dive for the full review, however I wanted to provide some basic information on the specifications of the model I purchased.

Lenovo X13s in Box
  • Part Number: 21BX0008US
  • CPU: Snapdragon® 8cx Gen 3 Compute Platform (3.00 GHz up to 3.00 GHz)
  • RAM: 16 GB LPDDR4X 4266MHz (Soldered)
  • Disk: 1 TB PCIe SSD Gen 4
  • WWAN: Qualcomm® Snapdragon™ X55 5G Sub 6
  • Display: 13.3″ WUXGA (1920 x 1200) IPS, anti-glare, touchscreen, 300 nits

I specifically wanted a large SSD, lots of RAM, and definitely the 5G WWAN modem built in. I purchased the highest configured model without going custom (to take advantage of special pricing and promotions).

First Impressions

Design

Receiving the laptop, the first things that really stand out are the size, texture (quality of materials), thinness, and absence of fan ports. It’s a very beautifully designed laptop.

Lenovo X13s

While it is smaller than I expected, it does not feel cheap. The materials used with this laptop give it the same quality and feel as the X1 Carbon.

Physical Size

For whatever reason, I was expecting something the same size as my original X1 Carbon; however, the X13s is thinner and slightly smaller in width and height.

Originally I thought this was going to be a problem, but after using the laptop, I’m absolutely in love with the size. As far as portability and usability go, based on first impressions, this thing has both!

Keyboard

Surprisingly, because of the smaller size of the laptop, I’ve actually found it very easy to type quickly. Of all the laptops I’ve owned, as well as desktop keyboards, I can type the fastest on the X13s, because of the size of the keyboard as well as the layout and feel.

Keystrokes feel and sound amazing on a perfectly built keyboard. I honestly have no complaints…

Display

The display is absolutely beautiful. While there is an option for a 400-nit display, my model has the 300-nit panel because I wanted the touchscreen.

Even in my apartment with all the windows open on a sunny day, I can see everything crisply on this display.

The only thing I noticed is that when viewing black/grayscale content (most of my UI and apps are in dark mode), the backlight appears to dim and text sometimes becomes faded. You can still see everything fine, but it causes an odd effect when the screen content changes to something white or colorful.

To fix this, uncheck “Help improve battery by optimizing the content shown and brightness” in settings:

Display auto-dimming for battery

After unchecking this option, everything is perfect!

Battery

The battery on this unit is absolutely blowing my mind. In 4 days of usage it has barely used any battery; I’ve never used a laptop that can hold up like this.

Compared to my old X1, in 4 days of usage I probably would have had to charge it 3-4 times. The X13s just keeps going and going and going.

Very impressed with this, as it’s going to help with travel and staying connected on the go.

Speakers and Sound

The sound is fantastic, and playing music sounds great. The laptop includes a sound system enhanced with Dolby.

I’m not much of an audiophile, but I have to say I was impressed with the volume and quality of audio that comes from the laptop.

Temperature

This laptop has no fans or air ducts. One would think this would make for a laptop that runs hot, but I have to say I haven’t really noticed any high temperatures, except when I first booted it up and ran Windows Updates, Lenovo updates, the Microsoft Office installer, and a bunch of other things.

Even under the extremely heavy load of those installs, the heat generated was actually less than what I would have expected or experienced with my old Lenovo X1 Carbon.

Windows 11 for ARM64 (Windows on ARM)

For the most part, if you didn’t understand what Windows on ARM was, processor architectures, or the difference between this laptop and others, you’d notice absolutely nothing different from a normal laptop (except maybe if you were gaming).

I have to say that Microsoft knocked it out of the park with the development of Windows 11 on ARM, and it’s definitely 100% ready for primetime use, both for regular users as well as enterprise/business users.

The one thing I can’t comment on is gaming. While I haven’t done any testing (as I don’t game much), there may be additional considerations as far as stability, performance, or even gaming capability.

Applications

When it comes to applications, while the X13s does support x86 and x64 emulation, you should always try to run native ARM/ARM64 applications. Running applications native to the architecture will provide the best performance as well as battery life.

After getting going, I noticed the following applications had native ARM64 support:

  • Microsoft Office
  • Microsoft Teams
  • Zoom
  • Putty
  • Edge (built off Chromium)

I also loaded numerous applications that are x86/x64 and emulated:

  • VMware Horizon Client
  • Chrome
  • WinSCP

All of the above applications, both ARM and x86/x64, run fantastically without any problems. I was concerned that the whole emulation experience would be a mess, but I’ve seriously had no problems.

Performance

I can’t say enough how snappy Windows 11 on ARM and the X13s are. I never thought I’d say it, but this is the fastest-performing Windows 11 system I’ve used when it comes to responsiveness of the OS and applications.

Connectivity

The built-in 5G connectivity was super easy to set up. The laptop can use an eSIM or a traditional physical SIM. I had the experience of using both at different points (because of issues with my cell phone provider).

The eSIM was super easy to set up, and you can manage multiple different profiles. I simply purchased an eSIM and scanned the QR code with the webcam.

When I had to switch to the physical SIM (because my provider doesn’t support 5G with eSIMs), I simply popped the SIM tray and installed the card.

It’s very easy to not only switch between eSIM profiles, but also to switch between the eSIM and physical SIM. This is great if you’re travelling to other countries, as you can easily switch between your local provider’s eSIM and a foreign SIM for local data.

Your speed will vary depending on provider, but I was able to achieve the full speed expected from my provider, and I was pleasantly surprised with better-than-expected low latencies, which is great for VDI, which I use regularly.

Always on

Because of the ARM processor, Windows is “always on”. There’s no resume from suspend time, just like your ARM based cell/mobile phone.

The laptop is virtually always on and ready to go when I need to work.

Overall First Impressions

Overall, my first impressions with this laptop have been fantastic and this laptop is exceeding my best expectations. Windows 11 on ARM is definitely a serious contender when it comes to choosing the right laptop/notebook.

Lenovo X13s Powered On

The OS is snappy, everything works the way you’d expect on Windows, and so far I’m very happy with the investment I made in this laptop. I can’t wait to do some travelling with it and start using it to its full potential.

Add in 5G always-on connectivity, and it feels like this thing is unstoppable…

Stay tuned for the full review!

Jul 17, 2022
 
VMware vSphere ESXi with vTPM from NKP

It’s been coming for a while: The requirement to deploy VMs with a TPM module… Today I’ll be showing you the easiest and quickest way to create and deploy Virtual Machines with vTPM on VMware vSphere ESXi!

As most of you know, Windows 11 has a requirement for Secure Boot as well as a TPM module. It’s quite possible we’ll also see this requirement with future Microsoft Windows Server operating systems.

While users struggle to deploy TPM modules on their own workstations to be eligible for the Windows 11 upgrade, ESXi administrators are also struggling with deploying Virtual TPM modules, or vTPM modules on their virtualized infrastructure.

What is a TPM Module?

TPM stands for Trusted Platform Module. A Trusted Platform Module is a piece of hardware (or chip) inside or outside of your computer that provides secured computing features to the computer, system, or server it’s attached to.

This TPM modules provides things like a random number generator, storage of encryption keys and cryptographic information, as well as aiding in secure authentication of the host system.

In a virtualization environment, we need to emulate this physical device with a Virtual TPM module, or vTPM.

What is a Virtual TPM (vTPM) Module?

A vTPM module is a virtualized software instance of a traditional physical TPM module. A vTPM can be attached to Virtual Machines and provide the same features and functionality that a physical TPM module would provide to a physical system.

vTPM modules can be deployed with VMware vSphere ESXi, and can be used to deploy Windows 11 on ESXi.

Deployment of vTPM modules requires a Key Provider on the vCenter Server.

For more information on vTPM modules, see VMware’s “Virtual Trusted Platform Module Overview” documentation.

Deploying vTPM (Virtual TPM Modules) on VMware vSphere ESXi

In order to deploy vTPM modules (and VM encryption, vSAN Encryption) on VMware vSphere ESXi, you need to configure a Key Provider on your vCenter Server.

Traditionally, this would be accomplished with a Standard Key Provider utilizing a Key Management Server (KMS); however, this requires a 3rd party KMS server and is what I would consider a complex deployment.

VMware has made this easy as of vSphere 7 Update 2 (7U2), with the Native Key Provider (NKP) on the vCenter Server.

The Native Key Provider allows you to easily deploy technologies such as vTPM modules, VM encryption, and vSAN encryption, and the best part is that it’s all built in to vCenter Server.

Enabling VMware Native Key Provider (NKP)

To enable NKP across your vSphere infrastructure:

  1. Log on to your vCenter Server
  2. Select your vCenter Server from the Inventory List
  3. Select “Key Providers”
  4. Click on “Add”, and select “Add Native Key Provider”
  5. Give the new NKP a friendly name
  6. De-select “Use key provider only with TPM protected ESXi hosts” to allow your ESXi hosts without a TPM to be able to use the native key provider.

In order to activate your new Native Key Provider, you need to click on “Backup” to make sure you have it backed up. Keep this backup in a safe place. After the backup is complete, your NKP will be active and usable by your ESXi hosts.

VMware vCenter with Native Key Provider (NKP) Configured

There are a few additional things to note:

  • Your ESXi hosts do NOT require a physical TPM module in order to use the Native Key Provider
    • Just make sure you disable the checkbox “Use key provider only with TPM protected ESXi hosts”
  • NKP can be used to enable vTPM modules on all editions of vSphere
  • If your ESXi hosts have a TPM module, using the Native Key Provider with your hosts TPM modules can provide enhanced security
    • Onboard TPM module allows keys to be stored and used if the vCenter server goes offline
  • If you delete the Native Key Provider, you are also deleting all the keys stored with it.
    • Make sure you have it backed up
    • Make sure you don’t have any hosts/VMs using the NKP before deleting

You can now deploy vTPM modules to virtual machines in your VMware environment.
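If you’d rather script it, recent PowerCLI releases also include vTPM cmdlets. The sketch below assumes those cmdlets are available in your PowerCLI version (verify before relying on it); the VM must use EFI firmware and be powered off:

# Sketch: attach a vTPM to a VM once a Key Provider (such as the NKP) is configured
Connect-VIServer -Server vcenter.lab.local
$vm = Get-VM -Name "Win11-VM"   # placeholder; EFI firmware, powered off
New-VTpm -VM $vm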

Jun 19, 2022
 
VMware vSphere 7 Logo

We all know that vMotion is awesome, but what is even more awesome? Optimizing VMware vMotion to make it redundant and faster!

vMotion allows us to migrate live Virtual Machines from one ESXi host to another without any downtime. This allows us to perform physical maintenance on the ESXi hosts, update and restart the hosts, and also load balance VMs across the hosts. We can even take this a step further and use DRS (Distributed Resource Scheduler) automation to intelligently place VMs on hosts at boot and to dynamically load balance them as they run.

VMware vMotion

In this post, I’m hoping to provide information on how to fully optimize vMotion and use it to its full potential.

VMware vMotion

Most of you are probably running vMotion in your environment, whether it’s a homelab, dev environment, or production environment.

I typically see vMotion deployed on the existing data network in smaller environments, on its own network in larger environments, and in very highly configured environments, used with the vMotion TCP stack.

While you can perform a vMotion with 1Gb networking, you almost always want at least 10Gb networking for the vMotion network, to avoid long-running migrations. Typically, IT admins are happy when live vMotion migrations take seconds, not minutes.

VMware vMotion Optimization

So you might ask, if vMotion is working and you’re satisfied, what is there to optimize? There are actually a few things, but first let’s talk about what we can improve on.

We’re aiming for improvements with:

  • Throughput/Speed
    • Faster vMotion
      • Faster Speed
      • Less Time
    • Migrate more VMs
      • Evacuate hosts faster
      • Enable more aggressive DRS
      • Migrate many VMs at once very quickly
  • Redundancy
    • Redundant vMotion Interfaces (NICs and Uplinks)
  • More Complex vMotion Configurations
    • vMotion over different subnets and VLANs
      • vMotion routed over Layer 3 networks

To achieve the above, we can focus on the following optimizations:

  1. Enable Jumbo Frames
  2. Saturation of NIC/Uplink for vMotion
  3. Multi-NIC/Uplink vMotion
  4. Use of the vMotion TCP Stack

Let’s get to it!

Enable Jumbo Frames

I can’t stress enough how important it is to use Jumbo Frames for specialized network traffic on high speed network links. I highly recommend you enable Jumbo Frames on your vMotion network.

Note that you’ll need physical switches and NICs that support jumbo frames.

In my own high throughput testing on a 10Gb link, without using Jumbo frames I was only able to achieve transfer speeds of ~6.7Gbps, whereas enabling Jumbo Frames allowed me to achieve speeds of ~9.8Gbps.

When enabling this inside of vSphere and/or ESXi, you’ll need to make sure you change and update the applicable vmk adapter, vSwitch/vDS switches, and port groups. Additionally as mentioned above you’ll need to enable it on your physical switches.
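As an example, here’s a PowerCLI sketch for a standard vSwitch, where the host name, “vSwitch1”, and “vmk1” are placeholders for your vMotion switch and VMkernel adapter:

# Sketch: raise the MTU to 9000 on the vMotion vSwitch and vmk adapter
$vmhost = Get-VMHost -Name "esxi01.lab.local"
Get-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
Get-VMHostNetworkAdapter -VMHost $vmhost -Name "vmk1" | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false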

Saturation of NIC/Uplink for vMotion

You may assume that once you configure a vMotion enabled NIC, you will be able to fully saturate it when performing migrations. This is not necessarily the case!

When performing a vMotion, the vmk adapter is bound to a single thread (or CPU core). Depending on the power of your processor and the speed of the NIC, you may not actually be able to fully saturate a single 10Gb uplink.

In my own testing in my homelab, I needed to have a total of 2 VMK adapters to saturate a single 10Gb link.

If you’re running 40Gb or even 100Gb, you definitely want to look at adding multiple VMK adapters to your vMotion network to be able to fully saturate a single NIC or Uplink.

You can do this by simply configuring multiple VMK adapters per host with different IP addresses on the same subnet.
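For example, a PowerCLI sketch to add a second vMotion-enabled vmk adapter (the portgroup, switch, and IP values are placeholders):

# Sketch: add an additional vMotion vmk adapter on the same subnet
New-VMHostNetworkAdapter -VMHost $vmhost -PortGroup "vMotion-PG" -VirtualSwitch "vSwitch1" -IP "10.0.50.12" -SubnetMask "255.255.255.0" -VMotionEnabled:$true -Mtu 9000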

One important thing to mention is that if you have multiple physical NICs and Uplinks connected to your vMotion switch, this change will not help you utilize multiple physical interfaces (NICs/Uplinks). See “Multi-NIC/Uplink vMotion”.

Please note: As of VMware vSphere 7 Update 2, the above is not required as vMotion has been optimized to use multiple streams to fully saturate the interface. See VMware’s blog post “Faster vMotion Makes Balancing Workloads Invisible” for more information.

Multi-NIC/Uplink vMotion

Another situation is where we may want to utilize multiple NICs and Uplinks for vMotion. When implemented correctly, this can provide load balancing (additional throughput) as well as redundancy on the vMotion network.

If you were to simply add additional NIC interfaces as Uplinks to your vMotion network, this would add redundancy in the event of a failure, but it wouldn’t actually result in increased speed or throughput, as special configuration is required.

To take advantage of the additional bandwidth made available by additional Uplinks, we need to specially configure multiple portgroups on the switch (vSwitch or vDS Distributed Switch), and configure each portgroup to only use one of the Uplinks as the “Active Uplink” with the rest of the uplinks under “Standby Uplink”.

Example Configuration

  • vSwitch or vDS Switch
    • Portgroup 1
      • Active Uplink: Uplink 1
      • Standby Uplinks: Uplink 2, Uplink 3, Uplink 4
    • Portgroup 2
      • Active Uplink: Uplink 2
      • Standby Uplinks: Uplink 1, Uplink 3, Uplink 4
    • Portgroup 3
      • Active Uplink: Uplink 3
      • Standby Uplinks: Uplink 1, Uplink 2, Uplink 4
    • Portgroup 4
      • Active Uplink: Uplink 4
      • Standby Uplinks: Uplink 1, Uplink 2, Uplink 3

You would then place one or more vmk adapters on each of the portgroups per host, which essentially maps the vmk(s) to a specific uplink. This will allow you to utilize multiple NICs for vMotion.

And remember, you may not be able to fully saturate a NIC interface (as stated above) with a single vmk adapter, so I highly recommend creating multiple vmk adapters on each of the Port groups above to make sure that you’re not only using multiple NICs, but that you can also fully saturate each of the NICs.
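On a standard vSwitch, the active/standby pinning can also be scripted with PowerCLI. Here’s a rough sketch for the first portgroup above (the portgroup and vmnic names are placeholders); repeat per portgroup, rotating which vmnic is active. On a vDS, the equivalent is set per distributed portgroup under “Teaming and failover”:

# Sketch: pin Portgroup 1 to one active uplink, leaving the rest on standby
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-PG1" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive "vmnic1" -MakeNicStandby "vmnic2","vmnic3"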

For more information, see VMware’s KB “Multiple-NIC vMotion in vSphere (2007467)“.

Use of the vMotion TCP Stack

VMware released the vMotion TCP Stack to provide added security to vMotion capabilities, as well as to introduce vMotion over multiple subnets (routed vMotion over layer 3).

Using the vMotion TCP Stack, you can isolate the vMotion network and give it its own gateway, separate from the other vmk adapters using the traditional TCP stack on the ESXi host.

This stack is optimized for vMotion.

Please note, that troubleshooting processes may be different when Troubleshooting vMotion using the vMotion TCP/IP Stack (click the link for my blog post on troubleshooting).

For more information, see VMware’s Documentation on “vMotion TCP/IP Stack“.

Additional resources:

VMware – How to Tune vMotion for Lower Migration Times?