May 31 2021
 
Office 365 Logo

After you Deploy Remote Desktop Services (RDS) for employee remote access and Install Office 365 in a Remote Desktop Services Environment, your next step is to configure that Office 365 installation by deploying Group Policy Objects.

By deploying Group Policy Objects to configure Office 365, you’ll be able to set it up for first-time use, activate the product, roll out a pre-defined configuration, and even automatically configure Outlook mail profiles.

Following these steps will help you provide a zero-configuration experience for your end users so that everything is up and running for them when they connect for the first time. I’ll also provide a number of GPO settings that enhance the user experience.

What’s Required

To Configure Microsoft Office 365 on a Remote Desktop Services Server, you’ll need:

  • A Remote Desktop Services Server (Configured and Running)
  • Microsoft 365 Apps for Enterprise (formerly named Office 365 ProPlus)
  • Office 365 Installed with SCA (Shared Computer Activation, as per “Install Office 365 in a Remote Desktop Services Environment“)
  • Microsoft 365 Apps for Enterprise ADMX GPO Administrative Templates (Download here)
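If you haven’t already loaded the ADMX templates into your domain, you’ll want to copy them to your Group Policy Central Store (or to a management workstation’s local PolicyDefinitions folder). A rough sketch is below; the domain name is a placeholder and the folder layout of the extracted download may vary slightly:

copy *.admx "\\yourdomain.com\SYSVOL\yourdomain.com\Policies\PolicyDefinitions\"
copy en-us\*.adml "\\yourdomain.com\SYSVOL\yourdomain.com\Policies\PolicyDefinitions\en-us\"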

Shared Computer Activation

In order to properly configure and activate Office 365 in a Remote Desktop Services Environment, you will need to Install Office 365 with Shared Computer Activation. You can read my guide by clicking on the link.
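For reference, Shared Computer Activation is enabled at install time through the Office Deployment Tool’s configuration XML. A minimal sketch is below (the channel, product ID, and display options are examples only; my SCA install guide linked above covers the full process):

<Configuration>
  <Add OfficeClientEdition="64" Channel="MonthlyEnterprise">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
    </Product>
  </Add>
  <Property Name="SharedComputerLicensing" Value="1" />
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>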

Configure Office 365

Once you’re ready to go, you can begin configuration.

To keep things as simple as possible and centrally manage every aspect of your O365 deployment, we want to configure everything via GPO (Group Policy Objects). This lets us handle everything from “first run configuration” through to rolling out a standardized configuration to users.

In order to modify GPOs, you’ll need to either launch the Group Policy Management MMC from a domain controller, or Install RSAT (Remote Server Administration Tools) on Windows 10 to use the MMC from your local computer or workstation.

You’ll probably want to create an OU (Organizational Unit) inside of Active Directory for your RDS farm, and then create a new Group Policy Object and link it to that OU.
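If you prefer PowerShell for this part, a hypothetical sketch using the ActiveDirectory and GroupPolicy modules is below (the OU name, GPO name, and domain DN are placeholders for your own environment):

# Create an OU for the RDS farm and a new GPO linked to it
New-ADOrganizationalUnit -Name "RDS Farm" -Path "DC=yourdomain,DC=com"
New-GPO -Name "RDS - Office 365 Configuration" | New-GPLink -Target "OU=RDS Farm,DC=yourdomain,DC=com"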

We’ll be configuring the following “Computer Configuration” items:

  1. Microsoft Office – Licensing Configuration
  2. Microsoft Office – Update Configuration
  3. Microsoft OneDrive – Known Folders, Use OneDrive Files On-Demand
  4. Windows – Group Policy Loopback Processing Mode

We’ll also be configuring the following “User Configuration” items:

  1. Microsoft Office – First Run Configuration
  2. Microsoft Office – Block Personal Microsoft Account Sign-in
  3. Microsoft Office – Subscription/Licensing Activation
  4. Microsoft Outlook – Disable E-Mail Account Configuration
  5. Microsoft Outlook – Exchange account profile configuration
  6. Microsoft Outlook – Disable Cached Exchange Mode

Let’s start!

Microsoft Office – Licensing Configuration

Since we’re using SCA (Shared Computer Activation) for licensing, we need to specify where to store the users’ activation tokens. You may have configured a special location for these, or you may just store them with your user profiles.

First we need to activate Shared Computer Activation. Navigate to:

Computer Configuration -> Policies -> Administrative Templates -> Microsoft Office 2016 (Machine) -> Licensing Settings

And set “Use shared computer activation” to Enabled.

Next we’ll set “Specify the location to save the licensing token used by shared computer activation” to the location where you’d like to store the activation tokens. As an example, to store to the User Profile share, I’d use the following:

\\PROFILE-SERVER\UserProfiles$\%USERNAME%
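As an aside, Microsoft also documents a pair of registry values, “SCLCacheOverride” and “SCLCacheOverrideDirectory”, for controlling the token location outside of Group Policy. If you ever need to verify what a session host is actually using, you can query them (the key path below is from memory of the Click-to-Run documentation, so confirm it against Microsoft’s shared computer activation docs):

reg query "HKLM\SOFTWARE\Microsoft\Office\ClickToRun\Configuration" /v SCLCacheOverride
reg query "HKLM\SOFTWARE\Microsoft\Office\ClickToRun\Configuration" /v SCLCacheOverrideDirectory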

Microsoft Office – Update Configuration

Because this is a Remote Desktop Services server, we want automatic updating disabled since IT will manage the updates.

We’ll disable updates by navigating to:

Computer Configuration -> Policies -> Administrative Templates -> Microsoft Office 2016 (Machine) -> Updates

And set “Enable Automatic Updates” to Disabled.

We’ll also set “Hide option to enable or disable updates” to Enabled to hide it from the users.

Microsoft OneDrive – Known Folders, Use OneDrive Files On-Demand

There’s some basic configuration for OneDrive that we’ll want to set, as we don’t want our users’ profile folders being copied or redirected to OneDrive, and we also want OneDrive to use Files On-Demand so that users’ OneDrive contents aren’t cached/copied to the RDS Server.

We’ll navigate over to:

Computer Configuration -> Policies -> Administrative Templates -> OneDrive

And set the following GPO objects:

  • “Prevent users from moving their Windows known folders to OneDrive” to Enabled
  • “Prevent users from redirecting their Windows known folders to their PC” to Enabled
  • “Prompt users to move Windows known folders to OneDrive” to Disabled
  • “Use OneDrive Files On-Demand” to Enabled

We’ve now configured OneDrive for RDS Users.

Windows – Group Policy Loopback Processing Mode

Since we’ll be applying the above “Computer Configuration” GPO settings to users when they log on to the RDS Server, we’ll need to activate Loopback Processing of Group Policy (click the link for more information). This allows the “Computer Configuration” to be applied during User Logon and take higher precedence over their existing User Settings.

We’ll navigate to the following:

Computer Configuration -> Policies -> Administrative Templates -> System -> Group Policy

And set “Configure user Group Policy loopback processing mode” to Enabled, and “Mode” to Merge.

Microsoft Office – First Run Configuration

As most of you know, when running Microsoft Office 365 for the first time, there are numerous windows, movies, and wizards on the first run. We want to disable all of this so that Office appears pre-configured to the user, allowing them to just log on and start working.

We’ll head over to:

User Configuration -> Policies -> Administrative Templates -> Microsoft Office 2016 -> First Run

And set the following items:

  • “Disable First Run Movie” to Enabled
  • “Disable Office First Run on application boot” to Enabled

Microsoft Office – Block Personal Microsoft Account Sign-in

Since we’re paying for Microsoft 365 accounts and want users to sign in with them rather than their personal accounts, we’ll stop them from being able to add personal Microsoft Accounts to Office 365.

Head over to:

User Configuration -> Policies -> Administrative Templates -> Microsoft Office 2016 -> Miscellaneous

And set “Block signing into Office” to Enabled, and then set the additional option to “Organization ID only”.

Microsoft Office – Subscription/Licensing Activation

Earlier in the post we configured Office 365 to use SCA; now we’ll need to configure how it’s activated. We don’t want the activation window shown to the user, nor any configuration required on their part, so we’ll configure Office 365 to automatically activate using SSO (Single Sign-On).

Navigate to:

User Configuration -> Policies -> Administrative Templates -> Microsoft Office 2016 -> Subscription Activation

And then set “Automatically activate Office with federated organization credentials” to Enabled.

Microsoft Outlook – Disable E-Mail Account Configuration

We’ll be configuring the e-mail profiles for the users so that no initial configuration will be needed. Again, just another step to let them log in and get to work right away.

Inside of:

User Configuration -> Policies -> Administrative Templates -> Microsoft Outlook 2016 -> Account Settings -> E-mail

And we’ll set the following:

  • “Prevent Office 365 E-mail accounts from being configured within a simplified Interface” to Disabled
  • “Prevent Outlook from interacting with the account settings detection service” to Enabled

Microsoft Outlook – Exchange account profile configuration

We’ll want your users’ Outlook profiles to be auto-configured for their Exchange account, so we’ll need to configure the following setting.

Navigate to:

User Configuration -> Policies -> Administrative Templates -> Microsoft Outlook 2016 -> Account Settings -> Exchange

And set “Automatically configure profile based on Active Directory Primary SMTP address” to Enabled.

After setting this, Outlook will automatically add the Exchange account when the user opens it, and they’ll be ready to go! Note that there is an additional setting with a similar name appended with “One Time Only”; using that variant applies the configuration only once, rather than on every subsequent Outlook run.

Microsoft Outlook – Disable Cached Exchange Mode

Since we’ll have numerous users using the RDS server or servers, we don’t want users’ cached Outlook mailboxes (OST files) stored on the RDS server. We can stop this by disabling Exchange caching.

Navigate to:

User Configuration -> Policies -> Administrative Templates -> Microsoft Outlook 2016 -> Account Settings -> Exchange -> Cached Exchange Mode

And we’ll set the two following settings:

  • “Cached Exchange Mode (File | Cached Exchange Mode)” to Disabled
  • “Use Cached Exchange Mode for new and existing Outlook profiles” to Disabled
May 15 2021
 
Image of an AMD S7150 X2 MxGPU GPU Graphics Card

The AMD S7150 x2 PCIe MxGPU is a graphics card designed for multi-user (MxGPU) virtualized environments (VDI). Installing an AMD S7150 x2 MxGPU allows you to provision virtual GPUs to virtual workstations, enabling 3D acceleration for engineering applications, gaming, or pretty much anything that requires accelerated graphics.

Being a big fan of VDI and having my own VDI homelab, I just had to get my hands on one of these cards to experiment with, and learn. It’s an older card that was released in February of 2016, but it’s perfect for the homelab enthusiast.

I secured one and here’s a story about how I got it working on an unsupported 1U HPE DL360p Gen8 Server.

AMD S7150 x2 Specifications

The S7150 x2 features 2 physical GPUs, each with 8GB of Video RAM, while its little brother, the “S7150”, has one GPU and 8GB of Video RAM.

For cooling, the S7150x2 requires the server to cool the card (it has no active cooling or fans), whereas the S7150 is available as both active (with fan) cooling, and passive cooling.

This card supports older versions of VMware ESXi, such as 6.5, and also some versions of Citrix XenServer.

AMD MxGPU Overview

A picture of an AMD S7150 x2 PCIe mxGPU Card
AMD S7150 x2 PCIe mxGPU Card

The AMD MxGPU technology uses SR-IOV to create Virtual Functions (VFs) that can be attached to virtual machines.

The S7150 x2, with its 2 GPUs, can actually be carved up into 32 VFs (16 per GPU), providing 32 users with 3D accelerated graphics.

Additionally, you can simply pass through the individual GPUs to VMs themselves without using SR-IOV and VFs, providing 2 users with vDGA PCIe passthrough 3D accelerated graphics. vDGA stands for “Virtual Dedicated Graphics Acceleration”.

Please Note: In order to use MxGPU capabilities, you must have a server that supports SR-IOV and be using a version of VMware that is compatible with the MxGPU drivers and configuration utility.

The AMD FirePro S7150 x2 does not have any video-out connectors or ports, this card is strictly designed to be used in virtual environments.

The AMD S7150 x2 connected to a HPE DL360p Gen8 Server

As most of you know, I maintain a homelab for training, learning, testing, and demo purposes. I’ve had the S7150 x2 for about 7 months or so, but haven’t been able to use it because I don’t have the proper server.

Securing the proper server is out of the question due to the expense as I fund the majority of my homelab myself, and no vendor has offered to provide me with a server yet (hint hint, nudge nudge).

I do have an HPE ML310e Gen8 v2 server that had an NVidia Grid K1 card, which can physically fit and cool the S7150 x2; however, it’s an entry-level server and there are bugs and issues with PCIe passthrough. This means both vDGA and MxGPU are out of the question.

Image of a AMD S7150 X2 side by side with an Nvidia GRID K1 GPU Graphics Card
AMD S7150 X2 side by side with an Nvidia GRID K1 GPU Graphics Card

All I have left are 2 x HPE DL360P Gen 8 Servers. They don’t fit double width PCIe cards, they aren’t on the supported list, and they can’t power the card, but HEY, I’m going to make this work!

Connecting the Card

To connect the card to the server, I purchased a “LINKUP – 75cm PCIe 3.0 x16 Shielded PCI Express Extension Cable”. This is essentially just a very long PCIe extension ribbon cable.

I connected this to the inside of the server, gently folded the cable and fed it out the back of the server.

Picture of a Server with PCIe Extension Ribbon Cable to an external GPU
Server with PCIe Extension Ribbon Cable to an external GPU

I realized that when the cable came in contact with the metal frame, it actually peeled the rubber off the ribbon cable (the edges are very sharp), so be careful if you attempt this. Thankfully the cable is shielded and I didn’t cause any damage.

Cooling the Card

Cooling the card was one of the most difficult tasks. I couldn’t even test the card when I first purchased it, because after powering up a computer, it would instantly reach extremely high temperatures. This forced me to power down the system before the OS even booted.

I purchased a couple of 3D printed cooling kits off eBay, but unfortunately none worked as they were for Nvidia cards. Then one day I checked again and finally found a 3D printed cooling solution made specifically for the AMD S7150 x2.

Image of a AMD S7150 X2 Cooling Shroud and Fan
AMD S7150 X2 Cooling Shroud and Fan

As you can see, the kit included a 3D printed air baffle and a fan. I had to remove the metal holding bracket to install the air baffle.

I also had to purchase a PWM fan control module, as the fan included with the kit runs at 18,000 RPM. The exact item I purchased was a “Noctua NA-FC1, 4-Pin PWM Fan Controller”.

Image of a PWM Fan Control Module
PWM Fan Control Module

Once I installed the controller, I was able to run some tests adjusting the RPM while monitoring the temperatures of the card, and got the fan to a speed where it wasn’t audible, yet was able to cool and keep the GPUs between 40-51 degrees Celsius.

Powering the Card

The next problem I had to overcome was to power the card with it being external.

To do this, I purchased a Gigabyte P750GM Modular Power Supply. I chose this specific PSU because it’s modular and I only had to install the cables I required (the 6-pin power cable, the 8-pin power cable, the ATX power cable for the PSU power-on switch, and a fan power connector).

Picture of a Gigabyte P750GM Modular Power Supply (PSU)
Gigabyte P750GM Modular Power Supply (PSU)

As you can see in the picture below, I did not install all the cabling in the PSU.

Image of a Modular PSU Connected to AMD S7150 x2
Modular PSU Connected to AMD S7150 x2

As you can see, it came together quite nicely. I also had to purchase an ATX power-on adapter to short certain pins in order to power on the PSU.

Picture of ATX PSU Jump Adapter
ATX PSU Jump Adapter

I fed this cable under the PSU and it hangs underneath the desk, out of the way. Some day I might make my own adapter so I can remove the ATX power connector, but unfortunately the pinouts on the PSU don’t match the end of the ATX connector cable.

Image of Side view of external S7150 x2 GPU on Server
Side view of external S7150 x2 GPU on Server

It’s about as neat and tidy as it can be, being a hacked up solution.

Using the card

Overall, by the time I was done connecting it to the server, I was pretty happy with the cleaned up final result.

AMD S7150 x2 connected to HPE Proliant DL360p Gen8 Server
AMD S7150 x2 connected to HPE Proliant DL360p Gen8 Server

After booting the system, I noticed that VMware ESXi 6.5 detected the card and both GPUs.

Screenshot of AMD S7150 X2 PCIe Passthru ESXi 6.5
AMD S7150 X2 PCIe Passthru ESXi 6.5

You’ll notice that on the server, the GPUs show up as an “AMD Tonga S7150”.

Before I started to play around with the MxGPU software, I wanted to simply pass through an entire GPU to a VM for testing. I enabled ESXi Passthru on both GPUs, and restarted the server.

So far so good!

I already had a persistent VDI VM configured and ready to go, so I edited the VM properties, and attached one of the AMD S7150 x2 GPUs to the VM.

Screenshot of Attached S7150 x2 Tonga GPU to vSphere VDI VM PCIe Passthru
Attached S7150 x2 Tonga GPU to vSphere VDI VM PCIe Passthru

Booting the VM, I was able to see the card, and I installed the AMD Radeon FirePro drivers. Everything just worked! “dxdiag” was showing full 3D acceleration, and I confirmed that hardware H.264 offload with the VMware Horizon Agent was functioning (confirmed via BLAST session logs).

That was easy! 🙂

Issues

Now on to the issues. After spending numerous days, I was unable to get the MxGPU features working with the AMD Radeon FirePro drivers for VMware ESXi. However, thanks to a reader named TonyJr, I was eventually able to solve this, but more on that later (keep reading).

Even though I had the drivers and the scripts installed, they were unable to create the VFs (Virtual Functions) with SR-IOV. From the limited amount of information available online, I came to believe that this was due to an SR-IOV bug on the Gen8 platform that I’m running (remember, this is completely and utterly NOT SUPPORTED).

If anyone is interested, the commands worked and the drivers loaded, but it just never created the functions on reboot. I also tried using the newer drivers for the V340 card, with no luck as the module wouldn’t even load.

Here is an example of the configuration script:

[root@DA-ESX03:/vmfs/volumes/5d40aefe-030ee1d6-df44-ecb1d7f30334/files/mxgpu] sh mxgpu-install.sh -c
Detected 2 SR-IOV GPU
0000:06:00.0 Display controller VGA compatible controller: AMD Tonga S7150 [vmgfx0]
0000:08:00.0 Display controller VGA compatible controller: AMD Tonga S7150 [vmgfx1]
Start configuration....
Do you plan to use the Radeon Pro Settings vSphere plugin to configure MxGPU? ([Y]es/[N]o, default:N)n
Default Mode
Enter the configuration mode([A]uto/[H]ybrid,default:A)a
Auto Mode Selected
Please enter number of VFs:(default:4): 2
Configuring the GPU 1 ...
0000:06:00.0 VGA compatible controller: AMD Tonga S7150 [vmgfx0]
GPU1=2,B6
Configuring the GPU 2 ...
0000:08:00.0 VGA compatible controller: AMD Tonga S7150 [vmgfx1]
GPU2=2,B8
Setting up SR-IOV settings...
Done
pciHole.start = 2048
pciHole.end = 4543
Eligible VMs:
DA-VDIWS01
DA-VDIWS02
DA-VDIUbuntu01
DA-MxGPU
PCI Hole settings will be added to these VMs. Is this OK?[Y/N]n
User Exit
The configuration needs a reboot to take effect

To automatically assign VFs, please run "sh mxgpu-install.sh -a" after system reboot
[root@DA-ESX03:/vmfs/volumes/5d40aefe-030ee1d6-df44-ecb1d7f30334/files/mxgpu]

And as mentioned, on reboot I would only be left with the actual 2 physical GPUs available for passthru.

I also tried using the “esxcfg-module” utility to configure the driver, but that didn’t work either.

esxcfg-module -s "adapter1_conf=9,0,0,4,2048,4000" amdgpuv
esxcfg-module -s "adapter1_conf=9,0,0,2,4096,4000 adapter2_conf=11,0,0,2,4096,4000" amdgpuv

Both combinations failed to have any effect on creating the VFs. It was unfortunate, but I still had 2 separate GPUs that I could pass through to 2 VDI VMs, which is more than enough for me.

Issues (Update June 19 2022)

Thanks to “TonyJr” leaving a comment, I was able to get the MxGPU drivers functioning on the ESXi host.

To get SR-IOV and the drivers to function, I had to perform the following:

  1. Log on to the BIOS
  2. Press Ctrl+A which unlocked a secret Menu called “SERVICE OPTIONS”
  3. Open “SERVICE OPTIONS”
  4. Select “PCI Express 64-Bit BAR Support”, choose “Enable” and then reboot the server.

Upon reboot, the ESXi instance had actually already sliced up the S7150 MxGPU using the options I tried configuring above. It’s all working now!

Ultimately I tweaked the settings to only slice one of the two GPUs in to 2 VFs, leaving me with a full GPU for passthrough, as well as 2 VFs from the other GPU. Thanks TonyJr!
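If you want to confirm the VFs actually exist after the reboot, listing the PCI devices on the ESXi host should show the virtual functions alongside the two physical GPUs. A quick check (output will vary by host and configuration):

lspci | grep -i "AMD"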

Horizon View with the S7150 x2

Right off the bat, I have to say this works AMAZING! I’ve been using this for about 4 weeks now without any issues (and no fires, lol).

As mentioned above, because of my initial issues with SR-IOV on the server I couldn’t utilize MxGPU, but I do have 2 full GPUs, each with 8GB of VRAM, that I can pass through to VDI Virtual Machines using vDGA. Let’s get into the experience…

Similar to the experience with the Nvidia GRID K1 card, the S7150 x2 provides powerful 3D acceleration and GPU functionality to Windows VDI VMs. Animations, rendering, gaming, it all works and it’s all 3D accelerated!

I’ve even tested the S7150 x2 with my video editing software to edit and encode videos. No complaints and it works just like a desktop system with a high performance GPU would. Imagine video editing on the road with nothing but a cheap laptop and the VMware Horizon client software!

The card also offloads encoding of the VMware BLAST h.264 stream from the CPU to the GPU. This is what actually compresses the video display feed that goes from the VM to your VMware View client. This provides a smoother experience with no delay or lag, and frees up a ton of CPU cycles. Traditionally without a GPU to offload the encoding, the h.264 BLAST stream uses up a lot of CPU resources and bogs down the VDI VM (and the server it’s running on).

Unfortunately, I don’t have any of the engineering, mapping, or business applications this card was actually designed for to test with, but you have to remember this card was built to provide VDI users with a powerful workstation experience.

It would be amazing if AMD (and other vendors) released more cards that could provide these capabilities, both for the enterprise as well as enthusiasts and their homelab.

May 14 2021
 

Welcome to Episode 02 of The Tech Journal Vlog at StephenWagner.com

In this episode

What I’ve done this week

  • 10ZiG Unboxing (10ZiG 4610q and 10ZiG 6110)
  • Thin Client Blogging and Video Creation
  • VDI Work (Instant Clones, NVME Flash Storage Server)

Fun Stuff

  • HPE Discover 2021 – June 22 to June 24 – Register for HPE Discover at https://infl.tv/jtHb
  • Firewall with 163 day uptime and no updates?!?!?
  • Microsoft Exchange Repeated Pending Reboot Issue
  • Microsoft Exchange Security Update KB5001779 (and CU18 to CU20)

Life Update

  • Earned VMware vExpert Status in February!
  • Starlink in Saskatchewan, Alberta (Canada)
    • VDI over Starlink, low latency!!!
    • Use Cases (Oil and Gas Facilities, etc)

Work Update

  • HPE Simplivity Upgrade (w/Identity Store Issues, Mellanox Firmware Issues)

New Blog Posts

Current Projects

  • 10ZiG 4610q Thin Client Content
  • 10ZiG 6110 Thin Client Content
  • VMware Horizon Instant Clones Guides and Content

Don’t forget to like and subscribe!
Leave a comment, feedback, or suggestions!

May 12 2021
 

When attempting to install a Microsoft Exchange Cumulative Update, the readiness checker may fail and stop you from proceeding with the upgrade and installation.

You will be presented with the following error, or one similar:

There is a pending reboot from a previous installation of a Windows Server role or feature. Please restart the computer and then run Setup again.

After restarting the server, and re-attempting to install the Exchange CU, it will continue to present this and stop you from proceeding with the installation.

The Problem

There are a few different things that can cause this. I experienced this issue when trying to upgrade Exchange 2016 CU18 to Exchange 2016 CU20. This issue can also happen when upgrading between Microsoft Exchange 2019 CU versions, as well as on earlier versions such as Exchange 2013.

I found a few posts online suggesting you delete two registry values, “UpdateExeVolatile” and “PendingFileRenameOperations”, however these didn’t exist for me.
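If you want to check whether those values exist on your own server before going further, you can query them directly (these are the paths commonly referenced for pending-reboot checks):

reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations
reg query "HKLM\SOFTWARE\Microsoft\Updates" /v UpdateExeVolatile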

The Fix

I figured I’d try installing a feature, specifically something small that I may or may not ever use, to see whether it would clear whatever flag had been set for the pending restart.

First, I left the Exchange CU installer window open on the prerequisite check, opened the Server Manager and installed the TFTP Client. After finishing, I hit retry and it continued to fail.

I restarted the server and ran the CU installer again, which got stuck on the same pending restart check. This time I closed the Exchange CU upgrade, installed the “Telnet Client” feature, opened the CU upgrade again, and it finally worked and proceeded!

Screenshot of Exchange Pending Reboot Feature Install workaround
Exchange Pending Reboot Feature Install workaround

So with the above in mind, to bypass this issue you must:

  1. Restart Server
  2. Launch Exchange CU Installer
  3. Wait for the readiness check to fail (warning of a pending reboot), then close the installer
  4. Install a feature with Server Manager, such as the “TFTP Client” or “Telnet Client” (a PowerShell alternative is shown below this list)
  5. Open Exchange CU Installer
  6. Install Microsoft Exchange Cumulative Update successfully!
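As an alternative to Server Manager in step 4, the same feature can be added from an elevated PowerShell prompt (using the Telnet Client as an example):

Install-WindowsFeature -Name Telnet-Client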

Hope this helps! Leave a comment and let me know if it worked for you!

May 10 2021
 

Welcome to Episode 01 of The Tech Journal Vlog at StephenWagner.com

In this episode

Life Update

  • Tons of work
  • Staycations (Banff, Jasper, Kananaskis, Panorama)
  • More time working on the blog! 🙂

Work Update

  • Tons of VDI, non-stop…

New Blog Posts

Current Projects

  • AMD S7150 x2 MxGPU
  • 10ZiG Thin Clients

Don’t forget to like and subscribe!
Leave a comment, feedback, or suggestions!

May 04 2021
 
Zoom Logo

Looking at setting up Zoom for VDI in your Virtual Desktop Infrastructure?

In this post, I will guide you on how to deploy Zoom for VDI and the Zoom VDI Plugin in your VMware Horizon View VDI Infrastructure. There is also a Zoom VDI Plugin for Citrix XenDesktop and WVD (Windows Virtual Desktop) in addition to VMware Horizon.

While these instructions are targeted for VMware Horizon VDI environments, the process is very similar for Citrix XenDesktop.

Please make sure to read Zoom’s documentation on “Getting started with VDI“, and Zoom’s “VDI Client Features Comparison“, to understand the differences in the Zoom clients.

Requirements

To get started, you’ll need the following:

  • Zoom for VDI MSI Installer (Available here)
  • Zoom VDI Plugin Installer (Available here)
  • Zoom Active Directory GPO ADMX Template (Available here)
  • Zoom VDI Registry Settings (Available here)
  • VMware Horizon client on Windows or compatible Thin Client
  • VDI Desktop or Base Image
  • Endpoints must have internet access

Background

Just like with Microsoft Teams, before Zoom’s VDI client, VMware’s RTAV (Real-time Audio-Video) was used to handle multimedia. This offloaded audio and video to the VMware Horizon Client utilizing a dedicated channel over the connection to optimize the data exchange. With minor tweaks (check out my post on enhancing RTAV webcam with VMware Horizon), this actually worked quite well with the exception of microphone quality on the end-users side, and high bandwidth requirements.

Using Zoom for VDI and the Zoom VDI Plugin, Zoom will offload video encoding and decoding from the VDI Virtual Machine (in a more optimized way than RTAV), and the endpoint will communicate directly with Zoom’s infrastructure. And, just like Microsoft Teams Optimization, this is one less hop for data, one less processing point, and one less load on your server infrastructure.

When using Zoom for VDI, there are some limitations. Please review Zoom’s application comparison.

Deploying Zoom for VDI

There are two components involved in deploying Zoom for VDI.

  • Zoom for VDI Application on VDI Virtual Machine (or Image)
  • Zoom VDI Plugin installed on the client system connecting to the VDI session (Computer, Thin Client, Zero Client)

It’s pretty straightforward. We just need to have the Zoom for VDI application installed on the VDI Virtual Machine (and/or base image), and have the plugin installed on the computer or thin client that we are connecting with.

Zoom for VDI About Screenshot
Zoom for VDI About Screenshot

Zoom is highly configurable both with a GPO (Group Policy Object) and registry settings. Please make sure you load up the Zoom Active Directory ADMX Templates and configure them appropriately for your environment and deployment.

More information on the Zoom Active Directory ADMX Template is available at Zoom’s “Group Policy Options for the Windows desktop client and Zoom Rooms“. You can also find information on Zoom’s VDI Client Registry settings here.

These GPOs are especially needed for non-persistent VDI (Instant Clones), both for autoconfiguration and SSO (Single Sign-On) when the user opens the application, and to tweak numerous other configurables.

Zoom for VDI Application Installation on VDI VM or Base Image

For the first part of deployment, we’ll need to install the Zoom for VDI application inside of our VDI VM or bundle it inside of our Base Image (if you’re using instant clones).

Since this is an MSI file, it’s easy to deploy. For a list of full MSI switches, please visit Zoom’s “Mass Installation and Configuration for Windows” document.

Installation

To deploy in your existing infrastructure using persistent desktop pools, you can deploy the MSI via Group Policy Objects.

To deploy in your existing infrastructure using non-persistent desktop pools (Instant Clones), you can install Zoom for VDI in your base image, and then re-push the image/snapshot.

To manually install on an existing VDI Virtual Machine, you can double click the MSI, or run the following command:

msiexec /package ZoomInstallerVDI.msi
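If you’re scripting the install into a base image, the standard Windows Installer silent switches can be appended as well (see Zoom’s mass installation documentation for the Zoom-specific properties):

msiexec /i ZoomInstallerVDI.msi /quiet /norestart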

And that’s it! Make sure you have your Zoom GPO and/or registry settings configured as well.

Zoom VDI Plugin Installation on Client Computer or Thin Client

For the second part of deployment, we need to load the Zoom VDI Plugin on the connecting client computer and/or thin client.

The Zoom VDI plugin is available for numerous operating systems and thin clients, such as Windows, Mac, Mac (ARM), Linux (CentOS, Ubuntu), HP ThinPro thin clients, Dell ThinOS thin clients, and more!

Client Plugin Installation

The steps will vary depending on the computer or device you’re connecting with so you’ll want to download the appropriate plugin and install it.

As an example, to install the Zoom VDI Plugin manually on a Windows Client running VMware Horizon View Client:

  1. Download the appropriate Zoom for VDI plugin
  2. Install
  3. Restart

It’s actually that easy. You can also deploy the MSI file via Active Directory GPO or your application and infrastructure management platform if you’re installing it on to a large number of systems.

Conclusion

As you can see, it’s pretty easy to get up and running with Zoom for VDI. When deploying VDI, make sure you give your users the tools and applications they need to be productive. Including Zoom for VDI in your deployment is a no-brainer!

One last thing I want to mention is that you can have both the traditional Zoom Desktop and Zoom for VDI applications installed at the same time. In my own high performance environment, I chose to have and use both due to the limitations of the Zoom for VDI application. When using the traditional Zoom Desktop application, VMware RTAV will be used if configured, and it still works great!

Leave a comment!

May 03 2021
 

This guide will show you how to install Microsoft (Classic) Teams and deploy Microsoft Teams VDI Optimization on VMware Horizon for Manual Pools, Automated Pools, and Instant Clone Pools, for use with both persistent and non-persistent VDI. This guide works for Microsoft Teams on Windows 10 and Windows 11, including the new Windows 11 22H2.

Please see my post Deploy and install the New Teams for VDI to learn how to deploy the new Teams client for VDI. The Classic client will go end of support on June 30, 2024.

Please make sure to check out Microsoft’s documentation on “Teams for Virtualized Desktop Infrastructure“, and VMware’s document “Microsoft Teams Optimization with VMware Horizon” for more information.

I also have a guide on how to Deploy, Install, and Configure Microsoft Office 365 in a VDI Environment, so make sure you check it out!

Requirements

To get started, you’ll need the following:

  • Microsoft Teams MSI Installer (Available here: 64-Bit, 32-Bit)
  • VMware Horizon Client (Available here)
  • VDI Desktop or VDI Base Image
  • Ability to create and/or modify GPOs on domain
  • VMware Horizon GPO Bundle

Background

Before Microsoft Teams VDI Optimization, VMware’s RTAV (Real-Time Audio-Video) was generally used. This offloaded audio and video to the VMware Horizon Client utilizing a dedicated channel over the connection to optimize the data exchange. With minor tweaks (check out my post on enhancing RTAV webcam with VMware Horizon), this actually worked quite well with the exception of microphone quality on the end-users side, and high bandwidth requirements.

Starting with Horizon View 7.13 and Horizon View 8 (2006), VMware Horizon now supports Microsoft Teams Optimization. This technology offloads the Teams call directly to the endpoint (or client device), essentially drawing over the VDI VM’s Teams visual interface and not involving the VDI Virtual Machine at all. The client application (or thin client) handles this and connects directly to the internet for the Teams Call. One less hop for data, one less processing point, and one less load off your server infrastructure.

Microsoft Teams Optimization uses WebRTC to function.

Deploying Microsoft Teams Optimization on VMware Horizon VDI

There are two components required to deploy Microsoft Teams Optimization for VDI.

  • Microsoft Specific Setup and Configuration of Microsoft Teams
  • VMware Specific Setup and Configuration for Microsoft Teams

We’ll cover both in this blog post.

Microsoft Specific Setup and Configuration of Microsoft Teams Optimization

First and foremost, do NOT bundle the Microsoft Teams install with your Microsoft 365 (Office 365) deployment, they should be installed separately.

We’re going to be installing Microsoft Teams using the “per-machine” method, where it’s installed in the Program Files of the OS, instead of the usual “per-user” install where it’s installed in the user “AppData” folder.

Non-persistent (Instant Clones) VDI requires Microsoft Teams to be installed “Per-Machine”, whereas persistent VDI can use both “Per-Machine” and “Per-User” for Teams. I use the “Per-Machine” for almost all VDI deployments. This allows you to manage versions utilizing MSIs and GPOs.

Please note that when using “Per-Machine”, automatic updates are disabled. In order to upgrade Teams, you’ll need to re-install the newer version. Take this into account when planning your deployment. If you use the per-user install, it will auto-update.

For Teams Optimization to work, your endpoints and/or clients MUST have internet access.

Let’s Install Microsoft Teams (VDI Optimized)

For Per-Machine (Non-Persistent Desktops) Install, use the following command:

msiexec /i C:\Location\Teams_windows_x64.msi ALLUSER=1 ALLUSERS=1

For Per-User (Persistent VDI) Install, you can use the following command:

msiexec /i C:\Location\Teams_windows_x64.msi ALLUSERS=1

If in the event you need to uninstall Microsoft Teams to deploy an upgrade, you can use the following command:

msiexec /passive /x C:\Location\Teams_windows_x64.msi

And that’s it for the Microsoft Specific side of things!

VMware Specific Setup and Configuration for Microsoft Teams Optimization

When it comes to the VMware Specific Setup and Configuration for Microsoft Teams Optimization, it’s a little bit more complex.

VMware Horizon Client Installation

When installing the VMware Horizon Client, the Microsoft Teams optimization feature should be installed by default. However, if you’re doing a custom install, make sure that “Media Optimization for Microsoft Teams” is enabled (as per the screenshot below):

Screenshot of VMware View Client Install with Microsoft Teams Optimization
VMware View Client Install with Microsoft Teams Optimization

Group Policy Object to enable WebRTC and Microsoft Teams Optimization

You’ll only want to configure GPOs for those users and sessions where you plan on actually utilizing Microsoft Teams Optimization. Do not apply these GPOs to endpoints where you wish to use RTAV and don’t want to use Teams optimization, as it will enforce some limitations that come with the technology (explained in Microsoft’s documentation).

We’ll need to enable VMware HTML5 Features and Microsoft Teams Optimization (WebRTC) inside of Group Policy. Head over and open your existing VDI GPO or create a new GPO. You’ll need to make sure you’ve installed the latest VMware Horizon GPO Bundle. There are two switches we need to set to “Enabled”.

Expand the following, and set “Enable HTML5 Features” to “Enabled”:

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> Enable VMware HTML5 Features

Next, we’ll set “Enable Media Optimization for Microsoft Teams” to “Enabled”. You’ll find it in the following:

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> VMware WebRTC Redirection Features -> Enable Media Optimization for Microsoft Teams

And that’s it, your GPOs are now configured.

If you’re running a persistent desktop, run “gpupdate /force” in an elevated command prompt to grab the updated GPOs. If you’re running a non-persistent desktop pool, you’ll need to push the base image snapshot again so your instant clones will have the latest GPOs.

Confirming Microsoft Teams Optimization for VDI

There’s a simple and easy way to test if you’re currently running Microsoft Teams Optimized for VDI.

  1. Open Microsoft Teams
  2. Click on your Profile Picture to the right of your Company Name
  3. Expand “About”, and select “Version”
Screenshot of Microsoft Teams - About and Version to check Teams Optimization for VDI
Microsoft Teams – About and Version to check Teams Optimization for VDI

After selecting this, you’ll see a toolbar appear horizontally underneath the search, company name, and your profile picture with some information. Please see the below examples to determine if you’re running in 1 of 3 modes.

The following indicates that Microsoft Teams is running in normal mode (VDI Teams Optimization is Disabled). If you have configured VMware RTAV, then it will be using RTAV.

Screenshot indicator of Microsoft Teams VDI Optimization disabled
Microsoft Teams VDI Optimization disabled

The following indicates that Microsoft Teams is running in VDI Optimized mode (VDI Teams Optimization is Enabled showing “VMware Media Optimized”).

Screenshot indicator of Microsoft Teams VDI Optimization enabled
Microsoft Teams VDI Optimization enabled

The following indicates that Microsoft Teams is configured for VDI Optimization, however is not functioning and running in fallback mode. If you have VMware RTAV configured, it will be falling back to using RTAV. (VDI Teams Optimization is Enabled but not working showing “VMware Media Not Connected”, and is using RTAV if configured).

Screenshot of Microsoft Teams VDI Optimization Fallback
Microsoft Teams VDI Optimization Fallback

If you’re having issues or experiencing unexpected results, please go back and check your work. You may also want to review Microsoft’s and VMware’s documentation.

Conclusion

This guide should get you up and running quickly with Microsoft Teams Optimization for VDI. I’d recommend taking the time to read both VMware’s and Microsoft’s documentation to fully understand the technology, limitations, and other configurables that you can use and fine-tune your VDI deployment.

May 02 2021
 
Ubuntu Orange Logo

In this post, I’m going to provide instructions and a guide on how to install the Horizon Agent for Linux on Ubuntu 20.04 LTS. This will allow you to run and connect to an Ubuntu VDI VM with VMware Horizon View.

In the past I’ve created instructions on how to do this on earlier versions of Ubuntu, as well as on RedHat Linux, but it’s getting easier than ever and requires fewer steps than previous guides.

I decided to create this updated tutorial after purchasing an AMD S7150 x2 and wanting to get it up and running with Ubuntu 20.04 LTS to see if it works.

Screenshot of VMware Horizon for Linux on Ubuntu 20.04 LTS
VMware Horizon for Linux on Ubuntu 20.04 LTS

I also highly recommend reading the documentation made available for VMware Horizon: Setting Up Linux Desktops in Horizon.

Requirements

  • VMware Horizon View 8 (I’m running version 2103)
  • Horizon Enterprise or Horizon for Linux Licensing
  • Horizon VDI environment that’s functioning and working
  • Ubuntu 20.04 LTS Installer ISO (download here)
  • Horizon Agent for Linux (download here)
  • Functioning internal DNS

Instructions

  1. Create a VM on your vCenter Server, attach the Ubuntu 20.04 LTS ISO, and install Ubuntu
  2. Install any Root CAs or modifications you need for network access (usually not needed unless you’re on an enterprise network)
  3. Update Ubuntu as root
    apt update
    apt upgrade
  4. Install software needed for VMware Horizon Agent for Linux as root
    apt install openssh-server python python-dbus python-gobject open-vm-tools-desktop
  5. Install your software (Chrome, etc.)
  6. Install any vGPU or GPU Drivers you need before installing the Horizon Agent
  7. Install the Horizon Agent For Linux as root, enabling audio and disabling SSO (see the extraction note below this list)
    ./install_viewagent.sh -a yes -S no
  8. Reboot the Ubuntu VM
  9. Log on to your Horizon Connection Server
  10. Create a manual pool and configure it
  11. Add the Ubuntu 20.04 LTS VM to the manual desktop pool
  12. Entitle the User account to the desktop pool and assign to the VM
  13. Connect to the Ubuntu 20.04 Linux VDI VM from the VMware Horizon Client
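A quick note on step 7: the Horizon Agent for Linux downloads as a tarball, so it needs to be extracted before the installer can be run. A rough sketch, with the filename depending on the version you downloaded:

tar -xzf VMware-horizonagent-linux-x86_64-*.tar.gz
cd VMware-horizonagent-linux-x86_64-*
sudo ./install_viewagent.sh -a yes -S no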

And that’s it, you should now be running.

As for the AMD S7150 x2, I noticed that Ubuntu 20.04 LTS came with a driver for it called “amdgpu”. Please note that this driver does not work with VMware Horizon View. After installing “mesa-utils” and running “glxgears” and “glxinfo”, it did appear that 3D acceleration was working; however, after further investigation it turned out this was CPU rendering and not the S7150 x2 GPU.

You now have a VDI VM running Ubuntu Linux on VMware Horizon View.

May 01 2021
 

Do you have a VMware Horizon View VDI environment and some power users you’d like to optimize? I’ve got some optimizations that you can easily apply via the VMware Horizon GPO (Group Policy Object) bundle.

These are performance optimizations and configurations that I have rolled out on my own persistent desktop to improve my experience. They may use more resources in exchange for a better experience for power users.

Please note that these optimizations are not meant to be deployed for large numbers of users unless you have the resources to handle it. Always test these settings before rolling out in to production.

VMware Horizon GPO Bundle

As part of any VMware Horizon View deployment, you should have installed the VMware Horizon GPO Bundle. This is a collection of ADMX GPO (Group Policy Object) templates that you can upload to your domain controllers and use to configure various aspects of your VMware Horizon deployment.

These GPOs can be used to configure the server, VDI VMs, and VMware Horizon Clients, as well as various configurables for the protocols used in your deployment, such as VMware Blast, PCoIP, and RDP.

Below, you’ll find some of my favorite customizations and optimizations that I use in my own environment to enhance my experience.

For more information on the VMware Horizon GPO Bundle, please visit the VMware Horizon Documentation – Using Horizon Group Policy Administrative Template Files.

In this post, I’ll be covering the following:

  1. VMware Blast: Framerate
  2. VMware Blast: H.264 Quality
  3. VMware Blast: Max Session Bandwidth kbit/s Megapixel Slope
  4. VMware Horizon Client Configuration: Allow display scaling
  5. VMware Horizon Client Configuration/View USB Configuration: Allow keyboard and Mouse (HID) Devices
  6. VMware View Agent Configuration/View RTAV Configuration/View RTAV Webcam Settings
  7. VMware View Agent Configuration/VMware HTML5 Features/Enable VMware HTML5 Features
  8. VMware View Agent Configuration/VMware HTML5 Features/VMware HTML5 Multimedia Redirection
  9. VMware View Agent Configuration/VMware HTML5 Features/VMware WebRTC Redirection Features

Let’s begin!

VMware Blast: Framerate

Do you have a GPU for your VDI session and extra bandwidth? If so, let’s crank that framerate up for a smoother experience! Configuring this variable will increase the default framerate to 60 fps (frames per second).

Computer Configuration -> Policies -> Administrative Templates -> VMware Blast -> Max Frame Rate

Let’s set this to “Enabled” and set it to 60.

VMware Blast: H.264 Quality

If you have a GPU to offload H.264 and the available bandwidth, you can change this setting to reduce the amount of compression applied to the session and increase image quality.

Computer Configuration -> Policies -> Administrative Templates -> VMware Blast -> H.264 Quality

There are two values for this setting, “H.264 Maximum QP” and “H.264 Minimum QP”. These control how much processing and compression is used on the VMware Blast H.264 session.

To increase the quality (and bandwidth usage) of the session, you can decrease these values to reduce the amount of compression. In my case I reduced both by “5” from their default values, which made a big difference.

VMware Blast: Max Session Bandwidth kbit/s Megapixel Slope

This setting will increase the amount of available bandwidth for the Horizon Blast h.264 video stream.

Computer Configuration -> Policies -> Administrative Templates -> VMware Blast -> Max Session Bandwidth kbit/s Megapixel Slope

The default is “6200” and I recommend playing with this a little to find out what suits you best depending on the other changes you made, especially with the 2 items above.

You can try doubling, tripling, or quadrupling this value depending on what’s required and how much available bandwidth you have.

VMware Horizon Client Configuration: Allow display scaling

Users are usually connecting from all sorts of devices, including laptops, tablets, and more. When connecting to a VDI session with a laptop or tablet that is using display scaling because it has a high native resolution, it may be extremely difficult to read any text because scaling is disabled on the VDI session.

To allow display scaling in the VDI session, we need to enable it via GPO on both the “Computer Configuration” and “User Configuration”.

Computer Configuration -> Policies -> Administrative Templates -> VMware Horizon Client Configuration -> Allow display scaling

And we’ll set “Allow Display Scaling” to “Enabled”.

User Configuration -> Policies -> Administrative Templates -> VMware Horizon Client Configuration -> Allow display scaling

And we’ll also set that “Allow Display Scaling” to “Enabled”.

Configuring this will allow you to configure display scaling on the VMware Horizon View client. After enabling this, it automatically configures scaling to match what I have configured on my connecting workstation (such as my Microsoft Surface Tablet, or my Lenovo X1 Carbon laptop). You also have the ability to manually configure the scaling on the session.

VMware Horizon Client Configuration/View USB Configuration: Allow keyboard and Mouse Devices

While you never want to use USB redirection for actual keyboards and mice, you may need it for various HID (Human Interface Devices) that appear as keyboards or mice. You may need to enable this to make the following devices work:

  • 2FA/MFA Security Tokens
  • Security Keys
  • One Touch Tokens

In my case, I had a Yubico Yubikey security key that I needed passed through using USB Redirection (more on that here) to authenticate 2FA sessions inside of my VDI session.

To enable the passthrough of keyboards and mice (HID) devices, change the following.

Computer Configuration -> Policies -> Administrative Templates -> VMware Horizon Client Configuration -> View USB Configuration -> Allow keyboard and Mouse Devices

We’re going to go ahead and set “Allow keyboard and Mouse Devices” to “Enabled”.

VMware View Agent Configuration/View RTAV Configuration/View RTAV Webcam Settings

Using a webcam with VMware Horizon and RTAV (Real Time Audio Video), you may notice a slow frame rate and low resolution on your webcam going through the VDI session.

Here, we’re going to increase the fps (frames per second) and resolution of RTAV for VMware Horizon.

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> View RTAV Configuration -> View RTAV Webcam Settings

We’re going to “Enable” the following and set the values below:

Max frames per second = 25
Resolution - Default image resolution height in pixels = 600
Resolution - Default image resolution width in pixels = 800
Resolution - Max image height in pixels = 720
Resolution - Max image width in pixels = 1280

You’ll now notice a clearer and higher resolution webcam running at a faster framerate.

VMware View Agent Configuration/VMware HTML5 Features/Enable VMware HTML5 Features

There are numerous HTML5 optimizations that VMware has incorporated into the latest versions of VMware Horizon View. These include, but are not limited to:

  • HTML5 Multimedia Redirection
  • Geolocation Redirection
  • Browser Redirection
  • Media Optimization for Microsoft Teams

We want all this good stuff, so we’ll head over to the following:

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> Enable VMware HTML5 Features

We’ll set “Enable VMware HTML5 Features” to “Enabled”.

I highly recommend reading up on HTML5 Multimedia Redirection, along with the other Remote Desktop Features, over on the VMware Horizon 2103 Documentation – Configuring Remote Desktop Features.

VMware View Agent Configuration/VMware HTML5 Features/VMware HTML5 Multimedia Redirection

So there’s this little thing called “HTML5 Multimedia Redirection”: when it’s configured and the plugins are installed, VMware Horizon will essentially redirect HTML5-based multimedia from the VDI session to your local system to handle.

This offload makes video extremely crisp and smooth, however it comes with some concerns, security risks, and learning on your part. When you enable this, you only want to do so for trusted websites.

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> VMware HTML5 Multimedia Redirection

In this location, we need to set “Enable VMware HTML5 Multimedia Redirection” to “Enabled”. After this, we need to configure the URL list for domains and websites that we will allow HTML5 Multimedia Redirection to work with.

To do this, we’ll set “Enable URL list for VMware HTML5 Multimedia Redirection” to “Enabled”, and then add YouTube to the exception list to allow HTML5 Multimedia Redirection for YouTube. In the URL list, we will add:

https://www.youtube.com/*

And that’s it!

VMware View Agent Configuration/VMware HTML5 Features/VMware WebRTC Redirection Features

We’re all using Microsoft Teams these days, and while Microsoft Teams does have VDI optimization, you need to enable what’s needed on the VMware Horizon side of things to make it work.

To do this, head over to:

Computer Configuration -> Policies -> Administrative Templates -> VMware View Agent Configuration -> VMware HTML5 Features -> VMware WebRTC Redirection Features

We’ll set “Enable Media Optimization for Microsoft Teams” to “Enabled”.

In order for Microsoft Teams VDI optimization to function, there are steps involved with the installation which aren’t covered in this post. For these steps, make sure you check out my guide on Microsoft Teams VDI Optimization for VMware Horizon.

Conclusion

Leave a comment and let me know if these helped you, or if you have any optimizations or tweaks you’d like to share with the community!

May 01 2021
 
Picture of NVMe Storage Server Project

For over a year and a half I have been working on building a custom NVMe Storage Server for my homelab. I wanted to build a high speed storage system similar to a NAS or SAN, backed with NVMe drives that provides iSCSI, NFS, and SMB Windows File Shares to my network.

The computers accessing the NVMe Storage Server would include VMware ESXi hosts, Raspberry Pi SBCs, and of course Windows Computers and Workstations.

The focus of this project is on high throughput (in the GB/sec) and IOPS.

The current plan for the storage environment is for video editing, as well as VDI VM storage. This can and will change as the project progresses.

The History

More and more businesses are using all-flash NVMe and SSD based storage systems, so I figured there’s no reason why I can’t build my own budget custom all-NVMe flash NAS.

This is the story of how I built my own NVMe based Storage Server.

The first version of the NVMe Storage Server consisted of the IO-PEX40152 card with 4 x 2TB Sabrent Rocket 4 NVMe drives inside of an HPE Proliant DL360p Gen8 Server. The server was running ESXi with TrueNAS virtualized, and the PCIe card passed through to the TrueNAS VM.

The results were great, the performance was amazing, and both servers had access to the NFS export via 2 x 10Gb SFP+ networking.

There were three main problems with this setup:

  1. Virtualized – Once a month I had an ESXi PSOD. This was either due to overheating of the IO-PEX40152 card because of modifications I made, or bugs with the DL360p servers and PCIe passthrough.
  2. NFS instead of iSCSI – Because TrueNAS was virtualized inside of the host that was using it for storage, I had to use NFS, since the host virtualizing TrueNAS would also be accessing the data on the TrueNAS VM. When shutting down the host, you need to shut down TrueNAS first, and NFS disconnects are handled much more gracefully than iSCSI disconnects (which can cause corruption even if no files are being used).
  3. CPU Cores maxed on data transfer – When doing initial testing, I was maxing out the CPU cores assigned to the TrueNAS VM because the data transfers were so high. I needed a CPU and setup that was a better fit.

Version 1 went great, but you can see some things needed to change. I decided to go with a dedicated server, not virtualize TrueNAS, and go for a newer CPU with a higher GHz speed.

And so, version 2 was born (built). Keep reading and scrolling for pictures!

The Hardware

On version 2 of the project, the hardware includes:

Notes on the Hardware:

  • While the ML310e Gen8 v2 server is a cheap low entry server, it’s been a fantastic team member of my homelab.
  • HPE Dual 10G Port 560SFP+ adapters can be found brand new in unsealed boxes on eBay at very attractive prices. Using HPE parts inside of HPE servers keeps the fans from spinning up fast.
  • The ML310e Gen8 v2 has some issues with passing through PCIe cards to ESXi. It works perfectly when not passing through.

The new NVMe Storage Server

I decided to repurpose an HPE Proliant ML310e Gen8 v2 Server. This server was originally acting as my Nvidia Grid K1 VDI server, because it supported large PCIe cards. With the addition of my new AMD S7150 x2 hacked in/on to one of my DL360p Gen8’s, I no longer needed the GRID card in this server and decided to repurpose it.

Picture of an HPe ML310e Gen8 v2 with NVMe Storage
HPe ML310e Gen8 v2 with NVMe Storage

I installed the IOCREST IO-PEX40152 card in to the PCIe 16x slot, with 4 x 2TB Sabrent Rocket 4 NVME drives.

Picture of IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME

While the server has a PCIe 16x wide slot, it only has an 8x bus going to the slot. This means we will have half the capable speed vs the true 16x slot. This however does not pose a problem because we’ll be maxing out the 10Gb NICs long before we max out the 8x bus speed.

I also installed an HPE Dual Port 560SFP+ NIC in to the second slot. This will allow a total of 2 x 10Gb network connections from the server to the Ubiquiti UniFi US-16-XG 10Gb network switch, the backbone of my network.

The server also has 4 x hot-swappable drive bays on the front. When configured in HBA mode (via the BIOS), these are accessible by TrueNAS and can be used. I plan on populating them with 4 x 4TB HPE MDL SATA hot-swappable drives to act as a replication destination for the NVMe pool and/or slower magnetic long-term storage.

Front view of HPE ML310e Gen8 v2 with Hotswap Drive bays
HPE ML310e Gen8 v2 with Hotswap Drive bays

I may also give WD Red Pro drives a try, but I'm not sure if they will cause the server's fans to speed up.

TrueNAS Installation and Configuration

For the initial proof of concept for version 2, I decided to be quick and dirty and install TrueNAS to a USB stick. I also waited until TrueNAS was installed on the USB stick and the basic configuration was complete before installing the quad NVMe PCIe card and 10Gb NIC. I'm using a USB 3.0 port on the back of the server for speed, as I can't verify whether the internal port on the motherboard is USB 2.0 or USB 3.0.

Picture of a TrueNAS USB Stick on HPE ML310e Gen8 v2
TrueNAS USB Stick on HPE ML310e Gen8 v2

TrueNAS installation worked without any problems whatsoever on the ML310e. I configured the basic IP, time, accounts, and other generic settings. I then proceeded to install the PCIe cards (storage and networking).

Screenshot of TrueNAS Dashboard Installed on NVMe Storage Server
TrueNAS Installed on NVMe Storage Server

All NVMe drives were recognized, along with the 2 HDDs I had in the front Hot-swap bays (sitting on an HP B120i Controller configured in HBA mode).

Screenshot of available TrueNAS NVMe Disks
TrueNAS NVMe Disks

The 560SFP+ NIC was also detected without any issues and was available to configure.

Dashboard Screenshot of TrueNAS 560SFP+ 10Gb NIC
TrueNAS 560SFP+ 10Gb NIC

Storage Configuration

I’ve already done some testing and created a guide on FreeNAS and TrueNAS ZFS Optimizations and Considerations for SSD and NVMe, so I made sure to use what I learned in this version of the project.

I created a striped pool (no redundancy) of all 4 x 2TB NVMe drives. This gave me around 8TB of usable, high-speed NVMe storage. I also created some datasets and a zvol for iSCSI.
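
As a quick sanity check on usable capacity, and to show the trade-off I accepted by striping instead of using RaidZ1, here's the simple math (a sketch that ignores ZFS metadata overhead and the TB-vs-TiB distinction):

```python
# Approximate usable capacity of a 4-drive ZFS pool, striped vs. RaidZ1.
# Ignores ZFS metadata/padding overhead and the TB-vs-TiB distinction.
drives = 4
drive_tb = 2  # 2TB Sabrent Rocket 4 NVMe

striped_tb = drives * drive_tb        # no redundancy: lose one drive, lose the pool
raidz1_tb = (drives - 1) * drive_tb   # one drive's worth of space goes to parity

print(f"Striped: ~{striped_tb} TB usable, tolerates 0 drive failures")
print(f"RaidZ1:  ~{raidz1_tb} TB usable, tolerates 1 drive failure")
```

The same math is behind the ~12TB figure later in the post for four 4TB drives in RaidZ1.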

Screenshot of NVMe TrueNAS Storage Pool with Datasets and zVol
NVMe TrueNAS Storage Pool with Datasets and zVol

I chose to start with the defaults for compression. I will be testing throughput and achievable speeds in the future; you should always test this in your own environment, as results will vary.

Network Configuration

Initial configuration was done via the 1Gb NIC connected to my main LAN network. I had to change this, as the 10Gb NIC will be directly connected to the network backbone and needs access to both the LAN and Storage VLANs.

I went ahead and configured a VLAN interface on VLAN 220 for the Storage network. Connections for iSCSI and NFS will be made on this network, as all my ESXi servers have VMkernel NICs (vmknics) configured on this VLAN for storage. I also made sure to configure an MTU of 9000 for jumbo frames to increase performance. Remember that all hosts on the segment must use the same MTU to communicate properly.
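
A simple way to confirm jumbo frames actually work end-to-end is a do-not-fragment ping using the largest ICMP payload that fits in a 9000-byte MTU. The sketch below just computes that payload and prints the commands I would try; the ping flags differ per platform, and the target IP is a made-up example, so verify both for your environment:

```python
# Largest ICMP payload that fits in a given MTU:
# MTU minus 20 bytes (IPv4 header) minus 8 bytes (ICMP header).
MTU = 9000
payload = MTU - 20 - 8
print(f"MTU {MTU} -> max ICMP payload of {payload} bytes")

# Do-not-fragment test pings (flags vary by OS -- double-check before relying on them):
target = "10.0.220.10"  # hypothetical TrueNAS address on the storage VLAN
print(f"Linux:                  ping -M do -s {payload} {target}")
print(f"ESXi:                   vmkping -d -s {payload} {target}")
print(f"FreeBSD / TrueNAS CORE: ping -D -s {payload} {target}")
```

If the full-size ping fails while a normal ping succeeds, something along the path (host, switch, or NAS) isn't configured for jumbo frames.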

Screenshot of 10Gb NIC on Storage VLAN
10Gb NIC on Storage VLAN

Next up, I had to create another VLAN interface for the LAN network. This is used for management, as well as to provide Windows file share (SMB/Samba) access to the workstations on the network. I left the MTU on this adapter at 1500, since that's what my LAN network uses.

Screenshot of 10Gb NIC on LAN VLAN
10Gb NIC on LAN VLAN

As a note, I had to delete the existing management network configuration (don't worry, it doesn't take effect until you hit Test) and configure the VLAN interface with my LAN's VLAN and IP. I tested the settings, confirmed they were good, and it was all set up.

At this point, only the 10Gb NIC is being used, so I went ahead and disconnected the 1Gb network cable.

Sharing Setup and Configuration

It’s now time to configure the sharing protocols that will be used. As mentioned before, I plan on deploying iSCSI, NFS, and Windows File Shares (SMB/Samba).

iSCSI and NFS Configuration

Normally, for a VMware ESXi virtualization environment, I would prefer iSCSI-based storage; however, I also wanted to configure NFS to test the throughput of both with NVMe flash storage.

Earlier, I created the datasets for all my NFS exports and a zvol for iSCSI.

Note that in order to take advantage of the VMware VAAI storage primitives (enhancements), you must use a zvol to present an iSCSI target to an ESXi host.

For NFS, you can simply create a dataset and then export it.

For iSCSI, you need to create a zvol and then configure the iSCSI target settings to make it available.
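
If you'd rather script this than click through the web UI, the difference really comes down to creating a FILESYSTEM dataset versus a VOLUME (zvol). Here's a rough sketch against the TrueNAS v2.0 REST API; the endpoint, field names, pool/dataset names, and sizes are assumptions from memory, so check them against the API documentation for your TrueNAS version:

```python
# Sketch: create an NFS-style dataset and an iSCSI-backing zvol via the TrueNAS REST API.
# Endpoint and field names are assumptions -- verify against your version's API docs.
import requests

TRUENAS_URL = "https://truenas.example.com"          # hypothetical hostname
HEADERS = {"Authorization": "Bearer YOUR-API-KEY"}   # API key generated in the TrueNAS UI

# A regular dataset (filesystem) to export over NFS or share via SMB.
requests.post(f"{TRUENAS_URL}/api/v2.0/pool/dataset", headers=HEADERS, json={
    "name": "nvmepool/nfs-vms",
    "type": "FILESYSTEM",
}, verify=False)

# A zvol (block device) to back an iSCSI extent for the ESXi hosts.
requests.post(f"{TRUENAS_URL}/api/v2.0/pool/dataset", headers=HEADERS, json={
    "name": "nvmepool/iscsi-vms",
    "type": "VOLUME",
    "volsize": 4 * 1024**4,   # 4 TiB, specified in bytes
    "sparse": True,
}, verify=False)
```

Either way, the iSCSI portal, target, and extent still need to be configured before the ESXi hosts can see the LUN.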

SMB (Windows File Shares)

I needed to create a Windows File Share for file based storage from Windows computers. I plan on using the Windows File Share for high-speed storage of files for video editing.

Using the dataset I created earlier, I configured a Windows share and user accounts, and tested accessing it. It works perfectly!

Connecting the host

Connecting the ESXi hosts to the iSCSI targets and the NFS exports is done in the exact same way that you would with any other storage system, so I won’t be including details on that in this post.

We can clearly see the iSCSI target and NFS exports on the ESXi host.

Screenshot of TrueNAS NVMe iSCSI Target on VMware ESXi Host
TrueNAS NVMe iSCSI Target on VMware ESXi Host
Screenshot of NVMe iSCSI and NFS ESXi Datastores
NVMe iSCSI and NFS ESXi Datastores

To access the Windows file shares, you simply log on and map the network share as you would with any file server.

Testing

For testing, I moved (using Storage vMotion) my main VDI desktop to the new NVMe based iSCSI Target LUN on the NVMe Storage Server. After testing iSCSI, I then used Storage vMotion again to move it to the NFS datastore. Please see below for the NVMe storage server speed test results.

Speed Tests

Just to start off, I want to post a screenshot of a few previous benchmarks I compiled when testing and reviewing the Sabrent Rocket 4 NVMe SSD disks installed in my HPE DL360p Gen8 Server and passed through to a VM (Add NVMe capability to an HPE Proliant DL360p Gen8 Server).

Screenshot of CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD for speed
CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Screenshot of CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

Note that when I performed these tests, my CPU was maxed out and limiting the actual throughput. Even then, these are some fairly impressive speeds. Also, these tests were run directly against each NVMe drive individually.

Moving on to the NVMe Storage Server, I decided to test iSCSI NVMe throughput and NFS NVMe throughput.

I opened up CrystalDiskMark and started a generic test, running a 16GB test file a total of 6 times on my VDI VM sitting on the iSCSI NVMe LUN.

Screenshot of NVMe Storage Server iSCSI Benchmark with CrystalDiskMark
NVMe Storage Server iSCSI Benchmark with CrystalDiskMark

You can see some impressive speeds maxing out the 10Gb NIC, with crazy performance from the NVMe storage (quick math on that below):

  • 1196MB/sec READ
  • 1145.28MB/sec WRITE (maxing out the 10Gb NIC)
  • 62,725.10 IOPS READ
  • 42,203.13 IOPS WRITE
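
For context on why I say the NIC is the ceiling here and not the NVMe pool, a quick calculation (approximate; real-world protocol overhead varies):

```python
# A single 10GbE link: 10 Gb/s = 1250 MB/s raw line rate.
line_rate_mbs = 10_000 / 8

# TCP/IP and iSCSI framing typically eat a few percent of that.
for overhead in (0.03, 0.05, 0.08):
    usable = line_rate_mbs * (1 - overhead)
    print(f"{overhead:.0%} overhead -> ~{usable:.0f} MB/s usable")

# The measured 1196 MB/s read and 1145 MB/s write land right in this range,
# i.e. the network link is saturated, not the NVMe pool behind it.
```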

Additionally, here’s a screenshot of the ix0 NIC on the TrueNAS system during the speed test benchmark: 1.12 GiB/s.

Screenshot of TrueNAS NVME Maxing out 10Gig NIC
TrueNAS NVME Maxing out 10Gig NIC

And remember, this is with compression enabled. I'm really excited to see how I can further tweak and optimize this, and what increases will come from configuring iSCSI MPIO. I'm also going to try to increase the IOPS to get them closer to what each individual NVMe drive can do.

Now on to NFS: the results were horrible when I moved the VM to the NFS export.

Screenshot of NVMe Storage Server NFS Benchmark with CrystalDiskMark
NVMe Storage Server NFS Benchmark with CrystalDiskMark

You can see that the read speed was impressive, but the write speed was not. This is largely due to how writes are handled on NFS exports: ESXi issues synchronous writes to NFS datastores, and ZFS honors them, so every write has to be committed to stable storage before it's acknowledged, which hurts write throughput.

Clearly, iSCSI is the best-performing method for ESXi host connectivity to a TrueNAS-based NVMe storage server. This works perfectly, because we also get the VAAI features (like being able to reclaim space).

iSCSI MPIO Speed Test

This is more of an update… I was finally able to connect, configure, and utilize the second 10GbE port on the 560SFP+ NIC. In my setup, both hosts and the TrueNAS storage server each have two connections to the switch, with two VLANs and two subnets dedicated to storage. Check out the before/after speed tests after enabling iSCSI MPIO.

As you can see, I was able to essentially double my read speeds (again maxing out the networking layer); however, the write speeds topped out at 1598MB/sec. I believe we've reached a limitation of the CPU, the PCIe bus, or something else inside the server. Note that this is not a limitation of the Sabrent Rocket 4 NVMe drives or the IOCREST NVMe PCIe card.
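
The same kind of quick math (a sketch, assuming MPIO spreads I/O evenly over both paths) shows why I suspect the write bottleneck has moved off the network:

```python
# Two 10GbE paths with iSCSI MPIO: roughly double the single-link ceiling.
single_link_mbs = 10_000 / 8            # ~1250 MB/s raw per link
dual_link_mbs = 2 * single_link_mbs     # ~2500 MB/s raw aggregate

observed_write_mbs = 1598
print(f"Dual-path raw ceiling: ~{dual_link_mbs:.0f} MB/s")
print(f"Observed write: {observed_write_mbs} MB/s "
      f"(~{observed_write_mbs / dual_link_mbs:.0%} of the aggregate)")
# Reads roughly doubled (still network-bound), but writes stop well short of the
# network ceiling, pointing at the CPU, the PCIe bus, or the ZFS write path instead.
```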

Moving Forward

I’ve had this configuration running for around a week now with absolutely no issues, no crashes, and it’s been very stable.

Using a VDI VM on NVMe backed storage is lightning fast and I love the experience.

I plan on running like this for a little while to continue to test the stability of the environment before making more changes and expanding the configuration and usage.

Future Plans (and Configuration)

  • Drive Bays
    • I plan to populate the 4 hot-swappable drive bays with HPE 4TB MDL drives. Configured with RaidZ1, this should give me around 12TB usable storage. I can use this for file storage, backups, replication, and more.
  • NVMe Replication
    • This design was focused on creating non-redundant extremely fast storage. Because I’m limited to a total of 4 NVMe disks in this design, I chose not to use RaidZ and striped the data. If one NVMe drive is lost, all data is lost.
    • I don’t plan on storing anything important, and at this point the storage is only being used for VDI VMs (which are backed up), and Video editing.
    • If I populate the front drive bays, I can replicate the NVMe storage to the traditional HDD storage on a frequent basis to protect against failure to some degree.
  • Version 3 of the NVMe Storage Server
    • More NVMe and Bigger NVMe – I want more storage! I want to test different levels of RaidZ, and connect to the backbone at even faster speeds.
    • NVMe drives with PLP (Power Loss Protection) for data security and protection.
    • Dual Power Supply

Let me know your thoughts and ideas on this setup!