Jun 12 2019
 
VMware vSphere Mobile Watchlist Logo

It’s finally here! VMware has released the alpha (test) of the vSphere Mobile Client for Android Devices. This will allow you to manage your vSphere instance via your Android mobile device.

Some of you may remember the vSphere Mobile Watchlist app for Android. It was great because it allowed you to manage your vSphere environment (and I still use it), but one day it was abruptly removed from the Google Play store and was no longer available. However, those who had it installed could keep using it.

This new version of the vSphere Mobile Client is only available for Android as of the time of this post.

vSphere Mobile Client Fling

The VMware fling is here: https://labs.vmware.com/flings/vsphere-mobile-client

While there is a tarball download, you’ll actually want to forget that and follow the instructions for a proper install. The tarball is only needed if you want to deploy the notification service.

Installing the vSphere Mobile Client for Android

First, you need to join the alpha testers group here: https://groups.google.com/forum/#!forum/vsphere-mobile-client/join

Second, you need to opt-in to the Google Play Test app here: https://play.google.com/apps/testing/com.vmware.vsphere.cloudsmith

Then simply follow the instructions after the opt-in and download it to your device.

Using the vSphere Mobile Client for Android

The app is slick but simple. Since it’s an alpha, functionality is limited, but it gives you the ability to shut down, restart, view performance, and do a couple of other things.

Bugs and Annoyances

Shortly after using the app, I noticed that I couldn’t log in on subsequent tries due to an “incorrect user name or password” error. I know I was typing it right, so I’m assuming this is a bug. To resolve this, you have to delete the app’s cache; then you will be able to log in properly.
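
If you have adb and USB debugging set up, one quick way to clear the app’s data and cache from a workstation is below (this assumes the test build’s package name shown in the opt-in link above; clearing the cache from Android’s Settings > Apps works just as well):

    # Clears the stored data and cache for the vSphere Mobile Client test build
    adb shell pm clear com.vmware.vsphere.cloudsmith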

Unfortunately, the app also doesn’t allow you to save your password, unlike the previous Watchlist app.

Screenshots

See below for some screenshots:

Conclusion

All in all, it’s pretty exciting that VMware is finally working on an official mobile app. I still use Watchlist almost daily, so I see tremendous value in this!

Leave a comment below and let me know what you think of the new app!

May 18 2019
 
VMware Horizon View Mobile Client Android Windows 10 VDI Desktop

Since I’ve installed and configured my Nvidia GRID K1, I’ve been wanting to do a graphics quality demo video. I finally had some time to put a demo together.

I wanted to highlight what type of graphics can be achieved in a VDI environment. Even using an old Nvidia GRID K1 card, we can still achieve amazing graphical performance in a virtual desktop environment.

This demo outlines 3D accelerated graphics provided by vGPU.

Demo Video

Please see below for the video:

Information

Demo Specifications

  • VMware Horizon View 7.8
  • NVidia GRID K1
  • GRID vGPU Profile: GRID K180q
  • HPe ML310e Gen8 V2
  • ESXi 6.5 U2
  • Virtual Desktop: Windows 10 Enterprise
  • Game: Steam – Counter-Strike Global Offensive (CS:GO)

Please Note

  • Resolution of the Virtual Desktop is set to 1024×768
  • Blast Extreme is the protocol used
  • Graphics on game are set to max
  • Motion is smooth in person, screen recorder caused some jitter
  • This video was then edited on that VM using CyberLink PowerDirector
  • vGPU is being used on the VM
May 08 2019
 
vSphere Logo Image

We’ve all been in the situation where we need to install a driver or a vib file, or check “esxtop”. Many advanced administration tasks on ESXi need to be performed via shell access, and to do this you either need a console on the physical ESXi host, an SSH session, or the Remote vCLI.

In this blog post, I’m going to be providing a quick how-to on enabling SSH on an ESXi host in your VMware infrastructure using the vCenter flash-based web administration interface. This will allow you to perform the tasks above, as well as use the “esxcli” command, which is frequently needed.

This method should work on all vCenter versions up to 6.7, and ESXi versions up to 6.7.

How to Enable SSH on an ESXi Host Server

  1. Log on to your vCenter server.
    vCenter Server Login Window
  2. On the left hand “Navigator” pane, select the ESXi host.
    ESXi Navigator Pane on vCenter Web Interface
  3. On the right hand pane, select the “Configure” tab, then “Security Profile” under “System”.
    ESXi Host Configuration under Configure Tab in Web Interface
  4. Scroll down and look for “Services” further to the right and select “Edit”.
    ESXi Host Services in Host Configuration
  5. In the “Edit Security Profile” window, select and highlight “SSH” and then click “Start”.
    ESXi Services List on vCenter web interface
  6. Click “Ok”.

This method can also be used to stop, restart, and change the startup policy to enable or disable SSH starting on boot.
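
If you don’t have vCenter available, a rough equivalent can be done from the host’s local console: enable the ESXi Shell from the DCUI (under Troubleshooting Options), then run the following from the shell:

    vim-cmd hostsvc/start_ssh    # start the SSH service immediately
    vim-cmd hostsvc/enable_ssh   # set SSH to start and stop with the host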

Congratulations, you can now SSH in to your ESXi host!

May 07 2019
 
VMware Horizon View Icon

So you’ve started to use or test Duo Security’s MFA/2FA technology on your network. You’ve been happy so far and you now want to begin testing or rolling out DUO MFA on your VMware Horizon View server.

VMware Horizon is great at providing an end user computing solution for your business, a byproduct of which is an amazing remote access system. With any type of access, especially remote, comes numerous security challenges. DUO Security’s MFA solution is great at providing multi-factor authentication for your environment, and it fully supports VMware Horizon View.

In this guide, I’ll be providing a quick walkthrough on getting DUO MFA set up and configured on your Horizon server to authenticate View clients.

DUO Security Login VMware View Client Dialog Box
DUO Security Login VMware View Client

Enabling DUO MFA on VMware View will require further authentication from your users via one of the following means:

  • DUO Push (Push auth request to mobile app)
  • Phone call (On user’s pre-configured phone number)
  • SMS Passcode (Texted to user’s pre-configured phone number)
  • PIN code from a Hardware Token

For more information on the DUO technology and authentication methods, please visit
https://www.digitallyaccurate.com/blog/2018/06/12/secure-business-enterprise-it-systems-multi-factor-authentication-duo-mfa/

Prerequisites

  • VMware Horizon View Connection Server (Configured and working)
  • VMware View Client (for testing)
  • DUO Authentication Proxy installed, configured, and running (integrated with Active Directory)
  • Completed DUO Auth Proxy config along with “[ad_client]” as primary authentication.

Please Note: For this guide, we’re going to assume that you already have a Duo Authentication Proxy installed and fully configured on your network. The authentication proxy server acts as a RADIUS server that your VMware Horizon View Connection Server will use to authenticate users against.
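
For reference, the “[ad_client]” section in “authproxy.cfg” that handles primary (Active Directory) authentication looks roughly like the sketch below; the host, service account, and search DN values are placeholders for your own environment:

    [ad_client]
    host=dc01.domain.com
    service_account_username=duo-svc-account
    service_account_password=SERVICE-ACCOUNT-PASSWORD
    search_dn=DC=domain,DC=com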

Instructions

The instructions will be performed in multiple steps. This includes adding the application to your DUO account, configuring the DUO Authentication Proxy, and finally configuring the VMware View Connection Server.

Add the application to your DUO account

  1. Log on to your DUO account, on the left pane, select “Applications”.
  2. Click on the Blue button “Protect an Application”.
  3. Using the search, look for “VMware View”, and then select “Protect this Application”.
  4. Record the 3 fields labelled “Integration key”, “Secret key”, and “API hostname”. You’ll need these later on your authentication proxy.
  5. Feel free to modify the Global Policy to the settings you require. You can always change and modify these later.
  6. Under Settings, we’ll give it a friendly name, choose “Simple” for “Username normalization”, and optionally configure the “Permitted Groups”. Select “Save”.

Configure the DUO Authentication Proxy

  1. Log on to the server that is running your DUO Authentication Proxy.
  2. Open the file explorer and navigate to the following directory.
    C:\Program Files (x86)\Duo Security Authentication Proxy\conf
  3. Before any changes I always make a backup of the existing config file. Copy and paste the “authproxy.cfg” file and rename the copy to “authproxy.cfg.bak”.
  4. Open the “authproxy.cfg” file with notepad.
  5. Add the following to the very end of the file:
    ;vmware-view
    [radius_server_challenge]
    ikey=YOUR_INTEGRATION_KEY
    skey=YOUR_SECRET_KEY
    api_host=YOUR-API-ADDRESS.duosecurity.com
    failmode=safe
    client=ad_client
    radius_ip_1=IP-ADDY-OF-VIEW-SERVER
    radius_secret_1=SECRETPASSFORDUOVIEW
    port=1813
    Using the values from the “Protect an Application” step, replace the “ikey” with your “integration key”, “skey” with your “secret key”, and “api_host” with the API hostname that was provided. Additionally, “radius_ip_1” should be set to your View Connection Server IP, and “radius_secret_1” is a secret passphrase shared only by DUO and the View Connection Server.
  6. Save the file.
  7. Restart the DUO Authentication Proxy either using Services (services.msc), or run the following from a command prompt:
    net stop DuoAuthProxy & net start DuoAuthProxy

Configure the VMware View Connection Server

  1. Log on to your server that runs your VMware View Connection Server.
  2. Open the VMware Horizon 7 Administrator web interface and log on.
    Horizon 7 Administrator Launch Icon Screenshot
  3. On the left hand side, under “Inventory”, expand “View Configuration” and select “Servers”.
    View Configuration and Servers highlighted on Left Pane
  4. On the right hand side in the “Servers” pane, click on the “Connection Servers” tab, then select your server, and click “Edit”.
    Select Connection Server in Server Pane Window
  5. On the “Edit Connection Server Settings” window, click on the “Authentication” tab.
    Authentication under Edit Connection Settings Window
  6. Scroll down to the “Advanced Authentication” section, and change the “2-factor authentication” drop down, to “RADIUS”. Check both check boxes for “Enforce 2-factor and Windows user name matching”, and “Use the same user name and password for RADIUS and Windows Authentication”.
    Advanced Auth Settings for DUO in Authentication Tab Dialog Window
  7. Below the check boxes you will see “Authenticator”. Open the drop down, and select “Create New Authenticator”.
  8. In the “Add RADIUS Authenticator” window, give it a friendly name, friendly description, and populate the fields as specified in the screenshot below. You’ll be using the shared RADIUS/DUO secret we created above in the config file for the proxy auth.
    Edit RADIUS Authenticator VMware View Window
    Please Note that I changed the default RADIUS port in my config to 1813.
  9. Click “Ok”, then make sure the newly created authenticator is selected in the drop down. Proceed to click “Ok” on the remaining windows, and close out of the web interface.

That’s it!

You have now completely implemented DUO MFA on your Horizon deployment. Now when users attempt to log on to your VMware View Connection server, after entering their credentials they will be prompted for a second factor of authentication as pictured below.

DUO Security MFA authenticate VMware View Client dialog box
DUO Security MFA authenticate VMware View Client

VMware Horizon View is now fully using MFA/2FA.

Leave a comment!

May 03 2019
 
Ubuntu Enterprise Logo

So here we’re going to talk about installing the VMware Horizon Agent for Linux on an Ubuntu 18.04 guest VM for use with VDI. Ultimately you’ll be setting up Horizon 7 for Linux Desktops.

This will allow you to add Linux VMs to your VDI environment.

Ubuntu 18.04 LTS running on VMware Horizon Client using Horizon for Linux
Horizon for Linux on Ubuntu 18.04 LTS

VMware has the documentation here, but I’ve condensed it down for you to get up and running quickly, as well as to deal with a few bugs I noticed.

Requirements

You’ll need the following to get started

  • VMware Horizon View 7 (I’m using 7.8)
  • Horizon Enterprise or Horizon for Linux Valid Licensing
  • Horizon VDI environment that’s functioning and working
  • Ubuntu 18.04 LTS Installer ISO (download here)
  • Horizon Agent for Linux (download here)
  • Functioning internal DNS

Once you have the above, we can get going!

Instructions

  1. Create a static DHCP reservation, or note the IP you will set statically to the Ubuntu VM
  2. Create a host entry on your internal DNS Server so that the IP of your Ubuntu VM will have a functioning internal FQDN
  3. Create a VM and install Ubuntu 18.04 LTS using the ISO you downloaded above
  4. If you’re using a static IP, set this now. If you’re using a DHCP reservation, make sure it’s working
  5. Install any Root CA’s or modifications you need for network access (usually not needed unless you’re on an enterprise network)
  6. Open a terminal, and type “sudo su” to get a root console
  7. Run the following command
    apt install open-vm-tools-desktop openssh-server python python-dbus python-gobject lightdm
    You can skip the “openssh-server” package if you don’t want to enable SSH. A display manager configuration prompt will present itself, choose “gdm”
  8. Now we need to add the internal FQDN to the hosts file. Run “nano /etc/hosts” to open the hosts file. Create a new line at the top and enter
    127.0.0.1 compname.domain.com compname
    Modify “compname.domain.com” and “compname” to reflect your FQDN and computer name.
  9. Restart the Guest VM
  10. Open terminal, “sudo su” to get a root console
  11. Extract the Horizon Agent tarball with
    tar zxvf VMware-horizonagent-linux-x86_64-7.8.0-12610615.tar.gz
    Please note that if your version is different, your file name may be different. Please adjust accordingly.
  12. Change directory into the VMware Horizon Agent directory that we just extracted.
  13. Run the installer for the horizon agent with
    ./install_viewagent.sh
  14. Follow the prompts, restart the host
  15. Log on to your View Connection Server
  16. Create a manual pool, and configure it accordingly
  17. Add the Ubuntu Linux VM to the pool
  18. Entitle the users to the pool, and assign the users to the host under inventory

Final Notes

In the VMware documentation, it states to select “lightdm” on the Display Manager configuration window that presents itself in step 7. However, if you choose this, the VMware Horizon Agent for Linux will not install. Choosing “gdm” allows it to install and function.

I have noticed audio issues when using the Spotify snapd. I believe this is caused by timer-based audio scheduling in PulseAudio. I have tried using the “tsched=0” flag in the PulseAudio config, however this has no effect and I haven’t been able to resolve this yet. Audio in Chrome and other audio players works fine. A workaround is to install “pavucontrol” and have it open while using Spotify and the audio issues will temporarily be resolved. I also tried using the VMware Tools (deprecated) instead of OpenVM Tools to see if this helped with the audio issues, but it did not.
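
For anyone who wants to experiment with the same PulseAudio tweak, the “tsched=0” flag is added to the udev detection module line in “/etc/pulse/default.pa” (as noted above, this did not resolve the Spotify snap issue for me):

    # /etc/pulse/default.pa - append tsched=0 to the existing module-udev-detect line
    load-module module-udev-detect tsched=0

After saving, restart PulseAudio for the user session with “pulseaudio -k”.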

If you have 3D Acceleration with a GRID card, the Linux VDI VM will be able to utilize 3D accelerated vSGA as long as you have it configured on the ESXi host.

May 02 2019
 
Nvidia GRID Logo

I can’t tell you how excited I am that after many years, I’ve finally gotten my hands on and purchased an Nvidia GRID K1 GPU. This card will be used in my homelab to learn, and to demo Nvidia GRID accelerated graphics on VMware Horizon View. In this post I’ll outline the details, installation, configuration, and thoughts. And of course I’ll have plenty of pictures below!

The focus will be to use this card both with vGPU, as well as 3D accelerated vSGA, inside an HPe server running ESXi 6.5 and VMware Horizon View 7.8.

Please Note: Some, most, or all of what I’m doing is not officially supported by Nvidia, HPe, and/or VMware. I am simply doing this to learn and demo, and there was a real possibility that it may not have worked since I’m not following the vendor HCL (Hardware Compatibility lists). If you attempt to do this, or something similar, you do so at your own risk.

Nvidia GRID K1 Image

For some time I’ve been trying to source either an Nvidia GRID K1/K2 or an AMD FirePro S7150 to get started with a simple homelab/demo environment. One of the reasons for the time it took was I didn’t want to spend too much on it, especially with the chances it may not even work.

Essentially, I have 3 Servers:

  1. HPe DL360p Gen8 (Dual Proc, 128GB RAM)
  2. HPe DL360p Gen8 (Dual Proc, 128GB RAM)
  3. HPe ML310e Gen8 v2 (Single Proc, 32GB RAM)

For the DL360p servers, while the servers are beefy enough, have enough power (dual redundant power supplies), and resources, unfortunately the PCIe slots are half-height. In order for me to use a dual-height card, I’d need to rig something up to have an eGPU (external GPU) outside of the server.

As for the ML310e, it’s an entry level tower server. While it does support dual-height (dual slot) PCIe cards, it only has a single 350W power supply, misses some fancy server technologies (I’ve had issues with VT-d, etc), and only a single processor. I should be able to install the card, however I’m worried about powering it (it has no 6pin PCIe power connector), and having ESXi be able to use it.

Finally, I was worried about cooling. The GRID K1 and GRID K2 are typically passively cooled and meant to be installed in to rack servers with fans running at jet engine speeds. If I used the DL360p with an external setup, this would cause issues. If I used the ML310e internally, I had significant doubts that cooling would be enough. The ML310e did have the plastic air baffles, but only had one fan for the expansion cards area, and of course not all the air would pass through the GRID K1 card.

The Purchase

Because of a limited budget, and the possibility I may not even be able to get it working, I didn’t want to spend too much. I found an eBay user local in my city who had a couple Grid K1 and Grid K2 cards, as well as a bunch of other cool stuff.

We spoke and he decided to give me a wicked deal on the Grid K1 card. I thought this was a fantastic idea as the power requirements were significantly less (more likely to work on the ML310e) on the K1 card at 130 W max power, versus the K2 card at 225 W max power.

NVIDIA GRID K1 and K2 Specifications
NVIDIA GRID K1 and K2 Specification Table

The above chart is a capture from:
https://www.nvidia.com/content/cloud-computing/pdf/nvidia-grid-datasheet-k1-k2.pdf

We set a time and a place to meet. Preemptively I ran out to a local supply store to purchase an LP4 power adapter splitter, as well as a LP4 to 6pin PCIe power adapter. There were no available power connectors inside of the ML310e server so this was needed. I still thought the chances of this working were slim…

These are the adapters I purchased:

Preparation and Software Installation

I also decided to go ahead and download the Nvidia GRID Software Package. This includes the release notes, user guide, ESXi vib driver (includes vSGA, vGPU), as well as guest drivers for vGPU and pass through. The package also includes the GRID vGPU Manager. The driver I used was from:
https://www.nvidia.com/Download/driverResults.aspx/144909/en-us

To install, I copied over the vib file “NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib” to a datastore, enabled SSH, and then ran the following command to install:

esxcli software vib install -v /path/to/file/NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib

The command completed successfully and I shut down the host. Now I waited to meet.

We finally met and the transaction went smooth in a parking lot (people were staring at us as I handed him cash, and he handed me a big brick of something folded inside of grey static wrap). The card looked like it was in beautiful shape, and we had a good but brief chat. I’ll definitely be purchasing some more hardware from him.

Hardware Installation

Installing the card in the ML310e was difficult and took some time and care. First I had to remove the plastic air baffle. Then I had issues getting it inside of the case, as the back bracket was 1cm too long to be able to put the card in. I had to finesse and slide it in on an angle, but finally got it installed. The back bracket (front side of case) on the other side slid into the blue plastic case bracket. This was nice, as the ML310e was designed for extremely long PCIe expansion cards and has a bracket on the front side of the case to help support and hold the card up as well.

For power I disconnected the DVD-ROM (who uses those anyways, right?), and connected the LP4 splitter and the LP4 to 6-pin power adapter, then hooked it up to the card.

I laid the cables out nicely and then re-installed the air baffle. Everything was snug and tight.

Please see below for pictures of the Nvidia GRID K1 installed in the ML310e Gen8 V2.

Host Configuration

Powering on the server was a tense moment for me. A few things could have happened:

  1. Server won’t power on
  2. Server would power on but hang & report health alert
  3. Nvidia GRID card could overheat
  4. Nvidia GRID card could overheat and become damaged
  5. Nvidia GRID card could overheat and catch fire
  6. Server would boot but not recognize the card
  7. Server would boot, recognize the card, but not work
  8. Server would boot, recognize the card, and work

With great suspense, the server powered on as per normal. No errors or health alerts were presented.

I logged in to iLO on the server, and watched the server perform a BIOS POST and start its boot to ESXi. Everything was looking well and normal.

After ESXi booted and the server came online in vCenter, I went to the host and confirmed the GRID K1 was detected. I then went ahead and configured 2 GPUs for vGPU, and 2 GPUs for 3D vSGA.

ESXi Graphics Settings for Host Graphics and Graphics Devices
ESXi Host Graphics Devices Settings
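
The detection and graphics settings can also be confirmed from an SSH session on the host; below is a quick sketch of the commands I’d use on ESXi 6.5 (the GRID K1 should show up as four separate GPU devices):

    esxcli graphics device list    # list the GPUs ESXi has detected
    esxcli graphics host get       # show the host's default graphics type
    /etc/init.d/xorg restart       # restart X.org after changing graphics settings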

VM Configuration

I restarted the X.org service (required when changing the options above), and proceeded to add a vGPU to a virtual machine I already had configured and was using for VDI. You do this by adding a “Shared PCI Device” and selecting “NVIDIA GRID vGPU”; I chose to use the highest profile available on the K1 card, called “grid_k180q”.

Virtual Machine Edit Settings with NVIDIA GRID vGPU and grid_k180q profile selected
VM Settings to add NVIDIA GRID vGPU

After adding the device and clicking “Ok”, you should see a warning telling you that you must allocate and reserve all resources for the virtual machine; click “Ok” and continue.

Power On and Testing

I went ahead and powered on the VM. I used the vSphere VM console to install the Nvidia GRID driver package (included in the driver ZIP file downloaded earlier) on the guest. I then restarted the guest.

After restarting, I logged in via Horizon and could instantly tell it was working. The next step was to disable the VMware vSGA display adapter in the guest’s “Device Manager” and restart the VM again.

Upon restarting again, to see if I had full 3D acceleration, I opened DirectX diagnostics by clicking on “Start” -> “Run” -> “dxdiag”.

DirectX Diagnostic Tool (dxdiag) showing Nvidia Grid K1 on VMware Horizon using vGPU k180q profile
dxdiag on GRID K1 using k180q profile

It worked! Now it was time to check the temperature of the card to make sure nothing was overheating. I enabled SSH on the ESXi host, logged in, and ran the “nvidia-smi” command.

nvidia-smi command on ESXi host showing GRID K1 information, vGPU information, temperatures, and power usage
“nvidia-smi” command on ESXi Host

According to this, the different GPUs ranged from 33°C to 50°C, which was PERFECT! Even with further testing under stress, I haven’t gotten a core to go above 56°C. The ML310e still has an option in the BIOS to increase fan speed, which I may test in the future if the temps get higher.

With “nvidia-smi” you can see the 4 GPUs, power usage, temperatures, memory usage, GPU utilization, and processes. This is the main GPU manager for the card. There are some other flags you can use for relevant information.

nvidia-smi with vgpu flag for vgpu information
“nvidia-smi vgpu” for vGPU Information
nvidia-smi with vgpu -q flag
“nvidia-smi vgpu -q” to Query more vGPU Information

Final Thoughts

Overall I’m very impressed, and it’s working great. While I haven’t tested any games, it’s working perfectly for videos, music, YouTube, and multi-monitor support on my 10ZiG 5948qv. I’m using 2 displays, both running at 1920×1080.

I’m looking forward to doing some tests with this VM while continuing to use vGPU. I will also be doing some testing utilizing 3D Accelerated vSGA.

The two coolest parts of this project are:

  • 3D Acceleration and Hardware h.264 Encoding on VMware Horizon
  • Getting a GRID K1 working on an HPe ML310e Gen8 v2

Leave a comment and let me know what you think! Or leave a question!

May 01 2019
 
VMware Horizon View Logo

One really cool feature that was released in VMware Horizon View 7.7, was the ability to install the Horizon Agent on to a Physical PC or Physical Workstation and use the Blast Extreme protocol. It even supports 3D Acceleration via a GPU!

As a system admin, I see value in having some Physical PCs managed by the View connection server.

I’ll be detailing some information about doing this, what’s required, what works, and what doesn’t below…

From the “What’s new in Horizon 7.7” doc at https://blogs.vmware.com/euc/2018/12/whats-new-horizon-7-7.html

You can now use the Blast Extreme display protocol to access physical PCs and workstations. Some limitations apply.

Additional information from the “Horizon 7.7 Release Notes” at https://docs.vmware.com/en/VMware-Horizon-7/7.7/rn/horizon-77-view-release-notes.html

Physical PCs and workstations with Windows 10 1803 Enterprise or higher can be brokered through Horizon 7 via Blast Extreme protocol.

Requirements

So here’s what’s required to get going:

  • Windows 10 Enterprise (Enterprise license is a must)
  • Physical PC or Workstation
  • VMware Horizon Licensing
  • VMware Horizon 7.7 Connection Server
  • VMware Horizon 7.7 Agent on Physical PC/Workstation
  • Manual Desktop Pool (Manual is required for Physical PCs to be added)

What Works

  • Blast Extreme
  • 3D Acceleration (via GPU with drivers)
  • 3D Acceleration with Consumer GPUs
  • Multiple Displays
  • Multiple GPUs

What Doesn’t Work

  • GPU Hardware h.264 encoding on consumer GPUs (h.264 encoding is still handled by the CPU)

Thoughts

I’ve been really enjoying this feature. Not only have I moved my desktop in to my server room and started remoting in using Blast, but I can think of many use cases for this (machine shops, sharing software licenses, etc.).

I’ve had numerous discussions with customers of mine who also say they see tremendous value in this after I brought it to their attention. I’ll update this post later on once I hear back about how some of my customers have deployed it.

3D Acceleration

One thing that is really cool is the fact that 3D acceleration is enabled and working if the computer has a GPU installed (along with drivers). And no, you don’t need a fancy enterprise GPU. In my setup I’m running a GeForce GTX 550 Ti and a GeForce 640.

Horizon 3D Acceleration dxdiag
Horizon 3D Acceleration Enabled via dxdiag

While 3D acceleration is working, I have to note that the h.264 encoding for the Blast Extreme session is still being handled by the CPU. So while you are getting some great 3D accelerated graphics, depending on your CPU and screen resolution, you may be noticing some choppiness. If you have a higher end CPU, you should be able to get some pretty high resolutions. I’m currently running 2 displays at 1920×1080 on an extremely old Core 2 Quad processor.

H.264 encoding

I spent some time trying to enable the hardware h.264 encoder on the GPUs. Even when using the “NvFBCEnable.exe” application (located in C:\Program Files\VMware\VMware Blast\) to enable hardware encoding, I still noticed that the encoding was being done on the CPU. I’m REALLY hoping they change this in future releases.
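
For reference, this is roughly how I invoked the tool (from an elevated command prompt; check the tool’s own help output for the exact flags on your agent version, as these are from memory):

    cd "C:\Program Files\VMware\VMware Blast"
    NvFBCEnable.exe -enable -noreset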

Hacks?

Another concept that this opens the door for is consumer GPUs providing 3D acceleration without all the driver issues. Technically, you could use the VM’s CPU settings to hide the fact that the machine is being virtualized, and then install the Horizon Agent as a physical PC (even though it’s being virtualized). This should allow you to use the GPU that you’re passing through, but you still won’t get h.264 encoding on the GPU. This should also stop the pesky black screen issue that’s normally seen when using this workaround.
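
For reference, the advanced VM setting commonly used to hide the hypervisor from the guest (added to the .vmx file, or under “Edit Settings” > “VM Options” > “Advanced” > “Configuration Parameters”, while the VM is powered off) is the following:

    hypervisor.cpuid.v0 = "FALSE"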

Bugs

Also, on a final note… I did find a bug where if any of the physical PCs are powered down or unavailable on the network, any logins from users entitled to that pool will time out and not work. When this issue occurs, a WoL packet is sent to the desktop during login, and the login will freeze until the physical PC becomes available. This occurs during the login phase, and will happen even if you don’t plan on using that pool. More information can be found here:
https://www.stephenwagner.com/2019/03/19/vmware-horizon-view-stuck-authenticating-logging-in/

Mar 19 2019
 
VMware Horizon View Logo

I noticed after upgrading to VMware Horizon View 7.8 and VMware Unified Access Gateway 3.5, when attempting to log in to a VMware Horizon View Connection Server via the Horizon Client, I would get stuck on “Authenticating”. If using the HTML client, it would get stuck on “Logging in”.

This will either time out, or eventually (after several minutes) finally load. This occurs both with standard authentication, as well as with 2FA/MFA/RADIUS authentication.

The Problem

Originally, I thought this issue was related to 2FA and/or RADIUS, however after disabling both, the issue was still present. In the VDM debug logs, you may find something similar to below:

2019-03-19T16:07:44.971-06:00 INFO  (1064-181C)  UnManagedMachineInformation Wake-on-LAN packet sent to machine comp.domain.com

2019-03-19T16:07:34.296-06:00 INFO (1064-17F0) UnManagedMachineInformation wait ended for startup update, returning false
2019-03-19T16:07:34.296-06:00 INFO (1064-17F0) UnManagedMachineInformation Could not wake up PM comp.domain.com within timeout

The Fix

The apparent delay at “Authenticating” or “Logging in” is caused by a Wake-on-LAN packet being sent to an unmanaged physical workstation that has the VMware View Agent installed. This occurs because the system is powered off.

After powering on all unmanaged View agents running on physical computers, the issue should be resolved.

Feb 18 2019
 
ESXi Fatal error: 8 (Device Error)

Unable to boot ESXi from USB or SD Card on HPe Proliant Server

After installing HPe iLO Amplifier on your network and updating iLO 4 firmware to 2.60 or 2.61, you may notice that your HPe Proliant Servers may fail to boot ESXi from a USB drive or SD-Card.

This was occurring on 2 ESXi hosts. Both were HPe Proliant DL360p Gen8 servers. One server was using an internal USB drive for ESXi, while the other was using an HPe branded SD card.

The issue started occurring on both hosts after a planned InfoSight implementation. Both hosts’ iLO controller firmware was upgraded to 2.61, iLO Amplifier was deployed (and the servers added), and the amplifier was connected to an HPe InfoSight account.

Update – May 24th 2019: As an HPe partner, I have been working with HPe, the product manager, and development team on this issue. HPe has provided me with a fix to test that I have been able to verify fully resolves this issue! Stay tuned for more information!

Update – June 5th 2019: Great news! As Bob Perugini (WW Product Manager at HPe) put it: “HPE is happy to announce that this issue has been fixed in latest version of iLO Amplifier Pack, v1.40. To download iLO Amplifier Pack v1.40, go to http://www.hpe.com/servers/iloamplifierpack and click “download”.” Scroll to the bottom of the post for more information!

Please see below for errors:

Errors

ESXi Fatal error: 8 (Device Error)
ESXi Fatal error: 8 (Device Error)
Error loading /s.v00
Compressed MD5: 00000000000000000000000000
Decompressed MD5: 00000000000000000000000000
Fatal error: 8 (Device Error)
Error mboot.c32 attempted DOS system call
Error mboot.c32 attempted DOS system call
mboot.c32: attempted DOS system call INT 21 0d00 E8004391
boot:

Symptoms

This issue may occur intermittently, on the majority of boots, or on all boots. Re-installing ESXi on the media, as well as replacing the USB/SD card, has no effect. Installation will be successful; however, the issue is still experienced on boot.

HPe technical support was unable to determine the root cause of the issue. We found the source of the issue ourselves, reported it to HPe technical support, and are waiting for an update.

The Issue and Fix

This issue occurs because the HPe iLO Amplifier is running continuous server inventory scans while the hosts are booting. When one inventory completes, it restarts another scan.

The following can be noted:

  • iLO Amplifier inventory percentage resets back to 0% and starts again numerous times during the server boot
  • Inventory scan completes, only to restart again numerous times during the server boot
  • Inventory scan resets back to 0% during numerous different phases of BIOS initialization and POST.
HPe iLO Amplifier Inventory
HPe iLO Amplifier Inventory

We noticed that once the HPe iLO Amplifier virtual machine was powered off, not only did the servers boot faster, but they also booted 100% successfully each time. Powering on the iLO Amplifier would cause the ESXi hosts to fail to boot once again.

I’d also like to note that on the host using the SD-Card, the failed boot would actually completely lock up iLO, and would require physical intervention to disconnect and reconnect the power to the server. We were unable to restart the server once it froze (this did not happen to the host using the USB drive).

There are some settings on the HPe iLO Amplifier to control the performance and intervals of inventory scans; however, we noticed that modifying these settings had no effect on the issue.

As a temporary workaround, make sure your iLO amplifier is powered off during any maintenance to avoid hosts freezing/failing to boot.

To fully resolve this issue, upgrade your iLO Amplifier to the latest version (1.40 as of the time of this update). The latest version can be downloaded at: http://www.hpe.com/servers/iloamplifierpack.

Update – April 10th 2019

I’ve attempted downgrading to the earliest supported iLO version, 2.54, and the issue still occurs.

I also upgraded to the newest version, 2.62, which presented some new issues.

On the first boot, the BIOS reported memory access issues on Processor 1 socket 1, then another error reporting memory access issues on Processor 1 socket 4.

I disconnected the power cables, reconnected, and restarted the server. This boot, the server didn’t even detect the bootable USB stick.

Again, after shutting down the iLO Amplifier, the server booted properly and the issue disappeared.

Update – May 24th 2019

As an HPe partner, I have been working with HPe, the product manager, and development team on this issue. HPe has provided me with a fix to test that I have been able to verify fully resolves this issue! Stay tuned for more information!

Update – June 5th 2019 – ITS FIXED!!!

Great news, as the issue is now fixed! As Bob Perugini (Worldwide Product Manager at HPe) put it:

HPE is happy to announce that this issue has been fixed in latest version of iLO Amplifier Pack, v1.40.

To download iLO Amplifier Pack v1.40, go to http://www.hpe.com/servers/iloamplifierpack and click “download”.


Here’s what’s new in iLO Amplifier Pack v1.40:
─ Available as a VMware ESXi appliance and as a Hyper-V appliance (Hyper-V is new)
─ VMware tools have been added to the ESXi appliance
─ Ability to schedule the time of the daily transmission of Active Health System (AHS) data to InfoSight
─ Ability to opt-in and allow the IP address and hostname of the server to be transmitted to InfoSight and displayed
─ Test connectivity button to help verify iLO Amplifier Pack has successfully connected to InfoSight
─ Allow user authentication credentials for the proxy server when connecting to InfoSight
─ Added ability to specify IP address or hostname for the HPE RDA connection when connection to InfoSight
─ Ability to send updated AHS data “now” for an individual server
─ Ability to stage firmware and driver updates to the iLO Repository and then deploy the staged updates at a later date or time (HPE Gen10 servers only)
─ Allow the firmware and driver updates of servers whose iLO has been configured in CNSA (Commercial National Security Algorithm) mode (HPE Gen10 servers only)

Nov 17 2018
 

When running VMware vSphere 6.5 (and later) with ESXi hosts using VMFS 6, you may notice that automatic unmap (space reclamation) is not working even though it is enabled. In addition, you’ll find that manual unmap functions still work.

Why

This is because your storage array (SAN) may have an unmap granularity (block size) larger than 1MB. VMFS version 6 (source) requires an unmap granularity of 1MB, and does not support automatic unmap on arrays with a larger granularity.

For example, on the HPe MSA 2040 the page size when using virtual storage is 4MB, hence auto unmap is not supported and does not work. You can still perform manual unmap functions on arrays with block/page sizes larger than 1MB.
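
As a refresher, a manual unmap is run from an SSH session on the host along these lines (replace the datastore label with your own):

    esxcli storage vmfs unmap --volume-label=DATASTORE_NAME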

Additional Information and Resources

Perform manual VMFS unmap on vSphere 6.5 and 6.7 with VMFS 6 – https://www.stephenwagner.com/2017/02/07/vmfs-unmap-command-vsphere-6-5-with-vmfs-6-auto/

vSphere 6.5, 6.7 and VMFS 6 – Change storage reclaim priority from low to medium or high – https://www.stephenwagner.com/2017/02/08/vsphere-vmfs-6-change-storage-reclaim-priority-low-medium-high/

Release unused space on host and guest filesystems with thin-provisioned Sophos UTM appliance (SW) – https://www.stephenwagner.com/2018/01/18/release-unused-space-vmdk-thin-provisioned-sophos-utm/