Since I’ve installed and configured my Nvidia GRID K1, I’ve been wanting to do a graphics quality demo video. I finally had some time to put a demo together.
I wanted to highlight what type of graphics can be achieved in a VDI environment. Even using an old Nvidia GRID K1 card, we can still achieve amazing graphical performance in a virtual desktop environment.
This demo outlines 3D accelerated graphics provided by vGPU.
Please see below for the video:
VMware Horizon View 7.8
NVidia GRID K1
GRID vGPU Profile: GRID K180q
HPe ML310e Gen8 V2
ESXi 6.5 U2
Virtual Desktop: Windows 10 Enterprise
Game: Steam – Counter-Strike Global Offensive (CS:GO)
Resolution of the Virtual Desktop is set to 1024×768
Blast Extreme is the protocol used
Graphics on game are set to max
Motion is smooth in person; the screen recorder caused some jitter
This video was then edited on that VM using CyberLink PowerDirector
VMware Horizon is great at providing an end user computing solution for your business, a byproduct of which is an amazing remote access system. With any type of access, especially remote access, come numerous security challenges. DUO Security’s MFA solution is great at providing multi-factor authentication for your environment, and it fully supports VMware Horizon View.
In this guide, I’ll provide a quick how-to on getting DUO MFA set up and configured on your Horizon Connection Server to authenticate View clients.
Enabling DUO MFA on VMware View will require further authentication from your users via one of the following means:
DUO Push (Push auth request to mobile app)
Phone call (On user’s pre-configured phone number)
SMS Passcode (Texted to user’s pre-configured phone number)
VMware Horizon View Connection Server (Configured and working)
VMware View Client (for testing)
DUO Authentication Proxy installed, configured, and running (integrated with Active Directory)
Completed DUO Auth Proxy config along with “[ad_client]” as primary authentication.
Please Note: For this guide, we’re going to assume that you already have a Duo Authentication Proxy installed and fully configured on your network. The authentication proxy server acts as a RADIUS server that your VMware Horizon View Connection Server will use to authenticate users against.
The instructions will be performed in multiple steps. This includes adding the application to your DUO account, configuring the DUO Authentication Proxy, and finally configuring the VMware View Connection Server.
Add the application to your DUO account
Log on to your DUO account, on the left pane, select “Applications”.
Click on the Blue button “Protect an Application”.
Using the search, look for “VMware View”, and then select “Protect this Application”.
Record the 3 fields labelled “Integration key”, “Secret key”, and “API hostname”. You’ll need these later on your authentication proxy.
Feel free to modify the Global Policy to the settings you require. You can always change and modify these later.
Under Settings, we’ll give it a friendly name, choose “Simple” for “Username normalization”, and optionally configure the “Permitted Groups”. Select “Save”.
Configure the DUO Authentication Proxy
Log on to the server that is running your DUO Authentication Proxy.
Open the file explorer and navigate to the following directory.
Using the values from the “Protect an Application”, replace the “ikey” with your “integration key”, “skey” with your “secret key”, and “api_host” with the API hostname that was provided. Additionally “radius_ip_1” should be set to your View Connection Server IP, and “radius_secret_1” is a secret passphrase shared only by DUO and the View connection server.
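Once filled in, the relevant portion of authproxy.cfg typically looks something like the sketch below. All values here are placeholders, and the section name and options are based on my reading of Duo’s documentation — verify against Duo’s VMware View instructions for your version:

```ini
; Hypothetical example — substitute your own values throughout.
[ad_client]
host=10.0.0.10
service_account_username=duoservice
service_account_password=PASSWORD
search_dn=DC=example,DC=com

; radius_ip_1 is the View Connection Server IP; radius_secret_1 is the
; same shared secret you will enter on the Connection Server later.
[radius_server_challenge]
ikey=DIXXXXXXXXXXXXXXXXXX
skey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
api_host=api-XXXXXXXX.duosecurity.com
client=ad_client
radius_ip_1=10.0.0.20
radius_secret_1=SHAREDSECRET
failmode=safe
port=1813
```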
Save the file.
Restart the DUO Authentication Proxy either using Services (services.msc), or run the following from a command prompt:
net stop DuoAuthProxy & net start DuoAuthProxy
Configure the VMware View Connection Server
Log on to your server that runs your VMware View Connection Server.
Open the VMware Horizon 7 Administrator web interface and log on.
On the left hand side, under “Inventory”, expand “View Configuration” and select “Servers”.
On the right hand side in the “Servers” pane, click on the “Connection Servers” tab, then select your server, and click “Edit”.
On the “Edit Connection Server Settings” window, click on the “Authentication” tab.
Scroll down to the “Advanced Authentication” section, and change the “2-factor authentication” drop down, to “RADIUS”. Check both check boxes for “Enforce 2-factor and Windows user name matching”, and “Use the same user name and password for RADIUS and Windows Authentication”.
Below the check boxes you will see “Authenticator”. Open the drop down, and select “Create New Authenticator”.
In the “Add RADIUS Authenticator” window, give it a friendly name, friendly description, and populate the fields as specified in the screenshot below. You’ll be using the shared RADIUS/DUO secret we created above in the config file for the auth proxy.
Please Note that I changed the default RADIUS port in my config to 1813.
Click “Ok”, then make sure the newly created authenticator is selected in the drop down. Proceed to click “Ok” on the remaining windows, and close out of the web interface.
You have now completely implemented DUO MFA on your Horizon deployment. Now when users attempt to log on to your VMware View Connection server, after entering their credentials they will be prompted for a second factor of authentication as pictured below.
You can skip the “openssh-server” package if you don’t want to enable SSH. A display manager configuration prompt will present itself; choose “gdm”.
Now we need to add the internal FQDN to the hosts file. Run “nano /etc/hosts” to open the hosts file. Create a new line at the top and enter
127.0.0.1 compname.domain.com compname
Modify “compname.domain.com” and “compname” to reflect your FQDN and computer name.
Restart the Guest VM
Open terminal, “sudo su” to get a root console
Extract the Horizon Agent tarball with
tar zxvf VMware-horizonagent-linux-x86_64-7.8.0-12610615.tar.gz
Please note that if your version is different, your file name may be different. Please adjust accordingly.
Change directory into the VMware Horizon Agent directory that we just extracted.
Run the installer script for the Horizon Agent.
Follow the prompts, restart the host
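The agent steps above can be sketched as a single sequence. File and directory names depend on the exact agent version you downloaded, and the installer script name is my assumption from memory — check the contents of the extracted directory before running:

```shell
# Run as root. File/directory names are hypothetical — match your download.
tar zxvf VMware-horizonagent-linux-x86_64-7.8.0-12610615.tar.gz
cd VMware-horizonagent-linux-x86_64-7.8.0-12610615
./install_viewagent.sh -A yes   # -A yes accepts the EULA non-interactively
reboot
```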
Log on to your View Connection Server
Create a manual pool, and configure it accordingly
Add the Ubuntu Linux VM to the pool
Entitle the users to the pool, and assign the users to the host under inventory
The VMware documentation states to select “lightdm” on the Display Manager configuration window that presents itself in step 7. However, if you choose this, the VMware Horizon Agent for Linux will not install. Choosing “gdm” allows it to install and function.
I have noticed audio issues when using the Spotify snap. I believe this is caused by timer-based audio scheduling in PulseAudio. I have tried using the “tsched=0” flag in the PulseAudio config, however this has no effect and I haven’t been able to resolve this yet. Audio in Chrome and other audio players works fine. A workaround is to install “pavucontrol” and leave it open while using Spotify; the audio issues will temporarily be resolved. I also tried using the deprecated VMware Tools instead of Open VM Tools to see if this helped with the audio issues, but it did not.
If you have 3D Acceleration with a GRID card, the Linux VDI VM will be able to utilize 3D accelerated vSGA as long as you have it configured on the ESXi host.
I can’t tell you how excited I am that after many years, I’ve finally gotten my hands on and purchased an Nvidia GRID K1 GPU. This card will be used in my homelab to learn and demo Nvidia GRID accelerated graphics on VMware Horizon View. In this post I’ll outline the details, installation, configuration, and thoughts. And of course I’ll have plenty of pictures below!
The focus will be to use this card both with vGPU, as well as 3D accelerated vSGA, inside an HPe server running ESXi 6.5 and VMware Horizon View 7.8.
Please Note: Some, most, or all of what I’m doing is not officially supported by Nvidia, HPe, and/or VMware. I am simply doing this to learn and demo, and there was a real possibility that it may not have worked since I’m not following the vendor HCL (Hardware Compatibility lists). If you attempt to do this, or something similar, you do so at your own risk.
For some time I’ve been trying to source either an Nvidia GRID K1/K2 or an AMD FirePro S7150 to get started with a simple homelab/demo environment. One of the reasons for the time it took was I didn’t want to spend too much on it, especially with the chances it may not even work.
Essentially, I have 3 Servers:
HPe DL360p Gen8 (Dual Proc, 128GB RAM)
HPe DL360p Gen8 (Dual Proc, 128GB RAM)
HPe ML310e Gen8 v2 (Single Proc, 32GB RAM)
As for the DL360p servers, while they are beefy enough, with plenty of resources and dual redundant power supplies, their PCIe slots are unfortunately half-height. In order to use a dual-height card, I’d need to rig something up to have an eGPU (external GPU) outside of the server.
As for the ML310e, it’s an entry level tower server. While it does support dual-height (dual slot) PCIe cards, it only has a single 350W power supply, misses some fancy server technologies (I’ve had issues with VT-d, etc), and only a single processor. I should be able to install the card, however I’m worried about powering it (it has no 6pin PCIe power connector), and having ESXi be able to use it.
Finally, I was worried about cooling. The GRID K1 and GRID K2 are typically passively cooled and meant to be installed in to rack servers with fans running at jet engine speeds. If I used the DL360p with an external setup, this would cause issues. If I used the ML310e internally, I had significant doubts that cooling would be enough. The ML310e did have the plastic air baffles, but only had one fan for the expansion cards area, and of course not all the air would pass through the GRID K1 card.
Because of a limited budget, and the possibility I may not even be able to get it working, I didn’t want to spend too much. I found an eBay user local in my city who had a couple Grid K1 and Grid K2 cards, as well as a bunch of other cool stuff.
We spoke and he decided to give me a wicked deal on the GRID K1 card. This worked out well, as the K1’s power requirements were significantly lower at 130 W max, versus 225 W max for the K2, making it more likely to work on the ML310e.
We set a time and a place to meet. Preemptively I ran out to a local supply store to purchase an LP4 power adapter splitter, as well as a LP4 to 6pin PCIe power adapter. There were no available power connectors inside of the ML310e server so this was needed. I still thought the chances of this working were slim…
I also decided to go ahead and download the Nvidia GRID Software Package. This includes the release notes, user guide, ESXi vib driver (includes vSGA, vGPU), as well as guest drivers for vGPU and pass through. The package also includes the GRID vGPU Manager. The driver I used was from: https://www.nvidia.com/Download/driverResults.aspx/144909/en-us
To install, I copied over the vib file “NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.6220.127.116.1198673.vib” to a datastore, enabled SSH, and then ran the following command to install:
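The vib install itself is typically done with esxcli, something along these lines (the datastore name below is a placeholder — use the full path to wherever you copied the vib):

```shell
# Hypothetical datastore path — substitute your own datastore name.
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.6220.127.116.1198673.vib
```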
The command completed successfully and I shut down the host. Now I waited to meet.
We finally met and the transaction went smooth in a parking lot (people were staring at us as I handed him cash, and he handed me a big brick of something folded inside of grey static wrap). The card looked like it was in beautiful shape, and we had a good but brief chat. I’ll definitely be purchasing some more hardware from him.
Installing the card in the ML310e was difficult and took some time and care. First I had to remove the plastic air baffle. Then I had issues getting it inside of the case, as the back bracket was 1cm too long to put the card in. I had to finesse it and slide it in on an angle, but finally got it installed. The back bracket (front side of case) on the other side slid into the blue plastic case bracket. This was nice, as the ML310e was designed for extremely long PCIe expansion cards and has a bracket on the front side of the case to help support and hold the card up.
For power I disconnected the DVD-ROM (who uses those anyways, right?), and connected the LP4 splitter and the LP4 to 6pin power adapter. I finally hooked it up to the card.
I laid the cables out nicely and then re-installed the air baffle. Everything was snug and tight.
Please see below for pictures of the Nvidia GRID K1 installed in the ML310e Gen8 V2.
Powering on the server was a tense moment for me. A few things could have happened:
Server won’t power on
Server would power on but hang & report health alert
Nvidia GRID card could overheat
Nvidia GRID card could overheat and become damaged
Nvidia GRID card could overheat and catch fire
Server would boot but not recognize the card
Server would boot, recognize the card, but not work
Server would boot, recognize the card, and work
With great suspense, the server powered on as per normal. No errors or health alerts were presented.
I logged in to iLo on the server, and watched the server perform a BIOS POST and start its boot to ESXi. Everything was looking normal.
After ESXi booted, the server came online in vCenter. I went to the server and confirmed the GRID K1 was detected. I went ahead and configured 2 GPUs for vGPU, and 2 GPUs for 3D vSGA.
I restarted the X.org service (required when changing the options above), and proceeded to add a vGPU to a virtual machine I already had configured and was using for VDI. You do this by adding a “Shared PCI Device”, selecting “NVIDIA GRID vGPU”, and I chose to use the highest profile available on the K1 card called “grid_k180q”.
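For reference, restarting the X.Org service can be done from the ESXi shell with the init script (this is how I understand it works on ESXi 6.5; the service can also be restarted from the vSphere UI):

```shell
# Restart X.Org on the ESXi host so graphics configuration changes take effect.
/etc/init.d/xorg stop
/etc/init.d/xorg start
```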
After adding and selecting OK, you should see a warning telling you that you must allocate and reserve all resources for the virtual machine; click “OK” and continue.
Power On and Testing
I went ahead and powered on the VM. I used the vSphere VM console to install the Nvidia GRID driver package (included in the driver ZIP file downloaded earlier) on the guest. I then restarted the guest.
After restarting, I logged in via Horizon and could instantly tell it was working. The next step was to disable the VMware vSGA Display Adapter in “Device Manager” and restart the guest again.
Upon restarting again, to see if I had full 3D acceleration, I opened DirectX diagnostics by clicking on “Start” -> “Run” -> “dxdiag”.
It worked! Now it was time to check the temperature of the card to make sure nothing was overheating. I enabled SSH on the ESXi host, logged in, and ran the “nvidia-smi” command.
According to this, the different GPUs ranged from 33C to 50C, which was PERFECT! Under further stress testing, I haven’t seen a core go above 56C. The ML310e also has an option in the BIOS to increase fan speed, which I may test in the future if the temps get higher.
With “nvidia-smi” you can see the 4 GPUs, power usage, temperatures, memory usage, GPU utilization, and processes. This is the main GPU manager for the card. There are some other flags you can use for relevant information.
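A few standard invocations I find handy (these flags are part of the normal nvidia-smi CLI; exact output formatting varies by driver version):

```shell
# Summary: all GPUs with temperature, power, memory, and utilization.
nvidia-smi
# Detailed temperature information only.
nvidia-smi -q -d TEMPERATURE
# Refresh the summary every 5 seconds to watch temps under load.
nvidia-smi -l 5
```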
Overall I’m very impressed, and it’s working great. While I haven’t tested any games, it’s working perfectly for videos, music, YouTube, and multi-monitor support on my 10ZiG 5948qv. I’m using 2 displays, both running at a resolution of 1920×1080.
I’m looking forward to doing some tests with this VM while continuing to use vGPU. I will also be doing some testing utilizing 3D Accelerated vSGA.
The two coolest parts of this project are:
3D Acceleration and Hardware h.264 Encoding on VMware Horizon
Getting a GRID K1 working on an HPe ML310e Gen8 v2
Leave a comment and let me know what you think! Or leave a question!
One really cool feature that was released in VMware Horizon View 7.7, was the ability to install the Horizon Agent on to a Physical PC or Physical Workstation and use the Blast Extreme protocol. It even supports 3D Acceleration via a GPU!
As a system admin, I see value in having some Physical PCs managed by the View connection server.
I’ll be detailing some information about doing this, what’s required, what works, and what doesn’t below…
Physical PCs and workstations with Windows 10 1803 Enterprise or higher can be brokered through Horizon 7 via Blast Extreme protocol.
So here’s what’s required to get going:
Windows 10 Enterprise (Enterprise license is a must)
Physical PC or Workstation
VMware Horizon Licensing
VMware Horizon 7.7 Connection Server
VMware Horizon 7.7 Agent on Physical PC/Workstation
Manual Desktop Pool (Manual is required for Physical PCs to be added)
3D Acceleration (via GPU with drivers)
3D Acceleration with Consumer GPUs
What Doesn’t Work
GPU Hardware h.264 encoding on consumer GPUs (h.264 encoding is still handled by the CPU)
I’ve been really enjoying this feature. Not only have I moved my desktop in to my server room and started remoting in using Blast, but I can think of many use cases for this (machine shops, sharing software licenses, etc.).
I’ve had numerous discussions with customers of mine who also say they see tremendous value in this after I brought it to their attention. I’ll update this post later on once I hear back about how some of my customers have deployed it.
One thing that is really cool is the fact that 3D acceleration is enabled and working if the computer has a GPU installed (along with drivers). And no, you don’t need a fancy enterprise GPU. In my setup I’m running a GeForce GTX 550 Ti and a GeForce 640.
While 3D acceleration is working, I have to note that the h.264 encoding for the Blast Extreme session is still being handled by the CPU. So while you are getting some great 3D accelerated graphics, depending on your CPU and screen resolution, you may notice some choppiness. If you have a higher end CPU, you should be able to run some pretty high resolutions. I’m currently running 2 displays at 1920×1080 on an extremely old Core 2 Quad processor.
I spent some time trying to enable the hardware h.264 encoder on the GPUs. Even when using the “NvFBCEnable.exe” (located in C:\Program Files\VMware\VMware Blast\) application to enable hardware encoding, I still notice that the encoding is being done on the CPU. I’m REALLY hoping they change this in future releases.
Another concept this opens the door for is consumer GPUs providing 3D acceleration without all the driver issues. Technically you could use the CPU settings to hide the fact the VM is being virtualized, and then install the Horizon Agent as a physical PC (even though it’s being virtualized). This should allow you to use the GPU that you’re passing through, but you still won’t get h.264 encoding on the GPU. This should stop the pesky black screen issue that’s normally seen when using this workaround.
Also, on a final note… I did find a bug where if any of the physical PCs are powered down or unavailable on the network, any logins from users entitled to that pool will time out and not work. When this issue occurs, a WoL packet is sent to the desktop during login, and the login will freeze until the physical PC becomes available. This occurs during the login phase, and will happen even if you don’t plan on using that pool. More information can be found here: https://www.stephenwagner.com/2019/03/19/vmware-horizon-view-stuck-authenticating-logging-in/
I noticed after upgrading to VMware Horizon View 7.8 and VMware Unified Access Gateway 3.5, when attempting to log in to a VMware Horizon View Connection Server via the Horizon Client, I would get stuck on “Authenticating”. If using the HTML client, it would get stuck on “Logging in”.
This will either timeout, or eventually (after numerous minutes) finally load. This occurs both with standard authentication, as well as 2FA/MFA/RADIUS authentication.
Originally, I thought this issue was related to 2FA and/or RADIUS, however after disabling both, the issue was still present. In the VDM debug logs, you may find something similar to below:
The apparent delay “Authenticating” or “Logging In” is caused by a Wake On LAN packet being sent to an unmanaged physical workstation that has the VMware View Agent installed. This is occurring because the system is powered off.
After powering on all unmanaged View agents running on physical computers, the issue should be resolved.
One of the coolest things I love about running VMware Horizon View and VDI is that you can repurpose old computers, laptops, or even netbooks into perfect VDI clients running Linux! This is extremely easy to do and gives life to old hardware you may have lying around (and we all know there’s nothing wrong with that).
I generally use Fedora and the VMware Horizon View Linux client to accomplish this. See below to see how I do it!
Download the Fedora Workstation install or netboot ISO from here.
Burn it to a DVD/CD if you have DVD/CD drive, or you can write it to a USB stick using this method here.
Install Fedora on to your laptop/notebook/netbook using the workstation install.
Update your Fedora Linux install using the following command
dnf -y upgrade
Install the prerequisites for the VMware Horizon View Linux client using these commands
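The exact prerequisite list depends on the client version, so treat the following as a hedged example rather than a definitive list — check the release notes for the client you downloaded:

```shell
# Hypothetical package set — verify against your Horizon client's release notes.
sudo dnf -y install gtk3 libXScrnSaver libpng
```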
To run the client, you can find it in the GUI applications list as “VMware Horizon Client”, or you can launch it by running “vmware-view”.
VMware Horizon View on Linux in action
Here is the VMware Horizon View Linux client running on an HP Mini 220 Netbook
-If you’re comfortable, instead of the workstation install, you can install the Fedora LXQt Desktop spin, which is a lightweight desktop environment perfect for low performance hardware or netbooks. More information and the download link for Fedora LXQt Desktop Spin can be found here: https://spins.fedoraproject.org/en/lxqt/
-If you installed Fedora Workstation and would like to install the LXQt window manager afterwards, you can do so by running the following command (after installing, at login prompt, click on the gear to change window managers):
dnf install @lxqt-desktop-environment
-Some of the prerequisites above in the guide may not be required, however I have installed them anyways for compatibility.
Well, after using the VMware Horizon Client mobile app (for Android) for a year, I finally decided to do a little write up and review. I use the android client regularly on my Samsung Tab E LTE tablet, and somewhat infrequently on my Samsung Galaxy S9+ mobile phone (due to the smaller screen).
Let’s start off by briefly explaining what VMware Horizon View is, what the client does, and finally the review. I’ll be including a couple screenshots as well to give an idea as to how the interface and resolution looks on the tablet itself.
VMware Horizon View is a product and solution that enables VDI technology for a business. VDI stands for Virtual Desktop Infrastructure. When a business uses VDI, they virtualize their desktops and use thin clients, zero clients, or the View client to access these virtualized desktops. This allows the business to utilize all the awesome technologies that virtualization brings (DRS, high availability, backup/DR, high performance, reduced hardware costs) and provide rich computing environments to their users. The technology is also particularly interesting in that it provides amazing remote access capabilities, as one can access their desktop very easily with the VMware View Client.
When you tie this on to an advanced security technology such as Duo’s MFA product, you can’t go wrong!
In special cases or large environments, enormous cost savings can be realized when implementing VDI.
What is the VMware Horizon View Mobile client for Android
As mentioned above, to access one’s virtualized desktop a client is needed. While a thin client or zero client can be used, this is beyond the scope of this post as here we are only discussing the VMware View client for Android.
You can download the VMware View client for Android from the App store (link here).
The VMware Horizon View Mobile client for Android allows you to connect to your VDI desktop remotely using your Android based phone or tablet. Below is a screenshot I took with my Samsung Tab E LTE tablet (with the side bar expanded):
VMware Horizon View Client on Android Tablet
VMware Horizon View Mobile Client for Android Experience
Please Note: There is more of the review below the screenshots. Scroll down for more!
The app appears to be very lightweight, with an easy interface. Configuration of View Connection Servers or UAGs (Unified Access Gateways) is very simple. The login process performs with RADIUS and/or MFA just as the desktop client would. In the examples below, you’ll notice I use Duo’s MFA/2FA authentication solution in combination with AD logins.
VMware Horizon View Mobile Client Android Server List
The interface is almost identical to the desktop client with very little differences. The configuration options are also very similar and allow customization of the app, with options for connection quality as an example.
VMware Horizon View Mobile Client Android Server Login
VMware Horizon View Mobile Client Android Login Duo MFA
As you can see above, the RADIUS and Duo Security Login prompts are fully functional.
VMware Horizon View Mobile Client Android Server List
VMware Horizon View Mobile Client Android Windows 10 VDI Desktop
The resolution is perfect for the tablet, and is very usable. The touch interface works extremely well, and text input works as well as it can. While this wouldn’t be used as a replacement for the desktop client, or a thin/zero client, it is a valuable tool for the mobile power user.
With how lightweight and cheap tablets are now, you could almost leave your tablet in your vehicle (although I wouldn’t recommend it), so that in the event of an emergency where you need to access your desktop, you’d be able to using the app.
Windows 10 touch functionality works great
Samsung Dex is fully supported
Webcam redirection works
Works on Airplanes using in flight WiFi
Saving credentials via Fingerprint Scanner would be nice (on the S8+ and S9+)
Being in IT, I’ve had to use this many times to log in and manage my vSphere cluster, servers, HPe iLo, check temperatures, and log in to customer environments (I prefer to log in using my VDI desktop, instead of saving client information on the device I’m carrying with me). It’s perfect for these uses.
I also regularly use VDI over LTE. Using VDI over mobile LTE connections works fantastic, however you’ll want to make sure you have an adequate data plan as the H.264 video stream uses a lot of bandwidth. Using this regularly over LTE could cause you to go over your data limits and incur additional charges.
The VMware Horizon View Mobile Client for Android also supports Samsung Dex. This means that if you have a Dex dock or the Dex pad, you can use the mobile client to provide a full desktop experience to a monitor/keyboard/mouse using your Samsung Galaxy phone. I’ll be doing a write up later to demo this (it works great).
On VMware Horizon View, after updating the View Agent on the VM, you may notice that USB redirection stops working with the error “USB Redirection is not available for this desktop”. This is due to an issue with the certificates on the VDI host (the VM running the VDI OS) after the VMware View Agent upgrade is completed.
To resolve this you must use MMC, open the local computer certificate store, browse to “VMwareView\Certificates”, delete the agent certificates (for the local agent), and finally reboot for the agent to regenerate the certificates.
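If you prefer the command line over MMC, the same store can be inspected with certutil from an elevated command prompt. The store name “VMwareView” comes from the steps above; the certificate name below is hypothetical — double-check which certificate matches your VDI host before deleting:

```batch
:: List certificates in the local machine's VMwareView store.
certutil -store VMwareView
:: Delete the agent certificate by its common name (hypothetical value).
certutil -delstore VMwareView "VDI-HOSTNAME"
```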
See below for instructions:
While connected to the VM running the VDI OS, click Start, type “mmc.exe” (without quotations), and open the Microsoft Management Console.
Open MMC by running mmc.exe
Open the “Add/Remove Snap-in” wizard.
Open the Add/Remove Snap-in Wizard
We must now open the certificate store on the local computer. Select “Certificates” in the Available Snap-ins, click “Add”, select “Computer Account”, then proceed to choose “Local Computer” and complete the wizard.
Select the Computer account certificate store on the local computer
Expand the “Certificates (Local Computer)” on the left underneath “Console Root”. Expand “VMwareView”, then expand and select “Certificates”. Select the certificate on the right that matches the local computer name of the VDI host, right click and select “Delete”. You may have to do this multiple times if multiple certificates exist for the local computer.
Delete the VMwareView local agent certificate
Restart the VDI host, and USB redirection should now be working!
Last night I updated my VMware VDI environment to VMware Horizon 7.4.0. For the most part the upgrade went smoothly, however I discovered an issue (probably unrelated to the upgrade itself, and more so just previously overlooked). When connecting with Google Chrome to VMware Horizon HTML Access via the UAG (Unified Access Gateway), an error pops up after pressing the connect button: “Failed to connect to the connection server”.
This error pops up ONLY when using Chrome, and ONLY when connecting through the UAG. If you use a different browser (Firefox, IE), this issue will not occur. If you connect using Chrome to the connection server itself, this issue will not occur. It took me hours to find out what was causing this as virtually nothing popped up when searching for a solution.
Finally I stumbled across a VMware document which mentions that for View Connection Server instances and security servers that reside behind a gateway (such as a UAG or Access Point), the instance must be aware of the address that browsers will use to connect to the gateway for HTML Access.
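That awareness is configured through a locked.properties file on the Connection Server. As a hedged sketch based on my reading of the VMware docs (verify the path and property names for your Horizon version), you create the file and then restart the Connection Server service:

```properties
# Typical path: C:\Program Files\VMware\VMware View\Server\sslgateway\conf\locked.properties
# The address browsers use to reach the gateway for HTML Access:
portalHost.1=uag.yourdomain.com
# Alternatively (less strict), the origin check can be disabled entirely:
# checkOrigin=false
```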
On a side note, I also deleted my VMware Unified Access Gateway VMs and deployed the updated version that ships with Horizon 7.4.0, meaning I deployed VMware Unified Access Gateway 3.2.0. There was an issue importing the configuration from the export backup I took from the previous version, so I had to configure it from scratch (installing certificates, configuring URLs, etc.). Be aware of this issue when importing configuration.