In this post I’m going to explain what VDI is in its simplest form, and how you can benefit from using virtual desktop infrastructure (virtualized desktops) in your EUC strategy.
Virtual Desktop Infrastructure (VDI)
VDI stands for Virtual Desktop Infrastructure. Think of your existing physical desktop infrastructure (your desktop computers, also called end user computing); now virtualize those desktop computers in a virtual environment, much like your servers probably already are, and you have Virtual Desktop Infrastructure.
End User Computing (EUC)
Traditionally, end user computing has been delivered by deploying physical (real) computers to each user in your office (and possibly to remote users). This brings with it the cost of the systems themselves, the time and cost of maintaining the hardware, and the management overhead of keeping those systems up to date.
By utilizing VDI, you can significantly reduce the cost, management overhead, and maintenance required for your EUC infrastructure.
What is VDI
When you implement a VDI solution, you virtualize your desktops and workstations on a virtualization server, much like your servers are probably already virtualized. Users connect via software, a thin client, or a zero client to establish a session that transmits and receives the video, keyboard, and mouse of the virtualized workstation.
This might sound familiar, like RDS (Remote Desktop Services). However, in an RDS environment numerous users share the same server and resources and access it in a multi-user fashion, whereas with VDI each user gets their own dedicated virtualized instance running a desktop OS like Windows 10.
How does VDI work
Using the software, thin client, or zero client, a user establishes a session to a connection broker, which then passes it along to the virtual machine running on the server. The virtual machine encodes and compresses the graphics, and the user’s keyboard and mouse are connected through to the VM.
What’s even cooler is that remote devices like printers and USB devices can also be forwarded on to the VM, giving the user the feeling that the computer running on the server is actually right in front of them.
And if that isn’t cool enough, in environments where 3D-accelerated, high-performance graphics are required, you can use special graphics cards and GPUs to provide those high-end graphics remotely to users. Technically you could game, do engineering work, edit video and graphics, and more.
Why use VDI
So your desktops are now virtualized. This means you no longer need to maintain numerous physical PCs and the hardware that is inside of them.
You can deploy a standardized golden image that is instantly cloned as users log in, giving them a pre-configured and maintained environment. This means you manage one or a few images that get deployed to hundreds of users, instead of managing hundreds of individual desktops.
If a thin client or zero client fails, you can simply deploy a new unit to the user; these devices are very inexpensive, and swapping one out reduces downtime.
In the event of a disaster, your VDI EUC environment would be integrated into your disaster recovery solution, making it very easy to get users back up and running.
One of the best parts is that the environment can be used both inside your office and externally, allowing you to provide a smooth experience for remote users. This makes business continuity a breeze for organizations that need to deploy remote or “work from home” users on the fly.
The cost of VDI
The cost to roll out a VDI solution varies depending on the number of users, types of users, and functionality you’d like.
Typically, VDI is a no-brainer for large organizations and enterprises due to the savings on hardware, management, and maintenance versus traditional desktops. But smaller organizations can also benefit from VDI; examples include organizations that rely on expensive desktops and/or laptops for engineering, software development, and other workloads that require high-cost workstations.
One last thought I want to leave you with: imagine an environment with 50-100 systems, and all the wasted power and CPU cycles when users are just browsing the internet. In a virtual environment you can over-allocate resources, which means you can identify user trends and purchase only the hardware you need based on observed workloads. This can significantly reduce the cost of hardware, especially for software development, engineering, and other high-performance computing.
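As a rough, hypothetical illustration: 100 desktops sized at 4 vCPUs each is 400 vCPUs on paper, but if monitoring shows average utilization of only 10-15%, a pair of dual-socket virtualization hosts can comfortably carry that load through overcommitment, rather than buying 100 machines’ worth of CPUs that mostly sit idle.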
In the ever-evolving world of IT and End User Computing (EUC), new technologies and solutions are constantly being developed to decrease costs, improve functionality, and help the business’ bottom line. In this pursuit, as far as end user computing goes, two technologies have emerged: Hosted Desktop Infrastructure (HDI), and Virtual Desktop Infrastructure (VDI). In this post I hope to explain the differences and compare the technologies.
We’re at a point where, thanks to the low cost of backend server computing, performance, and storage, it doesn’t make sense to waste end user hardware and resources. By deploying thin clients, zero clients, or software clients, we can reduce the cost per user for workstations or desktop computers and consolidate these on the backend. By moving EUC to the data center (or server room), we can reduce power requirements, reduce hardware and licensing costs, and take advantage of some cool technologies enabled by virtualization and/or shared storage (SANs): snapshots, fast provisioning, backup and disaster recovery, and others.
And it doesn’t stop there: utilizing these technologies minimizes the resources required for managing, monitoring, and supporting end user computing. For businesses this means a significant reduction in both costs and downtime.
What is Hosted Desktop Infrastructure (HDI) and Virtual Desktop Infrastructure (VDI)
Many IT professionals still don’t fully understand the difference between HDI and VDI, but it’s as simple as this: Hosted Desktop Infrastructure runs natively on bare metal (whether a server or an SoC) and is controlled and provided by a provisioning server or connection broker, whereas Virtual Desktop Infrastructure virtualizes the desktops (as you’re accustomed to with servers) in a virtual environment, controlled and provided via hypervisors running on the physical hardware.
Hosted Desktop Infrastructure (HDI)
As mentioned above, Hosted Desktop Infrastructure hosts the End User Computing sessions on bare metal hardware in your datacenter (on servers). A connection broker handles the connections from the thin clients, zero clients, or software clients to the bare metal, allowing the end user to see the video display and interact with the workstation instance via keyboard and mouse.
Remote Access capabilities
Reduction in EUC hardware and cost-savings
Simplifies IT Management and Support
Runs on bare metal hardware
Resources are dedicated and not shared, the user has full access to the hardware the instance runs on (CPU, Memory, GPU, etc)
Easily provide accelerated graphics to EUC instances without additional costs
Reduction in licensing as virtualization products don’t need to be used
Instance count is limited to what the physical hardware supports
Scaling out requires immediate purchase of hardware
Some virtualization features are not available since this solution doesn’t use virtualization
Additional backup strategy may need to be implemented separate from your virtualized infrastructure
If you require dedicated resources for end users and want to be as cost-effective as possible, HDI is a great candidate.
An example HDI deployment would utilize HPE Moonshot; in fact, HDI is one of the main use cases for the HPE Moonshot 1500 chassis, which allows you to provision up to 180 OS instances per chassis.
Virtual Desktop Infrastructure (VDI)
Virtual Desktop Infrastructure virtualizes the end user operating system instances exactly how you virtualize your server infrastructure. In VMware environments, VMware Horizon View can provision, manage, and maintain the end user computing environments (virtual machines), dynamically assigning, distributing, and brokering sessions for users. The software product handles the connections and interaction between the virtualized workstation instances and the thin client, zero client, or software client.
Remote Access capabilities
Reduction in EUC hardware and cost-savings
Simplifies IT Management and Support
Runs as a virtual machine
Shared resources (no wasted hardware, as end users share the server’s resources)
Easy to scale out (add more backend infrastructure as required, don’t need to “halt” scaling while waiting for equipment)
Can over-commit (over-provision)
Backup strategy is consistent with your virtualized infrastructure
Access to capabilities such as VMware DRS and VMware HA
Resources are not dedicated and are shared, users share the server resources (CPU, Memory, GPU, etc)
Extra licensing may be required
Extra licensing required for virtual accelerated graphics (GPU)
If you want to share a pool of resources, require high availability, and/or have dynamic requirements, then virtualization is the way to go. You can overcommit resources while expanding and growing your environment without any disruption of services. With virtualization you also have access to technologies such as DRS, HA, and special backup and DR capabilities.
Both technologies are great and have their own use cases depending on your business requirements. Make sure you research and weigh each of the options if you’re considering either technology. Both are amazing technologies that will complement and enhance your IT strategy.
I can’t tell you how excited I am that, after many years, I’ve finally gotten my hands on and purchased an Nvidia GRID K1 GPU. This card will be used in my homelab to learn and demo Nvidia GRID accelerated graphics on VMware Horizon View. In this post I’ll outline the details, installation, configuration, and thoughts. And of course I’ll have plenty of pictures below!
The focus will be to use this card both with vGPU and with 3D-accelerated vSGA, inside an HPE server running ESXi 6.5 and VMware Horizon View 7.8.
Please note: some, most, or all of what I’m doing is not officially supported by Nvidia, HPE, and/or VMware. I am simply doing this to learn and demo, and there was a real possibility it might not have worked, since I’m not following the vendor HCLs (Hardware Compatibility Lists). If you attempt to do this, or something similar, you do so at your own risk.
For some time I’ve been trying to source either an Nvidia GRID K1/K2 or an AMD FirePro S7150 to get started with a simple homelab/demo environment. One of the reasons it took so long was that I didn’t want to spend too much on it, especially given the chance it might not even work.
Essentially, I have 3 Servers:
HPE DL360p Gen8 (Dual Proc, 128GB RAM)
HPE DL360p Gen8 (Dual Proc, 128GB RAM)
HPE ML310e Gen8 v2 (Single Proc, 32GB RAM)
For the DL360p servers: while they’re beefy enough and have plenty of power (dual redundant power supplies) and resources, unfortunately their PCIe slots are half-height. In order to use a dual-height card, I’d need to rig something up to run an eGPU (external GPU) outside of the server.
As for the ML310e, it’s an entry-level tower server. While it does support dual-height (dual slot) PCIe cards, it has only a single 350W power supply, misses some fancy server technologies (I’ve had issues with VT-d, etc.), and has only a single processor. I should be able to install the card; however, I was worried about powering it (the server has no 6-pin PCIe power connector) and about whether ESXi would be able to use it.
Finally, I was worried about cooling. The GRID K1 and GRID K2 are passively cooled, meant to be installed into rack servers with fans running at jet-engine speeds. If I used the DL360p with an external setup, this would cause issues. If I used the ML310e internally, I had significant doubts that the cooling would be enough. The ML310e does have plastic air baffles, but only one fan for the expansion card area, and of course not all of that air would pass through the GRID K1 card.
Because of a limited budget, and the possibility I might not even get it working, I didn’t want to spend too much. I found an eBay user local to my city who had a couple of GRID K1 and GRID K2 cards, as well as a bunch of other cool stuff.
We spoke, and he decided to give me a wicked deal on the GRID K1 card. This worked out well, since the K1’s power requirements are significantly lower (130 W max versus 225 W max for the K2), making it more likely to work in the ML310e.
We set a time and a place to meet. Preemptively, I ran out to a local supply store to purchase an LP4 power splitter, as well as an LP4 to 6-pin PCIe power adapter. There were no available power connectors inside the ML310e server, so these were needed. I still thought the chances of this working were slim…
I also decided to go ahead and download the Nvidia GRID Software Package. This includes the release notes, user guide, ESXi vib driver (includes vSGA, vGPU), as well as guest drivers for vGPU and passthrough. The package also includes the GRID vGPU Manager. The driver I used was from: https://www.nvidia.com/Download/driverResults.aspx/144909/en-us
To install, I copied over the vib file “NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.618.104.22.16898673.vib” to a datastore, enabled SSH, and then ran the following command to install:
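# "datastore1" below is a placeholder for wherever the vib was copied; adjust the path to match
esxcli software vib install -v /vmfs/volumes/datastore1/NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.618.104.22.16898673.vib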
The command completed successfully, and I shut down the host. Now all that was left was to wait for our meetup.
We finally met, and the transaction went smoothly in a parking lot (people were staring at us as I handed him cash and he handed me a big brick of something folded inside grey static wrap). The card looked like it was in beautiful shape, and we had a good but brief chat. I’ll definitely be purchasing some more hardware from him.
Installing the card in the ML310e was difficult and took some time and care. First I had to remove the plastic air baffle. Then I had trouble getting the card inside the case, as the back bracket was 1cm too long to clear the opening. I had to finesse it and slide it in on an angle, but finally got it installed. The bracket on the other side (the front of the case) slid into the blue plastic case bracket. This was nice, as the ML310e was designed for extremely long PCIe expansion cards and has a bracket on the front side of the case to help support and hold the card up.
For power, I disconnected the DVD-ROM (who uses those anyways, right?) and connected the LP4 splitter and the LP4 to 6-pin power adapter, then hooked it up to the card.
I laid the cables out nicely and then re-installed the air baffle. Everything was snug and tight.
Please see below for pictures of the Nvidia GRID K1 installed in the ML310e Gen8 V2.
Powering on the server was a tense moment for me. A few things could have happened:
Server won’t power on
Server would power on but hang & report health alert
Nvidia GRID card could overheat
Nvidia GRID card could overheat and become damaged
Nvidia GRID card could overheat and catch fire
Server would boot but not recognize the card
Server would boot, recognize the card, but not work
Server would boot, recognize the card, and work
With great suspense, the server powered on as per normal. No errors or health alerts were presented.
I logged in to iLO on the server and watched it perform the BIOS POST and start its boot to ESXi. Everything looked normal.
After ESXi booted, the server came online in vCenter. I went to the host and confirmed the GRID K1 was detected, then configured 2 GPUs for vGPU and 2 GPUs for 3D vSGA.
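If you want to confirm the same thing from the ESXi shell, these two commands should do it (lspci ships with ESXi, and nvidia-smi is installed by the Nvidia vib):
lspci | grep -i nvidia   # should list the GRID K1's four GK107 GPUs
nvidia-smi               # confirms the Nvidia host driver loaded and can see the card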
I restarted the X.org service (required when changing the options above), and proceeded to add a vGPU to a virtual machine I already had configured and was using for VDI. You do this by adding a “Shared PCI Device”, selecting “NVIDIA GRID vGPU”, and choosing a profile; I chose the highest profile available on the K1 card, called “grid_k180q”.
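For reference, restarting X.org can also be done straight from the ESXi shell:
/etc/init.d/xorg restart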
After adding it and clicking OK, you should see a warning telling you that you must allocate and reserve all resources for the virtual machine; click “OK” and continue.
Power On and Testing
I went ahead and powered on the VM. I used the vSphere VM console to install the Nvidia GRID driver package (included in the driver ZIP file downloaded earlier) on the guest. I then restarted the guest.
After restarting, I logged in via Horizon and could instantly tell it was working. The next step was to disable the VMware vSGA display adapter in the guest’s “Device Manager” and restart the guest again.
Upon restarting again, I checked whether I had full 3D acceleration by opening the DirectX diagnostics tool: “Start” -> “Run” -> “dxdiag”.
It worked! Now it was time to check the temperature of the card to make sure nothing was overheating. I enabled SSH on the ESXi host, logged in, and ran the “nvidia-smi” command.
According to this, the various GPUs ranged from 33C to 50C, which was PERFECT! Even with further testing under stress, I haven’t gotten a core to go above 56C. The ML310e also has an option in the BIOS to increase fan speed, which I may try in the future if the temps get higher.
With “nvidia-smi” you can see the 4 GPUs, power usage, temperatures, memory usage, GPU utilization, and processes. This is the main GPU manager for the card. There are some other flags you can use for relevant information.
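For example, both of these are standard nvidia-smi invocations:
nvidia-smi -q -d TEMPERATURE,POWER   # detailed per-GPU temperature and power readings
nvidia-smi -l 5                      # refresh the summary output every 5 seconds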
Overall I’m very impressed, and it’s working great. While I haven’t tested any games, it’s working perfectly for videos, music, YouTube, and multi-monitor support on my 10ZiG 5948qv. I’m using 2 displays, both running at 1920×1080.
I’m looking forward to doing some tests with this VM while continuing to use vGPU. I will also be doing some testing utilizing 3D Accelerated vSGA.
The two coolest parts of this project are:
3D Acceleration and Hardware h.264 Encoding on VMware Horizon
Getting a GRID K1 working on an HPE ML310e Gen8 v2
Highly recommend getting a setup like this for your own homelab!
Uses and Projects
Well, I’m writing this “Uses and Projects” section after the original article (it’s now March 8th, 2020). I have to say I couldn’t be more impressed with this setup; I’m using it as my daily driver.
Since I set this up, I’ve used it remotely while on airplanes, for working while travelling, and even for video editing.
Some of the projects (and posts) I’ve done can be found here:
Leave a comment and let me know what you think! Or leave a question!