Jul 07 2020
 
Picture of a business office with cubicles

In the ever-evolving world of IT and End User Computing (EUC), new technologies and solutions are constantly being developed to decrease costs, improve functionality, and help the business’s bottom line. In this pursuit, two technologies have emerged for end user computing: Hosted Desktop Infrastructure (HDI) and Virtual Desktop Infrastructure (VDI). In this post I hope to explain the differences and compare the two technologies.

We’re at a point where, thanks to the low cost of backend server compute, performance, and storage, it no longer makes sense to waste hardware and resources on the end user’s desk. By deploying thin clients, zero clients, or software clients, we can reduce the cost per user for workstations or desktop computers and consolidate that compute on the backend. By moving EUC to the data center (or server room), we can reduce power requirements, reduce hardware and licensing costs, and take advantage of some cool technologies such as virtualization, shared storage (SANs), snapshots, advanced provisioning, and backup and disaster recovery.

And it doesn’t stop there: utilizing these technologies also minimizes the resources spent on managing, monitoring, and supporting end user computing. For businesses, this means a significant reduction in both costs and downtime.

What are Hosted Desktop Infrastructure (HDI) and Virtual Desktop Infrastructure (VDI)?

Many IT professionals still don’t fully understand the difference between HDI and VDI, but it’s as simple as this: Hosted Desktop Infrastructure runs natively on bare metal (whether a server or an SoC) and is controlled and provided by a provisioning server or connection broker, whereas Virtual Desktop Infrastructure virtualizes the desktops (just as you’re accustomed to with servers) and is controlled and provided via hypervisors running on the physical hardware.

Hosted Desktop Infrastructure (HDI)

As mentioned above, Hosted Desktop Infrastructure hosts the End User Computing sessions on bare metal hardware in your datacenter (on servers). A connection broker handles the connections from the thin clients, zero clients, or software clients to the bare metal, allowing the end user to see the display and interact with the workstation instance via keyboard and mouse.

Pros:

  • Remote Access capabilities
  • Reduction in EUC hardware and cost-savings
  • Simplifies IT Management and Support
  • Reduces downtime
  • Added redundancy
  • Runs on bare metal hardware
  • Resources are dedicated, not shared; the user has full access to the hardware the instance runs on (CPU, memory, GPU, etc.)
  • Easily provide accelerated graphics to EUC instances without additional costs
  • Reduction in licensing as virtualization products don’t need to be used

Cons:

  • Instance count is limited to the number of instances the physical hardware can host
  • Scaling out requires immediate purchase of hardware
  • Some virtualization features are not available since this solution doesn’t use virtualization
  • Additional backup strategy may need to be implemented separate from your virtualized infrastructure

Example:

If you require dedicated resources for end users and want to be as cost-effective as possible, HDI is a great candidate.

An example HDI deployment would utilize HPE Moonshot; hosted desktops are one of the main use cases for the HPE Moonshot 1500 chassis. HPE Moonshot allows you to provision up to 180 OS instances per HPE Moonshot 1500 chassis.

More information on the HPE Moonshot (and HPE Edgeline EL4000 Converged Edge System) can be found here: https://www.stephenwagner.com/2018/08/22/hpe-moonshot-the-absolute-definition-of-high-density-software-defined-infrastructure/

Virtual Desktop Infrastructure (VDI)

Virtual Desktop Infrastructure virtualizes the end user operating system instances exactly the way you virtualize your server infrastructure. In VMware environments, VMware Horizon View can provision, manage, and maintain the end user computing environments (virtual machines), dynamically assigning, distributing, and brokering sessions for users. The software handles the connections and interaction between the virtualized workstation instances and the thin client, zero client, or software client.

Pros:

  • Remote Access capabilities
  • Reduction in EUC hardware and cost-savings
  • Simplifies IT Management and Support
  • Reduces downtime
  • Added redundancy
  • Runs as a virtual machine
  • Shared resources (no hardware sits idle, as end users share the resource pool)
  • Easy to scale out (add more backend infrastructure as required, don’t need to “halt” scaling while waiting for equipment)
  • Can over-commit (over-provision)
  • Backup strategy is consistent with your virtualized infrastructure
  • Capabilities such as VMware DRS, VMware HA

Cons:

  • Resources are shared rather than dedicated; users share the server resources (CPU, memory, GPU, etc.)
  • Extra licensing may be required
  • Extra licensing required for virtual accelerated graphics (GPU)

Example:

If you want to share a pool of resources, require high availability, and/or have dynamic requirements, then virtualization is the way to go. You can over-commit resources while expanding and growing your environment without any interruption of service. With virtualization you also have access to technologies such as DRS, HA, and specialized backup and DR capabilities.

An example use case of VMware Horizon View and VDI can be found at: https://www.digitallyaccurate.com/blog/2018/01/23/vdi-use-case-scenario-machine-shops/

 Conclusion

Both technologies are great and have their own use cases depending on your business requirements. Make sure you research and weigh each of the options if you’re considering either technology. Both are amazing technologies that will complement and enhance your IT strategy.

Jun 07 2020
 

This month, on June 23rd, HPE is hosting its annual HPE Discover event. This year is a little bit different: COVID-19 has resulted in a change from the usual in-person event, and this year’s event is now being hosted as a virtual experience.

I expect it’ll be the same great content as every year; the only difference is that you’ll be able to experience it virtually from the comfort of your own home.

I’m especially excited to say that I’ve been invited to be a special VIP Influencer for the event, so I’ll be posting some content on Twitter and LinkedIn, and of course generating some posts on my blog.

HPE Discover Register Now Graphic
Register for HPE Discover Now

Stay tuned, and don’t forget to register at: https://www.hpe.com/us/en/discover.html

The content catalog is now live!

May 26 2020
 

So you want to add NVMe storage capability to your HPE Proliant DL360p Gen8 (or other Proliant Gen8 server) and don’t know where to start? Well, I was in the same situation until recently. However, after much research and a little bit of spending, I now have 8TB of NVMe storage in my HPE DL360p Gen8 server thanks to the IOCREST IO-PEX40152.

Unsupported, you say? Well, there are some of us who like to live life dangerously, and there are also those of us with really cool homelabs. I like to think I’m the latter.

PLEASE NOTE: This is not a supported configuration. You’re doing this at your own risk. Also, note that consumer/prosumer NVMe SSDs do not have PLP (Power Loss Protection) technology. You should always use supported configurations and enterprise-grade NVMe SSDs in production environments.

DISCLAIMER: If you attempt what I did in this post, you are doing it at your own risk. I won’t be held liable for any damages or issues.

Use Cases

There’s a number of reasons why you’d want to do this. Some of them include:

  • Server Storage
  • VMware Storage
  • VMware vSAN
  • Virtualized Storage (SDS as example)
  • VDI
  • Flash Cache
  • Special applications (database, high IO)

Adding NVMe capability

Well, after all that research I mentioned at the beginning of the post, I installed an IOCREST IO-PEX40152 inside of an HPE Proliant DL360p Gen8 to add NVMe capabilities to the server.

IOCREST IO-PEX40152 with 4 x 2TB Sabrent Rocket 4 NVME

At first I was concerned about dimensions: technically the card fit, but only just. I bought it anyways, along with 4 x 2TB Sabrent Rocket 4 NVMe SSDs.

The end result?

Picture of an HPE DL360p Gen8 with NVME SSD
HPE DL360p Gen8 with NVME SSD

IMPORTANT: Due to the airflow of the server, I highly recommend disconnecting and removing the fan built in to the IO-PEX40152. The DL360p server creates more than enough airflow on its own, and that airflow could spin up the card’s fan, causing it to act as a generator and damage the card and NVMe SSDs.

Also, do not attempt to install the case cover; additional modification is required (see below).

The Fit

Installing the card inside of the PCIe riser was easy, but snug. The metal heatsink actually comes into contact with the metal on the PCIe riser.

Picture of an IO-PEX40152 installed on DL360p PCIe Riser
IO-PEX40152 installed on DL360p PCIe Riser

You’ll notice how the card just barely fits inside of the 1U server. Some effort needs to be put in to get it installed properly.

Picture of an DL360p Gen8 1U Rack Server with IO-PEX40152 Installed
HPE DL360p Gen8 with IO-PEX40152 Installed

There are ribbon cables (and plastic fittings) directly where the end of the card sits, so you need to gently push these down and move the cables to the side, where there’s a small amount of room available.

We can’t put the case back on… Yet!

Unfortunately, just when I thought I was in the clear, I realized the case cover of the server could not be installed. The metal bracket and locking mechanism on the case cover need the space where a portion of the heatsink sits; attempting to install the cover will cause it to hit the card.

Picture of the HPE DL360p Gen8 Case Locking Mechanism
HPE DL360p Gen8 Case Locking Mechanism

The above photo shows the locking mechanism protruding out of the case cover. This will hit the card (with the IOCREST IO-PEX40152 heatsink installed). If the heatsink is removed, the case might gently touch the card in its unlocked and recessed position, but from my measurements it clears the card when fully locked and closed.

I had to come up with a temporary fix while I figure out what to do. Flip the lid and weight it down.

Picture of an HPE DL360p Gen8 case cover upside down
HPE DL360p Gen8 case cover upside down

For stability and other tests, I simply put the case cover on upside down and weighed it down with weights. Cooling is working great, and even under high load I haven’t seen the SSDs go above 38 Celsius.
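
As a side note, if the SSDs are being used by ESXi directly (not passed through to a VM), you can also check drive temperatures from the host itself via SMART. A rough sketch from an SSH session, with the device identifier below as a placeholder, and keeping in mind that SMART output for NVMe devices depends on driver support:

# list storage devices and their identifiers
esxcli storage core device list | grep -i "Display Name"
# query SMART data (including drive temperature) for a specific device; replace the placeholder with your own identifier
esxcli storage core device smart get -d <device_identifier>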

The plan moving forward was to remove the IO-PEX40152 heatsink and install individual heatsinks on the NVMe SSDs as well as the PEX PCIe switch chip. This should clear up enough room for the case cover to be installed properly.

The fix

I went on to Amazon and purchased the following items:

4 x GLOTRENDS M.2 NVMe SSD Heatsink for 2280 M.2 SSD

1 x BNTECHGO 4 Pcs 40mm x 40mm x 11mm Black Aluminum Heat Sink Cooling Fin

They arrived within days with Amazon Prime. I started to install them.

Picture of Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Picture of IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME

And now we install it in the DL360p Gen8 PCIe riser and install it in to the server.

You’ll notice it’s a nice fit! I had to compress some of the heat-conductive compound on the PEX chip heatsink, as the heatsink was slightly too tall by about 1/16th of an inch. After doing this it fit nicely.

Also, note one of the cable/ribbon connectors by the SAS connections. I re-routed one of the cables between the SAS connectors so they could be folded and lie under the card instead of pushing straight up into the end of the card.

As I mentioned above, the locking mechanism on the case cover may come into contact with the bottom of the IOCREST card when it’s in the unlocked and recessed position. With this setup, do not unlock or open the case when the server is running or plugged in, as it may short the board. I have confirmed that when it’s closed and locked, it clears the card. To avoid “accidents” I may come up with a non-conductive cover for the chips it hits (to the left of the fan connector on the card in the image).

And with that, we’ve closed the case on this project…

Picture of a HPE DL360p Gen8 Case Closed
HPE DL360p Gen8 Case Closed

One interesting thing to note is that the NVMe SSDs are running around 4-6 Celsius cooler post-modification with the custom heatsinks than with the stock heatsink. I believe this is due to the awesome airflow achieved in the Proliant DL360 servers.

Conclusion

I’ve been running this configuration for 6 days now, stress-testing it, and it’s been working great. With the server running VMware ESXi 6.5 U3, I am able to pass through the individual NVMe SSDs to virtual machines. Best of all, installing this card did not cause the fans to spin up, which is often the case when using non-HPE PCIe cards.
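
If you want to confirm what the host sees before setting up passthrough, a couple of commands from an SSH session are handy. This is just a quick sketch; the exact output will vary with your hardware:

# storage adapters claimed by ESXi (NVMe controllers that are not passed through show up with the nvme driver)
esxcli storage core adapter list
# full PCI inventory; the NVMe controllers appear as "Non-Volatile memory controller" class devices
esxcli hardware pci list | less

Passthrough itself is then toggled in the vSphere Web Client under the host’s Configure -> Hardware -> PCI Devices section, followed by a host reboot.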

This is the perfect mod to add NVMe storage to your server, or even to try out technology like VMware vSAN. I have a number of cool projects coming up using this that I’m excited to share.

Apr 29 2020
 
Screenshot of HPE MSA Storage Array Health Check

Are you having issues with your HPE MSA SAN? Want more insight into your storage array? Last week, HPE made available a new tool that allows you to check the health of your HPE MSA Storage Array!

While this tool was released to the public last week, rumor has it that this is the same tool that HPE uses internally when providing support to customers.

This tool is FREE to use!

I originally spotted this on the MSA Storage section of the HPE Community forums here: https://community.hpe.com/t5/msa-storage/new-hpe-tool-msa-health-check/td-p/7085594

HPE MSA Array Health Check Video

See below for a video discussing and demonstrating the HPE MSA Health Check on an HPE MSA 2040 SAN array.

Accessing the MSA Health Check

The HPE MSA Health Check site can be found at https://msa.ext.hpe.com/MSALogUploader.aspx

The following HPE MSA Arrays are supported:

  • HPE P2000 G3 MSA Array
  • HPE MSA 1040/1050
  • HPE MSA 2040 and variants (MSA 2042)
  • HPE MSA 2050 and variants (MSA 2052)

How to use the MSA Health Check

Using the HPE MSA Health Check is easy!

  1. Log on to your MSA Array SMU (Storage Management Utility)
  2. On the bottom left of the UI, click on the up-arrow (shown below) and select “Save Logs”
    Save Logs on HPE MSA Array Screenshot
  3. Wait for the logs to generate.
  4. Download the logs to your computer
  5. Open the MSA Storage Array Health Check
    Screenshot of HPE MSA Storage Array Health Check
  6. Click on the “Upload MSA Log File (.zip)” button, and then select your log dump zip file
  7. Wait for the File to upload
    Screenshot of Upload status on HPE MSA Array Log File
  8. View your health report, and optionally download a PDF copy
    Screenshot of a HPE MSA Array Health Check Report

And that’s it!
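
As a side note, if you’d rather script the log collection than click through the SMU, the same log bundle can also be pulled over FTP on most MSA firmware revisions (assuming the FTP service is enabled on the array). Roughly, with a placeholder management IP and filename:

# connect to the controller's management IP and log in with a user that has the manage role
ftp 10.0.0.1
# once logged in, download the log bundle to a local zip file
get logs msa-logs.zip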

Available Tests

When running a health check, the following tests and checks are made on the log files:

  • Background Scrub Setting
  • Compact Flash Events
  • Controller Firmware Version Mismatch
  • Controller Partner Firmware Update Setting
  • Default User Check
  • Drive Firmware Version Mismatch
  • Enclosure Firmware Version Mismatch
  • NonSecure Protocols
  • Notification Settings
  • Sparing Best Practices
  • Unhealthy Component Check
  • Volume Mapping

Conclusion

Even if your MSA array is healthy, I’d still recommend generating a log dump and loading it up into the MSA Health Check. Any extra visibility is good visibility!

Apr 02 2020
 
HPE iLO Logo

In response to the COVID-19 crisis, and to help customers and partners, HPE is providing a free iLO Advanced license for your HPE servers.

These licenses are full licenses that are valid until January 1st, 2021. This means you’ll have the full Integrated Lights-Out product through the end of 2020.

This was announced on the HPE Servers Facebook page, followed by a post on HPE’s blog!

You can watch my video below, or read on for more information!

How to get your free iLO Advanced license

To get your free license, head over to the link below.

https://www.hpe.com/us/en/resources/integrated-systems/ilo-advanced-trial.html

Free iLO Advanced License
Free iLO Advanced License

This link will take you to sign up for an HPE iLO Advanced trial license. After filling out the form, you’ll be able to download your iLO welcome letter, which includes your iLO license key (valid through 2020) and instructions.

Free HPE iLO Advanced License key, instructions, and expiry date of January 1, 2021.
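
Applying the key is straightforward through the iLO 4 web UI (Administration -> Licensing). If you’d rather script it, my understanding is that the hponcfg utility can also push the key using a small RIBCL file; a rough sketch, with the key and credentials as placeholders:

<RIBCL VERSION="2.0">
  <LOGIN USER_LOGIN="admin" PASSWORD="password">
    <RIB_INFO MODE="write">
      <LICENSE>
        <!-- replace with the key from your iLO welcome letter -->
        <ACTIVATE KEY="XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"/>
      </LICENSE>
    </RIB_INFO>
  </LOGIN>
</RIBCL>

Save it as something like ilo_license.xml and run "hponcfg -f ilo_license.xml" from the host OS. When hponcfg runs locally, the login values in the script aren’t actually used, but the tags need to be present.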

This is awesome, and will definitely help a ton of IT administrators remotely manage, monitor, and maintain their servers this year.

Thanks HPE!

May 18 2019
 
VMware Horizon View Mobile Client Android Windows 10 VDI Desktop

Since I’ve installed and configured my Nvidia GRID K1, I’ve been wanting to do a graphics quality demo video. I finally had some time to put a demo together.

I wanted to highlight what type of graphics can be achieved in a VDI environment. Even using an old Nvidia GRID K1 card, we can still achieve amazing graphical performance in a virtual desktop environment.

This demo outlines 3D accelerated graphics provided by vGPU.

Demo Video

Please see below for the video:

Information

Demo Specifications

  • VMware Horizon View 7.8
  • NVidia GRID K1
  • GRID vGPU Profile: GRID K180q
  • HPE ML310e Gen8 V2
  • ESXi 6.5 U2
  • Virtual Desktop: Windows 10 Enterprise
  • Game: Steam – Counter-Strike Global Offensive (CS:GO)

Please Note

  • Resolution of the Virtual Desktop is set to 1024×768
  • Blast Extreme is the protocol used
  • Graphics on game are set to max
  • Motion is smooth in person, screen recorder caused some jitter
  • This video was then edited on that VM using CyberLink PowerDirector
  • vGPU is being used on the VM
May 17 2019
 
Right side of MSA 2040

You may encounter a situation where you’re unable to connect to the management interface or NIC on your HPE MSA array. When this condition occurs, you are not able to ping the NIC, and the SMU (web interface) will not load.

When you physically look at the array, the amber warning light may or may not be flashing.

If you have a dual controller setup, and connect to the SMU on the other controller, you may see numerous log entries where the management NIC port status changes repeatedly from up to down.

What’s happening

I’ve witnessed this issue occur on 2 separate HPE MSA 2040 storage arrays (both with dual controllers).

When you physically look at the management NICs on the controller in question, you’ll notice that the port status LED indicator turns on, and turns off repeatedly. The link status keeps changing from up to down (as reflected in the logs).
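
If you want to see what the array itself reports, the CLI on the working controller (SSH to its management IP) is useful. A quick sketch of what I’d check; the exact output varies slightly between firmware versions:

# health and status of both controller modules
show controllers
# management port settings and health for controllers A and B
show network-parameters
# restarting just the Management Controller on the affected side is worth a try before physically intervening
restart mc b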

The Fix

Restarting the unit will have no effect. Changing the network cable will have no effect.

To resolve this issue, you must play with the network cable and re-seat it a few times (even re-seating it only half-way a couple of times, as sketchy as that sounds).

If you can get the link status up, and disconnect/reconnect the cable before the light turns off, the connection will stay up. It will continue to function and survive restarts until sometime in the future when you disconnect it and reconnect it.

Replacing the controller may also fix it; however, in the first instance where I observed this, the replacement controller exhibited the same behavior months later.

May 02 2019
 
Nvidia GRID Logo

I can’t tell you how excited I am that, after many years, I’ve finally gotten my hands on and purchased an Nvidia GRID K1 GPU. This card will be used in my homelab to learn, and to demo Nvidia GRID accelerated graphics on VMware Horizon View. In this post I’ll outline the details, installation, configuration, and my thoughts. And of course I’ll have plenty of pictures below!

The focus will be to use this card both with vGPU and with 3D-accelerated vSGA, inside an HPE server running ESXi 6.5 and VMware Horizon View 7.8.

Please Note: Some, most, or all of what I’m doing is not officially supported by Nvidia, HPE, and/or VMware. I am simply doing this to learn and to demo, and there was a real possibility that it might not work since I’m not following the vendor HCLs (Hardware Compatibility Lists). If you attempt to do this, or something similar, you do so at your own risk.

Nvidia GRID K1 Image

For some time I’ve been trying to source either an Nvidia GRID K1/K2 or an AMD FirePro S7150 to get started with a simple homelab/demo environment. One of the reasons it took so long was that I didn’t want to spend too much on it, especially with the chance it might not even work.

Essentially, I have 3 Servers:

  1. HPE DL360p Gen8 (Dual Proc, 128GB RAM)
  2. HPE DL360p Gen8 (Dual Proc, 128GB RAM)
  3. HPE ML310e Gen8 v2 (Single Proc, 32GB RAM)

While the DL360p servers are beefy enough, with plenty of power (dual redundant power supplies) and resources, unfortunately their PCIe slots are half-height. In order for me to use a dual-height card, I’d need to rig something up to run an eGPU (external GPU) outside of the server.

As for the ML310e, it’s an entry-level tower server. While it does support dual-height (dual slot) PCIe cards, it only has a single 350W power supply, lacks some fancier server technologies (I’ve had issues with VT-d, etc.), and has only a single processor. I should be able to install the card; however, I’m worried about powering it (the server has no 6-pin PCIe power connector) and about whether ESXi will be able to use it.

Finally, I was worried about cooling. The GRID K1 and GRID K2 are typically passively cooled and meant to be installed in rack servers with fans running at jet engine speeds. If I used the DL360p with an external setup, this would cause issues. If I used the ML310e internally, I had significant doubts that cooling would be enough. The ML310e did have the plastic air baffles, but only one fan for the expansion card area, and of course not all of the air would pass through the GRID K1 card.

The Purchase

Because of a limited budget, and the possibility that I might not even be able to get it working, I didn’t want to spend too much. I found an eBay user local to my city who had a couple of GRID K1 and GRID K2 cards, as well as a bunch of other cool stuff.

We spoke and he decided to give me a wicked deal on the GRID K1 card. I thought this was fantastic, as the power requirements of the K1 are significantly lower at 130 W max power versus the K2’s 225 W max power, making it more likely to work in the ML310e.

NVIDIA GRID K1 and K2 Specifications
NVIDIA GRID K1 and K2 Specification Table

The above chart is a capture from:
https://www.nvidia.com/content/cloud-computing/pdf/nvidia-grid-datasheet-k1-k2.pdf

We set a time and a place to meet. Preemptively, I ran out to a local supply store to purchase an LP4 power splitter, as well as an LP4 to 6-pin PCIe power adapter. There were no available power connectors inside the ML310e server, so these were needed. I still thought the chances of this working were slim…

These are the adapters I purchased:

Preparation and Software Installation

I also decided to go ahead and download the Nvidia GRID Software Package. This includes the release notes, user guide, ESXi vib driver (includes vSGA, vGPU), as well as guest drivers for vGPU and pass through. The package also includes the GRID vGPU Manager. The driver I used was from:
https://www.nvidia.com/Download/driverResults.aspx/144909/en-us

To install, I copied over the vib file “NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib” to a datastore, enabled SSH, and then ran the following command to install:

esxcli software vib install -v /path/to/file/NVIDIA-vGPU-kepler-VMware_ESXi_6.5_Host_Driver_367.130-1OEM.650.0.0.4598673.vib

The command completed successfully and I shut down the host. Now I waited to meet.
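
Before shutting the host down, you can quickly confirm the VIB registered (the driver won’t do much until the card is physically installed, but this at least validates the install):

esxcli software vib list | grep -i nvidia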

We finally met, and the transaction went smoothly in a parking lot (people were staring at us as I handed him cash and he handed me a big brick of something folded inside of grey static wrap). The card looked like it was in beautiful shape, and we had a good but brief chat. I’ll definitely be purchasing some more hardware from him.

Hardware Installation

Installing the card in the ML310e was difficult and took some time and care. First I had to remove the plastic air baffle. Then I had issues getting the card inside the case, as the back bracket was about 1 cm too long to let the card drop straight in. I had to finesse it and slide it in on an angle, but finally got it installed. The bracket on the other end (the front side of the case) slid into the blue plastic case bracket. This was nice, as the ML310e was designed for extremely long PCIe expansion cards and has a bracket on the front side of the case to help support and hold the card up as well.

For power, I disconnected the DVD-ROM (who uses those anyways, right?), connected the LP4 splitter and the LP4 to 6-pin power adapter, and finally hooked it up to the card.

I laid the cables out nicely and then re-installed the air baffle. Everything was snug and tight.

Please see below for pictures of the Nvidia GRID K1 installed in the ML310e Gen8 V2.

Host Configuration

Powering on the server was a tense moment for me. A few things could have happened:

  1. Server won’t power on
  2. Server would power on but hang & report health alert
  3. Nvidia GRID card could overheat
  4. Nvidia GRID card could overheat and become damaged
  5. Nvidia GRID card could overheat and catch fire
  6. Server would boot but not recognize the card
  7. Server would boot, recognize the card, but not work
  8. Server would boot, recognize the card, and work

With great suspense, the server powered on as per normal. No errors or health alerts were presented.

I logged in to iLO on the server and watched it perform the BIOS POST and start its boot to ESXi. Everything was looking well and normal.

After ESXi booted, the server came online in vCenter. I went to the host settings and confirmed the GRID K1 was detected. I went ahead and configured 2 GPUs for vGPU, and 2 GPUs for 3D vSGA.

ESXi Graphics Settings for Host Graphics and Graphics Devices
ESXi Host Graphics Devices Settings
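
The same information can also be checked from the command line over SSH. A quick sketch, assuming ESXi 6.5 where the esxcli graphics namespace is available:

# confirm the NVIDIA kernel module loaded after the reboot
vmkload_mod -l | grep -i nvidia
# list the physical GPUs the host has detected
esxcli graphics device list
# show the host default graphics type and assignment policy
esxcli graphics host get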

VM Configuration

I restarted the X.org service (required when changing the options above), and proceeded to add a vGPU to a virtual machine I already had configured and was using for VDI. You do this by adding a “Shared PCI Device” and selecting “NVIDIA GRID vGPU”; I chose the highest profile available on the K1 card, called “grid_k180q”.
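
For reference, the X.org restart mentioned above can be done from an SSH session on the host:

# restart the Xorg service so the new host graphics configuration takes effect
/etc/init.d/xorg restart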

Virtual Machine Edit Settings with NVIDIA GRID vGPU and grid_k180q profile selected
VM Settings to add NVIDIA GRID vGPU

After adding the device and clicking “ok”, you should see a warning telling you that you must allocate and reserve all resources for the virtual machine; click “ok” and continue.

Power On and Testing

I went ahead and powered on the VM. I used the vSphere VM console to install the Nvidia GRID driver package (included in the driver ZIP file downloaded earlier) on the guest. I then restarted the guest.

After restarting, I logged in via Horizon and could instantly tell it was working. The next step was to disable the VMware vSGA Display Adapter in “Device Manager” and restart the guest again.

Upon restarting again, to see if I had full 3D acceleration, I opened DirectX diagnostics by clicking on “Start” -> “Run” -> “dxdiag”.

DirectX Diagnostic Tool (dxdiag) showing Nvidia Grid K1 on VMware Horizon using vGPU k180q profile
dxdiag on GRID K1 using k180q profile

It worked! Now it was time to check the temperature of the card to make sure nothing was overheating. I enabled SSH on the ESXi host, logged in, and ran the “nvidia-smi” command.

nvidia-smi command on ESXi host showing GRID K1 information, vGPU information, temperatures, and power usage
“nvidia-smi” command on ESXi Host

According to this, the different GPUs ranged from 33C to 50C, which was PERFECT! With further testing under stress, I haven’t gotten a core to go above 56C. The ML310e also has an option in the BIOS to increase fan speed, which I may test in the future if the temps get higher.

With “nvidia-smi” you can see the 4 GPUs, power usage, temperatures, memory usage, GPU utilization, and processes. This is the main GPU manager for the card. There are some other flags you can use for relevant information.

nvidia-smi with vgpu flag for vgpu information
“nvidia-smi vgpu” for vGPU Information
nvidia-smi with vgpu -q flag
“nvidia-smi vgpu -q” to Query more vGPU Information

Final Thoughts

Overall I’m very impressed, and it’s working great. While I haven’t tested any games, it’s working perfectly for videos, music, YouTube, and multi-monitor support on my 10ZiG 5948qv. I’m using 2 displays, both running at 1920×1080.

I’m looking forward to doing some tests with this VM while continuing to use vGPU. I will also be doing some testing utilizing 3D Accelerated vSGA.

The two coolest parts of this project are:

  • 3D Acceleration and Hardware h.264 Encoding on VMware Horizon
  • Getting a GRID K1 working on an HPE ML310e Gen8 v2

Highly recommend getting a setup like this for your own homelab!

Uses and Projects

Well, I’m writing this “Uses and Projects” section after I wrote the original article (it’s now March 8th, 2020). I have to say I couldn’t be more impressed with this setup, which I’m using as my daily driver.

Since I’ve set this up, I’ve used it remotely while on airplanes, while working and travelling, and even for video editing.

Some of the projects (and posts) I’ve done can be found here:

Leave a comment and let me know what you think! Or leave a question!

Feb 18 2019
 
ESXi Fatal error: 8 (Device Error)

Unable to boot ESXi from USB or SD Card on HPE Proliant Server

After installing HPE iLO Amplifier on your network and updating the iLO 4 firmware to 2.60 or 2.61, you may notice that your HPE Proliant servers fail to boot ESXi from a USB drive or SD card.

This was occurring on 2 ESXi hosts. Both were HPE Proliant DL360p Gen8 servers. One server was using an internal USB drive for ESXi, while the other was using an HPE-branded SD card.

The issue started occurring on both hosts after a planned InfoSight implementation. Both hosts’ iLO firmware was upgraded to 2.61, iLO Amplifier was deployed (and the servers added), and the Amplifier was connected to an HPE InfoSight account.

Update – May 24th 2019: As an HPE partner, I have been working with HPE, the product manager, and development team on this issue. HPE has provided me with a fix to test that I have been able to verify fully resolves this issue! Stay tuned for more information!

Update – June 5th 2019: Great news! As Bob Perugini (WW Product Manager at HPE) put it: “HPE is happy to announce that this issue has been fixed in latest version of iLO Amplifier Pack, v1.40. To download iLO Amplifier Pack v1.40, go to http://www.hpe.com/servers/iloamplifierpack and click “download”.” Scroll to the bottom of the post for more information!

Please see below for errors:

Errors

ESXi Fatal error: 8 (Device Error)
ESXi Fatal error: 8 (Device Error)
Error loading /s.v00
Compressed MD5: 00000000000000000000000000
Decompressed MD5: 00000000000000000000000000
Fatal error: 8 (Device Error)
Error mboot.c32 attempted DOS system call
Error mboot.c32 attempted DOS system call
mboot.c32: attempted DOS system call INT 21 0d00 E8004391
boot:

Symptoms

This issue may occur intermittently, on the majority of boots, or on every boot. Re-installing ESXi on the media, as well as replacing the USB drive/SD card, has no effect. Installation will be successful, however the issue is still experienced on boot.

HPE technical support was unable to determine the root cause of the issue. We found the source of the issue ourselves, reported it to HPE technical support, and are waiting for an update.

The Issue and Fix

This issue occurs because the HPE iLO Amplifier is running continuous server inventory scans while the hosts are booting. When one inventory completes, it restarts another scan.

The following can be noted:

  • iLO Amplifier inventory percentage resets back to 0% and starts again numerous times during the server boot
  • Inventory scan completes, only to restart again numerous times during the server boot
  • Inventory scan resets back to 0% during numerous different phases of BIOS initialization and POST.
HPE iLO Amplifier Inventory
HPE iLO Amplifier Inventory

We noticed that once the HPE iLO Amplifier virtual machine was powered off, not only did the servers boot faster, but they also booted 100% successfully each time. Powering on the iLO Amplifier would cause the ESXi hosts to fail to boot once again.

I’d also like to note that on the host using the SD-Card, the failed boot would actually completely lock up iLO, and would require physical intervention to disconnect and reconnect the power to the server. We were unable to restart the server once it froze (this did not happen to the host using the USB drive).

There are some settings on the HPE iLO Amplifier to control the performance and intervals of the inventory scans; however, we noticed that modifying these settings did not alter or stop the issue.

As a temporary workaround, make sure your iLO amplifier is powered off during any maintenance to avoid hosts freezing/failing to boot.
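
If the iLO Amplifier appliance happens to run on one of the ESXi hosts you’re working on, it can be powered off from the host shell. A rough sketch, with the VM name and ID as placeholders:

# find the VM ID of the iLO Amplifier appliance (match on whatever you named it)
vim-cmd vmsvc/getallvms | grep -i amplifier
# shut it down gracefully via VMware Tools (added to the appliance in v1.40); use power.off on older versions
vim-cmd vmsvc/power.shutdown <vmid>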

To fully resolve this issue, upgrade your iLO Amplifier to the latest version (1.40 as of the time of this update). The latest version can be downloaded at: http://www.hpe.com/servers/iloamplifierpack.

Update – April 10th 2019

I’ve tried downgrading to the earliest supported iLO version, 2.54, and the issue still occurs.

I also upgraded to the newest version, 2.62, which presented some new issues.

On the first boot, the BIOS reported memory access issues on Processor 1 socket 1, then another error reporting memory access issues on Processor 1 socket 4.

I disconnected the power cables, reconnected them, and restarted the server. On that boot, the server didn’t even detect the bootable USB stick.

Again, after shutting down the iLO Amplifier, the server booted properly and the issue disappeared.

Update – May 24th 2019

As an HPE partner, I have been working with HPE, the product manager, and development team on this issue. HPE has provided me with a fix to test that I have been able to verify fully resolves this issue! Stay tuned for more information!

Update – June 5th 2019 – ITS FIXED!!!

Great news, as the issue is now fixed! As Bob Perugini (Worldwide Product Manager at HPE) put it:

HPE is happy to announce that this issue has been fixed in latest version of iLO Amplifier Pack, v1.40.

To download iLO Amplifier Pack v1.40, go to http://www.hpe.com/servers/iloamplifierpack and click “download”.


Here’s what’s new in iLO Amplifier Pack v1.40:
─ Available as a VMware ESXi appliance and as a Hyper-V appliance (Hyper-V is new)
─ VMware tools have been added to the ESXi appliance
─ Ability to schedule the time of the daily transmission of Active Health System (AHS) data to InfoSight
─ Ability to opt-in and allow the IP address and hostname of the server to be transmitted to InfoSight and displayed
─ Test connectivity button to help verify iLO Amplifier Pack has successfully connected to InfoSight
─ Allow user authentication credentials for the proxy server when connecting to InfoSight
─ Added ability to specify IP address or hostname for the HPE RDA connection when connection to InfoSight
─ Ability to send updated AHS data “now” for an individual server
─ Ability to stage firmware and driver updates to the iLO Repository and then deploy the staged updates at a later date or time (HPE Gen10 servers only)
─ Allow the firmware and driver updates of servers whose iLO has been configured in CNSA (Commercial National Security Algorithm) mode (HPE Gen10 servers only)

Nov 04 2018
 

This weekend I came across a big issue with my HPE MSA 2040 where one of the SAN controllers became unresponsive and appeared to have failed, because it would not boot.

It all started when I decided to clean the MSA SAN. I try to clean the components once or twice a year to remove dust and make sure nothing is getting jammed up. Sometimes I’ll shut the entire unit down and remove the individual components; other times I’ll remove them while the unit is operating. Because of the redundancies, and since I have two controllers, I can remove and clean each controller individually at separate times.

Please Note: When dusting equipment with fans, never allow the fans to spin up with compressed air as this can generate current which can damage components. Never allow compressed air flow to spin up fans.

After cleaning out the power supplies, it came time to clean the controllers.

The Problem

As always, I logged in to the SMU to shut down the storage controller in controller A. I shut it down, and the blue LED illuminated, indicating it was safe for removal. I then proceeded to remove it, clean it, and re-insert it. The controller came back online, and ownership of the applicable disk groups was successfully moved back. Controller A was done. I continued with the same process for controller B: I logged in to shut down the storage controller in controller B. It shut down just like controller A, and the blue removable LED illuminated. I was able to remove it, clean it, and re-insert it.
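
For reference, the same shutdown can be done from the CLI over SSH instead of the SMU. A rough equivalent, going from memory of the MSA 2040 CLI syntax:

# confirm both controllers are healthy and redundancy is active before touching anything
show controllers
show redundancy-mode
# shut down the Storage Controller in controller module B (use "a" for module A)
shutdown b
# after cleaning and re-inserting the module, confirm it has rejoined
show redundancy-mode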

However, controller B did not come back online.

After inserting controller B, the status light was flashing (as if it was booting). I waited 20 minutes with no change. The SMU on controller B was responding to HTTPS requests, however you could not log on due to the error “system is initializing”. SSH was functioning and you could log in and issue commands, however any command to get information would return “Please wait while this information is pulled from the MC controller”, and ultimately fail. The SMU on controller A would report a controller fault on controller B, and not provide any other information (including port status on controller B).

I then tried to re-seat the controller with the array still running. Gave it plenty of time with no effect.

I then removed the failed controller, shutdown the unit, powered it back on (only with controller A), and re-inserted Controller B. Again, no effect.

The Fix

At this point I was thinking the controller may have failed or died during the cleaning process. I was just about to call HPE support for a replacement when I noticed the “Power LED” light inside of the failed controller would flash every 5 seconds while removed.

This made me start to wonder if there was an issue writing the cache to the compact flash card, or if the controller was still running off battery power but had completely frozen.

I tried these 3 things on the failed controller while it was unplugged and removed:

  1. I left the controller untouched for 1 hour out of the array (to maybe let it finish whatever it was doing while on battery power)
  2. There’s an unlabeled button on the back of the controller. As a last resort (thinking it was a reset button), I pressed and held it for 20 seconds, waited a minute, then briefly pressed it for 1 second while it was out of the unit.
  3. I removed the Compact Flash card from the controller for 1 minute, then re-inserted it, hoping this would fail the cache copy if it was stuck in the process of writing cache to compact flash.

I then re-inserted the controller, and it booted fine! It was now functioning and working (and came up very fast). Looking at the logs, there’s no record of what occurred between the first shutdown and the final boot. I hope this post helps someone else with the same issue; it can save you a support ticket, and time with a controller down.

Disclaimer

PLEASE NOTE: I could not find any information on the unlabeled button on the controller, and it’s hard to know exactly what it does. Perform this at your own risk (make sure you have a backup). Since I have 2 controllers, and my MSA 2040 was running fine on Controller A, I felt comfortable doing this, as if this did reset controller B, the configuration would replicate back from controller A. I would not do this in a single controller environment.

Update – 24 Hours later

After I got everything up and running, I checked the logs of the unit and couldn’t find anything on controller B that looked out of the ordinary. However, 24 hours later, I logged back in and noticed some new events had shown up from the day before (the day I had the issues):

MSA 2040 Code 549

MSA 2040 Code 549

You’ll notice the event log with severity error:

Recovery from internal processor fault detected on controller.
Code 549

One thing that’s very odd is that I know for a fact the timestamp on the error log entry is wrong; this could be due to the fact that we had a daylight saving time change last night at midnight. Either way, it appears the array finally did detect that the Storage Controller was in an error state and logged it, but it still would have been nice to get some more information.

On a final note, the unit has been running perfectly for over 24 hours.

Update – April 2nd 2019

Well, in March a new firmware update was released for the MSA. I went to upgrade, and the same issue as above occurred. During the firmware upgrade, one step of the update process failed and repeated 4 times until it was successful.

The firmware update log (the section below was repeated):

Updating system configuration files
System configuration complete
Loading SC firmware.
STATUS: Updating Storage Controller firmware.
Waiting 5 seconds for SC to shutdown.
Shutdown of SC successful.
Sending new firmware to SC.
Updating SC Image:Remaining size 6263505
Updating SC Image:Remaining size 5935825
Updating SC Image:Remaining size 5608145
Updating SC Image:Remaining size 5280465
Updating SC Image:Remaining size 4952785
Updating SC Image:Remaining size 4625105
Updating SC Image:Remaining size 4297425
Updating SC Image:Remaining size 3969745
Updating SC Image:Remaining size 3642065
Updating SC Image:Remaining size 3314385
Updating SC Image:Remaining size 2986705
Updating SC Image:Remaining size 2659025
Updating SC Image:Remaining size 2331345
Updating SC Image:Remaining size 2003665
Updating SC Image:Remaining size 1675985
Updating SC Image:Remaining size 1348305
Updating SC Image:Remaining size 1020625
Updating SC Image:Remaining size 692945
Updating SC Image:Remaining size 365265
Updating SC Image:Remaining size 37585
Waiting for Storage Controller to complete programming.
Please wait...
Please wait...
Please wait...
Please wait...
Storage Controller has completed programming.
Got an error (138) on firmware packet
CAPI error: Firmware Update failed. Controller needs to reboot.
Waiting 5 seconds for SC to shutdown.
Shutdown of SC successful.
Sending new firmware to SC.
Updating SC Image:Remaining size 6263505
Updating SC Image:Remaining size 5935825
Updating SC Image:Remaining size 5608145
Updating SC Image:Remaining size 5280465
Updating SC Image:Remaining size 4952785
Updating SC Image:Remaining size 4625105
Updating SC Image:Remaining size 4297425
Updating SC Image:Remaining size 3969745
Updating SC Image:Remaining size 3642065
Updating SC Image:Remaining size 3314385
Updating SC Image:Remaining size 2986705
Updating SC Image:Remaining size 2659025
Updating SC Image:Remaining size 2331345
Updating SC Image:Remaining size 2003665
Updating SC Image:Remaining size 1675985
Updating SC Image:Remaining size 1348305
Updating SC Image:Remaining size 1020625
Updating SC Image:Remaining size 692945
Updating SC Image:Remaining size 365265
Updating SC Image:Remaining size 37585
Waiting for Storage Controller to complete programming.
Please wait...
Please wait...
Storage Controller has completed programming.
Got an error (138) on firmware packet
CAPI error: Firmware Update failed. Controller needs to reboot.
Waiting 5 seconds for SC to shutdown.
Shutdown of SC successful.
Sending new firmware to SC.
Updating SC Image:Remaining size 6263505
Updating SC Image:Remaining size 5935825
Updating SC Image:Remaining size 5608145
Updating SC Image:Remaining size 5280465
Updating SC Image:Remaining size 4952785
Updating SC Image:Remaining size 4625105
Updating SC Image:Remaining size 4297425
Updating SC Image:Remaining size 3969745
Updating SC Image:Remaining size 3642065
Updating SC Image:Remaining size 3314385
Updating SC Image:Remaining size 2986705
Updating SC Image:Remaining size 2659025
Updating SC Image:Remaining size 2331345
Updating SC Image:Remaining size 2003665
Updating SC Image:Remaining size 1675985
Updating SC Image:Remaining size 1348305
Updating SC Image:Remaining size 1020625
Updating SC Image:Remaining size 692945
Updating SC Image:Remaining size 365265
Updating SC Image:Remaining size 37585
Waiting for Storage Controller to complete programming.
Please wait...
Please wait...
Storage Controller has completed programming.
Got an error (138) on firmware packet
CAPI error: Firmware Update failed. Controller needs to reboot.
Waiting 5 seconds for SC to shutdown.
Shutdown of SC successful.
Sending new firmware to SC.
Updating SC Image:Remaining size 6263505
Updating SC Image:Remaining size 5935825
Updating SC Image:Remaining size 5608145
Updating SC Image:Remaining size 5280465
Updating SC Image:Remaining size 4952785
Updating SC Image:Remaining size 4625105
Updating SC Image:Remaining size 4297425
Updating SC Image:Remaining size 3969745
Updating SC Image:Remaining size 3642065
Updating SC Image:Remaining size 3314385
Updating SC Image:Remaining size 2986705
Updating SC Image:Remaining size 2659025
Updating SC Image:Remaining size 2331345
Updating SC Image:Remaining size 2003665
Updating SC Image:Remaining size 1675985
Updating SC Image:Remaining size 1348305
Updating SC Image:Remaining size 1020625
Updating SC Image:Remaining size 692945
Updating SC Image:Remaining size 365265
Updating SC Image:Remaining size 37585
Waiting for Storage Controller to complete programming.
Please wait...
Please wait...
Storage Controller has completed programming.
Updating SC Image:Remaining size 0
Storage Controller has been successfully updated.
STATUS: Current CPLD firmware is up-to-date.
CPLD update not required.
==========================================
Software Component Load Summary:
MC Software:    SUCCESSFUL
SC Software:    SUCCESSFUL
EC Software:    NOT ATTEMPTED
CPLD Software:  NOT ATTEMPTED
==========================================

During the Storage Controller restart, the controller never came back up. I removed the controller for 1 hour and re-inserted it, but the fix above did not work. I then tried again after 2 hours of disconnection.

At this point I contacted HPE, who is sending a replacement controller.

The following day (after the controller had been removed for 12 hours), I re-inserted it again and it actually booted up, was working with the new firmware, and then did a PFU (Partner Firmware Update) of controller A.

While it is working now, I’m still going to replace the controller as I believe something is not functioning correctly.
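
Once the replacement controller is in (or if you end up in a similar state), a couple of CLI checks are worth running to confirm that both controllers ended up on matching firmware and that Partner Firmware Update is still enabled. A quick sketch; the exact output varies by firmware bundle:

# firmware bundle and component versions on both controllers
show versions detail
# confirm the Partner Firmware Upgrade setting is enabled so the controllers stay in sync
show advanced-settings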