May 26, 2020
 

So you want to add NVMe storage capability to your HPE Proliant DL360p Gen8 (or other Proliant Gen8 server) and don’t know where to start? Well, I was in the same situation until recently. However, after much research and a little bit of spending, I now have 8TB of NVMe storage in my HPE DL360p Gen8 server thanks to the IOCREST IO-PEX40152.

Unsupported you say? Well, there are some of us who like to live life dangerously, and there are also those of us with really cool homelabs. I like to think I’m the latter.

PLEASE NOTE: This is not a supported configuration. You’re doing this at your own risk. Also, note that consumer/prosumer NVMe SSDs do not have PLP (Power Loss Protection) technology. You should always use supported configurations and enterprise-grade NVMe SSDs in production environments.

DISCLAIMER: If you attempt what I did in this post, you are doing it at your own risk. I won’t be held liable for any damages or issues.

Use Cases

There’s a number of reasons why you’d want to do this. Some of them include:

  • Server Storage
  • VMware Storage
  • VMware vSAN
  • Virtualized Storage (SDS as example)
  • VDI
  • Flash Cache
  • Special applications (database, high IO)

Adding NVMe capability

Well, after all that research I mentioned at the beginning of the post, I installed an IOCREST IO-PEX40152 inside of an HPE Proliant DL360p Gen8 to add NVMe capabilities to the server.

IOCREST IO-PEX40152 with 4 x 2TB Sabrent Rocket 4 NVME

At first I was concerned about dimensions: technically the card fit, and technically it didn’t (more on that below). I bought it anyway, along with 4 x 2TB Sabrent Rocket 4 NVMe SSDs.

The end result?

Picture of an HPE DL360p Gen8 with NVME SSD
HPE DL360p Gen8 with NVME SSD

IMPORTANT: Due to the airflow of the server, I highly recommend disconnecting and removing the fan built in to the IO-PEX40152. The DL360p creates more than enough airflow on its own, which could spin the fan like a generator, feed electricity back into the card, and damage the card and NVMe SSDs.

Also, do not attempt to install the case cover, additional modification is required (see below).

The Fit

Installing the card inside of the PCIe riser was easy, but snug. The metal heatsink actually comes into contact with the metal on the PCIe riser.

Picture of an IO-PEX40152 installed on DL360p PCIe Riser
IO-PEX40152 installed on DL360p PCIe Riser

You’ll notice how the card just barely fits inside of the 1U server. Some effort needs to be put in to get it installed properly.

Picture of an DL360p Gen8 1U Rack Server with IO-PEX40152 Installed
HPE DL360p Gen8 with IO-PEX40152 Installed

There are ribbon cables (and plastic fittings) directly where the end of the card goes, so you need to gently push these down and move the cables to the side, where there’s a small amount of room available.

We can’t put the case back on… Yet!

Unfortunately, just when I thought I was in the clear, I realized the case of the server cannot be installed. The metal bracket and locking mechanism on the case cover needs the space where a portion of the heatsink goes. Attempting to install this will cause it to hit the card.

Picture of the HPE DL360p Gen8 Case Locking Mechanism
HPE DL360p Gen8 Case Locking Mechanism

The above photo shows the locking mechanism protruding out of the case cover. This will hit the card (with the IOCREST IO-PEX40152 heatsink installed). If the heatsink is removed, the cover might gently touch the card in its unlocked and recessed position, but from my measurements it clears the card when the cover is fully closed and locked.

I had to come up with a temporary fix while I figured out what to do: flip the lid and weigh it down.

Picture of an HPE DL360p Gen8 case cover upside down
HPE DL360p Gen8 case cover upside down

For stability and other tests, I simply put the case cover on upside down and weighed it down with weights. Cooling is working great, and even under high load I haven’t seen the SSDs go above 38°C.

The plan moving forward was to remove the IO-PEX40152 heatsink and install individual heatsinks on the NVMe SSDs as well as the PEX PCIe switch chip. This should clear up enough room for the case cover to be installed properly.

The fix

I went on to Amazon and purchased the following items:

4 x GLOTRENDS M.2 NVMe SSD Heatsink for 2280 M.2 SSD

1 x BNTECHGO 4 Pcs 40mm x 40mm x 11mm Black Aluminum Heat Sink Cooling Fin

They arrived within days with Amazon Prime. I started to install them.

Picture of Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Picture of IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME

And now we mount it on the DL360p Gen8 PCIe riser and install it into the server.

You’ll notice it’s a nice fit! I had to compress some of the heat-conductive goo on the PEX chip heatsink, as the heatsink sat about 1/16th of an inch too high. After doing this it fit nicely.

Also, note one of the cable/ribbon connectors by the SAS connections. I re-routed one of the cables between the SAS connectors so they could be folded and lie under the card instead of pushing straight up into the end of the card.

As I mentioned above, the locking mechanism on the case cover may come into contact with the bottom of the IOCREST card when it’s in the unlocked and recessed position. With this setup, do not unlock or open the case while the server is running/plugged in, as it may short the board. I have confirmed that when it’s closed and locked, it clears the card. To avoid “accidents” I may come up with a non-conductive cover for the chips it hits (to the left of the fan connector on the card in the image).

And with that, we’ve closed the case on this project…

Picture of a HPE DL360p Gen8 Case Closed
HPE DL360p Gen8 Case Closed

One interesting thing to note is that the NVMe SSDs are running around 4-6°C cooler post-modification with the custom heatsinks than with the stock heatsink. I believe this is due to the awesome airflow achieved in the Proliant DL360 servers.

Conclusion

I’ve been running this configuration for 6 days now, stress-testing it, and it’s been working great. With the server running VMware ESXi 6.5 U3, I am able to pass through the individual NVMe SSDs to virtual machines. Best of all, installing this card did not cause the fans to spin up, which is often the case when using non-HPE PCIe cards.

This is the perfect mod to add NVMe storage to your server, or even to try out technology like VMware vSAN. I have a number of cool projects coming up using this that I’m excited to share.

May 25, 2020
 
Picture of an IOCREST IO-PEX40152 PCIe x16 to Quad M.2 NVMe

Looking to add quad (4) NVMe SSDs to your system but don’t have the M.2 slots or a motherboard that supports bifurcation? The IOCREST IO-PEX40152 quad NVMe PCIe card is the card for you!

The IO-PEX40152 PCIe card allows you to add 4 NVMe SSDs to a computer, workstation, or server that has an available PCIe x16 slot. This card has a built-in PEX PCIe switch chip, so your motherboard does not need to support bifurcation. This card can essentially be installed and used in any system with a free x16 slot.
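
If you ever want to sanity-check what the card presents to a system, a quick look at the PCIe topology on a Linux host should show the card’s switch with the four NVMe controllers behind it. A minimal sketch (device names and vendor/class strings will vary by system):

# Filter for the PCIe switch and the NVMe controllers
lspci | grep -iE 'switch|non-volatile|nvme'

# Or view the full PCIe topology as a tree to see the SSDs hanging off the switch
lspci -tv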

This card is also available under the PART# SI-PEX40152.

In this post I’ll be reviewing the IOCREST IO-PEX40152, providing information on how to buy it, along with benchmarks, installation, configuration, and more! I’ve also posted tons of pics for your viewing pleasure. I installed this card in an HPE DL360p Gen8 server to add NVMe capabilities.

We’ll be using and reviewing this card populated with 4 x Sabrent Rocket 4 PCIe NVMe SSDs; you can see the individual review of those SSDs here.

Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

Why and How I purchased the card

Originally I purchased this card for a couple of special and interesting projects I’m working on for the blog and my homelab. I needed a card that provided high-density NVMe flash storage but didn’t require bifurcation, as I planned on using it on a motherboard that didn’t support 4/4/4/4 bifurcation.

By choosing this specific card, I could also use it in any other system that had an available x16 PCIe slot.

I considered many other cards (such as some from SuperMicro and Intel), but in the end chose this one as it seemed most likely to work for my application. The choices from SuperMicro and Intel looked like they were designed to be used in their own systems.

I purchased the IO-PEX40152 from the IOCREST AliExpress store (after verifying it was their genuine online store), as they had the most cost-effective price of the sources I found.

They shipped the card with FedEx International Priority, so I received it within a week. Super fast shipping and it was packed perfectly!

Picture of the IOCREST IO-PEX40152 box
IOCREST IO-PEX40152 Box

Where to buy the IO-PEX40152

I found 3 different sources to purchase the IO-PEX40152 from:

  1. IOCREST AliExpress Store – https://www.aliexpress.com/i/4000359673743.html
  2. Amazon.com – https://www.amazon.com/IO-CREST-Non-RAID-Bifurcation-Controller/dp/B083GLR3WL/
  3. Syba USA – Through their network of resellers or distributors at https://www.sybausa.com/index.php?route=information/wheretobuy

Note that Syba USA is selling the IO-PEX40152 as the SI-PEX40152. The card I actually received has branding that identifies it both as an IO-PEX40152 and an SI-PEX40152.

As I mentioned above, I purchased it from the IOCREST AliExpress Online Store for around $299.00USD. From Amazon, the card was around $317.65USD.

IO-PEX40152 Specifications

Now let’s talk about the technical specifications of the card.

Picture of the IOCREST IO-PEX40152 Side Shot with cover on
IO-PEX40152 Side Shot

According to the packaging, the IO-PEX40152 features the following:

  • Installation in a PCIe x16 slot
  • Supports PCIe 3.1, 3.0, 2.0
  • Compliant with PCI Express M.2 specification 1.0, 1.2
  • Supports data transfer up to 2.5Gb (250MB/sec), 5Gb (500MB/sec), 8Gb (1GB/sec)
  • Supports 2230, 2242, 2260, 2280 size NGFF SSD
  • Supports four (4) NGFF M.2 M Key sockets
  • 4 screw holes 2230/2242/2260/2280 available to fix NGFF SSD card
  • 4 screw holes available to fix PCB board to heatsink
  • Supports Windows 10 (and 7, 8, 8.1)
  • Supports Windows Server 2019 (and 2008, 2012, 2016)
  • Supports Linux (Kernel version 4.6.4 or above)

While this list of features and specs is taken from the website and packaging, I’m not sure how accurate some of these statements are (in a good way); I’ll cover that more later in the post.

What’s included in the packaging?

  • 1 x IO-PEX40152 PCIe x 16 to 4 x M.2(M-Key) card
  • 1 x User Manual
  • 1 x M.2 Mounting material
  • 1 x Screwdriver
  • 5 x self-adhesive thermal pad

They also note that contents may vary depending on country and market.

Unboxing, Installation, and Configuration

As mentioned above, our build includes:

  • 1 x IOCREST IO-PEX40152
  • 4 x Sabrent Rocket 4 PCIe NVMe SSDs
Picture of IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD

You’ll notice it’s a very sleek looking card. The heatsink is beefy, heavy, and very metal (duh)! The card is printed on a nice black PCB.

Removing the 4 screws to release the heatsink, we see the card and thermal paste pads. You’ll notice the PCIe switch chip.

Picture of the front side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Frontside of card

And the backside of the card.

Picture of the back side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Backside of card

NVMe Installation

I started installing the Sabrent Rocket 4 NVMe 2TB SSDs.

Picture of a IO-PEX40152 with 2 SSD populated
IO-PEX40152 with 2 SSD populated
Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

That’s a good looking 8TB of NVMe SSD!

Note that the SSDs will wiggle side to side and have play until the screw is tightened. Do not over-tighten the screw!

Before installing the heatsink cover, make sure you remove the blue plastic film on the heat transfer material between the NVMe SSDs and the heatsink, and between the PEX chip and the heatsink.

After that, I installed it in the server and was ready to go!

Heatsink and cooling

A quick note on the heatsink and cooling…

While the heatsink and cooling solution it comes with works great, you have the flexibility to run and operate the card without the heatsink and fan if need be (the fan doesn’t cause any warnings if disconnected). This works out great if you want to use your own cooling solution, or need to use this card in a system where there isn’t much space. The fan can be removed by removing the screws and disconnecting the power connector.

Note that after installing the NVMe SSDs and affixing the heatsink, you will find that the heatsink gets stuck to the SSDs if you try to remove it at a later date. If you do need to remove the heatsink, be very patient and careful, and slowly remove it to avoid damaging or cracking the NVMe SSDs and the PCIe card itself.

Speedtest and benchmark

Let’s get to one of the best parts of this review, using the card!

Unfortunately, due to circumstances I won’t get into, I only had access to a rack server to test the card. The server was running VMware vSphere and ESXi 6.5 U3.

After shutting down the server, installing the card, and powering on, the NVMe SSDs appeared as available for PCI passthrough to the VMs. I enabled passthrough and restarted again. I then added the 4 individual NVMe drives as PCI passthrough devices to the VM.

Picture of IOCREST IO-PEX40152 passthrough with NVMe to VMware guest VM
IO-PEX40152 PCI Passthrough on VMware vSphere and ESXi
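
For reference, here’s a rough sketch of confirming the host actually sees the four NVMe controllers from the ESXi shell, assuming SSH/shell access is enabled (the passthrough toggle itself was done in the vSphere web UI as described above, and the exact device class strings may vary by build):

# List PCI devices and filter for NVMe controllers behind the card's switch
esxcli hardware pci list | grep -i -B 4 "non-volatile"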

Turning on the system, we are presented with the NVMe drives inside of the “Device Manager” on Windows Server 2016.

A screenshot of an IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016
IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016

Now that was easy! Everything’s working perfectly…

Now we need to go into Disk Manager and create some volumes for some quick speed tests and benchmarks.

A screenshot of Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

Again, no problems and very quick!

Let’s load up CrystalDiskMark and test the speed and IOPS!

Screenshot of CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD for speed
CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Screenshot of CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

What’s interesting is that I was able to achieve much higher speeds using this card in an older system than by directly installing one of the SSDs in a newer HP Z240 workstation. Unfortunately, due to CPU limitations of the server used above (the benchmarks maxed the CPU out), I could not fully test, max out, or benchmark the IOPS on an individual SSD.

Additional Notes on the IO-PEX40152

Some additional notes I have on the IO-PEX40152:

The card works perfectly with VMware ESXi PCI passthrough when passing the SSDs through to a VM.

The specifications state data transfer up to 1GB/sec; however, I achieved over 3GB/sec using the Sabrent Rocket 4 NVMe SSDs.

While the specifications and features state it supports NVMe spec 1.0 and 1.1, I don’t see why it wouldn’t support the newer specifications, as it’s simply a PCIe switch with NVMe slots.

Conclusion

This is a fantastic card that you can use reliably if you have a system with a free x16 slot. Because it has a built-in PCIe switch and doesn’t require PCIe bifurcation, you can confidently use it knowing it will work.

I’m looking forward to buying a couple more of these for some special applications and projects I have lined up, so stay tuned for those!

May 25, 2020
 
vSphere Logo Image

When troubleshooting connectivity issues with your vMotion network (or vMotion VLAN), you may notice that you’re unable to ping using the ping or vmkping command on your ESXi and VMware hosts.

This occurs when you’re using the vMotion TCP/IP stack on your vmkernel (vmk) adapters that are configured for vMotion.

This also applies if you’re using long distance vMotion (LDVM).

Why

The vMotion TCP/IP stack requires special syntax for ping and ICMP tests on the vmk adapters.

A screenshot of vmk adapters, one of which is using the vMotion TCP/IP Stack
VMK using vMotion TCP/IP Stack

Above is an example where a vmk adapter (vmk3) is configured to use the vMotion TCP/IP stack.

How

To “ping” and test your vMotion network that uses the vMotion TCP/IP stack, you’ll need to use the special command below:

esxcli network diag ping -I vmk1 --netstack=vmotion -H ip.add.re.ss

In the command above, change “vmk1” to the vmkernel adapter you want to send the pings from. Additionally, change “ip.add.re.ss” to the IP address of the host you want to ping.

Using this method, you can fully verify network connectivity between the vMotion vmks using the vMotion stack.
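
If you’re unsure which vmkernel adapters are actually bound to the vMotion TCP/IP stack in the first place, you can list the stacks and their interfaces with esxcli as well. A quick sketch:

# List the TCP/IP stacks configured on the host (look for "vmotion")
esxcli network ip netstack list

# List the vmkernel interfaces bound to the vMotion netstack
esxcli network ip interface list --netstack=vmotion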

Additional information and examples can be found at https://kb.vmware.com/s/article/59590.

May 22, 2020
 
A Picture of the 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Today we’re going to be talking about Sabrent’s newest line of NVMe storage products, particularly the 2TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive, or the Sabrent Rocket 4 2TB NVMe stick as I like to call it.

Last week I purchased a quantity of 4 of these, for a total of 8TB of NVMe storage, to use on an IOCREST IO-PEX40152 quad NVMe PCIe card. For the purpose of this review, we’re benchmarking one inside of an HP Z240 workstation.

While these are targeted at users with a PCIe 4.0 interface, I’ll be using them on PCIe 3.0, as they’re backwards compatible. I purchased the PCIe 4.0 option to make sure the investment was future-proofed.

Keep reading for a bunch of pictures, specs, speed tests, benchmarks, information, and more!

A picture of 4 unopened boxes of Sabrent Rocket 4 2TB NVMe sticks
4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Let’s get started with the review!

How and Why I purchased these

I’ve been working on a few special top-secret projects for the blog and YouTube channel, and needed some cost-effective yet high performing NVMe storage.

I needed at least 8TB of NVMe flash, and as I’m sure all of you are aware, NVMe isn’t cheap.

After around a month of research I finally decided to pull the trigger and purchase a quantity of 4 x Sabrent Rocket 4 NVMe 2TB SSD. For future projects I’ll be using these in an IOCREST IO-PEX40152 NVME PCIe card.

These NVMe SSDs are targeted at consumers (normal users, gamers, power users, and IT professionals) and are a great fit! Just remember these do not have PLP (power loss protection), a feature that isn’t normally found in consumer SSDs.

Specifications

See below for the specifications and features included with the Sabrent Rocket 4 2TB NVMe SSD.

Hardware Specs:

  • Toshiba BiCS4 96L TLC NAND Flash Memory
  • Phison PS5016-E16 PCIe 4.0 x4 NVMe 1.3 SSD Controller
  • Kioxia 3D TLC NAND
  • M.2 2280 Form Factor
  • PCIe 4.0 Speeds
    • Read Speed of 5000MB/sec
    • Write Speed of 4400MB/sec
  • PCIe 3.0 Speeds
    • Read Speed of 3400MB/sec
    • Write Speed of 2750MB/sec
  • 750,000 IOPS on 2TB Model
  • Endurance: 3,600TBW for 2TB, 1,800TBW for 1TB, 850TBW for 500GB
  • Available in 500GB, 1TB, 2TB
  • Made in Taiwan

Features:

  • NVMe M.2 2280 Interface for PCIe 4.0 (NVMe 1.3 Compliant)
  • APST, ASPM, L1.2 Power Management Support
  • Includes SMART and TRIM Support
  • ONFi 2.3, ONFi 3.0, ONFi 3.2 and ONFi 4.0 interface
  • Includes Advanced Wear Leveling, Bad Block Management, Error Correction Code, and Over-Provision
  • User Upgradeable Firmware
  • Software Tool to change Block Size

Where and how to buy

One of the perks of owning an IT company is that typically you can purchase all of your internal-use product at cost or at a discount; unfortunately, this was not the case.

I was unable to find the Sabrent products through any of the standard distribution channels and had to purchase through Amazon. This is unfortunate because I wouldn’t mind being able to sell these units to customers.

Amazon Purchase Links (2TB Model)

The PART#s are as follows for the different sizes:

  • 2TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive (2TB): SB-ROCKET-NVMe4-2TB (no heatsink), SB-ROCKET-NVMe4-HTSK-2TB (heatsink)
  • 1TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive (1TB): SB-ROCKET-NVMe4-1TB (no heatsink), SB-ROCKET-NVMe4-HTSK-1TB (heatsink)
  • 500GB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive (500GB): SB-ROCKET-NVMe4-500 (no heatsink), SB-ROCKET-NVMe4-HTSK-500 (heatsink)

Sabrent Rocket 4 Part Number Lookup Table

Cost

At the time of writing this post, purchasing from Amazon Canada, the 2TB model would set you back $699.99CAD for a single unit; however, there was a sale going on for $529.99CAD per unit.

Additionally, at the time of creation of this post the 2TB model on Amazon USA would set you back $399.98 USD.

A total quantity of 4 set me back around $2,119.96CAD on sale, versus $2,799.96CAD at regular price.

If you’re familiar with NVMe pricing, you’ll notice that this pricing is extremely attractive when comparing to other high performance NVMe SSDs.

Unboxing

I have to say I was very impressed with the packaging! Small, sleek, and impressive!

A picture of Sabrent Rocket 4 2TB NVMe sticks metal case packaging
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive Metal Case Packaging

Initially I was surprised how small the boxes were as they fit in the palm of your hand, but then you realize how small the NVMe sticks are, so it makes sense.

Opening the box you are presented with a beautiful metal case containing the instructions, information on the product warranty, and more.

Picture of a Sabrent Rocket 4 case opened
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive in case

And here’s the NVMe stick removed from its case.

Picture of a Sabrent Rocket 4 case opened and the NVME SSD removed
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive removed from case

While some of the packaging may be unnecessary, after further thought I realized it’s great to have, as you can re-use the packaging to keep NVMe drives safe when storing them.

And here’s a beautiful shot of 8TB of NVMe storage.

A picture of 8TB of total storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive
8TB Total NVMe storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Now let’s move on to usage!

Installation, Setup, and Configuration

Setting one of these up in my HP Workstation was super easy. You simply populate the NVMe M.2 slot, install the screw, and boot up the system.

Picture of a Sabrent Rocket PCIe4 NVMe 2TB SSD Installed in computer
Sabrent Rocket 4 NVMe 2TB SSD in HP Z240 SFF Workstation

Upon booting, the Sabrent SSD was available inside of Device Manager. I read on their website that they had some utilities, so I wanted to see what configuration options I had access to before moving on to speed test benchmarks.

All Sabrent Rocket utilities can be downloaded from their website at https://www.sabrent.com/downloads/.

Sabrent Sector Size Converter

The Sabrent Sector Size Converter utility allows you to configure the sector size of your Sabrent Rocket SSD. Out of the box, I noticed mine was configured with a 512e sector format, which I promptly changed to 4K.

Screenshot using the Sabrent Sector Size Converter to change SSD from 512e to 4K Sector Size
Sabrent Sector Size Converter v1.0

The change was easy: it required a restart and I was good to go! You’ll notice it has a drop-down to select which drive you want to modify, which is great if you have more than one SSD in your system.
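
As an aside, if you’d rather verify the active sector format outside of Sabrent’s Windows utility, nvme-cli on a Linux box can report it. A minimal sketch, assuming nvme-cli is installed and the drive shows up as /dev/nvme0n1:

# Show the supported LBA formats; the entry marked "(in use)" is the active sector size
nvme id-ns /dev/nvme0n1 -H | grep -i "lba format"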

I did notice one issue… When you have multiple of these (in my case 4) in one system, for some reason the sector size change utility had trouble changing one of them from 512e to 4K. It would appear to be successful, but stay at 512e on reboot. Ultimately I removed all of the NVMe sticks except the problematic one, ran the utility, and the issue was resolved.

Sabrent Rocket Control Panel

Another useful utility that was available for download is the Sabrent Rocket Control Panel.

Screenshot of the Sabrent Rocket Control Panel
Sabrent Rocket Control Panel

The Sabrent Rocket Control Panel provides the following information:

  • Drive Size, Sector Size, Partition Count
  • Serial Number and Drive identifier
  • Feature status (TRIM Support, SMART, Product Name)
  • Drive Temperature
  • Drive Health (Lifespan)

You can also use this app to view S.M.A.R.T. information, flash updated Sabrent firmware, and more!

Now that we have this all configured, let’s move on to testing this SSD out!

Speed Tests and Benchmarks

The system we used to benchmark the Sabrent Rocket 4 2TB NVMe SSD is an HP Z240 SFF (Small Form Factor) workstation.

The specs of the Z240 Workstation:

  • Intel Xeon E3-1240 v5 @ 3.5GHz
  • 16GB of RAM
  • Samsung EVO 500GB as OS Drive
  • Sabrent Rocket 4 NVMe 2TB SSD as Test Drive

I ran a few tests using both CrystalDiskMark and ATTO Disk Benchmark, and the NVMe SSD performed flawlessly at extreme speeds!

CrystalDiskMark Results

Loading up and benching with CrystalDiskMark, we see the following results:

Screenshot of speedtest and benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark Results

As you can see, the Sabrent Rocket 4 2TB NVMe tested at a read speed of 3175.63MB/sec and write speed of 3019.17MB/sec.

Screenshot of IOPS benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark IOPS Results

Using the Peak Performance profile, we see some amazing IO: 613,171.14 IOPS read and 521,861.33 IOPS write with RND4K.

While we’re only testing with a PCIe 3.0 system, these numbers are still amazing and in line with what’s advertised.

ATTO Disk Benchmark Results

Switching over to ATTO Disk Benchmark, we test both speed and IOPS.

First, the speed benchmarks with I/O sizes from 4K to 12MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 4K to 12MB

After taking a short cooldown break (we don’t have a heatsink installed), we tested 12MB to 64MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 12MB to 64MB

And now we move on to analyze the IO/s.

First from 4K to 12MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 4K to 12MB

And then after a short break, 12MB to 64MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 12MB to 64MB

Those numbers are insane!

Additional Notes

When you purchase a new Sabrent Rocket 4 SSD it comes with a 1-year standard warranty; however, if you register your product within 90 days of purchase, you can extend it to an awesome 5-year warranty.

To register your product, visit https://www.sabrent.com/product-registration/

The process is easy if you have one device; however, it’s very repetitive and takes time if you have multiple, as the steps have to be repeated for each device. Sabrent, if you’re listening, a batch registration tool would be nice! 🙂

Remember that after registering your product, you should record your “Registration Unique ID” for future reference and use.

Conclusion

All-in-all, I’d definitely recommend the Sabrent Rocket 4 NVMe SSD! It provides extreme performance, is extremely cost-effective, and I don’t see any reason not to buy them.

Just remember that these SSDs (like all consumer SSDs) do not provide power loss protection, meaning you should not use these in enterprise environments (or in a NAS or SAN).

I’m really looking forward to using these in my upcoming blog and YouTube projects.

May 17, 2020
 
Microsoft Windows Server Logo Image

Today we take it back to basics with a guide on how to create an Active Directory Domain on Windows Server 2019. These instructions are also valid for previous versions of Microsoft Windows Server.

This video will demonstrate and explain the process of installing, configuring, and deploying a Windows Server 2019 instance as a Domain Controller, DNS Server, and DHCP Server and then setting up a standard user.

Check it out and feel free to leave a comment! Scroll down below for more information and details on the guide.

Windows Server 2019: How to Create an Active Directory Domain

Who’s this guide for

No matter if you’re an IT professional who’s just getting started or if you’re a small business owner (on a budget) setting up your first network, this guide is for you!

What’s included in the video

In this guide I will walk you through the following:

  • Installing Windows Server 2019
  • Documenting a new Server installation
  • Configuring Network Settings
  • Installation and configuration of Microsoft Active Directory
  • Promote a server as a new domain controller
  • Installation and configuration of DNS Role
  • Installation and configuration of DHCP Role
  • Setup and configuration of a new user account

What’s required

To get started you’ll need:

How to create an Active Directory Domain (The Video)

Hardware/Software used in this demonstration

  • VMware vSphere
  • HPE DL360p Gen8 Server
  • Microsoft Windows Server 2019
  • pfSense Firewall

Other blog posts referenced in the video

The following blog posts are mentioned in the video:

May 07, 2020
 
Picture of a Raspberry Pi 4 UART connected to a console port on a Synology Disktation 1813+

As a result of my Synology DS1813+ crashing yet again due to the Synology Memory issues and Crashing that I’ve been regularly experiencing, I finally decided to try hacking the Synology NAS to run another operating system. Let it also be noted that numerous readers of mine are also experiencing these issues, as I receive chats and e-mails about this almost daily.

Under the hood, the DS1813+ is just another x86 computer system. There’s no reason why we shouldn’t be able to hack this to run another Linux distribution or possibly even a BSD variant like FreeNAS.

Ultimately, all I want from this is a reliable NAS to perform software RAID and provide an iSCSI target; it would also be kinda cool to see what we can install on it!

I’ve already started preliminary work on this, so keep visiting back as the blog post gets updated with more and more information on a regular basis. If you feel you can contribute, please don’t hesitate to leave a comment or reach out.

Current Status

I’ll be updating this section regularly with the current status of my efforts.

Completed:

  • Serial console access
  • UEFI Shell Access
  • GRUB Bootloader Access

See the below sections for information.

Accessing the DS1813+ system

There are numerous different approaches we can take to try to gain access to repurpose the Synology DiskStation and install another operating system.

These include:

  • Accessing the serial console
  • Accessing the BIOS/UEFI and/or bootloader
  • Booting from a USB stick or modified HD
  • Modifying the USB DOM

The ultimate result we are looking for is to boot our own Linux kernel, kick off a Linux or BSD OS installer, or boot from a modified drive that already has Linux installed on it.

Accessing the serial console

Serial console access to the Synology DiskStation is easily achieved.

I originally found this post which provided me information on the pinouts and the voltage: http://www.netbsd.org/ports/sandpoint/instSynology.html

While the above post is for older units utilizing architectures other than x86, the pinout information along with the voltage is still relevant.

With the Synology unit using 3.3V, you cannot use a normal computer RS-232 interface to connect to it, as standard RS-232 runs at much higher voltage levels. You’ll need to step down the voltage using a converter, or use a serial interface that runs at 3.3V.

In my case, I used a Raspberry Pi 4 and one of its UART ports along with Minicom to access it. The Pi 4 uses 3.3V for UART, so it works perfectly. You’ll need Rx, Tx, and GND for the connection to work.

Picture of a Raspberry Pi 4 with UART connection to ttyS0
Raspberry Pi 4 UART Connection ttyS0

In my case, I used the ttyS0 UART interface to avoid the clock and timing issues experienced when using ttyAMA0. To use ttyS0, you’ll need to enable the UART in your Pi boot configs, as well as disable the Raspberry Pi serial console.
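
For reference, this is roughly what that looks like on Raspbian (paths and file names may differ slightly between distributions and releases):

# Enable the UART (ttyS0) in the Pi's boot config
echo "enable_uart=1" | sudo tee -a /boot/config.txt

# Remove the Pi's own serial console so it doesn't fight over the port
sudo sed -i 's/console=serial0,115200 //' /boot/cmdline.txt

# Stop the login getty from holding the UART
sudo systemctl disable --now serial-getty@ttyS0.service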

Picture of a Raspberry Pi 4 UART connected to a Synology DS1813+ serial console connection
Raspberry Pi 4 UART connected to Synology Diskstation DS1813+ console port

I used the following command to initialize minicom:

minicom -b 115200 -o -D /dev/ttyS0

After connecting, I was able to view and interact with the serial console.

Accessing the BIOS/UEFI and/or bootloader

After gaining serial console access, powering on the Synology DS1813+ results in the following:

Intel (R) Granite Well Platform
Copyright (C) 1999-2011 Intel Corporation. All rights reserved.
Product Name : GRANITE WELL
Processor : Intel(R) Atom(TM) CPU D2701 @ 2.13GHz
Current Speed : 2.12 GHz
Total Memory : 4096 MB
Intel BLDK Version : Tiano-GraniteWell (Allegro 0.3.7)

Miscellaneous Info

Memory Ref Code Version :
CDV Ref Code Version : 0.9.0-1
P-Unit Firmware Version :
P-Unit Location in Flash : 0xFFFB0000
P-Unit Location in RAM : 0xDF6F0000
No of SATA ports available : 6
No of SATA ports enabled : 6

Press F10 in 3 seconds to list all boot options
Any other key to active boot…

Unfortunately, I’m unable to press F10 due to terminal emulation issues (it’s also possible they’ve removed this feature to stop someone from doing what I’m doing).

After 10 seconds, the Synology will UEFI boot the GRUB bootloader.

You can browse through the list, edit the entries, as well as run the GRUB command line.

Booting from a USB stick or modified HD

I attempted to boot numerous different USB sticks containing OS installers (Linux variants and FreeNAS) with no success. I also tried to boot off an HD connected to one of the SATA ports in the NAS; this was also unsuccessful.

I noticed that out of the 8 SATA connections, ports 1-6 are treated differently (possibly being on a SATA expander), while 7-8 may be accessible to the UEFI, BIOS, or bootloader.

I attempted to chainload a CD image written to a USB stick; however, GRUB is not able to see any USB devices or HDs other than the SATA DOM it’s residing on.

Removing the SATA DOM presents you with a UEFI shell; however, you are unable to see, view, or execute any EFI files, as the shell is unable to read any USB or HD devices other than the SATA DOM.

It appears that either the UEFI/BIOS and GRUB have been modified to not allow access to other bootable devices, or drivers are required which haven’t been incorporated.

In order to execute our own kernel or OS, we may need to modify the SATA DOM.

Modifying the USB DOM

The onboard USB DOM appears to be the only bootable device that is presented to the UEFI/BIOS.

On a booted system, the DOM appears as the device “/dev/synoboot”.

While logged in to the Synology via SSH, you are unable to mount this device to a mount point. You can however image the device, copy it, and write it to another device on another system.

To image the USB DOM, I ran the following command:

dd if=/dev/synoboot of=/volume1/ShareName/synoboot-image

I then downloaded the “synoboot-image” image file to another Linux system, wrote it to a USB stick, and was able to mount the partitions.

There are two vfat partitions containing some Linux kernels, ramdisks, and the UEFI version of GRUB.
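
As a side note, writing the image to a USB stick isn’t strictly required just to inspect it; on most modern Linux systems a loop device with partition scanning gets you there too. A sketch, assuming the image file from the dd command above:

# Attach the image with partition scanning, then mount the first vfat partition
sudo losetup -fP --show synoboot-image
sudo mount /dev/loop0p1 /mnt
ls /mnt

# Clean up when done
sudo umount /mnt
sudo losetup -d /dev/loop0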

I believe that to move forward, we will need to either modify and incorporate a version of GRUB with extra drivers, or use the existing version to boot our own kernel and initial ramdisk.

At this point, we’ll need to evaluate how to write to the SATA DOM. There are a few options:

  • Modify the image we created, and write it back after copying it back to the Synology NAS.
  • Find a way to directly mount and access the partitions on the Synology NAS; at this point we are unable to, due to “access is denied” errors, however dd reads do function.
  • Connect the SATA DOM to the USB headers on another system.

Once we can access this SATA DOM, it may be possible to copy the kernels and ramdisks to kick off an OS installer, or better yet, install a more feature- and driver-filled version of GRUB.

Apr 29, 2020
 
Screenshot of HPE MSA Storage Array Health Check

Are you having issues with your HPE MSA SAN? Want to have more insight into your storage array? Last week, HPE made available a new tool that allows you to check the health of your HPE MSA storage array!

While this tool was released to the public last week, rumor has it that this is the same tool that HPE uses internally when providing support to customers.

This tool is FREE to use!

I originally spotted this on the MSA Storage section of the HPE Community forums here: https://community.hpe.com/t5/msa-storage/new-hpe-tool-msa-health-check/td-p/7085594

HPE MSA Array Health Check Video

See below for a video discussing and demonstrating the HPE MSA Health Check on an HPE MSA 2040 SAN array.

Accessing the MSA Health Check

The HPE MSA Health Check site can be found at https://msa.ext.hpe.com/MSALogUploader.aspx

The following HPE MSA Arrays are supported:

  • HPE P2000 G3 MSA Array
  • HPE MSA 1040/1050
  • HPE MSA 2040 and variants (MSA 2042)
  • HPE MSA 2050 and variants (MSA 2052)

How to use the MSA Health Check

Using the HPE MSA Health Check is easy!

  1. Log on to your MSA Array SMU (Storage Management Utility)
  2. On the bottom left of the UI, click on the up-arrow and select “Save Logs”
    Save Logs on HPE MSA Array Screenshot
  3. Wait for the logs to generate.
  4. Download the logs to your computer
  5. Open the MSA Storage Array Health Check
    Screenshot of HPE MSA Storage Array Health Check
  6. Click on the “Upload MSA Log File (.zip)” button, and then select your log dump zip file
  7. Wait for the File to upload
    Screenshot of Upload status on HPE MSA Array Log File
  8. View your health report, and optionally download a PDF copy
    Screenshot of a HPE MSA Array Health Check Report

And that’s it!

Available Tests

When running a health check, the following tests and checks are made on the log files:

  • Background Scrub Setting
  • Compact Flash Events
  • Controller Firmware Version Mismatch
  • Controller Partner Firmware Update Setting
  • Default User Check
  • Drive Firmware Version Mismatch
  • Enclosure Firmware Version Mismatch
  • NonSecure Protocols
  • Notification Settings
  • Sparing Best Practices
  • Unhealthy Component Check
  • Volume Mapping

Conclusion

Even if your MSA array is healthy, I’d still recommend generating a log dump and loading it up in the MSA Health Check. Any extra visibility is good visibility!

Apr 12, 2020
 
Picture of Raspberry Pi 4 box and Raspberry Pi 4 board below box

If you’re worried about destroying your SD Cards, need some more space, or just want to learn something new, I’m going to show you how to use an NFS root for the Raspberry Pi 4.

When you use an NFS root with your Raspberry Pi, it stores the entire root filesystem on a remote NFS export (think of it as a network filesystem share). This means you’ll have as much space as the NFS export provides, and you’ll probably see far faster performance, since it’ll be running at 1Gb/sec instead of the speed of the SD card.

This also protects your SD card, as the majority of the reading and writing is performed on the physical storage of the NFS export, instead of the SD card in the Pi which has limited reads and writes.

What you’ll need

To get started, you’ll need:

  • Raspberry Pi 4
  • Ubuntu or Raspbian for Raspberry Pi 4 Image
  • A small SD card for the Boot Partition (1-2GB)
  • SD card for the Raspberry Pi Linux image
  • Access to another Linux system (workstation, or a Raspberry Pi)

There are multiple ways to do this, but I’m providing instructions for the easiest way I found with the resources I had immediately available.

Instructions

To boot your Raspberry Pi 4 from an NFS root, multiple steps are involved. Below you’ll find the summary, and further down you’ll find the full instructions. You can click on an item below to go directly to the section.

The process:

  1. Write the Linux image to an SD Card
  2. Create boot SD Card for NFS Root
  3. Prep the Linux install for NFS Root
  4. Create the NFS Export
  5. Copy the Linux install to the NFS Export
  6. Copy and Modify the boot SD Card to use NFS Root
  7. Boot using SD Card and test NFS Root

See below for the individual instructions for each step.

Write the Linux image to an SD Card

First, we need to write the SD card Linux image to your SD card. You’ll need to know which device your SD card appears as to your computer. In my case it was /dev/sdb; make sure you verify the right device, or you could damage your current Linux install.
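
A quick way to identify the correct device before writing anything (the SD card is usually obvious from its size and model):

# List block devices to confirm which one is the SD card
lsblk -o NAME,SIZE,TYPE,MODEL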

  1. Download Ubuntu or Raspbian for Raspberry Pi.
  2. Run unzip or unxz (depending on the distribution) to uncompress the image file.
  3. Write the SD card image to SD card.
    dd if=imagename.img of=/dev/sdb bs=4M

You now have an SD Card Linux install for your Raspberry Pi. We will later modify and then copy this to the NFS root and boot SD card.

Create boot SD Card for NFS Root

In this step, we’re going to create a bootable SD card that contains the Linux kernel and other needed files for the Raspberry Pi to boot.

This card will be installed in the Pi, load the kernel, and then kick off the boot process to load the NFS root.

I previously created a post to create a boot partition layout for a Raspberry Pi. Please follow those instructions to complete this step.

Later on in this guide, you’ll be copying the boot partition from the SD Card Linux image, on to this newly created boot SD Card for the NFS Root.

Prep the Linux install for NFS Root

There are a few things we have to do to prep the Ubuntu or Raspbian Linux install to be usable as an NFS root.

  1. Boot the Raspbian or Ubuntu SD card you created in the first step on your Raspberry Pi.
  2. Complete the first boot procedures. Create your account, and complete the setup.
  3. Enable and confirm SSH is working so you can troubleshoot.
  4. Install the NFS client files using the following command:
    apt install nfs-common
  5. Open the /etc/network/interfaces file, and add the following line so that the Pi only gets an IP once during boot:
    iface eth0 inet manual
  6. Modify your /etc/fstab entries to reflect the NFS root and the new boot SD card as per below.

For step 6, we need to modify the /etc/fstab entry for the root fs. It is different depending on whether you’re using Ubuntu or Raspbian.

For Raspbian, your /etc/fstab should look like this:

proc /proc proc defaults 0 0
LABEL=boot /boot vfat defaults 0 2
NFS-SERVER-IP:/nfs-export/PI-Raspbian / nfs defaults 0 0

For Ubuntu, your /etc/fstab should look like this:

LABEL=system-boot /boot/firmware vfat defaults 0 2
/dev/nfs / nfs defaults 0 0

After you do this, the Linux SD image may not boot again if installed directly in the Raspberry Pi, so make sure you’ve made the proper modifications before powering it down.

Create the NFS Export

In my case I used a Synology DS1813+ as an NFS server to host my Raspberry Pi NFS root images. But you can use any Linux server to host it.

If you’re using a Synology DiskStation, create a shared folder, disable the recycling bin, and leave everything else default. Head over to the “NFS Permissions” tab and create an ACL entry for your Pi and workstations. You can also add a network segment for your entire network (e.g. 192.168.0.0/24) instead of specifying individual IPs.

Screenshot of Synology Create NFS rule for ACL
Create an NFS ACL Rule for Synology NFS Access

Once you create an entry, it’ll look like this. Note the “Mount path” in the lower part of the window.

Screenshot of NFS Shared Folder Permissions and Mount Point on Synology NAS
NFS Permissions and Mount Path for NFS Export

Now, if you’re using a standard Linux server the steps are different.

  1. Install the required NFS packages:
    apt install nfs-kernel-server
  2. Create a directory, we’ll call it “nfs-export” on your root fs on the server:
    mkdir /nfs-export/
  3. Then create a directory for the Raspberry Pi NFS Root:
    mkdir /nfs-export/PI-ImageName
  4. Now edit your /etc/exports file and add this line to the file to export the path:
    /nfs-export/PI-ImageName     IPorNetworkRange(rw,no_root_squash,async,insecure)
  5. Reload the NFS exports for the changes to take effect:
    exportfs -ra

Take note of the mount point and/or NFS export path, as this is the directory your Raspberry Pi will need to mount to access its NFS root. This is also the directory you will be copying your SD card Linux install root FS to.
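
Either way, before moving on you can verify the export is actually visible from your workstation or Pi. A quick check (showmount is included in the nfs-common package):

# Ask the NFS server for its list of exports
showmount -e NFS-SERVER-IP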

Copy the Linux install to the NFS Export

When you’re ready to copy your SD Card Linux install to your NFS Export, you’ll need to do the following. In my case I’ll be using an Ubuntu desktop computer to perform these steps.

When I inserted the SD card containing the Raspberry Pi Linux image, it appeared as /dev/sdb on my system. Please make sure you verify the proper device names, to avoid writing to or reading from the wrong disk.

Instructions to copy the root fs from the SD card to the NFS root export:

  1. Mount the root partition of the SD Card Linux install to a directory. In my case I used a directory called “old”.
    mount /dev/sdb2 old/
  2. Mount the NFS Export for the NFS Root to a directory. In my case I used a directory called “nfs”.
    mount IPADDRESS:/nfs-export/PI-ImageName nfs/
  3. Use the rsync command to transfer the SD card Linux install to the NFS Root Export.
    rsync -avxHAXS --numeric-ids --info=progress2 --progress old/ nfs/
  4. Unmount the directories.
    umount old/
    umount nfs/

Once this is complete, your OS root is now copied to the NFS root.

Copy and Modify the boot SD Card to use NFS Root

First we have to copy the boot partition from the SD Card Linux install to the boot SD card, then we need to modify the contents of the new boot SD card.

To copy the boot files, follow these instructions.

  1. Mount the boot partition of the SD Card Linux install to a directory. In my case I used a directory called “old”.
    mount /dev/sdb1 old/
  2. Mount the new boot partition of the boot SD card to a new directory. In my case I used the directory called “new”.
    mount /dev/sdc1 new/
  3. Use the rsync command to transfer the SD card Linux install boot partition to the new boot SD card.
    rsync -avxHAXS --numeric-ids --info=progress2 --progress old/ new/
  4. Unmount the directories.
    umount old/
    umount new/

Now there are a few steps we have to take to make the boot SD card boot to an NFS root.

We have to make a modification to the PI boot command. It is different depending on which Linux image (Ubuntu or Raspbian) you’re using.

First, insert the boot SD card, and mount it to a temporary directory.

mount /dev/sdc1 new/

If you’re running Ubuntu, your existing nobtcmd.txt should look like this:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

We’ll modify and replace some text to make it look like this. Don’t forget to change the command to reflect your IP and directory:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=IPADDRESS:/nfs-export/PI-Ubuntu,tcp,rw ip=dhcp rootfstype=nfs elevator=deadline rootwait

For Raspbian, your existing cmdline.txt should look like this:

console=serial0,115200 console=tty1 root=PARTUUID=97709164-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

We’ll modify and replace some text to make it look like this. Don’t forget to change the command to reflect your IP and directory:

console=serial0,115200 console=tty1 root=/dev/nfs nfsroot=IPADDRESS:/nfs-export/PI-Raspbian,tcp,rw,vers=3 ip=dhcp rootfstype=nfs elevator=deadline rootwait

Once you make the modifications, save the file and unmount the SD card.

Your SD card is now ready to boot.

Boot using SD Card and test NFS Root

At this point, insert the boot SD card in your Raspberry Pi and attempt to boot. All should be working now, and it should boot and use the NFS root!
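
If you want to positively confirm the root filesystem is coming from the network, a quick check from the booted Pi:

# The source for / should be your NFS export, with FSTYPE nfs
findmnt /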

If you’re having issues, if the boot process stalls, or something doesn’t work right, look back and confirm you followed all the steps above properly.

You’re done!

You’re now complete and have a fully working NFS root for your Raspberry Pi. You’ll no longer worry about storage, you’ll have high speed access to it, and you’ll have some new skills!

And don’t forget to check out these Handy Tips, Tricks, and Commands for the Raspberry Pi 4!

Apr 07, 2020
 
VMware Horizon View Icon

In response to COVID-19, VMware has extended their VMware Horizon 7 trial offering to up to 90 days and 100 users. This includes both VMware Horizon 7 on-premise, as well as VMware Cloud on AWS.

This is great if you’re planning or about to implement and deploy VMware Horizon 7.

In its simplest form, Horizon 7 allows an organization to virtualize their end-user computing. No more computers, no more desktops; only zero clients and software clients. Not only does this streamline the end-user computing experience, but it enables a beautiful remote access solution as well.

And Horizon isn’t limited to VDI… You can install the VMware Horizon Agent on a Physical PC so you can use VDI technologies like Blast Extreme to remote in to physical desktops at your office. It makes the perfect remote access solution. Give it a try today with an evaluation license!

To get your evaluation license, please visit https://my.vmware.com/en/web/vmware/evalcenter?p=horizon-7.

Apr 04, 2020
 
VMware Horizon View Icon

I see quite a bit of traffic come in on a regular basis pertaining to issues with VMware Horizon View. A lot of these visitors are either looking for help in setting something up or are experiencing an issue I’ve dealt with. While my posts usually help these people do specific things or troubleshoot specific issues, one of the biggest issues that comes up is when users experience a VMware Horizon blank (or black) screen.

This can be caused by a number of different things. I wanted to take this opportunity to go over some of the most common causes and make a master guide for troubleshooting and fixing the VMware Horizon blank screen.

Horizon Blank Screen Causes

There are a number of different causes of a blank or black screen when connecting and establishing a VDI session to Horizon View. Each cause is covered in its own section below.

Let’s dive into each of these individually. Some of these will require you to do your own research and will only guide you, while other sections will include the full fix for the issue.

VMware Tools and Horizon Agent Installation Order

When deploying the VMware Horizon View agent, you are required to install the agent, along with VMware Tools, in a specific order. Failing to do so can cause problems, including a blank screen.

The installation order:

  1. Install GPU/vGPU drivers (if needed)
  2. Install VMware Tools Agent
  3. Install the VMware Horizon Agent
  4. Install the VMware User Environment Manager Agent (if needed)
  5. Install the VMware App Volumes Agent (if needed)

It is important to also consider this when upgrading the agents as well.

Network ports are blocked (Computer Firewall, Network Firewall)

For the VMware Horizon agent to function properly, certain ports must be accessible through your firewall, whether it’s the firewall on the VM guest, the client computer, or the network firewall.

The following ports are required for the VMware Horizon Agent when connecting directly to a View Connection Server.

  • Horizon Connection Server, TCP 443: Login, authentication, and connection to the VMware Connection Server.
  • Horizon Agent, TCP 22443: Blast Extreme
  • Horizon Agent, UDP 22443: Blast Extreme
  • Horizon Agent, TCP 4172: PCoIP
  • Horizon Agent, UDP 4172: PCoIP
  • Horizon Agent, TCP 3389: RDP (Remote Desktop Protocol)
  • Horizon Agent, TCP 9427: Client Shared Drive Redirection (CDR) and Multi-media Redirection (MMR)
  • Horizon Agent, TCP 32111: USB Redirection (optional); can be incorporated into the Blast Extreme connection

Network Ports Required for VMware Horizon View to View Connection Server

The following ports are required for the VMware Horizon Agent when connecting through a VMware Unified Access Gateway (UAG).

  • Unified Access Gateway, TCP 443: Login, authentication, and connection to the Unified Access Gateway. This port/connection can also carry tunneled RDP, client drive redirection, and USB redirection traffic.
  • Unified Access Gateway, TCP 4172: PCoIP via PCoIP Secure Gateway
  • Unified Access Gateway, UDP 4172: PCoIP via PCoIP Secure Gateway
  • Unified Access Gateway, UDP 443: Optional for login traffic. Blast Extreme will attempt a UDP login if there are issues establishing a TCP connection.
  • Unified Access Gateway, TCP 8443: Blast Extreme via Blast Secure Gateway (high performance connection)
  • Unified Access Gateway, UDP 8443: Blast Extreme via Blast Secure Gateway (adaptive performance connection)
  • Unified Access Gateway, TCP 443: Blast Extreme via UAG port sharing

Network Ports Required for VMware Horizon View to VMware Unified Access Gateway (UAG)

You’ll notice the ports that are required for Blast Extreme and PCoIP. If these are not open, you can experience a blank screen when connecting to the VMware Horizon VDI guest VM.

For more information on VMware Horizon 7 network ports, visit https://techzone.vmware.com/resource/network-ports-vmware-horizon-7.
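
To quickly verify the TCP ports from a client machine, a rough netcat sketch like the one below works (UDP is harder to test this way; AGENT-IP is a placeholder for your VDI guest's address):

# Test the Blast Extreme, PCoIP, RDP, CDR/MMR, and USB redirection TCP ports
for port in 22443 4172 3389 9427 32111; do
  nc -zv -w 2 AGENT-IP $port
done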

DNS Issues

While VMware Horizon View usually uses IP addresses for connectivity between the View Connection Server, guest VM, and client, I have seen times where DNS issues have stopped certain components from functioning properly.

It’s always a good idea to verify that DNS is functioning. DNS (forward and reverse) is required for VMware Horizon Linux guest VMs.
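
A quick hedged example of checking both directions with dig; the hostname and IP here are placeholders for your own guest VM:

# Forward lookup of the VDI guest
dig +short vdi-guest.example.local

# Reverse lookup of the guest's IP
dig +short -x 192.168.1.50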

Incorrectly configured Unified Access Gateway

A big offender when it comes to blank screens is an incorrectly configured VMware Unified Access Gateway.

Sometimes, first-time UAG users will incorrectly configure the View Connection server and UAG.

When configuring a UAG, you must disable both “Blast Secure Gateway” and “PCoIP Secure Gateway” on the View Connection Server, as the UAG will be handling these. See below.

Picture of the Secure Gateway settings on VMware Horizon View Connection Server when used with VMware UAG.
Secure Gateway Settings on View Connection Server when used with UAG

Another regular issue is admins misconfiguring the UAG itself. There are a number of key things that must be configured properly. These are the values that should be populated on the UAG under Horizon Settings:

  • Connection Server URL: https://ConnectionServerIP:443
  • Connection Server URL Thumbprint: sha1=SSLTHUMBPRINT (the thumbprint of the SSL certificate your View Connection Server is using)
  • PCoIP External URL: UAG-EXTERNALIP:4172
  • Blast External URL: UAG-InternetFQDN:443
  • Tunnel External URL: UAG-InternetFQDN:443

You must also have a valid SSL certificate installed under “TLS Server Certificate Settings”. I’d recommend applying it to both the admin and Internet interfaces. This is a certificate that must match the FQDN (internal and external) of your UAG appliance.
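
If you need to grab that SHA1 thumbprint for the “Connection Server URL Thumbprint” field, openssl can pull it from the live Connection Server. A minimal sketch, using the same ConnectionServerIP placeholder as above (you may need to reformat the output to match what the UAG expects):

# Fetch the View Connection Server's certificate and print its SHA1 fingerprint
echo | openssl s_client -connect ConnectionServerIP:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1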

Once you’re good, you’re green!

Picture of the VMware UAG interface showing all green (functioning).
VMware Unified Access Gateway showing valid

You should always see green lights, all protocols should work, and the connections should run smoothly. If not, troubleshoot.

GPU Driver Issue

When using a GPU with your VM for 3D graphics, make sure you adhere to the requirements of the GPU vendor, along with the VMware requirements.

Some vendors have display count, resolution, and other limits that, when exceeded, cause Blast Extreme to fail.

An incorrectly installed driver can also cause issues. Make sure that there are no issues with the drivers in the “Device Manager”.

VMware documents regarding 3D rendering:

Blast Extreme log files can be found on the guest VM in the following directory.

C:\ProgramData\VMware\VMware Blast\

Looking at these log files, you can find information pertaining to H.264 or display driver issues that will assist in troubleshooting.

VMware Tools

A corrupt VMware Tools install, whether the software or the drivers, can cause display issues. Make sure that the drivers (including the display driver) are installed and functioning properly.

It may be a good idea to completely uninstall VMware Tools and re-install.

If you’re experiencing display driver issues (such as a blank screen), before re-installing VMware Tools try forcibly removing the display driver.

  1. Open “Device Manager”
  2. Right click on the VMware Display adapter and open “Properties”
  3. On the “Driver” tab, select “Uninstall”
  4. Check the box for “Delete the driver software for this device”.

This will fully remove the VMware driver. Now re-install VMware Tools.

Horizon Agent

Often, re-installing the Horizon Agent can resolve issues. Always make sure that VMware Tools are installed first before installing the Horizon Agent.

Make sure that if you are running 64-bit Windows in the VM, you install and use the 64-bit Horizon Agent.

You may experience issues with the “VMware Horizon Indirect Display Driver”. Some users have reported an error on this driver and issues loading it, resulting in a blank screen. To resolve this, I’d recommend forcibly uninstalling the driver and re-installing the Horizon Agent.

To forcibly remove the “VMware Horizon Indirect Display Driver”:

  1. Open “Device Manager”
  2. Right click on the “VMware Horizon Indirect Display Driver” and open “Properties”
  3. On the “Driver” tab, select “Uninstall”
  4. Check the box for “Delete the driver software for this device”.

Now proceed to uninstall and reinstall the Horizon View Agent.

On a final note, when running the Horizon Agent on Horizon for Linux, make sure that forward and reverse DNS entries exist, and that DNS is functioning on the network where the Linux VM resides.

Video Settings (Video Memory (VRAM), Resolution, Number of Displays)

When experiencing video display issues or blank screens on VMware Horizon View, these could be associated with the guest VM’s memory, video memory (VRAM), display resolution, and number of displays.

Make sure you are adhering to the specifications put forth by VMware. Please see the following links for more information.

Protocol

When troubleshooting blank screens with VMware Horizon, you need to try to identify whether it’s specific to the guest VM, or associated with the connection protocol you’re using (and the route it takes, whether through a Connection Server or a UAG).

Always try different protocols to see if the issue is associated with all of them or just one. Then try establishing connections to find out whether it’s isolated to connecting directly to the Connection Server, or through the UAG.

If the issue is with a specific protocol, you can view the protocol log files. If the issue is with the UAG, you can troubleshoot the UAG.

Log files can be found in the following directory:

C:\ProgramData\VMware\

HTTPS Proxy and redirection issues

If you are connecting through a network that does passive HTTPS scanning or that uses a proxy server, you may experience issues with inability to connect, or blank screens.

You’ll need to modify your firewall or proxy to allow the VMware connection and open the required ports for VMware Horizon View.

Login banner or disclaimer (PCoIP)

I haven’t seen or heard of this one in some time, but when using VMware Horizon with PCoIP, sessions can fail or show a blank screen when the legal disclaimer login banner is used.

For more information on this issue, and how to resolve or workaround, visit https://kb.vmware.com/s/article/1016961.

Old version of Horizon View

It never stops surprising me how old some of the VMware Horizon View environments are that some businesses are running. VMware regularly updates and releases new versions of VMware Horizon View that resolve known issues and bugs in the software.

While it may be difficult, simply upgrading your VMware Horizon environment (VMware vSphere, View Connection Server, VMware Tools, VMware Horizon Agent) can resolve your issues.

Blank Screen connecting to Physical PC running Horizon Agent

When you install the VMware Horizon Agent on a Physical PC, you may encounter issues with a blank screen.

This is usually caused by:

After troubleshooting these issues, you should be able to resolve the issue.

Conclusion

As you can see there are a number of different things that can cause Horizon View to show a blank screen on login.

Let me know if this helped you out, or if you find other reasons and feel I should add them to the list!