Jul 13 2020
 
Picture of the DUO Security Logo

When you’re looking for additional or enhanced options to secure your business and enterprise IT systems, MFA/2FA can help you achieve this. Get away from the traditional single password, and implement additional means of authentication! MFA provides a great complement to your cyber-security policies.

My company, Digitally Accurate Inc, has been using Duo Security’s MFA product in our own infrastructure, as well as our customers’ environments, for some time. Digitally Accurate is a DUO Partner and can provide DUO MFA Services including licensing/software and hardware tokens (Duo D-100 Tokens using HOTP).

What is MFA/2FA

MFA is short for Multi-Factor Authentication, and 2FA is short for Two-Factor Authentication. The two are closely related: “multi” means two or more factors, while 2FA specifically means two. Both provide additional security, since they require more than one means of authentication.

Traditionally, users authenticate with one factor: their password. In simple terms, MFA/2FA adds a second method of authentication and identity validation on top of the password. Requiring users to authenticate with a second mechanism provides enhanced security.

Why use MFA/2FA

In a large portion of security breaches, we see users’ passwords become compromised. This can happen through a phishing attack, a virus, a keylogger, or other means. Once a malicious user or bot has a user’s credentials (username and password), they can access any resources available to that user.

By implementing a second level of authentication, even if a user’s password becomes compromised, the person logging in (real or malicious) must still pass a second authentication check. While this is easy for the real user, in most cases it’s nearly impossible for a malicious user. If a password gets compromised, nothing can be accessed, because access still requires the second factor. If this second method is a cell phone or hardware token, a malicious user won’t be able to access the user’s resources unless they also steal the cell phone or hardware token.

How does MFA/2FA work

When deploying MFA or 2FA you have the option of using an app, hardware token (fob), or phone verification to perform the additional authentication check.

After a user attempts to log on to a computer or service with their username and password, the second level of authentication will be presented and must be passed in order for the login request to succeed.

Please see below for an example of 2FA selection screen after a successful username and password:

Screenshot of Duo MFA 2FA Prompt on Windows Login
Duo Security Windows Login MFA 2FA Prompt

After the username and password are accepted, you can complete the MFA or 2FA challenge using any of the following methods.

2FA with App (Duo Push)

Duo Push sends an authentication challenge to your mobile device which a user can then approve or deny.

Please see below for an example of Duo Push:

Screenshot of Duo Push Notification to Mobile Android App
Duo Push to Mobile App on Android

Once the user selects to approve or deny the login request, the original login will either be approved or denied. We often see this as being the preferred MFA/2FA method.

2FA with phone verification (Call Me)

Duo phone verification (Call Me) will call you on your phone number (pre-configured by your IT staff) and challenge you to either hang up to deny the login request, or press a button on the keypad to accept it.

While we rarely use this option, it is handy to have as a backup method.

2FA with Hardware Token (Passcode)

Duo Passcode challenges are handled using a hardware token (or you can generate a passcode using the Duo App). Once you select this method, you will be prompted to enter the passcode to complete the 2FA authentication challenge. If you enter the correct passcode, the login will be accepted.

Here is a Duo D-100 Token that uses HOTP (HMAC-based One Time Password):

Picture of Duo D-100 HOTP Hardware Token
Duo D-100 HOTP Hardware Token

When you press the green button, a passcode will be temporarily displayed on the LCD display which you can use to complete the passcode challenge.

You can purchase hardware tokens directly from Digitally Accurate Inc by contacting us, from your existing Duo Partner, or from Duo directly. Duo is also compatible with 3rd party hardware tokens that use HOTP and TOTP.

2FA with U2F

While you can’t visibly see the option for U2F, you can use U2F as an MFA or 2FA authentication challenge. This includes devices like a Yubikey from Yubico, which plugs in to the USB port of your computer. You can attach a Yubikey to your key chain, and bring it around with you. The Yubikey simply plugs in to your USB port and has a button that you press when you want to authenticate.

When the 2FA window pops up, simply press the button and your Yubikey will complete the MFA/2FA challenge.

What can MFA/2FA protect

Duo MFA supports numerous cloud and on-premise applications, services, protocols, and technologies. While the list is very large (full list available at https://duo.com/product/every-application), we regularly deploy and use Duo Security for the following configurations.

Windows Logins (Server and Workstation Logon)

Duo MFA can be deployed to not only protect your Windows Servers and Workstations, but also your remote access system as well.

When logging on to a Windows Server or Windows Workstation, a user will be presented with the following screen for 2FA authentication:

Screenshot of Duo MFA 2FA Prompt on Windows Login
Duo Security Windows Login MFA 2FA Prompt

Below you can see a video demonstration of DUO on Windows Login.

DUO works with both Windows Logins and RDP (Remote Desktop Protocol) Logins.

VMWare Horizon View Clients (VMWare VDI Logon)

Duo MFA can be deployed to protect your VDI (Virtual Desktop Infrastructure) by requiring MFA or 2FA when users log in to access their desktops.

When logging on to the VMware Horizon Client, a user will be presented with the following screen for 2FA authentication:

Screenshot of Duo MFA 2FA Prompt on VMWare Horizon Client Login
Duo Security VMWare Horizon Client Login MFA 2FA Prompt

Below you can see a video demonstration of DUO on VMware Horizon View (VDI) Login.

Sophos UTM (Admin and User Portal Logon)

Duo MFA can be deployed to protect your Sophos UTM firewall. You can protect the admin account, as well as user accounts when accessing the user portal.

If you’re using the VPN functionality on the Sophos UTM, you can also protect VPN logins with Duo MFA.

Unix and Linux (Server and Workstation Logon)

Duo MFA can be deployed to protect your Unix and Linux Servers. You can protect all user accounts, including the root user.

We regularly deploy this with Fedora and CentOS (even FreePBX), and you can protect SSH and/or console logins. On Linux, the integration is typically done with Duo’s pam_duo module (a rough sketch of the configuration is shown below).
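
For illustration only, here is a minimal sketch of what a pam_duo setup might look like on a CentOS host. The integration key, secret key, and API hostname are placeholders you would get from your Duo admin panel, and the exact PAM stack varies by distribution, so follow Duo’s documentation for production systems.

  /etc/duo/pam_duo.conf (placeholder values):
  [duo]
  ikey = YOUR_INTEGRATION_KEY
  skey = YOUR_SECRET_KEY
  host = api-XXXXXXXX.duosecurity.com
  pushinfo = yes
  failmode = safe

  /etc/pam.d/sshd (added after the primary authentication module):
  auth required pam_duo.so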

When logging on to a Unix or Linux server, a user will be presented with the following screen for 2FA authentication:

Screenshot of Duo MFA 2FA Prompt on CentOS Linux Login
Duo Security CentOS Linux login MFA 2FA Prompt

Below you can see a video demonstration of DUO on Linux.

WordPress Logon

Duo MFA can be deployed to protect your WordPress blog. You can protect your admin and other user accounts.

If you have a popular blog, you know how often bots are attempting to hack and brute force your passwords. If by chance your admin password becomes compromised, using MFA or 2FA can protect your site.

When logging on to a WordPress blog admin interface, a user will be presented with the following screen for 2FA authentication:

Screenshot of Duo MFA 2FA Prompt on WordPress Login
Duo Security WordPress Login MFA 2FA Prompt

Below you can see a video demonstration of DUO on a WordPress blog.

How easy is it to implement

Implementing Duo MFA is very easy and works with your existing IT Infrastructure. It can easily be setup, configured, and maintained on your existing servers, workstations, and network devices.

Duo offers numerous plugins (for Windows and other platforms), as well as options for RADIUS-type authentication mechanisms and other types of authentication; for example, the Duo Authentication Proxy can front RADIUS requests, as sketched below.
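
As a rough, hedged example (not an exact deployment guide), a RADIUS integration through the Duo Authentication Proxy uses a configuration file along these lines. Every value below is a placeholder:

  authproxy.cfg (all values are placeholders):
  [ad_client]
  host=192.0.2.10
  service_account_username=duoservice
  service_account_password=CHANGEME
  search_dn=DC=example,DC=com

  [radius_server_auto]
  ikey=YOUR_INTEGRATION_KEY
  skey=YOUR_SECRET_KEY
  api_host=api-XXXXXXXX.duosecurity.com
  radius_ip_1=192.0.2.20
  radius_secret_1=CHANGEME
  client=ad_client
  port=1812
  failmode=safe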

How easy is it to manage

Duo is managed through the Duo Security web portal. Your IT admins can manage users, MFA devices, tokens, and secured applications via the web interface. You can also deploy appliances that allow users to manage, provision, and add their MFA devices and settings.

Duo also integrates with Active Directory to make managing and maintaining users easy and fairly automated.

Let’s get started with Duo MFA

Want to protect your business with MFA? Give me a call today!

Jul 08 2020
 

Need to add 5 SATA drives or SSDs to your system? The IO-PCE585-5I is a solid option!

The IO-PCE585-5I PCIe card adds 5 SATA ports to your system via a single PCIe x4 card using 2 PCIe lanes. Because the card uses PCIe 3.1a, it sounds like a perfect HBA for adding SSDs to your system.

This card can be used in workstations, DIY NAS (Network Attached Storage), and servers, however for the sake of this review, we’ll be installing it in a custom built FreeNAS system to see how the card performs and if it provides all the features and functionality we need.

Picture of an IO-PCE585-5I PCIe Card
IOCREST IO-PCE585-5I PCIe Card

A big thank you to IOCREST for shipping me out this card to review, they know I love storage products! 🙂

Use Cases

The IO-PCE585-5I card is strictly an HBA (Host Bus Adapter). This card provides JBOD access to the disks so that each can be independently accessed by the computer or server’s operating system.

Typically HBAs (or RAID cards in IT mode) are used in storage systems to provide direct access to disks, so that the host operating system can perform software RAID or deploy a special filesystem like ZFS on the disks.

The IOCREST IO-PCE585-5I is the perfect card to accomplish this task as it supports numerous different operating systems and provides JBOD access of disks to the host operating system.

In addition to the above, the IO-PCE585-5I provides 5 SATA 6Gb/s ports and uses PCIe 3 with 2 PCIe lanes, to provide a theoretical maximum throughput close to 2GB/s, making this card perfect for SSD use as well!
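
For reference, PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, which works out to roughly 985 MB/s of usable bandwidth per lane; two lanes therefore provide approximately 1.97 GB/s in each direction, which is where the ~2GB/s figure comes from.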

Need more drives or SSDs? Since each card only uses 2 PCIe lanes, you can simply add more cards to your system!

While you could use this card with Windows software RAID, or Linux mdraid, we’ll be testing the card with FreeNAS, a NAS system built on FreeBSD.

First, how can you get this card?

Where to buy the IO-PCE585-5I

You can purchase the IO-PCE585-5I from:

This card is also marketed as the SI-PEX40139 and IO-PEX40139 Part Numbers.

IO-PCE585-5I Specifications

Let’s get in to the technical details and specs on the card.

Picture of an IO-PCE585-5I PCIe Card
IO-PCE585-5I (IO-PEX40139) PCIe Card

According to the packaging, the IO-PCE585-5I features the following:

  • Supports up to two lanes over PCIe 3.0
  • Complies with PCI Express Base Specification Revision 3.1a.
  • Supports PCIe link layer power saving mode
  • Supports 5 SATA 6Gb/s ports
  • Supports command-based and FIS-based for Port Multipliers
  • Complies with SATA Specification Revision 3.2
  • Supports AHCI mode and IDE programming interface
  • Supports Native Command Queue (NCQ)
  • Supports SATA link power saving mode (partial and slumber)
  • Supports SATA plug-in detection capable
  • Supports drive power control and staggered spin-up
  • Supports SATA Partial / Slumber power management state
  • Supports SATA Port Multiplier

What’s included in the packaging?

  • 1 × IO-PCE585-5I (IO-PEX40139) PCIe 3.0 card to 5 SATA 6Gb/s
  • 1 × User Manual
  • 5 × SATA Cables
  • 1 x Low Profile Bracket
  • 1 x Driver CD (not needed, but nice to have)

Unboxing, Installation, and Configuration

It comes in a very small and simple package.

Picture of the IO-PCE585-5I Retail Box
IO-PCE585-5I Retail Box

Opening the box, you’ll see the package contents.

Picture of IO-PCE585-5I Box Contents
IO-PCE585-5I Box Contents Unboxed

And finally, the card. Please note that it comes with the full-height PCIe bracket installed. It also ships with a half-height bracket, and the bracket can easily be swapped.

Picture of an IO-PCE585-5I PCIe Card
IO-PCE585-5I (SI-PEX40139) PCIe Card

Installation in FreeNAS Server and cabling

We’ll be installing this card in a computer system, on which we will then install the latest version of FreeNAS. The original plan was to connect the IO-PCE585-5I to a 5-bay SATA hot-swap backplane/drive cage full of Seagate 1TB Barracuda hard drives for testing.

The card installed easily, however we ran in to an issue when running the cabling. The included SATA cables have right-angle connectors on the end that connects to the drive, which stops us from being able to connect them to the backplane’s connectors. To overcome this we could either buy new cables, or connect directly to the disks. I chose the latter.

I installed the card in the system, and booted it up. The HBA’s BIOS was shown.

IO-PCE585-5I BIOS
IO-PCE585-5I BIOS

I then installed FreeNAS.

Inside of the FreeNAS UI the disks are all detected! I ran an “lspci” to see what the controller is listed as.

Screenshot of IO-PCE585-5I FreeNAS lspci
IO-PCE585-5I FreeNAS lspci
SATA controller: JMicron Technology Corp. Device 0585

I went ahead and created a ZFS striped pool, created a dataset, and got ready for testing.
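
For reference, here is a hedged sketch of the equivalent commands from the FreeNAS shell; in practice I built the pool through the web UI, and the device names and pool/dataset names below are examples only.

  zpool create testpool ada1 ada2 ada3 ada4 ada5   # striped pool across the 5 disks (no redundancy)
  zfs create testpool/bench                        # dataset used for benchmarking
  zpool status testpool                            # verify all vdevs are online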

Speedtest and benchmark

Originally I was planning on providing numerous benchmarks, however in every case I hit the speed limit of the hard disks connected to the controller. Ultimately this is great because the card is fast, but bad because I can’t pinpoint the exact performance numbers.

To get exact numbers, I may possibly write up another blog post in the future when I can connect some SSDs to test the controllers max speed. At this time I don’t have any immediately available.

One thing to note though is that when I installed the card in a system with PCIe 2.0 slots, the card didn’t run at the expected speed limitations of PCIe 2.0, but way under. For some reason I could not exceed 390MB/sec (reads or writes) when technically I should have been able to achieve close to 1GB/sec. I’m assuming this is due to performance loss with backwards compatibility with the slower PCIe standard. I would recommend using this with a motherboard that supports PCIe 3.0 or higher.

The card also has beautiful blue LED activity indicators to show I/O on each disk independently.

Animated GIF of IO-PCE585-5I LED Activity Indicators
IO-PCE585-5I LED Activity Indicators

After some thorough testing, the card proved to be stable and worked great!

Additional Notes & Issues

Two additional pieces of information worth noting:

  1. IO-PCE585-5I Chipset – The IO-PCE585-5I uses a JMicron JMB585 chipset. This chipset is known to work well and stable with FreeNAS.
  2. Boot Support – Installing this card in different systems, I noticed that all of them allowed me to boot from the disks connected to the IO-PCE585-5I.

While this card is great, I would like to point out the following issues and problems I had that are worth mentioning:

  1. SATA Cable Connectors – While it’s nice that this card ships with the SATA cables included, note that the end of the cable that connects to the drive is right-angled. In my situation, I couldn’t use these cables to connect to the 5 drive backplane because there wasn’t clearance for the connector. You can always purchase other cables to use.
  2. Using card on PCIe 2.0 Motherboard – If you use this PCIe 3.0 card on a motherboard with PCIe 2.0 slots it will function, however you will experience a major performance decrease. This performance degradation will be larger than the bandwidth limitations of PCIe 2.0.

Conclusion

This card is a great option to add 5 hard disks or solid state drives to your FreeNAS storage system, or computer for that matter! It’s fast, stable, and inexpensive.

I would definitely recommend the IOCREST IO-PCE585-5I.

Jul 07 2020
 
Picture of a business office with cubicles

In the ever-evolving world of IT and End User Computing (EUC), new technologies and solutions are constantly being developed to decrease costs, improve functionality, and help the business’ bottom line. In this pursuit, as far as end user computing goes, two technologies have emerged: Hosted Desktop Infrastructure (HDI), and Virtual Desktop Infrastructure (VDI). In this post I hope to explain the differences and compare the technologies.

We’re at a point where, due to the low cost of backend server computing, performance, and storage, it doesn’t make sense to waste end user hardware and resources. By deploying thin clients, zero clients, or software clients, we can reduce the cost per user for workstations or desktop computers, and consolidate these on the backend. By moving EUC to the data center (or server room), we can reduce power requirements, reduce hardware and licensing costs, and take advantage of some cool technologies thanks to virtualization and/or storage (SANs): snapshots, fancy provisioning, backup and disaster recovery, and others.

And it doesn’t stop there, utilizing these technologies minimizes the resources required and spent on managing, monitoring, and supporting end user computing. For businesses this is a significant reduction in costs, as well as downtime.

What is Hosted Desktop Infrastructure (HDI) and Virtual Desktop Infrastructure (VDI)

Many IT professionals still don’t fully understand the difference between HDI and VDI, but it’s as simple as this: Hosted Desktop Infrastructure runs natively on bare metal (whether it’s a server or an SoC) and is controlled and provided by a provisioning server or connection broker, whereas Virtual Desktop Infrastructure virtualizes the desktops (as you’re accustomed to with servers) in a virtual environment, controlled and provided via hypervisors running on the physical hardware.

Hosted Desktop Infrastructure (HDI)

As mentioned above, Hosted Desktop Infrastructure hosts the End User Computing sessions on bare metal hardware in your datacenter (on servers). A connection broker handles the connections from the thin clients, zero clients, or software clients to the bare metal allowing the end user to see the video display, and interact with the workstation instance via keyboard and mouse.

Pros:

  • Remote Access capabilities
  • Reduction in EUC hardware and cost-savings
  • Simplifies IT Management and Support
  • Reduces downtime
  • Added redundancy
  • Runs on bare metal hardware
  • Resources are dedicated and not shared, the user has full access to the hardware the instance runs on (CPU, Memory, GPU, etc)
  • Easily provide accelerated graphics to EUC instances without additional costs
  • Reduction in licensing as virtualization products don’t need to be used

Cons:

  • Limited instance count to possible instances on hardware
  • Scaling out requires immediate purchase of hardware
  • Some virtualization features are not available since this solution doesn’t use virtualization
  • Additional backup strategy may need to be implemented separate from your virtualized infrastructure

Example:

If you require dedicated resources for end users and want to be as cost-effective as possible, HDI is a great candidate.

An example HDI deployment would utilize HPE Moonshot which is one of the main uses for HPE Moonshot 1500 chassis. HPE Moonshot allows you to provision up to 180 OS instances for each HPE Moonshot 1500 chassis.

More information on the HPE Moonshot (and HPE Edgeline EL4000 Converged Edge System) can be found here: https://www.stephenwagner.com/2018/08/22/hpe-moonshot-the-absolute-definition-of-high-density-software-defined-infrastructure/

Virtual Desktop Infrastructure (VDI)

Virtual Desktop Infrastructure virtualizes the end user operating system instances exactly how you virtualize your server infrastructure. In VMware environments, VMware Horizon View can provision, manage, and maintain the end user computing environments (virtual machines) to dynamically assign, distribute, manage, and broker sessions for users. The software product handles the connections and interaction between the virtualized workstation instances and the thin client, zero client, or software client.

Pros:

  • Remote Access capabilities
  • Reduction in EUC hardware and cost-savings
  • Simplifies IT Management and Support
  • Reduces downtime
  • Added redundancy
  • Runs as a virtual machine
  • Shared resources (you don’t waste hardware or resources as end users share the resources)
  • Easy to scale out (add more backend infrastructure as required, don’t need to “halt” scaling while waiting for equipment)
  • Can over-commit (over-provision)
  • Backup strategy is consistent with your virtualized infrastructure
  • Capabilities such as VMware DRS, VMware HA

Cons:

  • Resources are not dedicated and are shared, users share the server resources (CPU, Memory, GPU, etc)
  • Extra licensing may be required
  • Extra licensing required for virtual accelerated graphics (GPU)

Example:

If you want to share a pool of resources, require high availability, and/or have dynamic requirements then virtualization would be the way to go. You can over commit resources while expanding and growing your environment without any discontinuation of services. With virtualization you also have access to technologies such as DRS, HA, and special Backup and DR capabilities.

An example use case of VMware Horizon View and VDI can be found at: https://www.digitallyaccurate.com/blog/2018/01/23/vdi-use-case-scenario-machine-shops/

 Conclusion

Both technologies are great and have their own use cases depending on your business requirements. Make sure you research and weigh each of the options if you’re considering either technology. Both are amazing technologies which will complement and enhance your IT strategy.

Jun 20 2020
 

Well, it was about time… I just purchased two Ubiquiti UniFi US-8 Gigabit Switches to replace a couple of aged Linksys Routers and Switches.

Picture of 2 Ubiquiti UniFi US-8 Switches
Ubiquiti UniFi US-8 Switch

I’ll be outlining why I purchased these, how they are setup, my impressions, and review.

Make sure you check out the video review below, and read the entire written review below as well!

Ubiquiti UniFi US-8 Gigabit Switch Review Video

Now on to the written review…

The back story

Yes, you read the first paragraph correctly, I’m replacing wireless routers with the UniFi US 8 Port switch.

While my core infrastructure in my server room is all Ubiquiti UniFi, I still have a few routers/switches deployed around the house to act as “VLAN breakout boxes”. These are Linksys wireless routers that I have hacked and installed OpenWRT on to act as switches for VLAN trunks and also provide native access to VLANs.

Originally these were working fine (minus the ability to manage them from the UniFi controller), but as time went on the hardware started to fail. I also wanted to fully migrate to an end-to-end UniFi Switching solution.

The goal

In the end, I want to replace all these 3rd party switches and deploy UniFi switches to provide switching with the VLAN trunks and provide native access to VLANs. I also want to be able to manage these all from the UniFi Controller I’m running on a Linux virtual machine.

Picture of Ubiquiti UniFi US-8 Switch Front and Back Box Shot
Ubiquiti UniFi US-8 Switch Front and Back Box Shot

To meet this goal, I purchased 2 of the Ubiquiti UniFi US-8, 8 port Gigabit manageable switches.

Ubiquiti UniFi US-8 Switch

So I placed an order through distribution for 2 of these switches.

As with all UniFi product, I was very impressed with the packaging.

Unboxing a Ubiquiti UniFi US-8 Switch
Ubiquiti UniFi US-8 Switch Unboxing

And here is the entire package unboxed.

A Ubiquiti UniFi US-8 Switch unboxed
Ubiquiti UniFi US-8 Switch Unboxed

Another good looking UniFi Switch!

Specs

The UniFi Switch 8 is available in two variants, the non-PoE and PoE version.

Part#: US-8

  • 8Gbps of Non-Blocking Throughput
  • 16Gbps Switching Capacity
  • 12W Power Consumption
  • Powered by PoE (Port 1) or AC/DC Power Adapter
  • 48V PoE Passthrough on Port 8 (Powered by PoE passthrough from Port 1, or DC Power Adapter)

Part#: US-8-60W

  • 8Gbps of Non-Blocking Throughput
  • 16Gbps Switching Capacity
  • 12W Power Consumption
  • Powered by AC/DC Power Adapter
  • 4 Auto-Sensing 802.3af PoE Ports (Ports 5-8)

UniFi Controller Adoption

After plugging in the two switches, they instantly appeared in the UniFi controller and required a firmware update to adopt.

Adoption was easy, and I was ready to configure the devices! Click on the images to view the screenshots.

Screenshot of Ubiquiti UniFi US-8 Adopted on UniFi Controller
Ubiquiti UniFi US-8 Adopted on UniFi Controller

Configuration and Setup

I went ahead and configured the management VLANs, along with the required VLAN and switch port profiles on the applicable ports.

One of these switches was going in my furnace room, which has a direct link (VLAN trunk) from my server room. The other switch is going on my office desk, which will connect back to the furnace room (VLAN trunk). The switch on my desk will provide native access to one of my main VLANs.

I also planned on powering a UniFi nanoHD on my main floor with the PoE passthrough port, so I also enabled that on the switch residing in my furnace room.

Configuration was easy and took minutes. I then installed the switches physically in their designated place.

Screenshot of Ubiquiti UniFi US-8 Adopted and Configured on UniFi Controller
Ubiquiti UniFi US-8 Adopted and Configured on UniFi Controller

One thing I want to note that I found really handy was the ability to restart and reset PoE devices via the UniFi Controller web interface. I’ve never had to reset any of my nanoHDs, but it’s handy to know I have the ability.

Everything worked perfectly once the switches were configured, setup, and implemented.

Overall Review

These are great little switches, however the price point can be a bit much when compared to the new UniFi USW-Flex-Mini switches. I’d still highly recommend this switch, especially if you have an end-to-end UniFi setup.

Use Cases:

  • Small Network Switch
  • Patch Panel Switch
  • Desktop Switch to connect to core switches
  • Network switch to power Wireless Access Points

What I liked the most:

  • Easy to setup
  • Visually attractive hardware
  • Uniform management with other UniFi devices
  • No fan, silent running
  • PoE Passthrough even on the Non-PoE version

What could be improved:

  • Price

Additional Resources and Blog Posts:

Manufacturer Product Links

Jun 17 2020
 

One thing I love doing is mixing technology with sport.

In my free time I’m often hiking, cycling, running, or working out. I regularly use technology to supplement and track my activities. It helps to record, remember, track, and compete with myself.

I use a combo of hardware and software to do so, including watches, phones, software, etc but today I wanted to put emphasis on the Snapchat Spectacles.

The Snapchat Spectacles

Picture of Snapchat Spectacles and Charging Case
Snapchat Spectacles

I’ve had a pair of the 1st generation Snapchat Spectacles since they were released (I had to use my US shipping address to bring them over to Canada). Over the years I’ve used them to collect videos and haven’t really done much with them, with the exception of sending snaps to friends.

Thankfully I save everything I record and as of the past year, incorporating my new hobby with video, I’ve been able to use some of the old footage to generate some AMAZING videos!

See below for a video I put together of 3 beautiful mountain summits I hiked in one month, first person from the Snapchat Spectacles.

Snapchat Spectacles: 3 Mountain Summits in 31 Days

If you keep reading through to the end of the post there’s another video.

First person view

As you can see, even the first version of the Snapchat Spectacles generates some beautiful HD video, providing a first person view of the wearer’s field of vision.

You might say it’s similar to wearing a GoPro, but what I like about the Spectacles is that the camera is mounted beside your eyes, which makes the video capture that much more personal.

My wishlist

What I’d really like is the ability to continuously record HD video non-stop and even possibly record to my mobile device. Even if this couldn’t be accomplished wirelessly and required a wire to my mobile device, I would still be using it all the time.

Another thing that would be nice would be more size options, as the first generation are way too small for my head, LOL! 🙂

Conclusion

Tech is awesome, and I love using tech like this to share personal experiences!

Snapchat Spectacles: Hiking Grotto Mountain September 2017

Snapchat, if you’re listening, I’d love to help with the design of future versions of the Snapchat Spectacles…

Jun 15 2020
 

We all love speed, whether it’s our internet connection or our home network. And as our internet speeds approach gigabits per second, it’s about time our networks hit 10Gb per second…

High speed networking, particularly 10Gig networking, is becoming more cost-effective day by day, and with vendors releasing affordable switches, there hasn’t been a better time to upgrade.

Today we’re going 10Gig with the Ubiquiti UniFi US-16-XG switch.

Picture of a Ubiquiti UniFi US-16-XG Switch retail box
Ubiquiti UniFi US-16-XG Switch Box shot

I’ll be discussing my configuration and setup, why you should use this switch for your homelab and/or business, as well as providing a review on the equipment.

Make sure you check out the video below and read the entire post!

Going 10Gig with the Ubiquiti UniFi US-16-XG Network Switch

Let’s get to it!

The back story

Just like the backstory with my original Ubiquiti UniFi Review, I wanted to optimize my network, increase speeds, and remove bottlenecks.

Most of my servers have 10Gig network adapters (through 10GbaseT or SFP+ ports), and I wanted to upgrade my other servers. I always wanted the ability to add more uplinks to allow a single host/server to have redundant connections to my network.

Up until now, I had 2 hosts connected via my Ubiquiti UniFi US-48 switch through its SFP+ ports with SFP+ to 10GbaseT modules. Using both of the 10Gig ports meant no more 10Gig devices could be connected. Also, the converter modules add latency.

The goal

Ultimately I wanted to implement a solution that included a new 10Gb network switch acting as a backbone for the network, with connections to my servers, storage, 10Gig devices, and secondary 1Gb switches.

While not needed, it would be nice to have access to both SFP+ connections, as well as 10GbaseT as I have devices that use both.

At the same time, I wanted something that would be easy to manage, affordable, and compatible with equipment from other vendors.

Picture of Ubiquiti UniFi 16 XG Switch with UDC-1 DAC SFP+ Cables
Ubiquiti UniFi 16 XG Switch with UDC-1 DAC SFP+ Cables

I chose the Ubiquiti UniFi US-16-XG Switch for the task, along with an assortment of cables.

Ubiquiti UniFi US-16-XG Switch

After already being extremely pleased with the Ubiquiti UniFi product line, I was happy to purchase a unit for internal use, as my company sells Ubiquiti products.

Receiving the product, I was very impressed with the packaging and shipping.

Ubiquiti UniFi US-16-XG Switch Unboxing
Ubiquiti UniFi US-16-XG Switch Unboxing
Picture of Ubiquiti UniFi US-16-XG Switch Unboxing and Package contents
Ubiquiti UniFi US-16-XG Switch Unboxing and Package contents

And here I present the Ubiquiti UniFi 16 XG Switch…

Picture of Ubiquiti UniFi US-16-XG Switch
Ubiquiti UniFi US-16-XG Switch

You’ll notice the trademark UniFi product design. On the front, the UniFi 16 XG switch has 12 x 10Gb SFP+ ports, along with 4 x 10GbaseT ports. All ports can be used at the same time as none are shared.

Picture of the backside of a Ubiquiti UniFi US-16-XG Switch
Ubiquiti UniFi US-16-XG Switch Backside

The backside of the switch has a console port, along with 2 fans, DC power input, and the AC power.

Overall, it’s a good looking unit. It has even better looking specs…

Specs

The UniFi 16 XG switch specifications:

  • 12 x 10Gb SFP+ Ports
  • 4 x 10GbaseT Ports
  • 160 Gbps Total Non-Blocking Line Rate
  • 1U Form Factor
  • Layer 2 Switching
  • Fully Managed via UniFi Controller

The SFP+ ports allow you to use a DAC (Direct Attach Cable) for connectivity, or fiber modules. You can also populate them with converters, such as the Ubiquiti 10GBASE-T SFP+ CopperModule.

Picture of the Ubiquiti UniFi 16 XG Switch Ports
Ubiquiti UniFi 16 XG Switch Ports

You can also attach 4 devices to the 10GbaseT ports.

UDC-3 “FiberCable” DAC

I also purchased 2 x Ubiquiti UDC-3 SFP+ DAC cables. These cables provide connectivity between 2 devices with SFP+ ports. They can be purchased in lengths of 1, 2, and 3 meters with the part numbers UDC-1, UDC-2, and UDC-3 respectively.

10Gtek Cable DAC

To test compatibility and have cables from other vendors (in case of any future issues), I also purchased an assortment of 10Gtek SFP+ DAC cables. I specifically chose these as I wanted to have a couple of half meter cables to connect the switches with an aggregated LAG.

UniFi Controller Adoption

To get quickly up and running, I setup the US-16-XG on my workbench, plugged in a network cable in to one of the 10GbaseT ports, and powered it on.

Picture of Ubiquiti US-16-XG Initial Provisioning on workbench
Ubiquiti US-16-XG Initial Provisioning

Boot-up was quick and it appeared in the UniFi Controller immediately. It required a firmware update before being able to adopt it to the controller.

Screenshot of UniFi US-16-XG UniFi Controller Pre-adoption
UniFi US-16-XG UniFi Controller Pre-adoption

After a quick firmware update, I was able to adopt and configure the switch.

Screenshot of UniFi US-16-XG UniFi Configured
UniFi US-16-XG UniFi Configured

The device had a “Test date” of March 2020 on the box, and the UniFi controller reported it as a hardware revision 13.

Configuration and Setup

Implementing, configuration, and setup will be an ongoing process over the next few weeks as I add more storage, servers, and devices to the switch.

The main priority was to test cable compatibility, connect the US-16-XG to my US-48, test throughput, and put my servers directly on the new switch.

I decided to just go ahead and start hooking it up, live, without shutting anything down. I went ahead and performed the following:

  1. Put the US-16-XG on top of the US-48
  2. Disconnect servers from SFP+ CopperModules on US-48 switch
  3. Plug servers in to 10GbaseT ports on US-16-XG
  4. Remove SFP+ to 10GbaseT CopperModule from US-48 SFP+ ports
  5. Connect both switches with a SFP+ DAC cable

Picture of US-16-XG and US-48 Connected and Configured
US-16-XG and US-48 Connected and Configured

Performing these steps only took a few seconds and everything was up and running. One particular thing I’d like to note is that the port auto-negotiation time on the US-16-XG was extremely quick.

Taking a look at the UniFi Controller view of the US-16-XG, we see the following.

Screenshot of US-16-XG Configured and Online with UniFi Controller
US-16-XG Configured and Online with UniFi Controller

Everything is looking good! Ports auto-detected the correct speed, traffic was being passed, and all is good.

After running like this for a few days, I went ahead and tested the 10Gtek cables which worked perfectly.

To increase redundancy and throughput, I used 2 x 0.5-Meter 10Gtek SFP+ DAC cables and configured an aggregated LAG between the two switches which has also been working perfectly!

In the coming weeks I will be connecting more servers as well as my SAN, so keep checking back for updated posts.

Overall Review

This is a great switch at an amazing price-point to take your business network or homelab network to 10Gig speeds. I highly recommend it!

Use Cases:

  • Small network 10Gig switch
  • 10Gig backbone for numerous other switches
  • SAN switch for small SAN network

What I liked the most:

  • 10Gig speeds
  • Easy setup as always with all the UniFi equipment
  • Beautiful management interface via the UniFi Controller
  • Near silent running
  • Ability to use both SFP+ and 10GbaseT
  • Compatibility with SFP+ DAC Cables

What could be improved:

  • Redundant power supplies
  • Option for more ports
  • Bug with mobile app showing 10Mbps manual speed for 10Gig ports

Additional Resources and Blog Posts:

Manufacturer Product Links

Jun 07 2020
 

This month, on June 23rd, HPE is hosting their annual HPE Discover event. This year is a little bit different, as COVID-19 has resulted in a change from the usual in-person event, and this year’s event is being hosted as a virtual experience.

I expect it’ll be the same great content as they have every year, the only difference being that you’ll be able to virtually experience it from the comfort of your own home.

I’m especially excited to say that I’ve been invited to be a special VIP Influencer for the event, so I’ll be posting some content on Twitter, LinkedIn, and of course generating some posts on my blog.

HPE Discover Register Now Graphic
Register for HPE Discover Now

Stay tuned, and don’t forget to register at: https://www.hpe.com/us/en/discover.html

The content catalog is now live!

Jun 06 2020
 
Screenshot of NVMe SSD on FreeNAS

Looking at using SSDs and NVMe with your FreeNAS setup and ZFS? There are considerations and optimizations that must be factored in to make sure you’re not wasting all that sweet performance. In this post I’ll be providing you with my own FreeNAS ZFS optimizations for SSD and NVMe.

This post will contain observations and tweaks I’ve discovered during testing and production of a FreeNAS ZFS pool sitting on NVMe vdevs. I will update it with more information as I use and test the array more.

Screenshot of FreeNAS ZFS NVMe SSD Pool with multiple datasets
FreeNAS ZFS NVMe SSD Pool with multiple datasets

Considerations

It’s important to note that while your SSD and/or NVMe ZFS pool technically could reach insane speeds, you will probably always be limited by the network access speeds.

With this in mind, to optimize your ZFS SSD and/or NVMe pool, you may be trading off features and functionality to max out your drives. These optimizations may in fact be wasted if you reach the network speed bottleneck.

Some features you may be giving up may actually help extend the life or endurance of your SSDs, such as compression and deduplication, as they reduce the number of writes performed on each of your vdevs (drives).

You may wish to skip these optimizations should your network be the limiting factor, which will allow you to utilize these features with no performance or minimal performance degradation to the final client. You should measure your network throughput to establish the baseline of your network bottleneck.

Deploying SSD and NVMe with FreeNAS

For reference, the environment I deployed FreeNAS with NVMe SSD consists of:

As mentioned above, FreeNAS is virtualized on one of the HPE DL360 Proliant servers and has 8 CPUs and 32GB of RAM. The NVMe SSDs are provided to the VM by VMware ESXi as PCI passthrough devices. There have been no issues with stability in 3 weeks of testing.

Screenshot of Sabrent Rocket 4 2TB NVMe SSD on FreeNAS
Sabrent Rocket 4 2TB NVMe SSD on FreeNAS

Important notes:

  • VMXNET3 NIC is used on VMs to achieve 10Gb networking
  • Using PCI passthrough, snapshots on FreeNAS VM are disabled (this is fine)
  • NFS VM datastore is used for testing as the host running the FreeNAS VM has the NFS datastore store mounted on itself.

There are a number of considerations that must be factored in when virtualizing FreeNAS, however those are beyond the scope of this blog post. I will be creating a separate post for this in the future.

Use Case (Fast and Risky or Slow and Secure)

The use case of your setup will dictate which optimizations you can use, as some of the choices in this post will increase the risk of data loss (such as disabling sync writes, or forgoing RAIDz redundancy).

Fast and Risky

Since SSDs are more reliable and less likely to fail, if you’re using the SSD storage as temporary hot storage, you could simply use striping across multiple vdevs (devices). If a failure occurred the data would be lost, however if you were just using this for “staging” or hot data and the risk is acceptable, this is an option to drastically increase speeds.

Example use case for fast and risky

  • VDI Pool for clones
  • VMs that can be restored easily from snapshots
  • Video Editing
  • Temporary high speed data dump storage

The risk can be lowered by replicating the pool or dataset to slower storage on a frequent or regular basis.

Slow and Secure

Using RAIDz-1 or higher will allow for vdev (drive) failures, but with each level increase, performance will be lost due to parity calculations.

Example use case for slow and secure

  • Regular storage for all VMs
  • Database (SQL)
  • Exchange
  • Main storage

Slow and Secure storage is the type of storage found in most applications used for SAN or NAS storage.

SSD Endurance and Lifetime

Solid state drives have a lifetime that’s typically measured in lifetime writes. If you’re storing sensitive data, you should plan ahead to mitigate the risk of failure when a drive reaches its rated lifetime.

Steps to mitigate failures

  • Before putting the stripe or RAIDz pool in to production, perform some large bogus writes and stagger the amount of data written to each SSD individually. While this will reduce the life counter on the SSDs, it’ll help you offset and stagger the lifetime of each drive so they don’t all die at the same time.
  • If using RAIDz-1 or higher, preemptively replace each SSD before its rated lifetime is hit. Do this well in advance and stagger it to further create a difference between the lifetimes of the drives.

Decommissioning the drives preemptively and early doesn’t mean you have to throw them away; this is just to secure the data on the ZFS pool. You can continue to use these drives in other systems with non-critical data, and possibly use a drive well beyond its recommended lifetime.

Compression and Deduplication

Using compression and deduplication with ZFS is CPU intensive (and RAM intensive for deduplication).

The CPU usage is negligible when using these features on traditional magnetic storage (spinning platter hard drives) because with traditional hard drives, the drives themselves are the performance bottleneck.

SSDs are a totally different thing, specifically with NVMe. With storage speeds in the gigabytes per second, the CPU cannot keep up with deduplicating and compressing the data being written, and becomes the bottleneck.

I performed a simple test comparing speeds with compression and dedupe with the same VM running CrystalDiskMark on an NFS VMware datastore running over 10Gb networking. The VM was configured with a single drive on a VMware NVME controller.

NVMe SSD with compression and deduplication

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication

NVMe SSD with deduplication only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only
CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only

NVMe SSD with compression only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression only
CrystalDiskMark on FreeNAS NFS SSD datastore with compression only

Now this is really interesting: we actually see a massive speed increase with compression only. This is because I have a server class CPU with multiple cores and a ton of RAM. With lower performing specs, you may notice a decrease in performance instead.

NVMe SSD without compression and deduplication

Screenshot of benchmark with CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication

In my case, the 10Gb networking was the bottleneck on read operations as there was virtually no change. It was a different story for write operations as you can see there is a drastic change in write speeds. Write speeds are greatly increased when writes aren’t being compressed or deduped.

Note that on faster networks, read speeds could and will be affected.

If your network connection to the client application is the limiting factor and the system can keep up with that bottleneck then you will be able to get away with using these features.

Higher throughput with compression and deduplication can be reached with higher frequency CPUs (more GHz) and more cores (for more client connections). Remember that large amounts of RAM are required for deduplication.

Using compression and deduplication may also reduce the writes to your SSD vdevs, prolonging their lifetime and reducing the cost of maintaining the solution. A quick sketch of how to toggle these features per dataset is shown below.
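
As a hedged reference (the pool and dataset names are just examples), compression and deduplication can be toggled per dataset from the FreeNAS shell, or equivalently through the web UI:

  zfs set compression=lz4 tank/vmstore   # lightweight compression, usually a good default
  zfs set compression=off tank/vmstore   # disable compression for maximum write throughput
  zfs set dedup=on tank/vmstore          # deduplication (requires large amounts of RAM)
  zfs set dedup=off tank/vmstore
  zfs get compression,dedup tank/vmstore # confirm the current settings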

ZFS ZIL and SLOG

When it comes to writes on a filesystem, there are different kinds.

  • Synchronous – Writes that are made to a filesystem that are only marked as completed and successful once it has actually been written to the physical media.
  • Asynchronous – Writes that are made to a filesystem that are marked as completed or successful before the write has actually been completed and committed to the physical media.

The type of write performed can be requested by the application or service that’s performing the write, or it can be explicitly set on the file system itself. In FreeNAS (in our example) you can override this by setting the “sync” option on the zpool, dataset, or zvol.

Disabling sync will allow writes to be marked as completed before they actually are, essentially “caching” writes in a buffer in memory. See below for “RAM Caching and Sync Writes”. Setting this to “standard” will perform the type of write requested by the client, and setting it to “always” will result in all writes being synchronous.
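
As a rough sketch (the dataset name is an example), the sync behaviour can be set per zpool, dataset, or zvol from the shell:

  zfs get sync tank/vmstore            # check the current setting
  zfs set sync=standard tank/vmstore   # honour whatever the client requests (default)
  zfs set sync=always tank/vmstore     # force every write to be synchronous
  zfs set sync=disabled tank/vmstore   # acknowledge writes immediately (risk of data loss on power failure)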

We can speed up and assist writes by using a SLOG for ZIL.

ZIL stands for ZFS Intent Log, and SLOG stands for Separate Log, which is usually stored on a dedicated SLOG device.

By utilizing a SLOG for ZIL, you can have dedicated SSDs which will act as your intent log for writes to the zpool. On writes that request a synchronous write, they will be marked as completed when sent to the ZIL and written to the SLOG device.

Implementing a SLOG that is slower than the combined speed of your ZFS pool will result in a performance loss. Your SLOG should be faster than the pool it’s acting as a ZIL for.

Implementing a SLOG that is faster than the combined speed of your ZFS pool will result in a performance gain on writes, as it essentially acts as a “write cache” for synchronous writes and will possibly even allow more orderly writes when they are committed to the actual vdevs in the pool.

If using a SLOG for ZIL, it is highly recommended to use an SSD that has PLP (power loss protection), as well as a mirrored set, to avoid data loss and/or corruption in the event of a power loss, crash, or freeze.
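
For illustration (device names are examples only and will differ on your system), adding a mirrored SLOG to an existing pool looks something like this:

  zpool add tank log mirror nvd2 nvd3   # attach a mirrored pair of SSDs as the SLOG
  zpool status tank                     # the devices appear under a separate "logs" section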

RAM Caching and Sync Writes

In the event you do not have a SLOG device to provide a ZIL for your zpool, and you have a substantial amount of memory, you can disable sync writes on the pool, which will drastically increase write performance as writes will be buffered in RAM.

Disabling sync on your zpool, dataset, or zvol will tell the client application that all writes have been completed and committed to disk (HD or SSD) before they actually have been. This allows the system to cache writes in system memory.

In the event of a power loss, crash, or freeze, this data will be lost and/or possibly result in corruption.

You would only want to do this if you need fast storage where data loss is acceptable (such as video editing, a VDI clone desktop pool, etc).

Utilizing a SLOG for ZIL is much better (and safer) than this method, however I still wanted to provide this for informational purposes as it does apply to some use cases.

SSD Sector Size

Traditional drives typically used 512-byte physical sectors. Newer hard drives and SSDs use 4K sectors, but often emulate 512-byte logical sectors (called 512e) for compatibility. SSDs in particular sometimes ship configured as 512e to increase compatibility with operating systems and to allow cloning your old drive to the new SSD during migrations.

When 512-byte logical sectors are emulated on an HD or SSD whose native physical sectors are 4K, a single 4K write can be split into multiple smaller operations instead of one. This increases overhead and can result in reduced IO and speed, as well as create more wear on the SSD when performing writes.

Some HDs and SSDs come with utilities or tools to change the logical sector size of the drive. I highly recommend changing it to the native sector size. You can check what the drive currently reports before making any changes (see the example below).
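
As a hedged example on FreeBSD/FreeNAS (the device name is an example and will differ on your system), you can check the logical and physical sector sizes a drive reports before deciding whether to reformat it:

  diskinfo -v /dev/ada0    # "sectorsize" is the logical size; "stripesize" usually reflects the physical size
  smartctl -i /dev/ada0    # shows "Sector Sizes: 512 bytes logical, 4096 bytes physical" on 512e drives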

iSCSI vs NFS

Technically, faster speeds should be possible using iSCSI instead of NFS, however special care must be taken when using iSCSI.

If you’re using iSCSI and the host that is virtualizing the FreeNAS instance is also mounting the iSCSI VMFS target that it’s presenting, you must unmount this iSCSI volume every time you plan to shut down the FreeNAS instance, or the entire host that is hosting it. Unmounting the iSCSI datastore also means unregistering any VMs that reside on it.

Screenshot of VMware ESXi with FreeNAS NVMe SSD as NFS datastore
VMware ESXi with virtualized FreeNAS as NFS datastore

If you simply shut down the FreeNAS instance that’s hosting the iSCSI datastore, this will result in an improper, unclean unmount of the VMFS volume and could lead to data loss, even if no VMs are running.

NFS provides a cleaner mechanism, as FreeNAS handles the unmount of the base filesystem cleanly on shutdown, and to the ESXi hosts it appears as an NFS disconnect. If VMs are not running (and no I/O is occurring) when the FreeNAS instance is shut down, data loss is not a concern.

Jumbo Frames

Since you’re pushing more data, more I/O, and at a faster pace, we need to optimize all layers of the solution as much as possible. To reduce overhead on the networking side of things, if possible, you should implement jumbo frames.

Instead of sending many smaller packets which independently require acknowledgement, you can send fewer larger packets. This significantly reduces overhead and allows for faster speed.

In my case, my FreeNAS instance will be providing both NAS and SAN services to the network, thus has 2 virtual NICs. On my internal LAN where it’s acting as a NAS (NIC 1), it will be using the default MTU of 1500 byte frames to make sure it can communicate with workstations that are accessing the shares. On my SAN network (NIC 2) where it will be acting as a SAN, it will have a configured MTU of 9000 byte frames. All other devices (SANs, client NICs, and iSCSI initiators) on the SAN network have a matching MTU of 9000.
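
As a hedged sketch (the interface, vSwitch, and VMkernel names are examples from my environment and will differ in yours), jumbo frames need to be set consistently on the FreeNAS SAN interface, the ESXi virtual switch, and the VMkernel ports:

  # On the FreeNAS/FreeBSD guest (SAN-facing VMXNET3 interface)
  ifconfig vmx1 mtu 9000

  # On the ESXi host, raise the MTU of the SAN vSwitch and VMkernel port
  esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
  esxcli network ip interface set --interface-name=vmk1 --mtu=9000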

Additional Notes

Please note that consumer SSDs usually do not have PLP (Power Loss Protection). This means that in the event of a power failure, any data sitting in the write cache on the SSD may be lost. This could put your data at risk. Using enterprise solid state drives remedies this issue, as they often come with PLP.

Conclusion

SSDs are great for storage, whether it be file, block, NFS, or iSCSI! In my opinion, NVMe and all-flash arrays are where the future of storage is going.

I hope this information helps, and if you feel I left anything out, or if anything needs to be corrected, please don’t hesitate to leave a comment!

Jun 01 2020
 

If you’re running a Sophos UTM firewall, you may start noticing websites not loading properly, or presenting an error reporting that a root CA has expired.

The error presented is below:

Sectigo COMODO CA Certificate Untrusted Website Certificate has expired

Webpages that do not present an error may fail to load, or only load partial parts of the page.

Update June 3rd 2020 – There are reports that this issue is also occurring with other vendors security solutions as well (such as Palo Alto Firewalls).

The Issue

This is due to some root CA (Certificate Authority) certificates expiring.

Particularly this involves the following Root CA Certificates:

  • AddTrust AB – AddTrust External CA Root
  • The USERTRUST Network – USERTrust RSA Certification Authority
  • The USERTRUST Network – USERTrust ECC Certification Authority

Read more about the particular issue here: https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA03l00000117LT

The Fix

To resolve this, we must first disable 3 of the factory-shipped Root CAs (listed above) on the Sophos UTM, and then upload the new Root CAs.

You’ll need to go to the following links (as referenced on Sectigo’s page above), and download the new Root CAs:

USERTrust RSA Root CA (Updated) – https://crt.sh/?id=1199354

USERTrust ECC Root CA (Updated) – https://crt.sh/?id=2841410

When you go to each of the above pages, click on “Certificate” as shown below, to download the Root CA cert.

Download Updated Root CAs Example

Do this for both certificates and save to your system.

Now we must update and fix the Sophos UTM:

  1. Log on to your Sophos UTM Web Interface
  2. Navigate to “Web Protection”, then “Filtering Options”, then select the “HTTPS CAs” tab.
  3. Browse through the list of “Global Verification CAs” and disable the following certificates:
    1. AddTrust AB – AddTrust External CA Root
    2. The USERTRUST Network – USERTrust RSA Certification Authority
    3. The USERTRUST Network – USERTrust ECC Certification Authority
  4. Scroll up and under “Local Verification CAs”, use the “Upload local CA” to upload the 2 new certificates you just downloaded.
  5. Make sure they are enabled.

After you complete these steps, verify they are in the list.

New USERTrust Root CAs Enabled

After performing these steps you must restart the HTTPS Web filter Scanning services or restart your Sophos UTM.

The issue should now be resolved. Leave a comment and let me know if it worked for you!

May 26 2020
 

So you want to add NVMe storage capability to your HPE Proliant DL360p Gen8 (or other Proliant Gen8 server) and don’t know where to start? Well, I was in the same situation until recently. However, after much research and a little bit of spending, I now have 8TB of NVMe storage in my HPE DL360p Gen8 Server thanks to the IOCREST IO-PEX40152.

Unsupported you say? Well, there are some of us who like to live life dangerously, and there are also those of us with really cool homelabs. I like to think I’m the latter.

PLEASE NOTE: This is not a supported configuration. You’re doing this at your own risk. Also, note that consumer/prosumer NVMe SSDs do not have PLP (Power Loss Protection) technology. You should always use supported configurations and enterprise grade NVMe SSDs in production environments.

DISCLAIMER: If you attempt what I did in this post, you are doing it at your own risk. I won’t be held liable for any damages or issues.

Use Cases

There’s a number of reasons why you’d want to do this. Some of them include:

  • Server Storage
  • VMware Storage
  • VMware vSAN
  • Virtualized Storage (SDS as example)
  • VDI
  • Flash Cache
  • Special applications (database, high IO)

Adding NVMe capability

Well, after all that research I mentioned at the beginning of the post, I installed an IOCREST IO-PEX40152 inside of an HPE Proliant DL360p Gen8 to add NVMe capabilities to the server.

IOCREST IO-PEX40152 with 4 x 2TB Sabrent Rocket 4 NVME

At first I was concerned about dimensions as technically the card did fit, but technically it didn’t. I bought it anyways, along with 4 X 2TB Sabrent Rocket 4 NVMe SSDs.

The end result?

Picture of an HPE DL360p Gen8 with NVME SSD
HPE DL360p Gen8 with NVME SSD

IMPORTANT: Due to the airflow of the server, I highly recommend disconnecting and removing the fan built in to the IO-PEX40152. The DL360p server will create more than enough airflow, and the forced air could cause the fan to spin, generate electricity, and damage the card and NVMe SSDs.

Also, do not attempt to install the case cover, additional modification is required (see below).

The Fit

Installing the card inside of the PCIe riser was easy, but snug. The metal heatsink actually comes in to contact with the metal on the PCIe riser.

Picture of an IO-PEX40152 installed on DL360p PCIe Riser
IO-PEX40152 installed on DL360p PCIe Riser

You’ll notice how the card just barely fits inside of the 1U server. Some effort needs to be put in to get it installed properly.

Picture of an DL360p Gen8 1U Rack Server with IO-PEX40152 Installed
HPE DL360p Gen8 with IO-PEX40152 Installed

There are ribbon cables (and plastic fittings) directly where the end of the card goes, so you need to gently push these down and push the cables to the side where there’s a small amount of room available.

We can’t put the case back on… Yet!

Unfortunately, just when I thought I was in the clear, I realized the case of the server cannot be installed. The metal bracket and locking mechanism on the case cover needs the space where a portion of the heatsink goes. Attempting to install this will cause it to hit the card.

Picture of the HPE DL360p Gen8 Case Locking Mechanism
HPE DL360p Gen8 Case Locking Mechanism

The above photo shows the locking mechanism protruding out of the case cover. This will hit the card (with the IOCREST IO-PEX40152 heatsink installed). If the heatsink is removed, the case might gently touch the card in its unlocked and recessed position, but from my measurements it clears the card when fully locked and fully closed.

I had to come up with a temporary fix while I figured out what to do: flip the lid and weigh it down.

Picture of an HPE DL360p Gen8 case cover upside down
HPE DL360p Gen8 case cover upside down

For stability and other tests, I simply put the case cover on upside down and weighed it down with weights. Cooling is working great, and even under high load I haven’t seen the SSDs go above 38 degrees Celsius.

The plan moving forward was to remove the IO-PEX40152 heatsink, and install individual heatsinks on the NVME SSD as well as the PEX PCIe switch chip. This should clear up enough room for the case cover to be installed properly.

The fix

I went on to Amazon and purchased the following items:

4 x GLOTRENDS M.2 NVMe SSD Heatsink for 2280 M.2 SSD

1 x BNTECHGO 4 Pcs 40mm x 40mm x 11mm Black Aluminum Heat Sink Cooling Fin

They arrived within days with Amazon Prime. I started to install them.

Picture of Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Picture of IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME

And now we install it in the DL360p Gen8 PCIe riser and install it in to the server.

You’ll notice it’s a nice fit! I had to compress some of the heat-conductive goo on the PEX chip heatsink as the heatsink was slightly too high by 1/16th of an inch. After doing this it fit nicely.

Also, note the cable/ribbon connectors by the SAS connections. I re-routed one of the cables between the SAS connectors so they could be folded and lay under the card instead of pushing straight up in to the end of the card.

As I mentioned above, the locking mechanism on the case cover may come in to contact with the bottom of the IOCREST card when it’s in the unlocked and recessed position. With this setup, do not unlock the case or open the case when the server is running/plugged in as it may short the board. I have confirmed when it’s closed and locked, it clears the card. To avoid “accidents” I may come up with a non-conductive cover for the chips it hits (to the left of the fan connector on the card in the image).

And with that, we’ve closed the case on this project…

Picture of a HPE DL360p Gen8 Case Closed
HPE DL360p Gen8 Case Closed

One interesting thing to note is that the NVMe SSDs are running around 4-6 degrees Celsius cooler post-modification with the custom heatsinks than with the stock heatsink. I believe this is due to the awesome airflow achieved in the Proliant DL360 servers.

Conclusion

I’ve been running this configuration for 6 days now, stress-testing, and it’s been working great. With the server running VMware ESXi 6.5 U3, I am able to pass through the individual NVMe SSDs to virtual machines. Best of all, installing this card did not cause the fans to spin up, which is often the case when using non-HPE PCIe cards.

This is the perfect mod to add NVME storage to your server, or even try out technology like VMware vSAN. I have a number of cool projects coming up using this that I’m excited to share.