Sep 18, 2021
 

Welcome to Episode 03 of The Tech Journal Vlog at StephenWagner.com

In this episode

Fun Stuff

  • Homelab Video Demo (https://youtu.be/oaZv2hpQKac)
  • Telus Fiber 1G Internet (for Business)
    • Sophos UTM Dual WAN Balancing
  • Synology
    • Synology Diskstation DS1621+
    • DSM 7.0
    • Synology C2 Cloud Backup

Work Update

  • VDI Consulting
    • VDI Golden Images for Non-Persistent VDI
  • DUO MFA/2FA
    • Implementations of DUO with Horizon
  • Exchange Projects
  • IT Director as a Service 🙂

Life Update

  • Back at the Gym
  • Travel is Back (Regina, Vancouver)

New Blog Posts

Current Projects

  • Synology DS1621+
  • AMD S7150 x2 MxGPU
  • NVME Storage Server Project
  • 10ZiG Thin Clients

Don’t forget to like and subscribe!
Leave a comment, feedback, or suggestions!

May 01, 2021
 
Picture of NVMe Storage Server Project

For over a year and a half I have been working on building a custom NVMe Storage Server for my homelab. I wanted to build a high speed storage system similar to a NAS or SAN, backed with NVMe drives that provides iSCSI, NFS, and SMB Windows File Shares to my network.

The computers accessing the NVMe Storage Server would include VMware ESXi hosts, Raspberry Pi SBCs, and of course Windows Computers and Workstations.

The focus of this project is on high throughput (in the GB/sec) and IOPS.

The current plan for the storage environment is for video editing, as well as VDI VM storage. This can and will change as the project progresses.

The History

More and more businesses are using all-flash NVMe and SSD based storage systems, so I figured there's no reason why I couldn't build my own budget-friendly, all-NVMe flash NAS.

This is the story of how I built my own NVMe based Storage Server.

The first version of the NVMe Storage Server consisted of the IO-PEX40152 card with 4 x 2TB Sabrent Rocket 4 NVMe drives inside of an HPE Proliant DL360p Gen8 Server. The server was running ESXi with TrueNAS virtualized, and the PCIe card passed through to the TrueNAS VM.

The results were great, the performance was amazing, and both servers had access to the NFS export via 2 x 10Gb SFP+ networking.

There were three main problems with this setup:

  1. Virtualized – Once a month I had an ESXi PSOD. This was either due to overheating of the IO-PEX40152 card because of modifications I made, or bugs with the DL360p servers and PCIe passthrough.
  2. NFS instead of iSCSI – Because TrueNAS was virtualized inside of the host that was using it for storage, I had to use NFS, since the host virtualizing TrueNAS would also be accessing the data on the TrueNAS VM. When shutting down the host, you need to shut down TrueNAS first. NFS disconnects are handled far more gracefully than iSCSI disconnects (which can cause corruption even if no files are in use).
  3. CPU Cores maxed on data transfer – When doing initial testing, I was maxing out the CPU cores assigned to the TrueNAS VM because the data transfer rates were so high. I needed a CPU and setup that were a better fit.

Version 1 went great, but you can see some things needed to change. I decided to go with a dedicated server, not virtualize TrueNAS, and use a newer CPU with a higher clock speed.

And so, version 2 was born (built). Keep reading and scrolling for pictures!

The Hardware

On version 2 of the project, the hardware includes:

Notes on the Hardware:

  • While the ML310e Gen8 v2 server is a cheap entry-level server, it's been a fantastic member of my homelab.
  • HPE Dual 10G Port 560SFP+ adapters can be found brand new in unsealed boxes on eBay at very attractive prices. Using HPE parts inside HPE servers also keeps the fans from spinning up to high speeds.
  • The ML310e Gen8 v2 has some issues with passing through PCIe cards to ESXi. It works perfectly when not using passthrough.

The new NVMe Storage Server

I decided to repurpose an HPE Proliant ML310e Gen8 v2 Server. This server was originally acting as my Nvidia Grid K1 VDI server, because it supported large PCIe cards. With the addition of my new AMD S7150 x2 hacked in/on to one of my DL360p Gen8’s, I no longer needed the GRID card in this server and decided to repurpose it.

Picture of an HPE ML310e Gen8 v2 with NVMe Storage
HPE ML310e Gen8 v2 with NVMe Storage

I installed the IOCREST IO-PEX40152 card in to the PCIe 16x slot, with 4 x 2TB Sabrent Rocket 4 NVME drives.

Picture of IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME

While the server has a 16x wide PCIe slot, it only has an 8x bus going to the slot. This means we get half the bandwidth of a true 16x slot. However, this doesn't pose a problem, because we'll max out the 10Gb NICs long before we saturate the 8x bus.

I also installed an HPE Dual Port 560SFP+ NIC in to the second slot. This will allow a total of 2 x 10Gb network connections from the server to the Ubiquiti UniFi US-16-XG 10Gb network switch, the backbone of my network.

The server also has 4 x hot-swappable HDD bays on the front. When configured in HBA mode (via the BIOS), these are accessible by TrueNAS and can be used. I plan on populating these with 4 x 4TB HPE MDL SATA hot-swappable drives to act as a replication destination for the NVMe pool and/or slower magnetic long-term storage.

Front view of HPE ML310e Gen8 v2 with Hotswap Drive bays
HPE ML310e Gen8 v2 with Hotswap Drive bays

I may also give WD Red Pro drives a try, but I'm not sure if they will cause the fans to speed up on the server.

TrueNAS Installation and Configuration

For the initial Proof-Of-Concept for version 2, I decided to be quick and dirty and install it to a USB stick. I also waited until I installed TrueNAS on to the USB stick and completed basic configuration before installing the Quad NVMe PCIe card and 10Gb NIC. I’m using a USB 3.0 port on the back of the server for speed, as I can’t verify if the port on the motherboard is USB 2 or USB 3.

Picture of a TrueNAS USB Stick on HPE ML310e Gen8 v2
TrueNAS USB Stick on HPE ML310e Gen8 v2

TrueNAS installation worked without any problems whatsoever on the ML310e. I configured the basic IP, time, accounts, and other generic settings. I then proceeded to install the PCIe cards (storage and networking).

Screenshot of TrueNAS Dashboard Installed on NVMe Storage Server
TrueNAS Installed on NVMe Storage Server

All NVMe drives were recognized, along with the 2 HDDs I had in the front Hot-swap bays (sitting on an HP B120i Controller configured in HBA mode).

Screenshot of available TrueNAS NVMe Disks
TrueNAS NVMe Disks

The 560SFP+ NIC was also detected without any issues and was available to configure.

Dashboard Screenshot of TrueNAS 560SFP+ 10Gb NIC
TrueNAS 560SFP+ 10Gb NIC

Storage Configuration

I’ve already done some testing and created a guide on FreeNAS and TrueNAS ZFS Optimizations and Considerations for SSD and NVMe, so I made sure to use what I learned in this version of the project.

I created a striped pool (no redundancy) of all 4 x 2TB NVMe drives. This gave us around 8TB of usable high speed NVMe storage. I also created some datasets and a zVOL for iSCSI.
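For reference, here's a rough sketch of what this looks like from the TrueNAS shell. I created everything through the web UI, so the pool, dataset, and device names below are placeholders (on TrueNAS Core the NVMe drives typically show up as nvd0 through nvd3):

    # striped pool across the four NVMe drives (no redundancy)
    zpool create nvmepool /dev/nvd0 /dev/nvd1 /dev/nvd2 /dev/nvd3

    # datasets for the NFS exports and the SMB share
    zfs create nvmepool/nfs-vmware
    zfs create nvmepool/smb-share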

Screenshot of NVMe TrueNAS Storage Pool with Datasets and zVol
NVMe TrueNAS Storage Pool with Datasets and zVol

I chose to go with the default compression settings to start. I will be testing throughput and achievable speeds in the future. You should always test this in your own environment, as the results will vary.

Network Configuration

Initial configuration was done via the 1Gb NIC connection to my main LAN network. I had to change this as the 10Gb NIC will be directly connected to the network backbone and needs to access the LAN and Storage VLANs.

I went ahead and configured a VLAN Interface on VLAN 220 for the Storage network. Connections for iSCSI and NFS will be made on this network as all my ESXi servers have vmknics configured on this VLAN for storage. I also made sure to configure an MTU of 9000 for jumbo frames (packets) to increase performance. Remember that all hosts must have the same MTU to communicate.
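The VLAN interface was created through the TrueNAS web UI, but under the hood on TrueNAS Core (FreeBSD) the equivalent configuration looks roughly like this (the interface name and addressing are placeholders for my setup):

    # parent 10Gb interface with jumbo frames
    ifconfig ix0 mtu 9000 up

    # VLAN 220 interface for the storage network
    ifconfig vlan220 create vlan 220 vlandev ix0
    ifconfig vlan220 inet 10.0.220.10/24 mtu 9000 up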

Screenshot of 10Gb NIC on Storage VLAN
10Gb NIC on Storage VLAN

Next up, I had to create another VLAN interface for the LAN network. This would be used for management, as well as to provide Windows File Share (SMB/Samba) access to the workstations on the network. We leave the MTU on this adapter as 1500 since that’s what my LAN network is using.

Screenshot of 10Gb NIC on LAN VLAN
10Gb NIC on LAN VLAN

As a note, I had to delete the configuration for the existing management settings (don't worry, it doesn't take effect until you hit test) and configure the VLAN interface for my LAN's VLAN and IP. I tested the settings, confirmed it was good, and it was all set up.

At this point, only the 10Gb NIC was being used, so I went ahead and disconnected the 1Gb network cable.

Sharing Setup and Configuration

It’s now time to configure the sharing protocols that will be used. As mentioned before, I plan on deploying iSCSI, NFS, and Windows File Shares (SMB/Samba).

iSCSI and NFS Configuration

Normally, for a VMware ESXi virtualization environment, I would usually prefer iSCSI-based storage; however, I also wanted to configure NFS to test the throughput of both with NVMe flash storage.

Earlier, I created the datasets for all my NFS exports and a zVOL volume for iSCSI.

Note that in order to take advantage of the VMware VAAI storage primitives (enhancements), you must use a zVOL to present an iSCSI target to an ESXi host.

For NFS, you can simply create a dataset and then export it.

For iSCSI, you need to create a zVol and then configure the iSCSI Target settings and make it available.
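As a hedged example, a sparse zVOL to back the iSCSI extent can also be created from the shell like this (I used the web UI; the name, size, and block size below are placeholders, and block size is a tunable you'll want to test for your workload):

    # thin-provisioned (sparse) zvol to back the iSCSI extent
    zfs create -s -V 4T -o volblocksize=16K nvmepool/iscsi-vmware

The iSCSI target, portal, initiator group, and extent themselves are then configured under the Sharing section of the TrueNAS UI.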

SMB (Windows File Shares)

I needed to create a Windows File Share for file based storage from Windows computers. I plan on using the Windows File Share for high-speed storage of files for video editing.

Using the dataset I created earlier, I configured a Windows share and user accounts, and tested accessing it. It works perfectly!

Connecting the host

Connecting the ESXi hosts to the iSCSI targets and the NFS exports is done in the exact same way that you would with any other storage system, so I won’t be including details on that in this post.

We can clearly see the iSCSI target and NFS exports on the ESXi host.

Screenshot of TrueNAS NVMe iSCSI Target on VMware ESXi Host
TrueNAS NVMe iSCSI Target on VMware ESXi Host
Screenshot of NVMe iSCSI and NFS ESXi Datastores
NVMe iSCSI and NFS ESXi Datastores

To access Windows File Shares, we log on and map the network share like you would normally with any file server.

Testing

For testing, I moved (using Storage vMotion) my main VDI desktop to the new NVMe based iSCSI Target LUN on the NVMe Storage Server. After testing iSCSI, I then used Storage vMotion again to move it to the NFS datastore. Please see below for the NVMe storage server speed test results.

Speed Tests

Just to start off, I want to post a screenshot of a few previous benchmarks I compiled when testing and reviewing the Sabrent Rocket 4 NVMe SSD disks installed in my HPE DL360p Gen8 Server and passed through to a VM (Add NVMe capability to an HPE Proliant DL360p Gen8 Server).

Screenshot of CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD for speed
CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Screenshot of CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

Note that when I performed these tests, my CPU was maxed out and limiting the actual throughput. Even then, these are some fairly impressive speeds. Also, these tests were testing each NVMe drive individually.

Moving on to the NVMe Storage Server, I decided to test iSCSI NVMe throughput and NFS NVMe throughput.

I opened up CrystalDiskMark and started a generic test, running a 16GB test file a total of 6 times on my VDI VM sitting on the iSCSI NVMe LUN.

Screenshot of NVMe Storage Server iSCSI Benchmark with CrystalDiskMark
NVMe Storage Server iSCSI Benchmark with CrystalDiskMark

You can see some impressive speeds maxing out the 10Gb NIC, with crazy performance from the NVMe storage:

  • 1196MB/sec READ
  • 1145.28MB/sec WRITE (maxing out the 10Gb NIC)
  • 62,725.10 IOPS READ
  • 42,203.13 IOPS WRITE

Additionally, here’s a screenshot of the ix0 NIC on the TrueNAS system during the speed test benchmark: 1.12 GiB/s.

Screenshot of TrueNAS NVME Maxing out 10Gig NIC
TrueNAS NVME Maxing out 10Gig NIC

And remember, this is with compression enabled. I'm really excited to see how I can further tweak and optimize this, and what increases will come from configuring iSCSI MPIO. I'm also going to try to increase the IOPS to get them closer to what each individual NVMe drive can do.

Moving on to NFS, the results were disappointing when I moved the VM to the NFS export.

Screenshot of NVMe Storage Server NFS Benchmark with CrystalDiskMark
NVMe Storage Server NFS Benchmark with CrystalDiskMark

You can see that the read speed was impressive, but the write speed was not. This is partly due to how writes are handled with NFS exports: ESXi issues synchronous writes over NFS, which ZFS must commit to stable storage before acknowledging.

Clearly iSCSI is the best-performing method for ESXi host connectivity to a TrueNAS-based NVMe storage server. This works out perfectly because we also get the VAAI features (like being able to reclaim space).

iSCSI MPIO Speed Test

This is more of an update… I was finally able to connect, configure, and utilize the 2nd 10Gbe port on the 560SFP+ NIC. In my setup, both hosts and the TrueNAS storage server all have 2 connections to the switch, with 2 VLANs and 2 subnets dedicated to storage. Check out the before/after speed tests with enabling iSCSI MPIO.
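If you're curious what that looks like on the ESXi side, the path selection policy for the TrueNAS LUN needs to be set to Round Robin so both paths actually carry traffic. A minimal sketch from the ESXi shell (the naa identifier is a placeholder for your device, and the IOPS tweak is optional):

    # confirm both paths to the LUN are visible
    esxcli storage nmp device list

    # set the path selection policy to Round Robin for the TrueNAS LUN
    esxcli storage nmp device set --device naa.XXXXXXXXXXXXXXXX --psp VMW_PSP_RR

    # optional: switch paths every IO instead of every 1000 IOs
    esxcli storage nmp psp roundrobin deviceconfig set --device naa.XXXXXXXXXXXXXXXX --type iops --iops 1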

As you can see, I was able to essentially double my read speeds (again maxing out the networking layer); however, you'll notice that the write speeds maxed out at 1598MB/sec. I believe we've reached a limitation of the CPU, the PCIe bus, or something else inside of the server. Note that this is not a limitation of the Sabrent Rocket 4 NVMe drives or the IOCREST NVMe PCIe card.

Moving Forward

I’ve had this configuration running for around a week now with absolutely no issues, no crashes, and it’s been very stable.

Using a VDI VM on NVMe backed storage is lightning fast and I love the experience.

I plan on running like this for a little while to continue to test the stability of the environment before making more changes and expanding the configuration and usage.

Future Plans (and Configuration)

  • Drive Bays
    • I plan to populate the 4 hot-swappable drive bays with HPE 4TB MDL drives. Configured with RaidZ1, this should give me around 12TB usable storage. I can use this for file storage, backups, replication, and more.
  • NVMe Replication
    • This design was focused on creating non-redundant extremely fast storage. Because I’m limited to a total of 4 NVMe disks in this design, I chose not to use RaidZ and striped the data. If one NVMe drive is lost, all data is lost.
    • I don’t plan on storing anything important, and at this point the storage is only being used for VDI VMs (which are backed up), and Video editing.
    • If I can populate the front drive bays, I can replicate the NVMe storage to the traditional HDD storage on a frequent basis to protect against failure to some level or degree.
  • Version 3 of the NVMe Storage Server
    • More NVMe and Bigger NVMe – I want more storage! I want to test different levels of RaidZ, and connect to the backbone at even faster speeds.
    • NVMe drives with PLP (Power Loss Protection) for data security and protection.
    • Dual Power Supply

Let me know your thoughts and ideas on this setup!

Jul 08, 2020
 

Need to add 5 SATA drives or SSDs to your system? The IO-PCE585-5I is a solid option!

The IO-PCE585-5I PCIe card adds 5 SATA ports to your system via a single PCIe x4 card using 2 PCIe lanes. Because the card uses PCIe 3.1a, it sounds like a perfect HBA to use to add SSDs to your system.

This card can be used in workstations, DIY NAS (Network Attached Storage), and servers, however for the sake of this review, we’ll be installing it in a custom built FreeNAS system to see how the card performs and if it provides all the features and functionality we need.

Picture of an IO-PCE585-5I PCIe Card
IOCREST IO-PCE585-5I PCIe Card

A big thank you to IOCREST for shipping me out this card to review, they know I love storage products! 🙂

Use Cases

The IO-PCE585-5I card is strictly an HBA (a Host Bus Adapter). This card provides JBOD access to the disks so that each can be independently accessed by the computer or server's operating system.

Typically, HBAs (or RAID cards in IT mode) are used in storage systems to provide direct access to disks, so that the host operating system can perform software RAID or deploy a special filesystem like ZFS on the disks.

The IOCREST IO-PCE585-5I is the perfect card to accomplish this task as it supports numerous different operating systems and provides JBOD access of disks to the host operating system.

In addition to the above, the IO-PCE585-5I provides 5 SATA 6Gb/s ports and uses PCIe 3.0 with 2 PCIe lanes, providing a theoretical maximum throughput close to 2GB/s (each PCIe 3.0 lane is good for roughly 985MB/s), making this card perfect for SSD use as well!

Need more drives or SSDs? Since each card only uses 2 PCIe lanes, simply add more cards to your system!

While you could use this card with Windows software RAID, or Linux mdraid, we’ll be testing the card with FreeNAS, a NAS system built on FreeBSD.

First, how can you get this card?

Where to buy the IO-PCE585-5I

You can purchase the IO-PCE585-5I from:

This card is also marketed under the SI-PEX40139 and IO-PEX40139 part numbers.

IO-PCE585-5I Specifications

Let’s get in to the technical details and specs on the card.

Picture of an IO-PCE585-5I PCIe Card
IO-PCE585-5I (IO-PEX40139) PCIe Card

According to the packaging, the IO-PCE585-5I features the following:

  • Supports up to two lanes over PCIe 3.0
  • Complies with PCI Express Base Specification Revision 3.1a.
  • Supports PCIe link layer power saving mode
  • Supports 5 SATA 6Gb/s ports
  • Supports command-based and FIS-based switching for Port Multipliers
  • Complies with SATA Specification Revision 3.2
  • Supports AHCI mode and IDE programming interface
  • Supports Native Command Queuing (NCQ)
  • Supports SATA link power saving mode (partial and slumber)
  • Supports SATA plug-in detection
  • Supports drive power control and staggered spin-up
  • Supports SATA Partial / Slumber power management state
  • Supports SATA Port Multiplier

What's included in the packaging?

  • 1 × IO-PCE585-5I (IO-PEX40139) PCIe 3.0 card to 5 SATA 6Gb/s
  • 1 × User Manual
  • 5 × SATA Cables
  • 1 x Low Profile Bracket
  • 1 x Driver CD (not needed, but nice to have)

Unboxing, Installation, and Configuration

It comes in a very small and simple package.

Picture of the IO-PCE585-5I Retail Box
IO-PCE585-5I Retail Box

Opening the box, you’ll see the package contents.

Picture of IO-PCE585-5I Box Contents
IO-PCE585-5I Box Contents Unboxed

And finally the card. Please note that it comes with the full-height PCIe bracket installed. It also ships with the half-height bracket and can easily be replaced.

Picture of an IO-PCE585-5I PCIe Card
IO-PCE585-5I (SI-PEX40139) PCIe Card

Installation in FreeNAS Server and cabling

We’ll be installing this card in to a computer system, in which we will then install the latest version of FreeNAS. The original plan is to connect the IO-PCE585-5I to a 5-Bay SATA Hotswap backplane/drive cage full of Seagate 1TB Barracuda Hard Drives for testing.

The card installed easily; however, we ran into an issue when running the cabling. The included SATA cables have right-angle connectors on the end that connects to the drive, which prevented us from connecting them to the backplane's connectors. To overcome this, we could either buy new cables or connect directly to the disks. I chose the latter.

I installed the card in the system, and booted it up. The HBA’s BIOS was shown.

IO-PCE585-5I BIOS

I then installed FreeNAS.

Inside of the FreeNAS UI the disks are all detected! I ran an “lspci” to see what the controller is listed as.

Screenshot of IO-PCE585-5I FreeNAS lspci
IO-PCE585-5I FreeNAS lspci
SATA controller: JMicron Technology Corp. Device 0585
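If you want to confirm the disks behind the JMB585 from the shell rather than the UI, FreeBSD's camcontrol will list them (a quick sketch; your device names will vary):

    # list the SATA disks attached to the controller
    camcontrol devlist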

I went ahead and created a ZFS striped pool, created a dataset, and got ready for testing.

Speedtest and benchmark

Originally I was planning on providing numerous benchmarks, however in every case I hit the speed limit of the hard disks connected to the controller. Ultimately this is great because the card is fast, but bad because I can’t pinpoint the exact performance numbers.

To get exact numbers, I may possibly write up another blog post in the future when I can connect some SSDs to test the controllers max speed. At this time I don’t have any immediately available.

One thing to note though: when I installed the card in a system with PCIe 2.0 slots, the card didn't run at the expected PCIe 2.0 speed limits, but way under them. For some reason I could not exceed 390MB/sec (reads or writes), when technically I should have been able to achieve close to 1GB/sec. I'm assuming this is due to performance loss from backwards compatibility with the slower PCIe standard. I would recommend using this card with a motherboard that supports PCIe 3.0 or higher.

The card also has beautiful blue LED activity indicators to show I/O on each disk independently.

Animated GIF of IO-PCE585-5I LED Activity Indicators
IO-PCE585-5I LED Activity Indicators

After some thorough testing, the card proved to be stable and worked great!

Additional Notes & Issues

Two additional pieces of information worth noting:

  1. IO-PCE585-5I Chipset – The IO-PCE585-5I uses a JMicron JMB585 chipset. This chipset is known to work well and be stable with FreeNAS.
  2. Boot Support – Installing this card in different systems, I noticed that all of them allowed me to boot from the disks connected to the IO-PCE585-5I.

While this card is great, I would like to point out the following issues and problems I had that are worth mentioning:

  1. SATA Cable Connectors – While it’s nice that this card ships with the SATA cables included, note that the end of the cable that connects to the drive is right-angled. In my situation, I couldn’t use these cables to connect to the 5 drive backplane because there wasn’t clearance for the connector. You can always purchase other cables to use.
  2. Using card on PCIe 2.0 Motherboard – If you use this PCIe 3.0 card on a motherboard with PCIe 2.0 slots it will function, however you will experience a major performance decrease. This performance degradation will be larger than the bandwidth limitations of PCIe 2.0.

Conclusion

This card is a great option to add 5 hard disks or solid state drives to your FreeNAS storage system, or computer for that matter! It’s fast, stable, and inexpensive.

I would definitely recommend the IOCREST IO-PCE585-5I.

Jun 06, 2020
 
Screenshot of NVMe SSD on FreeNAS

Looking at using SSDs and NVMe with your FreeNAS or TrueNAS setup and ZFS? There are considerations and optimizations that must be factored in to make sure you're not wasting all that sweet performance. In this post I'll be providing you with my own FreeNAS and TrueNAS ZFS optimizations for SSD and NVMe to create an NVMe Storage Server.

This post will contain observations and tweaks I’ve discovered during testing and production of a FreeNAS ZFS pool sitting on NVMe vdevs, which I have since upgraded to TrueNAS Core. I will update it with more information as I use and test the array more.

Screenshot of FreeNAS ZFS NVMe SSD Pool with multiple datasets
FreeNAS/TrueNAS ZFS NVMe SSD Pool with multiple datasets

Considerations

It’s important to note that while your SSD and/or NVMe ZFS pool technically could reach insane speeds, you will probably always be limited by the network access speeds.

With this in mind, to optimize your ZFS SSD and/or NVMe pool, you may be trading off features and functionality to max out your drives. These optimizations may in fact be wasted if you reach the network speed bottleneck.

Some features you may be giving up, such as compression and deduplication, may actually help extend the life or endurance of your SSDs, as they reduce the number of writes performed on each of your vdevs (drives).

You may wish to skip these optimizations should your network be the limiting factor, which will allow you to utilize these features with no (or minimal) performance degradation for the final client. You should measure your network throughput to establish the baseline of your network bottleneck.

Deploying SSD and NVMe with FreeNAS or TrueNAS

For reference, the environment I deployed FreeNAS with NVMe SSD consists of:

Update (May 1st, 2021): Since this blog post was created, I have since used what was learned in my new NVMe Storage Server Project. Make sure you check it out after reading this post!

As mentioned above, FreeNAS is virtualized on one of the HPE Proliant DL360p servers and has 8 CPUs and 32GB of RAM. The NVMe drives are provided by VMware ESXi as PCI passthrough devices. There have been no issues with stability in the months I've had this solution deployed. It has also continued to work amazingly well since upgrading from FreeNAS to TrueNAS Core.

Screenshot of Sabrent Rocket 4 2TB NVMe SSD on FreeNAS
Sabrent Rocket 4 2TB NVMe SSD on FreeNAS

Important notes:

  • VMXNET3 NIC is used on VMs to achieve 10Gb networking
  • With PCI passthrough in use, snapshots on the FreeNAS VM are disabled (this is fine)
  • An NFS VM datastore is used for testing, as the host running the FreeNAS VM has the NFS datastore mounted on itself.

There are a number of considerations that must be factored in when virtualizing FreeNAS and TrueNAS; however, those are beyond the scope of this blog post. I will be creating a separate post on that topic in the future.

Use Case (Fast and Risky or Slow and Secure)

The use case of your setup will dictate which optimizations you can use, as some of the optimizations in this post increase the risk of data loss (for example, disabling sync writes or forgoing RAIDz redundancy).

Fast and Risky

Since SSDs are more reliable and less likely to fail, if you're using the SSD storage as temporary hot storage, you could simply use striping across multiple vdevs (devices). If a failure occurred, the data would be lost; however, if you were just using this for staging or hot data and the risk was acceptable, this is an option to drastically increase speeds.

Example use case for fast and risky

  • VDI Pool for clones
  • VMs that can be restored easily from snapshots
  • Video Editing
  • Temporary high speed data dump storage

The risk can be lowered by replicating the pool or dataset to slower storage on a frequent or regular basis.
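A rough sketch of what that replication looks like with plain ZFS snapshots and send/receive (TrueNAS can schedule this for you with replication tasks; pool and snapshot names here are placeholders):

    # snapshot the fast pool and its datasets
    zfs snapshot -r fastpool@replica-1

    # send the snapshot (and descendants) to a pool backed by slower disks
    zfs send -R fastpool@replica-1 | zfs receive -F slowpool/fastpool-backup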

Slow and Secure

Using RAIDz-1 or higher will allow for vdev (drive) failures, but with each level increase, performance will be lost due to parity calculations.

Example use case for slow and secure

  • Regular storage for all VMs
  • Database (SQL)
  • Exchange
  • Main storage

Slow and Secure storage is the type of storage found in most applications used for SAN or NAS storage.

SSD Endurance and Lifetime

Solid state drives have a lifetime that's typically measured in lifetime writes. If you're storing sensitive data, you should plan ahead to mitigate the risk of failure when a drive reaches its full lifetime.

Steps to mitigate failures

  • Before putting the stripe or RAIDz pool into production, perform some large bogus writes and stagger the amount of data written to each SSD individually. While this will reduce the life counter on the SSDs, it'll help you offset and stagger the lifetime of each drive so they don't all die at the same time.
  • If using RAIDz-1 or higher, preemptively replace each SSD before its lifetime is hit. Do this well in advance and stagger it to further create a difference between the lifetimes of the drives.

Decommissioning the drives preemptively and early doesn't mean you have to throw them away; this is just to secure the data on the ZFS pool. You can continue to use these drives in other systems with non-critical data, and possibly use a drive well beyond its recommended lifetime.

Compression and Deduplication

Using compression and deduplication with ZFS is CPU intensive (and RAM intensive for deduplication).

The CPU usage is negligible when using these features on traditional magnetic storage (spinning platter hard drives), because with traditional hard drives, the drives themselves are the performance bottleneck.

SSDs are a totally different thing, specifically with NVMe. With storage speeds in the gigabytes per second, the CPU cannot keep up with the deduplication and compression of the data being written and becomes the bottleneck.

I performed a simple test comparing speeds with compression and dedupe with the same VM running CrystalDiskMark on an NFS VMware datastore running over 10Gb networking. The VM was configured with a single drive on a VMware NVME controller.

NVMe SSD with compression and deduplication

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication

NVMe SSD with deduplication only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only
CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only

NVMe SSD with compression only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression only
CrystalDiskMark on FreeNAS NFS SSD datastore with compression only

Now this is really interesting: we actually see a massive speed increase with compression only. This is because I have a server-class CPU with multiple cores and a ton of RAM. With lower-performing specs, you may notice a decrease in performance.

NVMe SSD without compression and deduplication

Screenshot of benchmark with CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication

In my case, the 10Gb networking was the bottleneck on read operations as there was virtually no change. It was a different story for write operations as you can see there is a drastic change in write speeds. Write speeds are greatly increased when writes aren’t being compressed or deduped.

Note that on faster networks, read speeds could and will be affected.

If your network connection to the client application is the limiting factor and the system can keep up with that bottleneck then you will be able to get away with using these features.

Higher throughput with compression and deduplication can be reached with higher-frequency CPUs (more GHz) and more cores (for more client connections). Remember that large amounts of RAM are required for deduplication.

Using compression and deduplication may also reduce the writes to your SSD vdevs, prolonging the lifetime and reducing the cost of maintaining the solution.
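Both features are per-dataset properties, so you can mix and match: leave them on where the network is the bottleneck, and turn them off on datasets where you need every last MB/s. A quick sketch (dataset names are placeholders):

    # lightweight LZ4 compression on a general-purpose dataset
    zfs set compression=lz4 nvmepool/smb-share

    # disable compression and deduplication on a speed-critical dataset
    zfs set compression=off nvmepool/scratch
    zfs set dedup=off nvmepool/scratch

    # see how much compression is actually saving you
    zfs get compressratio nvmepool/smb-share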

ZFS ZIL and SLOG

When it comes to writes on a filesystem, there are different kinds:

  • Synchronous – Writes that are made to a filesystem that are only marked as completed and successful once it has actually been written to the physical media.
  • Asynchronous – Writes that are made to a filesystem that are marked as completed or successful before the write has actually been completed and committed to the physical media.

The type of write performed can be requested by the application or service that’s performing the write, or it can be explicitly set on the file system itself. In FreeNAS (in our example) you can override this by setting the “sync” option on the zpool, dataset, or zvol.

Disabling sync will allow writes to be marked as completed before they actually are, essentially “caching” writes in a buffer in memory. See below for “Ram Caching and Sync Writes”. Setting this to “standard” will perform the type of write requested by the client, and setting to “always” will result in all writes being synchronous.
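For reference, the sync property is set per zpool, dataset, or zvol; a quick sketch of the three options (dataset names are placeholders):

    zfs set sync=standard nvmepool/nfs-vmware   # honour whatever the client requests (default)
    zfs set sync=always   nvmepool/iscsi-vmware # force every write to be synchronous
    zfs set sync=disabled nvmepool/scratch      # fastest, but risks data loss on a crash or power failure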

We can speed up and assist writes by using a SLOG for ZIL.

ZIL stands for ZFS Intent Log, and SLOG stands for Separate Log, which is usually stored on a dedicated SLOG device.

By utilizing a SLOG for ZIL, you can have dedicated SSDs which will act as your intent log for writes to the zpool. On writes that request a synchronous write, they will be marked as completed when sent to the ZIL and written to the SLOG device.

Implementing a SLOG that is slower than the combined speed of your ZFS pool will result in a performance loss. Your SLOG should be faster than the pool it's acting as a ZIL for.

Implementing a SLOG that is faster than the combined speed of your ZFS pool will result in a performance gain on writes, as it essentially acts as a write cache for synchronous writes and will possibly even perform more orderly writes when committing them to the actual vdevs in the pool.

If using a SLOG for ZIL, it is highly recommended to use SSDs that have PLP (power loss protection), as well as a mirrored set, to avoid data loss and/or corruption in the event of a power loss, crash, or freeze.
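Adding a mirrored SLOG to an existing pool is a one-liner; a hedged sketch, assuming two PLP-protected SSDs showing up as ada4 and ada5 (your pool and device names will differ):

    # add a mirrored log (SLOG) vdev to the pool
    zpool add nvmepool log mirror /dev/ada4 /dev/ada5

    # verify the log vdev is attached
    zpool status nvmepool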

RAM Caching and Sync Writes

In the event you do not have a SLOG device to provide a ZIL for your zpool, and you have a substantial amount of memory, you can disable sync writes on the pool, which will drastically increase write performance as writes will be buffered in RAM.

Disabling sync on your zpool, dataset, or zvol will tell the client application that all writes have been completed and committed to disk (HD or SSD) before they actually have been. This allows the system to cache writes in system memory.

In the event of a power loss, crash, or freeze, this data will be lost and/or possibly result in corruption.

You would only want to do this if you need fast storage where data loss is acceptable (such as video editing, a VDI clone desktop pool, etc.).

Utilizing a SLOG for ZIL is much better (and safer) than this method; however, I still wanted to cover it for informational purposes, as it does apply to some use cases.

SSD Sector Size

Traditional drives typically used 512-byte physical sectors. Newer hard drives and SSDs use 4K physical sectors, but often emulate 512-byte logical sectors (called 512e) for compatibility. SSDs specifically sometimes ship in 512e mode to increase compatibility with operating systems and to allow cloning your old drive to the new SSD during migrations.

When 512-byte logical sectors are emulated on an HD or SSD that uses 4K native physical sectors, writes that are smaller than 4K or not 4K-aligned trigger read-modify-write cycles instead of a single write. This increases overhead and can reduce IOPS and speed, as well as create more wear on the SSD when performing writes.

Some HDs and SSDs come with utilities or tools to change the logical sector size of the drive. I highly recommend changing it to the native sector size.
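How you check and change this depends on the drive and its tooling. As a hedged example on a Linux box with smartmontools and nvme-cli installed (device names are placeholders, and reformatting the LBA size destroys all data on the drive):

    # see the logical and physical sector sizes a SATA/SAS drive reports
    smartctl -i /dev/sda

    # on NVMe, list the supported LBA formats and which one is in use
    nvme id-ns -H /dev/nvme0n1

    # switch to another LBA format (index varies per drive); this DESTROYS ALL DATA on the drive
    nvme format /dev/nvme0n1 --lbaf=1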

iSCSI vs NFS

Technically, faster speeds should be possible using iSCSI instead of NFS; however, special care must be taken when using iSCSI.

If you're using iSCSI, and the host that is virtualizing the FreeNAS instance is also mounting the iSCSI VMFS target that it's presenting, you must unmount this iSCSI volume every time you plan to shut down the FreeNAS instance or the entire host that is hosting it. Unmounting the iSCSI datastore also means unregistering any VMs that reside on it.

Screenshot of VMware ESXi with FreeNAS NVMe SSD as NFS datastore
VMware ESXi with virtualized FreeNAS as NFS datastore

If you simply shut down the FreeNAS instance that's hosting the iSCSI datastore, this will result in an improper, unclean unmount of the VMFS volume and could lead to data loss, even if no VMs are running.

NFS provides a cleaner mechanism, as FreeNAS handles the unmount of the underlying filesystem cleanly on shutdown, and to the ESXi hosts it appears as an NFS disconnect. If VMs are not running (and no I/O is occurring) when the FreeNAS instance is shut down, data loss is not a concern.

iSCSI MPIO (Multipath I/O)

If your TrueNAS server isn't virtualized, I'd recommend going with iSCSI, because you can configure MPIO (Multipath I/O), which provides redundancy as well as round-robin load balancing across multiple connections to the iSCSI target. For example, with 2 x 10GbE NICs, you should be able to achieve redundancy/failover as well as 20Gb combined throughput. If you had 4 x 10GbE, you could achieve 40Gb combined.

Never use LAG or LACP when you want fully optimized NFS, pNFS, or iSCSI MPIO.
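On the ESXi side, MPIO means binding one vmkernel port per storage subnet to the software iSCSI adapter (no LAG involved). A minimal sketch, assuming the software iSCSI adapter is vmhba64 and vmk1/vmk2 are the two storage vmkernel ports (names are placeholders):

    # see which vmkernel ports are currently bound to the iSCSI adapter
    esxcli iscsi networkportal list

    # bind both storage vmkernel ports to the software iSCSI adapter
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2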

Jumbo Frames

Since you’re pushing more data, more I/O, and at a faster pace, we need to optimize all layers of the solution as much as possible. To reduce overhead on the networking side of things, if possible, you should implement jumbo frames.

Instead of sending many smaller packets which independently require acknowledgement, you can send fewer larger packets. This significantly reduces overhead and allows for faster speed.

In my case, my FreeNAS instance will be providing both NAS and SAN services to the network, thus has 2 virtual NICs. On my internal LAN where it’s acting as a NAS (NIC 1), it will be using the default MTU of 1500 byte frames to make sure it can communicate with workstations that are accessing the shares. On my SAN network (NIC 2) where it will be acting as a SAN, it will have a configured MTU of 9000 byte frames. All other devices (SANs, client NICs, and iSCSI initiators) on the SAN network have a matching MTU of 9000.
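It's worth verifying that jumbo frames actually make it end to end before blaming storage performance. A quick check, assuming a 9000 byte MTU and a payload of 8972 bytes to leave room for the IP and ICMP headers (addresses are placeholders):

    # from an ESXi host, ping the storage network with the don't-fragment bit set
    vmkping -d -s 8972 10.0.220.10

    # from FreeNAS/TrueNAS Core (FreeBSD), the equivalent is
    ping -D -s 8972 10.0.220.20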

Additional Notes

Please note that consumer SSDs usually do not have PLP (Power Loss Protection). This means that in the event of a power failure, any data sitting in the write cache on the SSD may be lost. This could put your data at risk. Using enterprise solid state drives remedies this issue, as they often come with PLP.

Conclusion

SSDs are great for storage, whether it be file, block, NFS, or iSCSI! In my opinion, NVMe and all-flash arrays are where the future of storage is going.

I hope this information helps, and if you feel I left anything out, or if anything needs to be corrected, please don’t hesitate to leave a comment!

May 26, 2020
 

So you want to add NVMe storage capability to your HPE Proliant DL360p Gen8 (or other Proliant Gen8 server) and don't know where to start? Well, I was in the same situation until recently. However, after much research and a little bit of spending, I now have 8TB of NVMe storage in my HPE DL360p Gen8 server thanks to the IOCREST IO-PEX40152.

Unsupported, you say? Well, there are some of us who like to live life dangerously, and there are also those of us with really cool homelabs. I like to think I'm the latter.

PLEASE NOTE: This is not a supported configuration. You're doing this at your own risk. Also, note that consumer/prosumer NVMe SSDs do not have PLP (Power Loss Protection) technology. You should always use supported configurations and enterprise-grade NVMe SSDs in production environments.

Update – May 2nd 2021: Make sure you check out my other post where I install the IOCREST IO-PEX40152 in an HPE ML310e Gen8 v2 server for Version 2 of my NVMe Storage Server.

Update – June 21 2022: I’ve received numerous comments, chats, and questions about whether you can boot your server or computer using this method. Please note that this is all dependent on your server/computer, the BIOS/EFI, and capabilities of the system. In my specific scenario, I did not test booting since I was using the NVME drives purely as additional storage.

DISCLAIMER: If you attempt what I did in this post, you are doing it at your own risk. I won’t be held liable for any damages or issues.

NVMe Storage Server – Use Cases

There’s a number of reasons why you’d want to do this. Some of them include:

  • Server Storage
  • VMware Storage
  • VMware vSAN
  • Virtualized Storage (SDS as example)
  • VDI
  • Flash Cache
  • Special applications (database, high IO)

Adding NVMe capability

Well, after all that research I mentioned at the beginning of the post, I installed an IOCREST IO-PEX40152 inside of an HPE Proliant DL360p Gen8 to add NVMe capabilities to the server.

IOCREST IO-PEX40152 with 4 x 2TB Sabrent Rocket 4 NVME

At first I was concerned about dimensions: technically the card fit, but technically it also didn't. I bought it anyway, along with 4 x 2TB Sabrent Rocket 4 NVMe SSDs.

The end result?

Picture of an HPE DL360p Gen8 with NVME SSD
HPE DL360p Gen8 with NVME SSD

IMPORTANT: Due to the airflow of the server, I highly recommend disconnecting and removing the fan built in to the IO-PEX40152. The DL360p server will create more than enough airflow, which could spin the unpowered fan, causing it to generate electricity and damage the card and NVMe SSDs.

Also, do not attempt to install the case cover, additional modification is required (see below).

The Fit

Installing the card inside of the PCIe riser was easy, but snug. The metal heatsink actually comes in to contact with the metal on the PCIe riser.

Picture of an IO-PEX40152 installed on DL360p PCIe Riser
IO-PEX40152 installed on DL360p PCIe Riser

You’ll notice how the card just barely fits inside of the 1U server. Some effort needs to be put in to get it installed properly.

Picture of an DL360p Gen8 1U Rack Server with IO-PEX40152 Installed
HPE DL360p Gen8 with IO-PEX40152 Installed

There are ribbon cables (and plastic fittings) directly where the end of the card goes, so you need to gently push these down and push the cables to the side where there's a small amount of room available.

We can’t put the case back on… Yet!

Unfortunately, just when I thought I was in the clear, I realized the case of the server cannot be installed. The metal bracket and locking mechanism on the case cover needs the space where a portion of the heatsink goes. Attempting to install this will cause it to hit the card.

Picture of the HPE DL360p Gen8 Case Locking Mechanism
HPE DL360p Gen8 Case Locking Mechanism

The above photo shows the locking mechanism protruding out of the case cover. This will hit the card (with the IOCREST IO-PEX40152 heatsink installed). If the heatsink is removed, the case might gently touch the card in its unlocked and recessed position, but from my measurements it clears the card when fully locked and closed.

I had to come up with a temporary fix while I figured out what to do: flip the lid and weigh it down.

Picture of an HPE DL360p Gen8 case cover upside down
HPE DL360p Gen8 case cover upside down

For stability and other tests, I simply put the case cover on upside down and weighed it down with weights. Cooling is working great, and even under high load I haven't seen the SSDs go above 38°C.

The plan moving forward was to remove the IO-PEX40152 heatsink, and install individual heatsinks on the NVME SSD as well as the PEX PCIe switch chip. This should clear up enough room for the case cover to be installed properly.

The fix

I went on to Amazon and purchased the following items:

4 x GLOTRENDS M.2 NVMe SSD Heatsink for 2280 M.2 SSD

1 x BNTECHGO 4 Pcs 40mm x 40mm x 11mm Black Aluminum Heat Sink Cooling Fin

They arrived within days with Amazon Prime. I started to install them.

Picture of Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Picture of IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME

And now we install the card in the DL360p Gen8 PCIe riser and slide the riser back into the server.

You'll notice it's a nice fit! I had to compress some of the heat-conductive goo on the PEX chip heatsink, as the heatsink was slightly too high by 1/16th of an inch. After doing this it fit nicely.

Also, note one of the cable/ribbon connectors by the SAS connections. I re-routed one of the cables between the SAS connectors so they could be folded and lie under the card instead of pushing straight up into the end of the card.

As I mentioned above, the locking mechanism on the case cover may come in to contact with the bottom of the IOCREST card when it’s in the unlocked and recessed position. With this setup, do not unlock the case or open the case when the server is running/plugged in as it may short the board. I have confirmed when it’s closed and locked, it clears the card. To avoid “accidents” I may come up with a non-conductive cover for the chips it hits (to the left of the fan connector on the card in the image).

And with that, we’ve closed the case on this project…

Picture of a HPE DL360p Gen8 Case Closed
HPE DL360p Gen8 Case Closed

One interesting thing to note is that the NVMe SSDs are running around 4-6°C cooler post-modification with the custom heatsinks than with the stock heatsink. I believe this is due to the awesome airflow achieved in the Proliant DL360 servers.

Conclusion

I've been running this configuration for 6 days now, stress-testing it, and it's been working great. With the server running VMware ESXi 6.5 U3, I am able to pass through the individual NVMe SSDs to virtual machines. Best of all, installing this card did not cause the fans to spin up, which is often the case when using non-HPE PCIe cards.

This is the perfect mod to add NVME storage to your server, or even try out technology like VMware vSAN. I have a number of cool projects coming up using this that I’m excited to share.

May 25, 2020
 
Picture of an IOCREST IO-PEX40152 PCIe x16 to Quad M.2 NVMe

Looking to add quad (4) NVMe SSDs to your system and don’t have the M.2 slots or a motherboard that supports bifurcation? The IOCREST IO-PEX40152 QUAD NVMe PCIe card, is the card for you!

The IO-PEX40152 PCIe card allows you to add 4 NVMe SSDs to a computer, workstation, or server that has an available PCIe x16 slot. This card has a built in PEX PCIe switch chip, so your motherboard does not need to support bifurcation. This card can essentially be installed and used in any system with a free x16 slot.

This card is also available under the PART# SI-PEX40152.

In this post I’ll be reviewing the IOCREST IO-PEX40152, providing information on how to buy, benchmarks, installation, configuration and more! I’ve also posted tons of pics for your viewing pleasure. I installed this card in an HPE DL360p Gen8 server to add NVME capabilities to create an NVMe based Storage Server.

We’ll be using and reviewing this card populated with 4 x Sabrent Rocket 4 PCIe NVMe SSD, you can see the review on those SSD’s individually here.

Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

Why and How I purchased the card

Originally I purchased this card for a couple of special and interesting projects I’m working on for the blog and my homelab. I needed a card that provided high density NVME flash storage, but didn’t require bifurcation as I planned on using it on a motherboard that didn’t support 4/4/4/4 bifurcation.

By choosing this specific card, I could also use it in any other system that had an available x16 PCIe slot.

I considered many other cards (such as some from SuperMicro and Intel), but in the end chose this one as it seemed most likely to work for my application. The choices from SuperMicro and Intel looked like they are designed to be used on their own systems.

I purchased the IO-PEX40152 from the IOCREST AliExpress store (after verifying it was their genuine online store), as they had the most cost-effective price of the sources I found.

They shipped the card with FedEx International Priority, so I received it within a week. Super fast shipping and it was packed perfectly!

Picture of the IOCREST IO-PEX40152 box
IOCREST IO-PEX40152 Box

Where to buy the IO-PEX40152

I found 3 different sources to purchase the IO-PEX40152 from:

  1. IOCREST AliExpress Store – https://www.aliexpress.com/i/4000359673743.html
  2. Amazon.com – https://www.amazon.com/IO-CREST-Non-RAID-Bifurcation-Controller/dp/B083GLR3WL/
  3. Syba USA – Through their network of resellers or distributors at https://www.sybausa.com/index.php?route=information/wheretobuy

Note that Syba USA is selling the IO-PEX40152 as the SI-PEX40152. The card I actually received has branding that identifies it both as an IO-PEX40152 and an SI-PEX40152.

As I mentioned above, I purchased it from the IOCREST AliExpress Online Store for around $299.00USD. From Amazon, the card was around $317.65USD.

IO-PEX40152 Specifications

Now let’s talk about the technical specifications of the card.

Picture of the IOCREST IO-PEX40152 Side Shot with cover on
IO-PEX40152 Side Shot

According to the packaging, the IO-PEX40152 features the following:

  • Installation in a PCIe x16 slot
  • Supports PCIe 3.1, 3.0, 2.0
  • Compliant with PCI Express M.2 specification 1.0, 1.2
  • Supports data transfer up to 2.5Gb (250MB/sec), 5Gb (500MB/sec), 8Gb (1GB/sec)
  • Supports 2230, 2242, 2260, 2280 size NGFF SSD
  • Supports four (4) NGFF M.2 M Key sockets
  • 4 screw holes 2230/2242/2260/2280 available to fix NGFF SSD card
  • 4 screw holes available to fix PCB board to heatsink
  • Supports Windows 10 (and 7, 8, 8.1)
  • Supports Windows Server 2019 (and 2008, 2012, 2016)
  • Supports Linux (Kernel version 4.6.4 or above)

While this list of features and specs is printed on the website and packaging, I'm not sure how accurate some of these statements are (in a good way); I'll cover that more later in the post.

What’s included in the packaging?

  • 1 x IO-PEX40152 PCIe x 16 to 4 x M.2(M-Key) card
  • 1 x User Manual
  • 1 x M.2 Mounting material
  • 1 x Screwdriver
  • 5 x self-adhesive thermal pad

They also note that contents may vary depending on country and market.

Unboxing, Installation, and Configuration

As mentioned above, our build includes:

  • 1 x IOCREST IO-PEX40152
  • 4 x Sabrent Rocket 4 NVMe PCIe NVMe SSD
Picture of IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD
IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD

You’ll notice it’s a very sleek looking card. The heatsink is beefy, heavy, and very metal (duh)! The card is printed on a nice black PCB.

Removing the 4 screws to release the heatsink, we see the card and thermal paste pads. You’ll notice the PCIe switch chip.

Picture of the front side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Frontside of card

And the backside of the card.

Picture of the back side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Backside of card

NVMe Installation

I start to install the Sabrent Rocket 4 NVMe 2TB SSD.

Picture of a IO-PEX40152 with 2 SSD populated
IO-PEX40152 with 2 SSD populated
Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

That’s a good looking 8TB of NVMe SSD!

Note that the NVMe cards will wiggle side to side and have play until the screw is tightened. Do not over-tighten the screw!

Before installing the heatsink cover, make sure you remove the blue plastic film on the heat-transfer material between the NVMe drives and the heatsink, and between the PEX chip and the heatsink.

After that, I installed it in the server and was ready to go!

Heatsink and cooling

A quick note on the heatsink and cooling…

While the heatsink and cooling solution it comes with works amazingly well, you have the flexibility, if need be, to run and operate the card without the heatsink and fan (the fan doesn't cause any warnings if disconnected). This works out great if you want to use your own cooling solution, or need to use this card in a system where there isn't much space. The fan can be removed by removing the screws and disconnecting the power connector.

Note: after installing the NVMe SSDs and affixing the heatsink, you will find that the heatsink gets stuck to the drives if you try to remove it at a later date. If you do need to remove the heatsink, be very patient and careful, and remove it slowly to avoid damaging or cracking the NVMe SSDs and the PCIe card itself.

Speedtest and benchmark

Let’s get to one of the best parts of this review, using the card!

Unfortunately due to circumstances I won’t get in to, I only had access to a rack server to test the card. The server was running VMware vSphere and ESXi 6.5 U3.

After shutting down the server, installing the card, and powering on, you could see the NVMe SSDs appearing as available for PCI passthrough to the VMs. I enabled passthrough and restarted again. I then added the 4 individual NVMe drives as PCI passthrough devices to the VM.
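Before enabling passthrough, you can sanity-check that ESXi actually sees all four NVMe controllers behind the card's PCIe switch. A hedged sketch from the ESXi shell (the exact device naming in the output depends on the PCI ID database):

    # list PCI devices; each Sabrent/Phison NVMe SSD should appear as its own controller
    esxcli hardware pci list

    # a shorter view of the same information
    lspci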

Picture of IOCREST IO-PEX40152 passthrough with NVMe to VMware guest VM
IO-PEX40152 PCI Passthrough on VMware vSphere and ESXi

Turning on the system, we are presented with the NVMe drives inside of the “Device Manager” on Windows Server 2016.

A screenshot of an IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016
IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016

Now that was easy! Everything’s working perfectly…

Now we need to go in to disk manager and create some volumes for some quick speed tests and benchmarks.

A screenshot of Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

Again, no problems and very quick!

Let’s load up CrystalDiskMark and test the speed and IOPS!

Screenshot of CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD for speed
CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Screenshot of CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

What's interesting is that I was able to achieve much higher speeds using this card in an older system than by directly installing one of the SSDs in a new HP Z240 workstation. However, due to CPU limitations (maxing the CPU out) of the server used above, I could not fully max out or benchmark the IOPS of an individual SSD.

Additional Notes on the IO-PEX40152

Some additional notes I have on the IO-PEX40152:

The card works perfectly with VMware ESXi PCI passthrough when passing it through to a virtualized VM.

The card's specifications state data transfer up to 1GB/sec; however, I achieved over 3GB/sec using the Sabrent Rocket 4 NVMe SSDs.

While the specifications and features state it supports NVMe spec 1.0 and 1.1, I don't see why it wouldn't support the newer specifications, as it's simply a PCIe switch with NVMe slots.

Conclusion

This is a fantastic card that you can use reliably if you have a system with a free x16 slot. Because of the fact it has a built in PCIe switch and doesn’t require PCIe bifurcation, you can confidently use it knowing it will work.

I’m looking forward to buying a couple more of these for some special applications and projects I have lined up, stay tuned for those!

May 22, 2020
 
A Picture of the 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Today we’re going to be talking about Sabrent’s newest line of NVMe storage products, particularly the 2TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive or the Sabrent Rocket 4 2TB NVMe stick as I like to call it.

Last week I purchased a quantity of 4 of these for a total of 8TB of NVMe storage to use on an IOCrest IO-PEX40152 Quad NVMe PCIe Card. For the purpose of this review, we’re benchmarking one inside of an HP Z240 Workstation.

While these are targeted for users with a PCIe 4.0 interface, I’ll be using these on PCIe 3 as it’s backwards compatible. I purchased the PCIe 4 option to make sure the investment was future-proofed.

Keep reading for a bunch of pictures, specs, speed tests, benchmarks, information, and more!

A picture of 4 unopened boxes of Sabrent Rocket 4 2TB NVMe sticks
4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Let’s get started with the review!

How and Why I purchased these

I’ve been working on a few special top-secret projects for the blog and YouTube channel, and needed some cost-effective yet high performing NVMe storage.

I needed at least 8TB of NVMe flash and I’m sure as all of you are aware, NVMe isn’t cheap.

After around a month of research I finally decided to pull the trigger and purchase a quantity of 4 x Sabrent Rocket 4 NVMe 2TB SSD. For future projects I’ll be using these in an IOCREST IO-PEX40152 NVME PCIe card.

These NVMe SSDs are targeted for consumers (normal users, gamers, power users, and IT professionals) and are a great fit! Just remember these do not have PLP (power loss protection), which is a feature that isn’t normally found in consumer SSDs.

Specifications

See below for the specifications and features included with the Sabrent Rocket 4 2TB NVMe SSD.

Hardware Specs:

  • Toshiba BiCS4 96L TLC NAND Flash Memory
  • Phison PS5016-E16 PCIe 4.0 x4 NVMe 1.3 SSD Controller
  • Kioxia 3D TLC NAND
  • M.2 2280 Form Factor
  • PCIe 4.0 Speeds
    • Read Speed of 5000MB/sec
    • Write Speed of 4400MB/sec
  • PCIe 3.0 Speeds
    • Read Speed of 3400MB/sec
    • Write Speed of 2750MB/sec
  • 750,000 IOPS on 2TB Model
  • Endurance: 3,600TBW for the 2TB model, 1,800TBW for 1TB, 850TBW for 500GB
  • Available in 500GB, 1TB, 2TB
  • Made in Taiwan

Features:

  • NVMe M.2 2280 Interface for PCIe 4.0 (NVMe 1.3 Compliant)
  • APST, ASPM, L1.2 Power Management Support
  • Includes SMART and TRIM Support
  • ONFi 2.3, ONFi 3.0, ONFi 3.2 and ONFi 4.0 interface
  • Includes Advanced Wear Leveling, Bad Block Management, Error Correction Code, and Over-Provision
  • User Upgradeable Firmware
  • Software Tool to change Block Size

Where and how to buy

One of the perks of owning an IT company is that you can typically purchase products for internal use at cost or at a discount; unfortunately, this was not the case.

I was unable to find the Sabrent products through any of the standard distribution channels and had to purchase through Amazon. This is unfortunate because I wouldn’t mind being able to sell these units to customers.

Amazon Purchase Links (2TB Model)

The PART#s are as follows for the different sizes:

  • 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive (2TB)
    • PART# (No Heatsink): SB-ROCKET-NVMe4-2TB
    • PART# (Heatsink): SB-ROCKET-NVMe4-HTSK-2TB
  • 1TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive (1TB)
    • PART# (No Heatsink): SB-ROCKET-NVMe4-1TB
    • PART# (Heatsink): SB-ROCKET-NVMe4-HTSK-1TB
  • 500GB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive (500GB)
    • PART# (No Heatsink): SB-ROCKET-NVMe4-500
    • PART# (Heatsink): SB-ROCKET-NVMe4-HTSK-500

Sabrent Rocket 4 Part Number Lookup Table

Cost

At the time of writing, the 2TB model on Amazon Canada would set you back $699.99 CAD for a single unit; however, there was a sale going on for $529.99 CAD per unit.

Additionally, at the time of writing, the 2TB model on Amazon USA would set you back $399.98 USD.

A total quantity of 4 set me back around $2,119.96 CAD on sale, versus $2,799.96 CAD at regular price.

If you’re familiar with NVMe pricing, you’ll notice that this pricing is extremely attractive when comparing to other high performance NVMe SSDs.

Unboxing

I have to say I was very impressed with the packaging! Small, sleek, and impressive!

A picture of Sabrent Rocket 4 2TB NVMe sticks metal case packaging
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive Metal Case Packaging

Initially I was surprised how small the boxes were as they fit in the palm of your hand, but then you realize how small the NVMe sticks are, so it makes sense.

Opening the box you are presented with a beautiful metal case containing the instructions, information on the product warranty, and more.

Picture of a Sabrent Rocket 4 case opened
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive in case

And here’s the NVMe stick removed from its case:

Picture of a Sabrent Rocket 4 case opened and the NVME SSD removed
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive removed from case

While some of the packaging may be unnecessary, after further thought I realized it’s great to have, as you can re-use it to keep the NVMe drives safe when putting them into storage.

And here’s a beautiful shot of 8TB of NVMe storage.

A picture of 8TB of total storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive
8TB Total NVMe storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Now let’s move on to usage!

Installation, Setup, and Configuration

Setting one of these up in my HP Workstation was super easy. You simply populate the NVMe M.2 slot, install the screw, and boot up the system.

Picture of a Sabrent Rocket PCIe4 NVMe 2TB SSD Installed in computer
Sabrent Rocket 4 NVMe 2TB SSD in HP Z240 SFF Workstation

Upon booting, the Sabrent SSD was available inside of the Device Manager. I read on their website that they had some utilities so I wanted to see what configuration options I had access to before moving on to speed test benchmarks.

All Sabrent Rocket utilities can be downloaded from their website at https://www.sabrent.com/downloads/.

Sabrent Sector Size Converter

The Sabrent Sector Size Converter utility allows you to configure the sector size of your Sabrent Rocket SSD. Out of the box, I noticed mine was configured with a 512e sector format, which I promptly changed to 4K.

Screenshot using the Sabrent Sector Size Converter to change SSD from 512e to 4K Sector Size
Sabrent Sector Size Converter v1.0

The change was easy, required a restart and I was good to go! You’ll notice it has a drop down to select which drive you want to modify, which is great if you have more than one SSD in your system.

I did notice one issue… When you have multiple of these in one system (in my case, 4), for some reason the sector size change utility had trouble changing one of them from 512e to 4K. It would appear to be successful, but stay at 512e on reboot. Ultimately I removed all of the NVMe sticks with the exception of the problematic one, ran the utility again, and the issue was resolved.

Sabrent Rocket Control Panel

Another useful utility that was available for download is the Sabrent Rocket Control Panel.

Screenshot of the Sabrent Rocket Control Panel
Sabrent Rocket Control Panel

The Sabrent Rocket Control Panel provides the following information:

  • Drive Size, Sector Size, Partition Count
  • Serial Number and Drive identifier
  • Feature status (TRIM Support, SMART, Product Name)
  • Drive Temperature
  • Drive Health (Lifespan)

You can also use this app to view S.M.A.R.T. information, flash updated Sabrent firmware, and more!

Now that we have this all configured, let’s move on to testing this SSD out!

Speed Tests and Benchmarks

The system we used to benchmark the Sabrent Rocket 4 2TB NVMe SSD is an HP Z240 SFF (Small Form Factor) workstation.

The specs of the Z240 Workstation:

  • Intel Xeon E3-1240 v5 @ 3.5GHz
  • 16GB of RAM
  • Samsung EVO 500GB as OS Drive
  • Sabrent Rocket 4 NVMe 2TB SSD as Test Drive

I ran a few tests using both CrystalDiskMark and ATTO Disk Benchmark, and the NVMe SSD performed flawlessly at extreme speeds!

CrystalDiskMark Results

Loading up and benching with CrystalDiskMark, we see the following results:

Screenshot of speedtest and benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark Results

As you can see, the Sabrent Rocket 4 2TB NVMe tested at a read speed of 3175.63MB/sec and write speed of 3019.17MB/sec.

Screenshot of IOPS benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark IOPS Results

Using the Peak Performance profile, we see some amazing I/O: 613,171.14 IOPS read and 521,861.33 IOPS write with RND4K.

While we’re only testing on a PCIe 3.0 system, these numbers are still amazing and in line with what’s advertised.

ATTO Disk Benchmark Results

Switching over to ATTO Disk Benchmark, we test both speed and IOPS.

First, the speed benchmarks with I/O sized 4K to 12MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 4K to 12MB

After taking a short cooldown break (we don’t have a heatsink installed), we tested 12MB to 64MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 12MB to 64MB

And now we move on to analyze the IO/s.

First from 4K to 12MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 4K to 12MB

And then after a short break, 12MB to 64MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 12MB to 64MB

Those numbers are insane!

Additional Notes

When you purchase a new Sabrent Rocket 4 SSD, it comes with a 1-year standard warranty; however, if you register your product within 90 days of purchase, you can extend it to an awesome 5-year warranty.

To register your product, visit https://www.sabrent.com/product-registration/

The process is easy if you have one device; however, it is very repetitive and takes time if you have multiple, as the steps have to be repeated for each device. Sabrent, if you’re listening, a batch registration tool would be nice! 🙂

Remember that after registering your product, you should record your “Registration Unique ID” for future reference and use.

Conclusion

All in all, I’d definitely recommend the Sabrent Rocket 4 NVMe SSD! It provides extreme performance, is extremely cost-effective, and I don’t see any reason not to buy one.

Just remember that these SSDs (like all consumer SSDs) do not provide power loss protection, meaning you should not use these in enterprise environments (or in a NAS or SAN).

I’m really looking forward to using these in my upcoming blog and YouTube projects.

Apr 122020
 
Picture of Raspberry Pi 4 box and Raspberry Pi 4 board below box

If you’re worried about destroying your SD Cards, need some more space, or just want to learn something new, I’m going to show you how to use an NFS root for the Raspberry Pi 4.

When you use an NFS Root with your Raspberry Pi, it stores the entire root filesystem on a remote NFS export (think of it as a network filesystem share). This means you’ll have as much space as the NFS export, and you’ll probably see way faster performance since it’ll be running at 1Gb/sec instead of the speed of the SD Card.

This also protects your SD card, as the majority of the reading and writing is performed on the physical storage of the NFS export, instead of the SD card in the Pi which has limited reads and writes.

What you’ll need

To get started, you’ll need:

  • Raspberry Pi 4
  • Ubuntu or Raspbian for Raspberry Pi 4 Image
  • A small SD card for the Boot Partition (1-2GB)
  • SD card for the Raspberry Pi Linux image
  • Access to another Linux system (workstation, or a Raspberry Pi)

There are multiple ways to do this, but I’m providing instructions for the easiest way I found, using the resources I had immediately available.

Instructions

To boot your Raspberry Pi 4 from an NFS root, multiple steps are involved. Below you’ll find the summary, and further down you’ll find the full instructions. You can click on an item below to go directly to the section.

The process:

  1. Write the Linux image to an SD Card
  2. Create boot SD Card for NFS Root
  3. Prep the Linux install for NFS Root
  4. Create the NFS Export
  5. Copy the Linux install to the NFS Export
  6. Copy and Modify the boot SD Card to use NFS Root
  7. Boot using SD Card and test NFS Root

See below for the individual instructions for each step.

Write the Linux image to an SD Card

First, we need to write the Linux image to your SD card. You’ll need to know which device the SD card appears as on your computer. In my case it was /dev/sdb; make sure you verify the correct device, or you could damage your current Linux install.

  1. Download Ubuntu or Raspbian for Raspberry Pi.
  2. Run unzip or unxz (depending on the distribution) to uncompress the image file.
  3. Write the SD card image to SD card.
    dd if=imagename.img of=/dev/sdb bs=4M

You now have an SD Card Linux install for your Raspberry Pi. We will later modify and then copy this to the NFS root and boot SD card.
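For reference, here’s the full sequence I’d run from the Linux workstation, shown as a minimal sketch (the image filename and /dev/sdb below are placeholders, so substitute your own):

# Uncompress the image (use unzip instead if your download is a .zip)
unxz ubuntu-raspi.img.xz
# Write it to the SD card, showing progress and flushing everything to the card before exiting
sudo dd if=ubuntu-raspi.img of=/dev/sdb bs=4M status=progress conv=fsync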

Create boot SD Card for NFS Root

In this step, we’re going to create a bootable SD card that contains the Linux kernel and other needed files for the Raspberry Pi to boot.

This card will be installed in the Pi, load the kernel, and then kick off the boot process to load the NFS root.

I previously created a post to create a boot partition layout for a Raspberry Pi. Please follow those instructions to complete this step.

Later on in this guide, you’ll be copying the boot partition from the SD Card Linux image, on to this newly created boot SD Card for the NFS Root.

Prep the Linux install for NFS Root

There’s a few things we have to do to prep the Ubuntu or Raspbian Linux install to be usable as an NFS Root.

  1. Boot the Raspbian or Ubuntu SD Card you created in the first step on your Raspberry Pi.
  2. Complete the first boot procedures. Create your account, and complete the setup.
  3. Enable and confirm SSH is working so you can troubleshoot.
  4. Install the NFS client files using the following command:
    apt install nfs-common
  5. Open the /etc/network/interfaces file, and add the following line so that the Pi only gets an IP once during boot:
    iface eth0 inet manual
  6. Modify your /etc/fstab entries to reflect the NFS root and the new boot SD card as per below.

For step 6, we need to modify the /etc/fstab entry for the root fs. It is different depending on whether you’re using Ubuntu or Raspbian.

For Raspbian, your /etc/fstab should look like this:

proc /proc proc defaults 0 0
LABEL=boot /boot vfat defaults 0 2
NFS-SERVER-IP:/nfs-export/PI-Raspbian / nfs defaults 0 0

For Ubuntu, your /etc/fstab should look like this:

LABEL=system-boot /boot/firmware vfat defaults 0 2
/dev/nfs / nfs defaults 0 0

After you do this, the Linux SD image may not boot again if directly installed in the Raspberry Pi, so make sure you’ve made the proper modifications before powering it down.

Create the NFS Export

In my case I used a Synology DS1813+ as an NFS server to host my Raspberry Pi NFS root images. But you can use any Linux server to host it.

If you’re using a Synology DiskStation, create a shared folder, disable the recycle bin, and leave everything else at default. Head over to the “NFS Permissions” tab and create an ACL entry for your Pi and workstations. You can also add a network segment for your entire network (e.g. 192.168.0.0/24) instead of specifying individual IPs.

Screenshot of Synology Create NFS rule for ACL
Create an NFS ACL Rule for Synology NFS Access

Once you create an entry, it’ll look like this. Note the “Mount path” in the lower part of the window.

Screenshot of NFS Shared Folder Permissions and Mount Point on Synology NAS
NFS Permissions and Mount Path for NFS Export

Now, if you’re using a standard Linux server the steps are different.

  1. Install the required NFS packages:
    apt install nfs-kernel-server
  2. Create a directory, we’ll call it “nfs-export” on your root fs on the server:
    mkdir /nfs-export/
  3. Then create a directory for the Raspberry Pi NFS Root:
    mkdir /nfs-export/PI-ImageName
  4. Now edit your /etc/exports file and add this line to the file to export the path:
    /nfs-export/PI-ImageName     IPorNetworkRange(rw,no_root_squash,async,insecure)
  5. Reload the NFS exports for the changes to take effect:
    exportfs -ra

Take note of the mount point and/or NFS export path, as this is the directory your Raspberry Pi will need to mount to access its NFS root. This is also the directory you will be copying your SD Card Linux install root FS to.
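Before moving on, it’s worth confirming the export is actually visible from a client. Here’s a quick check from any Linux machine with the NFS client tools installed (the server IP below is just an example):

# List the exports the NFS server is publishing
showmount -e 192.168.0.50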

Copy the Linux install to the NFS Export

When you’re ready to copy your SD Card Linux install to your NFS Export, you’ll need to do the following. In my case I’ll be using an Ubuntu desktop computer to perform these steps.

When I inserted the SD card containing the Raspberry Pi Linux image, it appeared as /dev/sdb on my system. Please make sure you use the correct device names so you don’t write to or wipe the wrong disk.

Instructions to copy the root fs from the SD card to the NFS root export:

  1. Mount the root partition of the SD Card Linux install to a directory. In my case I used directory called “old”.
    mount /dev/sdb2 old/
  2. Mount the NFS Export for the NFS Root to a directory. In my case I used a directory called “nfs”.
    mount IPADDRESS:/nfs-export/PI-ImageName nfs/
  3. Use the rsync command to transfer the SD card Linux install to the NFS Root Export.
    rsync -avxHAXS --numeric-ids --info=progress2 --progress old/ nfs/
  4. Unmount the directories.
    umount old/
    umount nfs/

Once this is complete, your OS root is now copied to the NFS root.
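If you want to sanity-check the transfer before unmounting, a quick comparison of the top-level directory listings works well (a sketch, assuming the same old/ and nfs/ mount points used above):

# Both listings should show the same root filesystem layout (bin, etc, home, usr, var, ...)
diff <(ls old/) <(ls nfs/)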

Copy and Modify the boot SD Card to use NFS Root

First we have to copy the boot partition from the SD Card Linux install to the boot SD card, then we need to modify the contents of the new boot SD card.

To copy the boot files, follow these instructions.

  1. Mount the boot partition of the SD Card Linux install to a directory. In my case I used directory called “old”.
    mount /dev/sdb1 old/
  2. Mount the new boot partition of the boot SD card to a new directory. In my case I used the directory called “new”.
    mount /dev/sdc1 new/
  3. Use the rsync command to transfer the SD card Linux install boot partition to the new boot SD card.
    rsync -avxHAXS --numeric-ids --info=progress2 --progress old/ new/
  4. Unmount the directories.
    umount old/
    umount new/

Now there are a few steps we have to take to make the boot SD card boot to an NFS root.

We have to make a modification to the PI boot command. It is different depending on which Linux image (Ubuntu or Raspbian) you’re using.

First, insert the boot SD card, and mount it to a temporary directory.

mount /dev/sdc1 new/

If you’re running Ubuntu, your existing nobtcmd.txt should look like this:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

We’ll modify and replace some text to make it look like this. Don’t forget to change the command to reflect your IP and directory:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=IPADDRESS:/nfs-export/PI-Ubuntu,tcp,rw ip=dhcp rootfstype=nfs elevator=deadline rootwait

For Raspbian, your existing cmdline.txt should look like this:

console=serial0,115200 console=tty1 root=PARTUUID=97709164-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

We’ll modify and replace some text to make it look like this. Don’t forget to change the command to reflect your IP and directory:

console=serial0,115200 console=tty1 root=/dev/nfs nfsroot=IPADDRESS:/nfs-export/PI-Raspbian,tcp rw vers=3 ip=dhcp rootfstype=nfs elevator=deadline rootwait

Once you make the modifications, save the file and unmount the SD card.

Your SD card is now ready to boot.

Boot using SD Card and test NFS Root

At this point, insert the boot SD Card in your Raspberry Pi and attempt to boot. All should be working now and it should boot and use the NFS root!
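Once it’s booted, you can confirm the root filesystem really is coming from the NFS export by checking the mount table over SSH (just a quick sanity check):

# The root mount should show a fstype of nfs or nfs4, pointing at your server's export path
findmnt /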

If you’re having issues, if the boot process stalls, or something doesn’t work right, look back and confirm you followed all the steps above properly.

You’re done!

You’re now done and have a fully working NFS root for your Raspberry Pi. You’ll no longer worry about storage, you’ll have high-speed access to it, and you’ve picked up some new skills!

And don’t forget to check out these Handy Tips, Tricks, and Commands for the Raspberry Pi 4!

Aug 122019
 
DS1813+

Around a month ago I decided to turn on and start utilizing NFS v4.1 (Version 4.1) in DSM on my Synology DS1813+ NAS. As most of you know, I have a vSphere cluster with 3 ESXi hosts, which are backed by an HPE MSA 2040 SAN, and my Synology DS1813+ NAS.

The reason why I did this was to test the new version out, and attempt to increase both throughput and redundancy in my environment.

If you’re a regular reader, you know from my original plans (post here), and then from my later issues with iSCSI (post here), that I ultimately set up my Synology NAS to act as an NFS datastore. At the moment I use my HPE MSA 2040 SAN for hot storage, and the Synology DS1813+ for cold storage. I’ve been running this way for a few years now.

Why NFS?

Some of you may ask why I chose to use NFS. Well, I’m an iSCSI kind of guy, but I’ve had tons of issues with iSCSI on DSM, especially with MPIO on the Synology NAS. The overhead on the unit was horrible (a result of the NAS’s limited hardware) for both block and file access to iSCSI targets (block target vs. virtualized fileio target).

I also found a major issue where, if one of the drives was dying or dead, the NAS wouldn’t report it as failed, and it would bring the iSCSI target to a complete halt, resulting in days spent troubleshooting before finally replacing the drive once it was identified as the cause.

After spending forever trying to tweak and optimize, I found that NFS was best for the Synology NAS unit of mine.

What’s this new NFS v4.1 thing?

Well, it’s not actually that new! NFS v4.1 was released in January 2010 and aimed to support clustered environments (such as virtualized environments, vSphere, ESXi). It includes a session trunking mechanism, also known as NFS multipathing.

We all love the word multipathing, don’t we? As most of you iSCSI and virtualization people know, we want multipathing on everything. It provides redundancy as well as increased throughput.

How do we turn on NFS Multipathing?

According to the VMware vSphere product documentation (here)

While NFS 3 with ESXi does not provide multipathing support, NFS 4.1 supports multiple paths.


NFS 3 uses one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or hostname for the NFS server, and does not support multiple paths. Depending on your network infrastructure and configuration, you can use the network stack to configure multiple connections to the storage targets. In this case, you must have multiple datastores, each datastore using separate network connections between the host and the storage.


NFS 4.1 provides multipathing for servers that support the session trunking. When the trunking is available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not supported.

So it is supported! Now what?

In order to use NFS multipathing, the following must be present:

  • Multiple NICs configured on your NAS with functioning IP addresses
  • A gateway is only configured on ONE of those NICs
  • NFS v4.1 is turned on inside of the DSM web interface
  • A NFS export exists on your DSM
  • You have a version of ESXi that supports NFS v4.1

So let’s get to it!

Enabling NFS v4.1 Multipathing

  1. First log in to the DSM web interface, and configure your NIC adapters in the Control Panel. As mentioned above, only configure the default gateway on one of your adapters.
     Screenshot: Synology Multiple NICs Configured
  2. While still in the Control Panel, navigate to “File Services” on the left, expand NFS, and check both “Enable NFS” and “Enable NFSv4.1 support”. You can leave the NFSv4 domain blank.
     Screenshot: Enabling NFSv4.1 on Synology DSM
  3. If you haven’t already configured an NFS export on the NAS, do so now. No further special configuration for v4.1 is required other than the norm.
  4. Log on to your ESXi host, go to storage, and add a new datastore. Choose to add an NFS datastore.
  5. On the “Select NFS version” screen, select “NFS 4.1”, and select next.
     Screenshot: Selecting the NFS version on the Add Datastore dialog box on ESXi
  6. Enter the datastore name, the folder on the NAS, and enter the Synology NAS IP addresses, separated by commas. Example below:
     Screenshot: New NFS Datastore details and configuration on the ESXi dialog box
  7. Press the green “+” and you’ll see it spread the entries to the “Servers to be added” list, each server entry reflecting an IP on the NAS. (Please note I made a typo on one of the IPs.)
     Screenshot: List of Servers/IPs for NFS Multipathing on the ESXi Add Datastore dialog box
  8. Follow through with the wizard, and it will be added as a datastore.

That’s it! You’re done and are now using NFS Multipathing on your ESXi host!
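If you prefer the command line over the vSphere UI, the same NFS 4.1 datastore can also be mounted with esxcli. This is only a sketch; the IP addresses, export path, and datastore name below are placeholders for your own values:

# Mount an NFS 4.1 datastore using multiple server IPs (session trunking / multipathing)
esxcli storage nfs41 add -H 192.168.0.50,192.168.0.51 -s /volume1/NFS-Datastore -v Synology-NFS41
# Verify the datastore mounted and both hosts are listed
esxcli storage nfs41 list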

In my case, I have all 4 NICs in my DS1813+ configured and connected to a switch. My ESXi hosts have 10Gb DAC connections to that switch, and can now utilize it at faster speeds. During intensive I/O loads, I’ve seen the full aggregated network throughput hit and sustain around 370MB/s.

After resolving the issues mentioned below, I’ve been running for weeks with absolutely no problems, and I’m enjoying the increased speed to the NAS.

Additional Important Information

After enabling this, I noticed that memory usage had drastically increased on the Synology NAS, peaking whenever my ESXi hosts restarted. The issue escalated to the NAS running out of memory (both physical and swap) and ultimately crashing.

After weeks of troubleshooting I found the processes that were causing this. While the processes themselves were unrelated, the issue would only occur when using NFS v4.1 with multipathing. To resolve it, I had to remove the “pkgctl-SynoFinder” package and disable the services. I could do this in my environment because I only use the NAS for NFS and iSCSI. This resolved the issue. I created a blog post here outlining how to resolve it, and I further optimized the NAS and its memory usage by disabling other unneeded services in a post here, targeted at other users like myself who only use the unit for NFS/iSCSI.

Leave a comment and let me know if this post helped!

Jul 312019
 

If you’re like me and use a Synology NAS as an NFS or iSCSI datastore for your VMware environment, you want to optimize it as much as possible to reduce any hardware resource utilization.

Specifically we want to disable any services that we aren’t using which may use CPU or memory resources. On my DS1813+ I was having issues with a bug that was causing memory overflows (the post is here), and while dealing with that, I decided to take it a step further and optimize my unit.

Optimize the NAS

In my case, I don’t use any file services, and only use my Synology NAS (Synology DS1813+) as an NFS and iSCSI datastore. Specifically I use multipath for NFSv4.1 and iSCSI.

If you don’t use SMB (Samba / Windows File Shares), you can make some optimizations which will free up substantial system resources.

Disable and/or uninstall unneeded packages

First step, open up the “Package Center” in the web GUI and either disable, or uninstall all the packages that you don’t need, require, or use.

To disable a package, select the package in Package Center, then click on the arrow beside “Open”. A drop down will open up, and “Disable” or “Stop” will appear if you can turn off the service. This may or may not be persistent on a fresh boot.

To uninstall a package, select the package in Package Center, then click on the arrow beside “Open”. A drop down will open up, and “Uninstall” will appear. Selecting this will uninstall the package.

Disable the indexing service

As mentioned here, the indexing service can consume quite a bit of RAM/memory and CPU on your Synology unit.

To stop this service, SSH in to the unit as admin, then use the command “sudo su” to get a root shell, and finally run this command.

synoservice --disable pkgctl-SynoFinder

The above command will probably not persist on boot and needs to be run after each fresh boot. You can, however, uninstall the package with the command below to completely remove it.

synopkg uninstall SynoFinder

Doing this will free up substantial resources.
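A quick way to double-check is to look for any leftover indexing processes from the same SSH session. The process name pattern below is an assumption based on what I’ve seen, so adjust it if needed:

# Should return nothing once the indexing service is disabled or uninstalled
ps aux | grep -i synoindex | grep -v grep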

Disable SMB (Samba), and NMBD

I noticed that both smbd and nmbd (Samba/Windows File Share Services) were consuming quite a bit of CPU and memory as well. I don’t use these, so I can disable them.

To disable them, I ran the following command in an SSH session (remember to “sudo su” from admin to root).

synoservice --disable nmbd
synoservice --disable samba

Keep in mind that while this should be persistent on boot, it wasn’t on my system. Please see the section below on how to make it persistent on boot.

Disable thumbnail generation (thumbd)

When viewing processes on the Synology NAS and sorting by memory, there are numerous “thumbd” processes (sometimes over 10). These processes handle thumbnail generation for the File Station viewer.

Since I’m not using this, I can disable it. To do this, we either have to rename or delete the following file. I do recommend making a backup of the file.

/var/packages/FileStation/target/etc/conf/thumbd.conf

I’m going to rename it so that the service daemon can’t find it when it initializes, which causes the process not to start on boot.

cd /var/packages/FileStation/target/etc/conf/
mv thumbd.conf thumbd.conf.bak

Doing the above will stop it from running on boot.
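After a reboot, you can double-check that the thumbnail processes are gone:

# Should return nothing once thumbd.conf has been renamed and the NAS has rebooted
ps aux | grep thumbd | grep -v grep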

Make the optimizations persistent on boot

In this section, I will show you how to make all of the settings above persistent on boot. Even though I have removed the SynoFinder package, I will still create a startup script on the Synology NAS to “disable” it, just to be safe.

First, SSH in to the unit, and run “sudo su” to get a root shell.

Run the following commands to change directory to the startup script, and open a text editor to create a startup script.

cd /usr/local/etc/rc.d/
vi speedup.sh

While in the vi file editor, press “i” to enter insert mode. Copy and paste the code below:

#!/bin/sh
# Startup script to disable unneeded services on each boot

case "$1" in
    start)
        echo "Turning off memory garbage"
        synoservice --disable nmbd
        synoservice --disable samba
        synoservice --disable pkgctl-SynoFinder
        ;;
    stop)
        echo "Pretend we care and are turning something on"
        ;;
    *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
esac
exit 0

Now press escape, then type “:wq” and hit enter to save and close the vi text editor. Run the following command to make the script executable.

chmod 755 speedup.sh
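If you want the changes applied immediately without waiting for a reboot, you can simply run the script by hand from the same root shell:

/usr/local/etc/rc.d/speedup.sh start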

That’s it!

Conclusion

After making the above changes, you should see a substantial performance increase and reduction in system resources!

In the future I plan on digging deeper into optimization, as I still see other services I may be able to trim down after confirming they aren’t essential to the function of the NAS.

Feel like you can add anything? Leave a comment!