May 25 2020
 
Picture of an IOCREST IO-PEX40152 PCIe x16 to Quad M.2 NVMe

Looking to add quad (4) NVMe SSDs to your system but don’t have the M.2 slots or a motherboard that supports bifurcation? The IOCREST IO-PEX40152 quad NVMe PCIe card is the card for you!

The IO-PEX40152 PCIe card allows you to add 4 NVMe SSDs to a computer, workstation, or server that has an available PCIe x16 slot. The card has a built-in PEX PCIe switch chip, so your motherboard does not need to support bifurcation. It can essentially be installed and used in any system with a free x16 slot.

This card is also available under the PART# SI-PEX40152.

In this post I’ll be reviewing the IOCREST IO-PEX40152, with information on where to buy it, benchmarks, installation, configuration, and more! I’ve also posted tons of pics for your viewing pleasure. I installed this card in an HPE DL360p Gen8 server to add NVMe capability and build an NVMe-based storage server.

We’ll be using and reviewing this card populated with 4 x Sabrent Rocket 4 PCIe NVMe SSDs; you can read my individual review of those SSDs here.

Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

Why and How I purchased the card

Originally I purchased this card for a couple of special and interesting projects I’m working on for the blog and my homelab. I needed a card that provided high-density NVMe flash storage but didn’t require bifurcation, since I planned to use it on a motherboard that didn’t support 4/4/4/4 bifurcation.

By choosing this specific card, I could also use it in any other system that had an available x16 PCIe slot.

I considered many other cards (such as some from SuperMicro and Intel), but in the end chose this one as it seemed most likely to work for my application. The choices from SuperMicro and Intel looked like they were designed to be used on their own systems.

I purchased the IO-PEX40152 from the IOCREST AliExpress store (after verifying it was their genuine online store), which had the most cost-effective price of the sources I found.

They shipped the card with FedEx International Priority, so I received it within a week. Super fast shipping and it was packed perfectly!

Picture of the IOCREST IO-PEX40152 box
IOCREST IO-PEX40152 Box

Where to buy the IO-PEX40152

I found 3 different sources to purchase the IO-PEX40152 from:

  1. IOCREST AliExpress Store – https://www.aliexpress.com/i/4000359673743.html
  2. Amazon.com – https://www.amazon.com/IO-CREST-Non-RAID-Bifurcation-Controller/dp/B083GLR3WL/
  3. Syba USA – Through their network of resellers or distributors at https://www.sybausa.com/index.php?route=information/wheretobuy

Note that Syba USA is selling the IO-PEX40152 as the SI-PEX40152. The card I actually received has branding that identifies it both as an IO-PEX40152 and an SI-PEX40152.

As I mentioned above, I purchased it from the IOCREST AliExpress online store for around $299.00 USD. From Amazon, the card was around $317.65 USD.

IO-PEX40152 Specifications

Now let’s talk about the technical specifications of the card.

Picture of the IOCREST IO-PEX40152 Side Shot with cover on
IO-PEX40152 Side Shot

According to the packaging, the IO-PEX40152 features the following:

  • Installation in a PCIe x16 slot
  • Supports PCIe 3.1, 3.0, 2.0
  • Compliant with PCI Express M.2 specification 1.0, 1.2
  • Supports data transfer up to 2.5Gb (250MB/sec), 5Gb (500MB/sec), 8Gb (1GB/sec)
  • Supports 2230, 2242, 2260, 2280 size NGFF SSD
  • Supports four (4) NGFF M.2 M Key sockets
  • 4 screw holes 2230/2242/2260/2280 available to fix NGFF SSD card
  • 4 screw holes available to fix PCB board to heatsink
  • Supports Windows 10 (and 7, 8, 8.1)
  • Supports Windows Server 2019 (and 2008, 2012, 2016)
  • Supports Linux (Kernel version 4.6.4 or above)

While this list of features and specs comes from the website and packaging, I’m not sure how accurate some of the statements are (in a good way). I’ll cover that in more detail later in the post.

What’s included in the packaging?

  • 1 x IO-PEX40152 PCIe x16 to 4 x M.2 (M-Key) card
  • 1 x User Manual
  • 1 x M.2 Mounting material
  • 1 x Screwdriver
  • 5 x self-adhesive thermal pad

They also note that contents may vary depending on country and market.

Unboxing, Installation, and Configuration

As mentioned above, our build includes:

  • 1 x IOCREST IO-PEX40152
  • 4 x Sabrent Rocket 4 PCIe NVMe SSDs
Picture of IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD

You’ll notice it’s a very sleek looking card. The heatsink is beefy, heavy, and very metal (duh)! The card itself uses a nice black PCB.

Removing the 4 screws to release the heatsink, we see the card and its thermal pads. You’ll also notice the PCIe switch chip.

Picture of the front side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Frontside of card

And the backside of the card.

Picture of the back side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Backside of card

NVMe Installation

I then started installing the Sabrent Rocket 4 NVMe 2TB SSDs.

Picture of a IO-PEX40152 with 2 SSD populated
IO-PEX40152 with 2 SSD populated
Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

That’s a good looking 8TB of NVMe SSD!

Note that the SSDs will wiggle side to side and have some play until the retaining screws are tightened. Do not over-tighten the screws!

Before installing the heatsink cover, make sure you remove the blue plastic film from the thermal pads that sit between the NVMe SSDs and the heatsink, and between the PEX chip and the heatsink.

After that, I installed it in the server and was ready to go!

Heatsink and cooling

A quick note on the heatsink and cooling…

While the heatsink and cooling solution it comes with works amazingly well, you also have the flexibility to run the card without the heatsink and fan if needed (disconnecting the fan doesn’t trigger any warnings). This works out great if you want to use your own cooling solution, or need to use the card in a system where there isn’t much space. The fan can be removed by removing its screws and disconnecting the power connector.

Note that once the NVMe SSDs are installed and the heatsink is affixed, the thermal pads will stick the heatsink to the SSDs. If you do need to remove the heatsink at a later date, be very patient and careful, and work it off slowly to avoid damaging or cracking the NVMe SSDs or the PCIe card itself.

Speedtest and benchmark

Let’s get to one of the best parts of this review, using the card!

Unfortunately, due to circumstances I won’t get into, I only had access to a rack server to test the card. The server was running VMware vSphere with ESXi 6.5 U3.

After shutting down the server, installing the card, and powering it back on, the NVMe SSDs appeared as devices available for PCI passthrough. I enabled passthrough and restarted the host again. I then added the 4 individual NVMe drives as PCI passthrough devices to the VM.

Picture of IOCREST IO-PEX40152 passthrough with NVMe to VMware guest VM
IO-PEX40152 PCI Passthrough on VMware vSphere and ESXi
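
If you want to confirm the host actually sees all four SSDs behind the switch, a quick check from an SSH session on the ESXi host works well. This is just a sketch; it assumes SSH is enabled on the host, and the exact device description strings vary by build and SSD model:

# List PCI devices on the host; the four SSDs typically show up as
# NVMe / Non-Volatile memory controllers sitting behind the PEX switch
lspci | grep -iE "nvme|non-volatile"
esxcli hardware pci list | grep -iE "nvme|non-volatile"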

Powering on the VM, we are presented with the NVMe drives inside of “Device Manager” on Windows Server 2016.

A screenshot of an IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016
IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016

Now that was easy! Everything’s working perfectly…

Now we need to go into Disk Manager and create some volumes for some quick speed tests and benchmarks.

A screenshot of Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

Again, no problems and very quick!

Let’s load up CrystalDiskMark and test the speed and IOPS!

Screenshot of CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD for speed
CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Screenshot of CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

What’s interesting is that I was able to achieve much higher speeds using this card in an older system than by installing one of the SSDs directly in a newer HP Z240 workstation. Unfortunately, due to CPU limitations on the server used above (the benchmarks maxed out the CPU), I could not fully test or benchmark the IOPS of an individual SSD.

Additional Notes on the IO-PEX40152

Some additional notes I have on the IO-PEX40152:

The card works perfectly with VMware ESXi PCI passthrough when passing the individual NVMe SSDs through to a VM.

While the specifications state data transfer of up to 1GB/sec, I achieved over 3GB/sec using the Sabrent Rocket 4 NVMe SSDs.
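
That gap makes sense if you work out the per-socket bandwidth. Assuming the switch gives each M.2 socket the usual PCIe 3.0 x4 link (an assumption on my part, but consistent with the speeds measured here), the rough math works out to:

8 GT/sec per lane x (128/130 encoding) ≈ 0.985 GB/sec per lane
0.985 GB/sec x 4 lanes ≈ 3.9 GB/sec per M.2 socket

So the 1GB/sec figure on the box appears to describe a single PCIe 3.0 lane rather than the full x4 socket, which is why a fast SSD like the Rocket 4 can push well past it.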

While the specifications and features state support for NVMe spec 1.0 and 1.1, I don’t see why it wouldn’t support newer specifications, as it’s simply a PCIe switch wired to the M.2 slots.

Conclusion

This is a fantastic card that you can use reliably in any system with a free x16 slot. Because it has a built-in PCIe switch and doesn’t require PCIe bifurcation, you can confidently use it knowing it will work.

I’m looking forward to buying a couple more of these for some special applications and projects I have lined up. Stay tuned for those!

May 22 2020
 
A Picture of the 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Today we’re going to be talking about Sabrent’s newest line of NVMe storage products, particularly the 2TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive, or the Sabrent Rocket 4 2TB NVMe stick, as I like to call it.

Last week I purchased 4 of these for a total of 8TB of NVMe storage to use with an IOCREST IO-PEX40152 quad NVMe PCIe card. For the purpose of this review, we’re benchmarking one inside of an HP Z240 workstation.

While these are targeted at users with a PCIe 4.0 interface, I’ll be using them on PCIe 3.0, since the drives are backwards compatible. I purchased the PCIe 4.0 option to make sure the investment was future-proofed.

Keep reading for a bunch of pictures, specs, speed tests, benchmarks, information, and more!

A picture of 4 unopened boxes of Sabrent Rocket 4 2TB NVMe sticks
4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Let’s get started with the review!

How and Why I purchased these

I’ve been working on a few special top-secret projects for the blog and YouTube channel, and needed some cost-effective yet high-performing NVMe storage.

I needed at least 8TB of NVMe flash, and as I’m sure you’re all aware, NVMe isn’t cheap.

After around a month of research I finally decided to pull the trigger and purchase 4 x Sabrent Rocket 4 NVMe 2TB SSDs. For future projects I’ll be using these in an IOCREST IO-PEX40152 NVMe PCIe card.

These NVMe SSDs are targeted for consumers (normal users, gamers, power users, and IT professionals) and are a great fit! Just remember these do not have PLP (power loss protection), which is a feature that isn’t normally found in consumer SSDs.

Specifications

See below for the specifications and features included with the Sabrent Rocket 4 2TB NVMe SSD.

Hardware Specs:

  • Toshiba BiCS4 96L TLC NAND Flash Memory
  • Phison PS5016-E16 PCIe 4.0 x4 NVMe 1.3 SSD Controller
  • Kioxia 3D TLC NAND
  • M.2 2280 Form Factor
  • PCIe 4.0 Speeds
    • Read Speed of 5000MB/sec
    • Write Speed of 4400MB/sec
  • PCIe 3.0 Speeds
    • Read Speed of 3400MB/sec
    • Write Speed of 2750MB/sec
  • 750,000 IOPS on 2TB Model
  • Endurance: 3,600TBW for 2TB, 1,800TBW for 1TB, 850TBW for 500GB
  • Available in 500GB, 1TB, 2TB
  • Made in Taiwan

Features:

  • NVMe M.2 2280 Interface for PCIe 4.0 (NVMe 1.3 Compliant)
  • APST, ASPM, L1.2 Power Management Support
  • Includes SMART and TRIM Support
  • ONFi 2.3, ONFi 3.0, ONFi 3.2 and ONFi 4.0 interface
  • Includes Advanced Wear Leveling, Bad Block Management, Error Correction Code, and Over-Provision
  • User Upgradeable Firmware
  • Software Tool to change Block Size

Where and how to buy

One of the perks of owning an IT company is that you can typically purchase your internal-use product at cost or at a discount; unfortunately, this was not the case here.

I was unable to find the Sabrent products through any of the standard distribution channels and had to purchase through Amazon. This is unfortunate because I wouldn’t mind being able to sell these units to customers.

Amazon Purchase Links (2TB Model)

The PART#s are as follows for the different sizes:

Product | NVMe Disk Size | PART# (No Heatsink) | PART# (Heatsink)
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive | 2TB | SB-ROCKET-NVMe4-2TB | SB-ROCKET-NVMe4-HTSK-2TB
1TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive | 1TB | SB-ROCKET-NVMe4-1TB | SB-ROCKET-NVMe4-HTSK-1TB
500GB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive | 500GB | SB-ROCKET-NVMe4-500 | SB-ROCKET-NVMe4-HTSK-500
Sabrent Rocket 4 Part Number Lookup Table

Cost

At the time of writing, the 2TB model on Amazon Canada would set you back $699.99 CAD per unit, however there was a sale running for $529.99 CAD per unit.

Additionally, at the time of writing, the 2TB model on Amazon USA would set you back $399.98 USD.

A total quantity of 4 set me back around $2,119.96 CAD on sale, versus $2,799.96 CAD at regular price.

If you’re familiar with NVMe pricing, you’ll notice that this pricing is extremely attractive when comparing to other high performance NVMe SSDs.

Unboxing

I have to say I was very impressed with the packaging! Small, sleek, and impressive!

A picture of Sabrent Rocket 4 2TB NVMe sticks metal case packaging
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive Metal Case Packaging

Initially I was surprised at how small the boxes were, as they fit in the palm of your hand, but then you realize how small the NVMe sticks are, so it makes sense.

Opening the box you are presented with a beautiful metal case containing the instructions, information on the product warranty, and more.

Picture of a Sabrent Rocket 4 case opened
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive in case

And here’s the NVMe stick removed from its case.

Picture of a Sabrent Rocket 4 case opened and the NVME SSD removed
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive removed from case

While some of the packaging may seem unnecessary, after further thought I realized it’s great to have, since you can re-use the metal case to keep the NVMe drives safe when they’re removed from a system or put into storage.

And here’s a beautiful shot of 8TB of NVMe storage.

A picture of 8TB of total storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive
8TB Total NVMe storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Now let’s move on to usage!

Installation, Setup, and Configuration

Setting one of these up in my HP Workstation was super easy. You simply populate the NVMe M.2 slot, install the screw, and boot up the system.

Picture of a Sabrent Rocket PCIe4 NVMe 2TB SSD Installed in computer
Sabrent Rocket 4 NVMe 2TB SSD in HP Z240 SFF Workstation

Upon booting, the Sabrent SSD was available inside of the Device Manager. I read on their website that they had some utilities so I wanted to see what configuration options I had access to before moving on to speed test benchmarks.

All Sabrent Rocket utilities can be downloaded from their website at https://www.sabrent.com/downloads/.

Sabrent Sector Size Converter

The Sabrent Sector Size Converter utility allows you to configure the sector size of your Sabrent Rocket SSD. Out of the box, I noticed mine was configured with a 512e sector format, which I promptly changed to 4K.

Screenshot using the Sabrent Sector Size Converter to change SSD from 512e to 4K Sector Size
Sabrent Sector Size Converter v1.0

The change was easy; it required a restart and I was good to go! You’ll notice there’s a drop-down to select which drive you want to modify, which is great if you have more than one SSD in your system.

I did notice one issue… When you have multiple of these in one system (in my case 4), for some reason the sector size change utility had trouble changing one of them from 512e to 4K. It would appear to be successful, but would stay at 512e on reboot. Ultimately I removed all of the NVMe sticks except the problematic one, ran the utility again, and the issue was resolved.

Sabrent Rocket Control Panel

Another useful utility that was available for download is the Sabrent Rocket Control Panel.

Screenshot of the Sabrent Rocket Control Panel
Sabrent Rocket Control Panel

The Sabrent Rocket Control Panel provides the following information:

  • Drive Size, Sector Size, Partition Count
  • Serial Number and Drive identifier
  • Feature status (TRIM Support, SMART, Product Name)
  • Drive Temperature
  • Drive Health (Lifespan)

You can also use this app to view S.M.A.R.T. information, flash updated Sabrent firmware, and more!

Now that we have this all configured, let’s move on to testing this SSD out!

Speed Tests and Benchmarks

The system we used to benchmark the Sabrent Rocket 4 2TB NVMe SSD is an HP Z240 SFF (Small Form Factor) workstation.

The specs of the Z240 Workstation:

  • Intel Xeon E3-1240 v5 @ 3.5GHz
  • 16GB of RAM
  • Samsung EVO 500GB as OS Drive
  • Sabrent Rocket 4 NVMe 2TB SSD as Test Drive

I ran a few tests using both CrystalDiskMark and ATTO Disk Benchmark, and the NVMe SSD performed flawlessly at extreme speeds!

CrystalDiskMark Results

Loading up and benching with CrystalDiskMark, we see the following results:

Screenshot of speedtest and benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark Results

As you can see, the Sabrent Rocket 4 2TB NVMe tested at a read speed of 3175.63MB/sec and write speed of 3019.17MB/sec.

Screenshot of IOPS benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark IOPS Results

Using the Peak Performance profile, we see some amazing IO: 613,171.14 IOPS read and 521,861.33 IOPS write with RND4K.

While we’re only testing on a PCIe 3.0 system, these numbers are still amazing and in line with what’s advertised.

ATTO Disk Benchmark Results

Switching over to ATTO Disk Benchmark, we test both speed and IOPS.

First, the speed benchmarks with I/O sized 4K to 12MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 4K to 12MB

After taking a short cooldown break (we don’t have a heatsink installed), we tested 12MB to 64MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 12MB to 64MB

And now we move on to analyzing the IOPS.

First from 4K to 12MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 4K to 12MB

And then after a short break, 12MB to 64MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 12MB to 64MB

Those numbers are insane!

Additional Notes

When you purchase a new Sabrent Rocket 4 SSD it comes with a 1-year standard warranty; however, if you register your product within 90 days of purchase, you can extend it to an awesome 5-year warranty.

To register your product, visit https://www.sabrent.com/product-registration/

The process is easy if you have one device, however it’s very repetitive and time-consuming if you have multiple, as the steps have to be repeated for each device. Sabrent, if you’re listening, a batch registration tool would be nice! 🙂

Remember that after registering your product, you should record your “Registration Unique ID” for future reference and use.

Conclusion

All in all, I’d definitely recommend the Sabrent Rocket 4 NVMe SSD! It provides extreme performance, is extremely cost-effective, and I can’t see any reason not to buy one.

Just remember that these SSDs (like all consumer SSDs) do not provide power loss protection, meaning you should not use these in enterprise environments (or in a NAS or SAN).

I’m really looking forward to using these in my upcoming blog and YouTube projects.

Apr 12 2020
 
Picture of Raspberry Pi 4 box and Raspberry Pi 4 board below box

If you’re worried about destroying your SD Cards, need some more space, or just want to learn something new, I’m going to show you how to use an NFS root for the Raspberry Pi 4.

When you use an NFS Root with your Raspberry Pi, it stores the entire root filesystem on a remote NFS export (think of it as a network filesystem share). This means you’ll have as much space as the NFS export, and you’ll probably see way faster performance since it’ll be running at 1Gb/sec instead of the speed of the SD Card.

This also protects your SD card, as the majority of the reading and writing is performed on the physical storage behind the NFS export instead of the SD card in the Pi, which has limited write endurance.

What you’ll need

To get started, you’ll need:

  • Raspberry Pi 4
  • Ubuntu or Raspbian for Raspberry Pi 4 Image
  • A small SD card for the Boot Partition (1-2GB)
  • SD card for the Raspberry Pi Linux image
  • Access to another Linux system (workstation, or a Raspberry Pi)

There are multiple ways to do this, but I’m providing instructions for the easiest way to do it with the resources I had immediately available.

Instructions

To boot your Raspberry Pi 4 from an NFS root, multiple steps are involved. Below you’ll find the summary, and further down you’ll find the full instructions. You can click on an item below to go directly to the section.

The process:

  1. Write the Linux image to an SD Card
  2. Create boot SD Card for NFS Root
  3. Prep the Linux install for NFS Root
  4. Create the NFS Export
  5. Copy the Linux install to the NFS Export
  6. Copy and Modify the boot SD Card to use NFS Root
  7. Boot using SD Card and test NFS Root

See below for the individual instructions for each step.

Write the Linux image to an SD Card

First, we need to write the Linux image to your SD card. You’ll need to know which device the SD card appears as on your computer. In my case it was /dev/sdb; make sure you verify the right device or you could damage your current Linux install.

  1. Download Ubuntu or Raspbian for Raspberry Pi.
  2. Uncompress the image file with unzip or unxz, depending on the distribution.
  3. Write the SD card image to SD card.
    dd if=imagename.img of=/dev/sdb bs=4M

You now have an SD Card Linux install for your Raspberry Pi. We will later modify and then copy this to the NFS root and boot SD card.
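
If you’re ever unsure which device name belongs to the SD card before running dd, lsblk is a quick way to confirm. A minimal sketch (the device names here are just examples; /dev/sdb is simply what my system assigned):

lsblk -o NAME,SIZE,TYPE,RM,MOUNTPOINT
# Look for a removable (RM=1) disk matching the SD card's capacity (sdb in this example);
# after the image is written it will show a small boot partition and a larger root partition.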

Create boot SD Card for NFS Root

In this step, we’re going to create a bootable SD card that contains the Linux kernel and other needed files for the Raspberry Pi to boot.

This card will be installed in the Pi, load the kernel, and then kick off the boot process to load the NFS root.

I previously wrote a post on creating a boot partition layout for a Raspberry Pi. Please follow those instructions to complete this step.

Later on in this guide, you’ll be copying the boot partition from the SD Card Linux image onto this newly created boot SD card for the NFS root.

Prep the Linux install for NFS Root

There are a few things we have to do to prep the Ubuntu or Raspbian Linux install to be usable as an NFS root.

  1. Boot the Raspbian or Ubuntu SD card you created in the first step on your Raspberry Pi.
  2. Complete the first boot procedures. Create your account, and complete the setup.
  3. Enable and confirm SSH is working so you can troubleshoot.
  4. Install the NFS client files using the following command:
    apt install nfs-common
  5. Open the /etc/network/interfaces file, and add the following line so that the Pi only gets an IP once during boot:
    iface eth0 inet manual
  6. Modify your /etc/fstab entries to reflect the NFS root and the new boot SD card as per below.

For step 6, we need to modify the /etc/fstab entry for the root fs. It is different depending on whether you’re using Ubuntu or Raspbian.

For Raspbian, your /etc/fstab should look like this:

proc /proc proc defaults 0 0
LABEL=boot /boot vfat defaults 0 2
NFS-SERVER-IP:/nfs-export/PI-Raspbian / nfs defaults 0 0

For Ubuntu, your /etc/fstab should look like this:

LABEL=system-boot /boot/firmware vfat defaults 0 2
/dev/nfs / nfs defaults 0 0

After you do this, the Linux SD card may no longer boot if installed directly in the Raspberry Pi, so make sure you’ve made all the proper modifications before powering it down.

Create the NFS Export

In my case I used a Synology DS1813+ as an NFS server to host my Raspberry Pi NFS root images. But you can use any Linux server to host it.

If you’re using a Synology DiskStation, create a shared folder, disable the recycle bin, and leave everything else at the defaults. Then head over to the “NFS Permissions” tab and create an ACL entry for your Pi and workstations. You can also add a network segment for your entire network (ex. 192.168.0.0/24) instead of specifying individual IPs.

Screenshot of Synology Create NFS rule for ACL
Create an NFS ACL Rule for Synology NFS Access

Once you create an entry, it’ll look like this. Note the “Mount path” in the lower part of the window.

Screenshot of NFS Shared Folder Permissions and Mount Point on Synology NAS
NFS Permissions and Mount Path for NFS Export

Now, if you’re using a standard Linux server the steps are different.

  1. Install the required NFS packages:
    apt install nfs-kernel-server
  2. Create a directory on the root fs of the server; we’ll call it “nfs-export”:
    mkdir /nfs-export/
  3. Then create a directory for the Raspberry Pi NFS Root:
    mkdir /nfs-export/PI-ImageName
  4. Now edit your /etc/exports file and add this line to the file to export the path:
    /nfs-export/PI-ImageName     IPorNetworkRange(rw,no_root_squash,async,insecure)
  5. Reload the NFS exports for the changes to take effect:
    exportfs -ra

Take note of the mount point and/or NFS export path, as this is the directory your Raspberry Pi will need to mount to access its NFS root. This is also the directory you’ll be copying your SD Card Linux install’s root FS to.
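
Before moving on, it’s worth confirming the export is actually visible. A quick sanity check, assuming you used the standard Linux NFS server packages above (the server IP and paths are just the examples from this post):

# On the NFS server: show what is currently exported and with which options
exportfs -v

# From another machine with nfs-common installed: ask the server what it exports
showmount -e NFS-SERVER-IP

If the export doesn’t show up here, the Pi won’t be able to mount it later, so it’s easier to fix now.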

Copy the Linux install to the NFS Export

When you’re ready to copy your SD Card Linux install to your NFS Export, you’ll need to do the following. In my case I’ll be using an Ubuntu desktop computer to perform these steps.

When I inserted the SD card containing the Raspberry Pi Linux image, it appeared as /dev/sdb on my system. Please make sure you use the proper device names to avoid writing to, or reading from, the wrong disk.

Instructions to copy the root fs from the SD card to the NFS root export:

  1. Mount the root partition of the SD Card Linux install to a directory. In my case I used a directory called “old”.
    mount /dev/sdb2 old/
  2. Mount the NFS Export for the NFS Root to a directory. In my case I used a directory called “nfs”.
    mount IPADDRESS:/nfs-export/PI-ImageName nfs/
  3. Use the rsync command to transfer the SD card Linux install to the NFS Root Export.
    rsync -avxHAXS --numeric-ids --info=progress2 --progress old/ nfs/
  4. Unmount the directories.
    umount old/
    umount nfs/

Once this is complete, your OS root is now copied to the NFS root.

Copy and Modify the boot SD Card to use NFS Root

First we have to copy the boot partition from the SD Card Linux install to the boot SD card, then we need to modify the contents of the new boot SD card.

To copy the boot files, follow these instructions.

  1. Mount the boot partition of the SD Card Linux install to a directory. In my case I used a directory called “old”.
    mount /dev/sdb1 old/
  2. Mount the new boot partition of the boot SD card to a new directory. In my case I used the directory called “new”.
    mount /dev/sdc1 new/
  3. Use the rsync command to transfer the SD card Linux install boot partition to the new boot SD card.
    rsync -avxHAXS --numeric-ids --info=progress2 --progress old/ new/
  4. Unmount the directories.
    umount old/
    umount new/

Now there are a few changes we have to make so the boot SD card boots to an NFS root.

We have to make a modification to the Pi boot command. It is different depending on which Linux image (Ubuntu or Raspbian) you’re using.

First, insert the boot SD card, and mount it to a temporary directory.

mount /dev/sdc1 new/

If you’re running Ubuntu, your existing nobtcmd.txt should look like this:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 elevator=deadline rootwait

We’ll modify and replace some text to make it look like this. Don’t forget to change the command to reflect your IP and directory:

dwc_otg.lpm_enable=0 console=tty1 root=/dev/nfs nfsroot=IPADDRESS:/nfs-export/PI-Ubuntu,tcp,rw ip=dhcp rootfstype=nfs elevator=deadline rootwait

For Raspbian, your existing cmdline.txt should look like this:

console=serial0,115200 console=tty1 root=PARTUUID=97709164-02 rootfstype=ext4 elevator=deadline fsck.repair=yes rootwait

We’ll modify and replace some text to make it look like this. Don’t forget to change the command to reflect your IP and directory:

console=serial0,115200 console=tty1 root=/dev/nfs nfsroot=IPADDRESS:/nfs-export/PI-Raspbian,tcp,vers=3 rw ip=dhcp rootfstype=nfs elevator=deadline rootwait

Once you make the modifications, save the file and unmount the SD card.

Your SD card is now ready to boot.

Boot using SD Card and test NFS Root

At this point, insert the boot SD Card in your Raspberry Pi and attempt to boot. All should be working now and it should boot and use the NFS root!

If you’re having issues, if the boot process stalls, or something doesn’t work right, look back and confirm you followed all the steps above properly.
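
Once the Pi is up, you can also confirm the root filesystem really is coming from the NFS export rather than the SD card. A quick check from a shell on the Pi (a simple sketch; the output will reflect your own server IP and export path):

# Show what is mounted at / and its filesystem type; it should report nfs (or nfs4)
findmnt /
# Another quick confirmation
mount | grep nfs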

You’re done!

You’re now done and have a fully working NFS root for your Raspberry Pi. You’ll no longer have to worry about storage space, you’ll have high-speed access to it, and you’ll have learned some new skills!

And don’t forget to check out these Handy Tips, Tricks, and Commands for the Raspberry Pi 4!

Aug 12 2019
 
DS1813+

Around a month ago I decided to turn on and start utilizing NFS v4.1 in DSM on my Synology DS1813+ NAS. As most of you know, I have a vSphere cluster with 3 ESXi hosts, backed by an HPE MSA 2040 SAN and my Synology DS1813+ NAS.

The reason I did this was to test out the new version and to attempt to increase both throughput and redundancy in my environment.

If you’re a regular reader, you’ll know from my original plans (post here), and then from my later issues with iSCSI (post here), that I ultimately set up my Synology NAS to act as an NFS datastore. At the moment I use my HPE MSA 2040 SAN for hot storage and the Synology DS1813+ for cold storage. I’ve been running this way for a few years now.

Why NFS?

Some of you may ask why I chose NFS. Well, I’m an iSCSI kind of guy, but I’ve had tons of issues with iSCSI on DSM, especially with MPIO on the Synology NAS. The overhead on the unit was horrible (a result of the NAS’s limited hardware) for both block and file access to iSCSI targets (block target vs. virtualized (fileio) target).

I also found a major issue where, if one of the drives was dying or dead, the NAS wouldn’t report it as failed and the iSCSI target would come to a complete halt. This resulted in days spent figuring out what was going on, and finally replacing the drive once I found out it was the cause.

After spending forever trying to tweak and optimize, I found that NFS worked best on my Synology NAS.

What’s this new NFS v4.1 thing?

Well, it’s not actually that new! NFS v4.1 was released in January 2010 and aimed to support clustered environments (such as virtualized environments, vSphere, and ESXi). It includes a feature called session trunking, which is also known as NFS multipathing.

We all love the word multipathing, don’t we? As most of you iSCSI and virtualization people know, we want multipathing on everything. It provides redundancy as well as increased throughput.

How do we turn on NFS Multipathing?

According to the VMware vSphere product documentation (here)

While NFS 3 with ESXi does not provide multipathing support, NFS 4.1 supports multiple paths.


NFS 3 uses one TCP connection for I/O. As a result, ESXi supports I/O on only one IP address or hostname for the NFS server, and does not support multiple paths. Depending on your network infrastructure and configuration, you can use the network stack to configure multiple connections to the storage targets. In this case, you must have multiple datastores, each datastore using separate network connections between the host and the storage.


NFS 4.1 provides multipathing for servers that support the session trunking. When the trunking is available, you can use multiple IP addresses to access a single NFS volume. Client ID trunking is not supported.

So it is supported! Now what?

In order to use NFS multipathing, the following must be present:

  • Multiple NICs configured on your NAS with functioning IP addresses
  • A gateway is only configured on ONE of those NICs
  • NFS v4.1 is turned on inside of the DSM web interface
  • An NFS export exists on your DSM
  • You have a version of ESXi that supports NFS v4.1

So let’s get to it!

Enabling NFS v4.1 Multipathing

  1. First, log in to the DSM web interface and configure your NIC adapters in the Control Panel. As mentioned above, only configure the default gateway on one of your adapters. (Screenshot: Synology Multiple NICs Configured)
  2. While still in the Control Panel, navigate to “File Services” on the left, expand NFS, and check both “Enable NFS” and “Enable NFSv4.1 support”. You can leave the NFSv4 domain blank. (Screenshot: Enabling NFSv4.1 on Synology DSM)
  3. If you haven’t already configured an NFS export on the NAS, do so now. No special configuration is required for v4.1 beyond the norm.
  4. Log on to your ESXi host, go to storage, and add a new datastore. Choose to add an NFS datastore.
  5. On the “Select NFS version” page, select “NFS 4.1” and click Next. (Screenshot: Selecting the NFS version on the Add Datastore dialog box on ESXi)
  6. Enter the datastore name, the folder on the NAS, and the Synology NAS IP addresses, separated by commas. (Screenshot: New NFS Datastore details and configuration on ESXi dialog box)
  7. Press the green “+” and you’ll see the addresses spread into the “Servers to be added” list, each server entry reflecting an IP on the NAS (please note I made a typo on one of the IPs). (Screenshot: List of Servers/IPs for NFS Multipathing on ESXi Add Datastore dialog box)
  8. Follow through with the wizard, and it will be added as a datastore.

That’s it! You’re done and are now using NFS Multipathing on your ESXi host!
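
If you prefer the command line, the same NFS 4.1 datastore can be mounted from an SSH session on the ESXi host with esxcli. This is a sketch using placeholder values; swap in your own NAS IP addresses, export path, and datastore name:

# Mount an NFS 4.1 datastore, supplying multiple server IPs for session trunking
esxcli storage nfs41 add -H NAS-IP-1,NAS-IP-2,NAS-IP-3,NAS-IP-4 -s /volume1/nfs-datastore -v NFS41-Datastore

# Verify the datastore was added
esxcli storage nfs41 list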

In my case, I have all 4 NICs in my DS1813+ configured and connected to a switch. My ESXi hosts have 10Gb DAC connections to that switch and can now access the NAS at higher speeds. During intensive I/O loads, I’ve seen the full aggregated network throughput hit and sustain around 370MB/s.
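
For context, that figure lines up nicely with what four aggregated gigabit links can realistically deliver:

1Gb/sec ≈ 125MB/sec raw, or roughly 110-118MB/sec after protocol overhead
4 NICs x ~118MB/sec ≈ 470MB/sec practical ceiling

Sustaining around 370MB/s is roughly three times what a single 1Gb link could carry, so the session trunking is clearly spreading the I/O across the NICs.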

After resolving the issues mentioned below, I’ve been running for weeks with absolutely no problems, and I’m enjoying the increased speed to the NAS.

Additional Important Information

After enabling this, I noticed that memory usage had drastically increased on the Synology NAS, peaking whenever my ESXi hosts restarted. The issue escalated to the NAS running out of memory (both physical and swap) and ultimately crashing.

After weeks of troubleshooting I found the processes that were causing this. While the processes were unrelated, this issue would only occur when using NFS Multipathing and NFS v4.1. To resolve this, I had to remove the “pkgctl-SynoFinder” package, and disable the services. I could do this in my environment because I only use the NAS for NFS and iSCSI. This resolved the issue. I created a blog post here to outline how to resolve this. I also further optimized the NAS and memory usage by disabling other unneeded services in a post here, targeted for other users like myself, who only use it for NFS/iSCSI.

Leave a comment and let me know if this post helped!

Jul 31 2019
 

If you’re like me and use a Synology NAS as an NFS or iSCSI datastore for your VMware environment, you want to optimize it as much as possible to reduce any hardware resource utilization.

Specifically, we want to disable any services we aren’t using that may consume CPU or memory resources. On my DS1813+ I was having issues with a bug that was causing memory overflows (the post is here), and while dealing with that, I decided to take it a step further and optimize my unit.

Optimize the NAS

In my case, I don’t use any file services, and only use my Synology NAS (Synology DS1813+) as an NFS and iSCSI datastore. Specifically I use multipath for NFSv4.1 and iSCSI.

If you don’t use SMB (Samba / Windows File Shares), you can make some optimizations which will free up substantial system resources.

Disable and/or uninstall unneeded packages

First step: open up the “Package Center” in the web GUI and either disable or uninstall all the packages that you don’t need or use.

To disable a package, select the package in Package Center, then click on the arrow beside “Open”. A drop-down will appear, and “Disable” or “Stop” will be shown if the service can be turned off. This may or may not persist across a reboot.

To uninstall a package, select the package in Package Center, then click on the arrow beside “Open”. A drop-down will appear, and “Uninstall” will be shown. Selecting this will uninstall the package.

Disable the indexing service

As mentioned here, the indexing service can consume quite a bit of memory and CPU on your Synology unit.

To stop this service, SSH into the unit as admin, use the command “sudo su” to get a root shell, and finally run this command:

synoservice --disable pkgctl-SynoFinder

The above command will probably not persist on boot and needs to be run on each fresh boot. You can, however, uninstall the package with the command below to completely remove it.

synopkg uninstall SynoFinder

Doing this will free up substantial resources.
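
If you want to see how much this actually helps, you can compare memory usage before and after the change from the same SSH session. A quick check, assuming the free utility is present on your DSM build (/proc/meminfo works as a fallback):

# Memory and swap usage; run before and after disabling/uninstalling the package
free -m
# Fallback if free isn't available
head -n 5 /proc/meminfo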

Disable SMB (Samba), and NMBD

I noticed that both smbd and nmbd (Samba/Windows File Share Services) were consuming quite a bit of CPU and memory as well. I don’t use these, so I can disable them.

To disable them, I ran the following commands in an SSH session (remember to “sudo su” from admin to root).

synoservice --disable nmbd
synoservice --disable samba

Keep in mind that while this should persist on boot, it didn’t on my system. Please see the section below on how to make it persistent on boot.

Disable thumbnail generation (thumbd)

When viewing processes on the Synology NAS and sorting by memory, there are numerous “thumbd” processes (sometimes over 10). These processes handle thumbnail generation for the File Station viewer.

Since I’m not using this, I can disable it. To do this, we either have to rename or delete the following file. I do recommend making a backup of the file.

/var/packages/FileStation/target/etc/conf/thumbd.conf

I’m going to rename it so that the service daemon can’t find it when it initializes, which causes the process not to start on boot.

cd /var/packages/FileStation/target/etc/conf/
mv thumbd.conf thumbd.conf.bak

Doing the above will stop it from running on boot.
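
To confirm the thumbnail processes are actually gone, a quick process check works. A simple sketch (on a stock system this would normally return multiple thumbd entries):

# After renaming thumbd.conf and rebooting, this should return nothing
ps aux | grep -i thumbd | grep -v grep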

Make the optimizations persistent on boot

In this section, I will show you how to make all the settings above persistent on boot. Even though I have removed the SynoFinder package, I will still create a startup script on the Synology NAS to “disable” it, just to be safe.

First, SSH into the unit and run “sudo su” to get a root shell.

Run the following commands to change into the startup script directory and open a text editor to create the startup script.

cd /usr/local/etc/rc.d/
vi speedup.sh

While in the vi file editor, press “i” to enter insert mode. Copy and paste the code below:

case "$1" in
    start)
                echo "Turning off memory garbage"
                        synoservice --disable nmbd
                        synoservice --disable samba
                        synoservice --disable pkgctl-SynoFinder
                        ;;
    stop)
                        echo "Pertend we care and are turning something on"
                        ;;
        *)
        echo "Usage: $1 {start|stop}"
                exit 1
esac
exit 0

Now press escape, then type “:wq” and hit enter to save and close the vi text editor. Run the following command to make the script executable.

chmod 755 speedup.sh
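
Before rebooting, you can run the script by hand to make sure it behaves as expected. DSM should run the scripts in /usr/local/etc/rc.d with the “start” argument at boot, so this mirrors what happens on startup:

# Run the script manually with the same argument DSM passes at boot
/usr/local/etc/rc.d/speedup.sh start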

That’s it!

Conclusion

After making the above changes, you should see a substantial performance increase and a reduction in system resource usage!

In the future I plan on digging deeper into optimization, as I still see other services I may be able to trim down after confirming they aren’t essential to the function of the NAS.

Feel like you can add anything? Leave a comment!