Jun 06, 2020
 
Screenshot of NVMe SSD on FreeNAS

Looking at using SSD and NVMe with your FreeNAS setup and ZFS? There are considerations and optimizations that must be factored in to make sure you’re not wasting all that sweet performance. In this post I’ll be providing you with my own FreeNAS ZFS optimizations for SSD and NVMe.

This post will contain observations and tweaks I’ve discovered during testing and production of a FreeNAS ZFS pool sitting on NVMe vdevs. I will update it with more information as I use and test the array more.

Screenshot of FreeNAS ZFS NVMe SSD Pool with multiple datasets
FreeNAS ZFS NVMe SSD Pool with multiple datasets

Considerations

It’s important to note that while your SSD and/or NVMe ZFS pool technically could reach insane speeds, you will probably always be limited by the network access speeds.

With this in mind, to optimize your ZFS SSD and/or NVMe pool, you may be trading off features and functionality to max out your drives. These optimizations may in fact be wasted if you reach the network speed bottleneck.

Some of the features you may be giving up, such as compression and deduplication, may actually help extend the life and endurance of your SSDs, as they reduce the number of writes performed on each of your vdevs (drives).

You may wish to skip these optimizations should your network be the limiting factor, which will allow you to utilize these features with no (or minimal) performance degradation for the end client. You should measure your network throughput to establish the baseline of your network bottleneck.
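
A quick way to establish that baseline is iperf3; here’s a minimal sketch, assuming iperf3 is available on both ends and using a hypothetical hostname for the NAS:

    # On the FreeNAS box, start an iperf3 server
    iperf3 -s

    # From a client, measure client -> NAS throughput for 30 seconds
    iperf3 -c freenas.lab.local -t 30

    # Reverse the direction (-R) to measure NAS -> client (the read path)
    iperf3 -c freenas.lab.local -t 30 -R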

Deploying SSD and NVMe with FreeNAS

For reference, the environment where I deployed FreeNAS with NVMe SSDs consists of:

  • An HPE ProLiant DL360p Gen8 server running VMware ESXi
  • A FreeNAS virtual machine with 8 vCPUs and 32GB of RAM
  • 4 x Sabrent Rocket 4 2TB NVMe SSDs on an IOCREST IO-PEX40152 quad NVMe PCIe card, passed through to the FreeNAS VM

As mentioned above, FreeNAS is virtualized on one of the HPE DL360p ProLiant servers with 8 vCPUs and 32GB of RAM. The NVMe SSDs are provided to the VM by VMware ESXi as PCI passthrough devices. There have been no issues with stability in 3 weeks of testing.

Screenshot of Sabrent Rocket 4 2TB NVMe SSD on FreeNAS
Sabrent Rocket 4 2TB NVMe SSD on FreeNAS

Important notes:

  • VMXNET3 NIC is used on VMs to achieve 10Gb networking
  • Because PCI passthrough is used, snapshots of the FreeNAS VM are disabled (this is fine)
  • An NFS VM datastore is used for testing, as the host running the FreeNAS VM mounts the NFS datastore that FreeNAS presents.

There are a number of considerations that must be factored in when virtualizing FreeNAS; however, those are beyond the scope of this blog post. I will be creating a separate post on this in the future.

Use Case (Fast and Risky or Slow and Secure)

The use case of your setup will dictate which optimizations you can use, as some of the optimizations in this post will increase the risk of data loss (such as disabling sync writes, or your choice of striping versus RAIDz).

Fast and Risky

Since SSDs are more reliable and less likely to fail, if you’re using the SSD storage as temporary hot storage, you could simply use striping across multiple vdevs (devices). If a failure occurred, the data would be lost; however, if you were just using this for “staging” or hot data and the risk was acceptable, this is an option that drastically increases speeds.
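
As a rough sketch of what that looks like from the command line (the pool and device names here are hypothetical; FreeNAS normally builds the pool through the web UI):

    # Create a striped pool across four NVMe devices - no redundancy, any single drive failure loses the pool
    zpool create fastpool nvd0 nvd1 nvd2 nvd3

    # Verify the layout
    zpool status fastpool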

Example use case for fast and risky

  • VDI Pool for clones
  • VMs that can be restored easily from snapshots
  • Video Editing
  • Temporary high speed data dump storage

The risk can be lowered by replicating the pool or dataset to slower storage on a frequent or regular basis.
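
For example, a basic snapshot-and-replicate cycle to slower storage could look like the following (a sketch only; the pool, dataset, and snapshot names are made up, and FreeNAS can schedule this with its built-in replication tasks):

    # Snapshot the fast dataset
    zfs snapshot fastpool/vdi@hourly-001

    # Send the snapshot to a slower, redundant pool (subsequent runs can use incremental sends with -i)
    zfs send fastpool/vdi@hourly-001 | zfs receive -F slowpool/vdi-backup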

Slow and Secure

Using RAIDz-1 or higher will tolerate drive failures within a vdev, but with each level increase, performance will be lost due to parity calculations.
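
The equivalent sketch for a redundant layout (again with hypothetical pool and device names):

    # Single-parity RAIDz: the vdev survives one drive failure
    zpool create securepool raidz1 nvd0 nvd1 nvd2 nvd3

    # Double-parity RAIDz2 survives two failures, at a further cost in performance and capacity
    # zpool create securepool raidz2 nvd0 nvd1 nvd2 nvd3 nvd4 nvd5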

Example use case for slow and secure

  • Regular storage for all VMs
  • Database (SQL)
  • Exchange
  • Main storage

Slow and secure storage is the type of storage found in most SAN or NAS deployments.

SSD Endurance and Lifetime

Solid state drives have a lifetime that’s typically measured in total lifetime writes (TBW). If you’re storing sensitive data, you should plan ahead to mitigate the risk of failure when a drive reaches its full lifetime.

Steps to mitigate failures

  • Before putting the stripe or RAIDz pool into production, perform some large bogus writes and stagger the amount of data written on each SSD individually (see the sketch below this list). While this will reduce the life counter on the SSDs, it offsets and staggers the lifetime of each drive so they don’t all die at the same time.
  • If using RAIDz-1 or higher, preemptively replace each SSD before its lifetime is hit. Do this well in advance and stagger it to further create a difference between the lifetimes of the drives.
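
A rough sketch of the staggered “bogus write” idea (device names are hypothetical, and these writes are destructive to anything on the drives, so only do this before the pool is created):

    # Write a different amount of throwaway data to each SSD so their wear counters diverge
    dd if=/dev/urandom of=/dev/nvd0 bs=1M count=51200     # ~50GB
    dd if=/dev/urandom of=/dev/nvd1 bs=1M count=102400    # ~100GB
    dd if=/dev/urandom of=/dev/nvd2 bs=1M count=153600    # ~150GB

    # Check the wear counters afterwards (smartmontools reports NVMe health data)
    smartctl -a /dev/nvme0 | grep -i -e "data units written" -e "percentage used"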

Decommissioning the drives preemptively and early doesn’t mean you have to throw them away; this is just to secure the data on the ZFS pool. You can continue to use these drives in other systems with non-critical data, possibly well beyond their recommended lifetime.

Compression and Deduplication

Using compression and deduplication with ZFS is CPU intensive (and RAM intensive for deduplication).

The CPU usage is negligible when using these features on traditional magnetic (spinning platter) storage, because with traditional hard drives, the drives themselves are the performance bottleneck.

SSDs are a totally different story, specifically NVMe. With storage speeds in the gigabytes per second, CPUs cannot keep up with deduplicating and compressing the data being written, and they become the bottleneck.
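
For reference, both features are toggled per pool or dataset; a minimal sketch with a hypothetical dataset name (the same options are exposed in the FreeNAS dataset settings):

    # Enable LZ4 compression on a dataset (inherited by child datasets)
    zfs set compression=lz4 nvmepool/vmware

    # Enable or disable deduplication - remember it is very RAM hungry
    zfs set dedup=on nvmepool/vmware
    zfs set dedup=off nvmepool/vmware

    # Confirm the current settings
    zfs get compression,dedup nvmepool/vmware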

I performed a simple test comparing speeds with compression and deduplication in various combinations, using the same VM running CrystalDiskMark on an NFS VMware datastore over 10Gb networking. The VM was configured with a single drive on a VMware NVMe controller.

NVMe SSD with compression and deduplication

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication

NVMe SSD with deduplication only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only
CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only

NVMe SSD with compression only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression only
CrystalDiskMark on FreeNAS NFS SSD datastore with compression only

Now this is really interesting: we actually see a massive speed increase with compression only. This is because I have a server-class CPU with multiple cores and a ton of RAM. With lower-performing specs, you may notice a decrease in performance.

NVMe SSD without compression and deduplication

Screenshot of benchmark with CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication

In my case, the 10Gb networking was the bottleneck on read operations, as there was virtually no change between configurations. It was a different story for write operations: write speeds increased greatly once writes were no longer being compressed or deduplicated.

Note that on faster networks, read speeds could also be affected.

If the network connection to the client application is the limiting factor, and the system can keep up with that bottleneck, then you will be able to get away with using these features.

Higher throughput with compression and deduplication can be reached with higher-frequency CPUs (more GHz) and more cores (for more client connections). Remember that large amounts of RAM are required for deduplication.

Using compression and deduplication may also reduce the writes to your SSD vdevs, prolonging the lifetime and reducing the cost of maintaining the solution.

ZFS ZIL and SLOG

When it comes to writes on a filesystem, there are different kinds.

  • Synchronous – Writes that are only marked as completed and successful once the data has actually been written to the physical media.
  • Asynchronous – Writes that are marked as completed or successful before the data has actually been committed to the physical media.

The type of write performed can be requested by the application or service that’s performing the write, or it can be explicitly set on the file system itself. In FreeNAS (in our example) you can override this by setting the “sync” option on the zpool, dataset, or zvol.

Disabling sync will allow writes to be marked as completed before they actually are, essentially “caching” writes in a buffer in memory (see “RAM Caching and Sync Writes” below). Setting this to “standard” will perform the type of write requested by the client, and setting it to “always” will result in all writes being synchronous.
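
On the command line the property looks like this (a sketch; the dataset name is hypothetical, and FreeNAS exposes the same choices in the dataset and zvol settings):

    # Check the current sync behaviour
    zfs get sync nvmepool/vmware

    # Honour whatever the client requests (the default)
    zfs set sync=standard nvmepool/vmware

    # Force every write to be synchronous
    zfs set sync=always nvmepool/vmware

    # Acknowledge writes before they reach stable storage - fast, but risks data loss
    zfs set sync=disabled nvmepool/vmware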

We can speed up and assist writes by using a SLOG for ZIL.

ZIL stands for ZFS Intent Log, and SLOG stands for Separate Log, which is usually stored on a dedicated SLOG device.

By utilizing a SLOG for the ZIL, you can have dedicated SSDs which will act as your intent log for writes to the zpool. Writes that request synchronous behaviour are marked as completed once they have been sent to the ZIL and written to the SLOG device.

Implementing a SLOG that is slower than the combined speed of your ZFS pool will result in a performance loss. Your SLOG should be faster than the pool it’s acting as a ZIL for.

Implementing a SLOG that is faster than the combined speed of your ZFS pool will result in a performance gain on writes, as it essentially acts as a “write cache” for synchronous writes and may even result in more orderly writes when the data is committed to the actual vdevs in the pool.

If using a SLOG for the ZIL, it is highly recommended to use SSDs that have PLP (power loss protection), and to use a mirrored set, to avoid data loss and/or corruption in the event of a power loss, crash, or freeze.
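
Adding a mirrored SLOG to an existing pool is a one-liner (sketch only; the pool and device names are hypothetical, and FreeNAS exposes this when extending a pool in the UI):

    # Attach a mirrored pair of PLP-protected SSDs as the pool's separate log (SLOG) device
    zpool add nvmepool log mirror nvd4 nvd5

    # Verify the log vdev shows up
    zpool status nvmepool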

RAM Caching and Sync Writes

In the event you do not have a SLOG device to provide a ZIL for your zpool, and you have a substantial amount of memory, you can disable sync writes on the pool, which will drastically increase write performance as writes will be buffered in RAM.

Disabling sync on your zpool, dataset, or zvol will tell the client application that all writes have been completed and committed to disk (HD or SSD) before they actually have been. This allows the system to cache writes in system memory.

In the event of a power loss, crash, or freeze, this data will be lost and/or possibly result in corruption.

You would only want to do this if you need fast storage where data loss is acceptable (such as video editing, a VDI clone desktop pool, etc.).

Utilizing a SLOG for the ZIL is much better (and safer) than this method, however I still wanted to cover it for informational purposes as it does apply to some use cases.

SSD Sector Size

Traditional drives typically used 512-byte physical sectors. Newer hard drives and SSDs use 4K physical sectors, but often emulate 512-byte logical sectors (called 512e) for compatibility. SSDs specifically sometimes ship in 512e mode to increase compatibility with operating systems and to allow cloning your old drive to the new SSD during migration.

When 512-byte logical sectors are emulated on an HD or SSD that uses 4K native physical sectors, writes get split into multiple smaller logical operations, and misaligned writes can force read-modify-write cycles. This increases overhead and can result in reduced IO and speed, as well as more wear on the SSD when performing writes.

Some HDs and SSDs come with utilities or tools to change the sector size of the drive. I highly recommend changing it to the drive’s native sector size.
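
You can check what a drive reports, and make sure ZFS writes 4K-aligned, with something like the following (a sketch; device names are hypothetical, the nvme-cli line is just a generic Linux example, and the vendor utility for your particular SSD may differ):

    # Show the logical and physical sector sizes the drive reports
    smartctl -i /dev/nvme0

    # On Linux, nvme-cli can switch a drive to another LBA format (destructive!)
    # nvme format /dev/nvme0n1 --lbaf=1

    # On FreeBSD/FreeNAS, make sure newly created vdevs use 4K alignment (ashift=12)
    sysctl vfs.zfs.min_auto_ashift=12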

iSCSI vs NFS

Technically, faster speeds should be possible using iSCSI instead of NFS, however special care must be taken when using iSCSI.

If you’re using iSCSI, and the host that is virtualizing the FreeNAS instance is also mounting the iSCSI VMFS target that it’s presenting, you must unmount this iSCSI volume every time you plan to shut down the FreeNAS instance or the entire host that is hosting it. Unmounting the iSCSI datastore also means unregistering any VMs that reside on it.
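
From the ESXi side, the unmount can be done from the shell before shutting FreeNAS down (a sketch; the datastore label is hypothetical, and any VMs on it must be powered off and unregistered first):

    # List mounted VMFS volumes to find the iSCSI-backed datastore
    esxcli storage filesystem list

    # Cleanly unmount it before shutting down the FreeNAS VM
    esxcli storage filesystem unmount -l FreeNAS-iSCSI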

Screenshot of VMware ESXi with FreeNAS NVMe SSD as NFS datastore
VMware ESXi with virtualized FreeNAS as NFS datastore

If you simply shut down the FreeNAS instance that’s hosting the iSCSI datastore, this will result in an improper, unclean unmount of the VMFS volume and could lead to data loss, even if no VMs are running.

NFS provides a cleaner mechanism, as FreeNAS handles the unmount of the underlying filesystem cleanly on shutdown, and to the ESXi hosts it simply appears as an NFS disconnect. If no VMs are running (and no I/O is occurring) when the FreeNAS instance is shut down, data loss is not a concern.
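
For reference, mounting the FreeNAS NFS export as an ESXi datastore looks something like this (sketch only; the host name, export path, and datastore name are assumptions):

    # Mount an NFSv3 export from FreeNAS as a datastore
    esxcli storage nfs add -H freenas.lab.local -s /mnt/nvmepool/vmware -v FreeNAS-NFS

    # Confirm it is mounted
    esxcli storage nfs list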

Jumbo Frames

Since you’re pushing more data and more I/O at a faster pace, you need to optimize all layers of the solution as much as possible. To reduce overhead on the networking side of things, if possible, you should implement jumbo frames.

Instead of sending many smaller frames, each with its own header and processing overhead, you can send fewer, larger frames. This reduces overhead and allows for faster speeds.

In my case, my FreeNAS instance provides both NAS and SAN services to the network and thus has 2 virtual NICs. On my internal LAN, where it acts as a NAS (NIC 1), it uses the default MTU of 1500-byte frames to make sure it can communicate with the workstations accessing the shares. On my SAN network (NIC 2), where it acts as a SAN, it has a configured MTU of 9000-byte frames. All other devices (SANs, client NICs, and iSCSI initiators) on the SAN network have a matching MTU of 9000.
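
On the FreeNAS side, the SAN-facing interface just needs its MTU raised to match; a minimal sketch, assuming a hypothetical interface name and test IP (the ESXi vSwitch, VMkernel ports, and physical switch ports must all be set to 9000 as well, and FreeNAS also exposes this under Network > Interfaces):

    # Set a 9000-byte MTU on the SAN-facing NIC (vmx1 = second VMXNET3 interface here)
    ifconfig vmx1 mtu 9000

    # Verify jumbo frames end-to-end with a large, non-fragmenting ping
    # 8972 = 9000 bytes minus 20 (IP header) and 8 (ICMP header)
    ping -D -s 8972 192.168.50.10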

Additional Notes

Please note that consumer SSDs usually do not have PLP (power loss protection). This means that in the event of a power failure, any data sitting in the SSD’s write cache may be lost. This could put your data at risk. Using enterprise solid state drives remedies this issue, as they often come with PLP.

Conclusion

SSDs are great for storage, whether it be file, block, NFS, or iSCSI! In my opinion, NVMe and all-flash arrays are where the future of storage is headed.

I hope this information helps, and if you feel I left anything out, or if anything needs to be corrected, please don’t hesitate to leave a comment!

May 25, 2020
 
Picture of an IOCREST IO-PEX40152 PCIe x16 to Quad M.2 NVMe

Looking to add quad (4) NVMe SSDs to your system and don’t have the M.2 slots or a motherboard that supports bifurcation? The IOCREST IO-PEX40152 quad NVMe PCIe card is the card for you!

The IO-PEX40152 PCIe card allows you to add 4 NVMe SSDs to a computer, workstation, or server that has an available PCIe x16 slot. This card has a built-in PEX PCIe switch chip, so your motherboard does not need to support bifurcation. This card can essentially be installed and used in any system with a free x16 slot.

This card is also available under the PART# SI-PEX40152.

In this post I’ll be reviewing the IOCREST IO-PEX40152, providing information on how to buy it, benchmarks, installation, configuration, and more! I’ve also posted tons of pics for your viewing pleasure. I installed this card in an HPE DL360p Gen8 server to add NVMe capabilities.

We’ll be using and reviewing this card populated with 4 x Sabrent Rocket 4 PCIe NVMe SSDs; you can see the individual review of those SSDs here.

Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

Why and How I purchased the card

Originally I purchased this card for a couple of special and interesting projects I’m working on for the blog and my homelab. I needed a card that provided high-density NVMe flash storage but didn’t require bifurcation, as I planned on using it on a motherboard that didn’t support 4/4/4/4 bifurcation.

By choosing this specific card, I could also use it in any other system that had an available x16 PCIe slot.

I considered many other cards (such as some from SuperMicro and Intel), but in the end chose this one as it seemed most likely to work for my application. The choices from SuperMicro and Intel looked like they were designed to be used on their own systems.

I purchased the IO-PEX40152 from the IOCREST AliExpress store (after verifying it was their genuine online store), and they had the most cost-effective price out of the sources I found.

They shipped the card with FedEx International Priority, so I received it within a week. Super fast shipping and it was packed perfectly!

Picture of the IOCREST IO-PEX40152 box
IOCREST IO-PEX40152 Box

Where to buy the IO-PEX40152

I found 3 different sources to purchase the IO-PEX40152 from:

  1. IOCREST AliExpress Store – https://www.aliexpress.com/i/4000359673743.html
  2. Amazon.com – https://www.amazon.com/IO-CREST-Non-RAID-Bifurcation-Controller/dp/B083GLR3WL/
  3. Syba USA – Through their network of resellers or distributors at https://www.sybausa.com/index.php?route=information/wheretobuy

Note that Syba USA is selling the IO-PEX40152 as the SI-PEX40152. The card I actually received has branding that identifies it both as an IO-PEX40152 and an SI-PEX40152.

As I mentioned above, I purchased it from the IOCREST AliExpress Online Store for around $299.00USD. From Amazon, the card was around $317.65USD.

IO-PEX40152 Specifications

Now let’s talk about the technical specifications of the card.

Picture of the IOCREST IO-PEX40152 Side Shot with cover on
IO-PEX40152 Side Shot

According to the packaging, the IO-PEX40152 features the following:

  • Installation in a PCIe x16 slot
  • Supports PCIe 3.1, 3.0, 2.0
  • Compliant with PCI Express M.2 specification 1.0, 1.2
  • Supports data transfer up to 2.5Gb (250MB/sec), 5Gb (500MB/sec), 8Gb (1GB/sec)
  • Supports 2230, 2242, 2260, 2280 size NGFF SSD
  • Supports four (4) NGFF M.2 M Key sockets
  • 4 screw holes 2230/2242/2260/2280 available to fix NGFF SSD card
  • 4 screw holes available to fix PCB board to heatsink
  • Supports Windows 10 (and 7, 8, 8.1)
  • Supports Windows Server 2019 (and 2008, 2012, 2016)
  • Supports Linux (Kernel version 4.6.4 or above)

While these features and specs are listed on the website and packaging, I’m not sure how accurate some of these statements are (in a good way); I’ll cover that more later in the post.

What’s included in the packaging?

  • 1 x IO-PEX40152 PCIe x16 to 4 x M.2 (M-Key) card
  • 1 x User Manual
  • 1 x M.2 Mounting material
  • 1 x Screwdriver
  • 5 x self-adhesive thermal pad

They also note that contents may vary depending on country and market.

Unboxing, Installation, and Configuration

As mentioned above, our build includes:

  • 1 x IOCREST IO-PEX40152
  • 4 x Sabrent Rocket 4 PCIe NVMe SSD
Picture of IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD

You’ll notice it’s a very sleek looking card. The heatsink is beefy, heavy, and very metal (duh)! The card is printed on a nice black PCB.

Removing the 4 screws to release the heatsink, we see the card and its thermal pads. You’ll notice the PCIe switch chip.

Picture of the front side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Frontside of card

And the backside of the card.

Picture of the back side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Backside of card

NVMe Installation

I started installing the Sabrent Rocket 4 NVMe 2TB SSDs.

Picture of a IO-PEX40152 with 2 SSD populated
IO-PEX40152 with 2 SSD populated
Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

That’s a good looking 8TB of NVMe SSD!

Note that the SSDs will wiggle side to side and have some play until the screw is tightened. Do not over-tighten the screw!

Before installing the heatsink cover, make sure you remove the blue plastic film from the thermal pads between the NVMe SSDs and the heatsink, and between the PEX chip and the heatsink.

After that, I installed it in the server and was ready to go!

Heatsink and cooling

A quick note on the heatsink and cooling…

While the heatsink and cooling solution it comes with works amazingly well, you have the flexibility, if need be, to run and operate the card without the heatsink and fan (the fan doesn’t cause any warnings if disconnected). This works out great if you want to use your own cooling solution, or need to use this card in a system where there isn’t much space. The fan can be removed by removing the screws and disconnecting the power connector.

Note that after installing the NVMe SSDs and affixing the heatsink, the heatsink will get stuck to the SSDs if you try to remove it at a later date. If you do need to remove the heatsink, be very patient and careful, and remove it slowly to avoid damaging or cracking the NVMe SSDs or the PCIe card itself.

Speedtest and benchmark

Let’s get to one of the best parts of this review, using the card!

Unfortunately, due to circumstances I won’t get into, I only had access to a rack server to test the card. The server was running VMware vSphere and ESXi 6.5 U3.

After shutting down the server, installing the card, and powering it back on, the NVMe SSDs appeared as available for PCI passthrough to the VMs. I enabled passthrough and restarted again, then added the 4 individual NVMe drives as PCI passthrough devices to the VM.
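
If you want to confirm from the ESXi shell that all four NVMe controllers behind the card’s PCIe switch are visible, something like this works (a sketch; the passthrough toggle itself was done in the vSphere UI):

    # List PCI devices and filter for the NVMe controllers presented behind the card's switch
    esxcli hardware pci list | grep -i -B 2 -A 6 nvme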

Picture of IOCREST IO-PEX40152 passthrough with NVMe to VMware guest VM
IO-PEX40152 PCI Passthrough on VMware vSphere and ESXi

Turning on the system, we are presented with the NVMe drives inside of the “Device Manager” on Windows Server 2016.

A screenshot of an IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016
IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016

Now that was easy! Everything’s working perfectly…

Now we need to go into the disk manager and create some volumes for quick speed tests and benchmarks.

A screenshot of Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

Again, no problems and very quick!

Let’s load up CrystalDiskMark and test the speed and IOPS!

Screenshot of CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD for speed
CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Screenshot of CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

What’s interesting is that I was able to achieve much higher speeds using this card in an older system than by directly installing one of the SSDs in a new HP Z240 workstation. Unfortunately, due to CPU limitations (the CPU was maxed out) on the server used above, I could not fully test, max out, or benchmark the IOPS of an individual SSD.

Additional Notes on the IO-PEX40152

Some additional notes I have on the IO-PEX40152:

The card works perfectly with VMware ESXi PCI passthrough when passing it through to a virtualized VM.

According to the specifications, the card supports data transfer up to 1GB/sec, however I achieved over 3GB/sec using the Sabrent Rocket 4 NVMe SSDs.

While the specifications and features state it supports NVMe spec 1.0 and 1.1, I don’t see why it wouldn’t support the newer specifications, as it’s simply a PCIe switch with NVMe slots.

Conclusion

This is a fantastic card that you can use reliably if you have a system with a free x16 slot. Because it has a built-in PCIe switch and doesn’t require PCIe bifurcation, you can use it confidently knowing it will work.

I’m looking forward to buying a couple more of these for some special applications and projects I have lined up, stay tuned for those!

May 22, 2020
 
A Picture of the 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Today we’re going to be talking about Sabrent’s newest line of NVMe storage products, particularly the 2TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive or the Sabrent Rocket 4 2TB NVMe stick as I like to call it.

Last week I purchased a quantity of 4 of these for a total of 8TB of NVMe storage to use on an IOCrest IO-PEX40152 Quad NVMe PCIe Card. For the purpose of this review, we’re benchmarking one inside of an HP Z240 Workstation.

While these are targeted for users with a PCIe 4.0 interface, I’ll be using these on PCIe 3 as it’s backwards compatible. I purchased the PCIe 4 option to make sure the investment was future-proofed.

Keep reading for a bunch of pictures, specs, speed tests, benchmarks, information, and more!

A picture of 4 unopened boxes of Sabrent Rocket 4 2TB NVMe sticks
4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Let’s get started with the review!

How and Why I purchased these

I’ve been working on a few special top-secret projects for the blog and YouTube channel, and needed some cost-effective yet high performing NVMe storage.

I needed at least 8TB of NVMe flash and I’m sure as all of you are aware, NVMe isn’t cheap.

After around a month of research I finally decided to pull the trigger and purchase a quantity of 4 x Sabrent Rocket 4 NVMe 2TB SSD. For future projects I’ll be using these in an IOCREST IO-PEX40152 NVME PCIe card.

These NVMe SSDs are targeted for consumers (normal users, gamers, power users, and IT professionals) and are a great fit! Just remember these do not have PLP (power loss protection), which is a feature that isn’t normally found in consumer SSDs.

Specifications

See below for the specifications and features included with the Sabrent Rocket 4 2TB NVMe SSD.

Hardware Specs:

  • Toshiba BiCS4 96L TLC NAND Flash Memory
  • Phison PS5016-E16 PCIe 4.0 x4 NVMe 1.3 SSD Controller
  • Kioxia 3D TLC NAND
  • M.2 2280 Form Factor
  • PCIe 4.0 Speeds
    • Read Speed of 5000MB/sec
    • Write Speed of 4400MB/sec
  • PCIe 3.0 Speeds
    • Read Speed of 3400MB/sec
    • Write Speed of 2750MB/sec
  • 750,000 IOPS on 2TB Model
  • Endurance: 3,600TBW for 2TB, 1,800TBW for 1TB, 850TBW for 500GB
  • Available in 500GB, 1TB, 2TB
  • Made in Taiwan

Features:

  • NVMe M.2 2280 Interface for PCIe 4.0 (NVMe 1.3 Compliant)
  • APST, ASPM, L1.2 Power Management Support
  • Includes SMART and TRIM Support
  • ONFi 2.3, ONFi 3.0, ONFi 3.2 and ONFi 4.0 interface
  • Includes Advanced Wear Leveling, Bad Block Management, Error Correction Code, and Over-Provision
  • User Upgradeable Firmware
  • Software Tool to change Block Size

Where and how to buy

One of the perks of owning an IT company is that you can typically purchase all of your internal-use product at cost or at a discount; unfortunately, this was not the case here.

I was unable to find the Sabrent products through any of the standard distribution channels and had to purchase through Amazon. This is unfortunate because I wouldn’t mind being able to sell these units to customers.

Amazon Purchase Links (2TB Model)

The PART#s are as follows for the different sizes:

Product                                                              NVMe Disk Size   PART# (No Heatsink)    PART# (Heatsink)
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive     2TB              SB-ROCKET-NVMe4-2TB    SB-ROCKET-NVMe4-HTSK-2TB
1TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive     1TB              SB-ROCKET-NVMe4-1TB    SB-ROCKET-NVMe4-HTSK-1TB
500GB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive   500GB            SB-ROCKET-NVMe4-500    SB-ROCKET-NVMe4-HTSK-500
Sabrent Rocket 4 Part Number Lookup Table

Cost

At the time of writing this post, purchasing the 2TB model from Amazon Canada would set you back $699.99CAD for a single unit; however, there was a sale going on for $529.99CAD per unit.

Additionally, at the time of creation of this post the 2TB model on Amazon USA would set you back $399.98 USD.

A total quantity of 4 set me back around $2,119.96CAD on sale versus $2,799.96 at regular price.

If you’re familiar with NVMe pricing, you’ll notice that this pricing is extremely attractive when comparing to other high performance NVMe SSDs.

Unboxing

I have to say I was very impressed with the packaging! Small, sleek, and impressive!

A picture of Sabrent Rocket 4 2TB NVMe sticks metal case packaging
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive Metal Case Packaging

Initially I was surprised how small the boxes were as they fit in the palm of your hand, but then you realize how small the NVMe sticks are, so it makes sense.

Opening the box you are presented with a beautiful metal case containing the instructions, information on the product warranty, and more.

Picture of a Sabrent Rocket 4 case opened
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive in case

And the NVMe stick removed from its case:

Picture of a Sabrent Rocket 4 case opened and the NVME SSD removed
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive removed from case

While some of the packaging may be unnecessary, after further thought I realized it’s great to have, as you can re-use the packaging to keep the NVMe drives safe and/or to put them into storage.

And here’s a beautiful shot of 8TB of NVMe storage.

A picture of 8TB of total storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive
8TB Total NVMe storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Now let’s move on to usage!

Installation, Setup, and Configuration

Setting one of these up in my HP Workstation was super easy. You simply populate the NVMe M.2 slot, install the screw, and boot up the system.

Picture of a Sabrent Rocket PCIe4 NVMe 2TB SSD Installed in computer
Sabrent Rocket 4 NVMe 2TB SSD in HP Z240 SFF Workstation

Upon booting, the Sabrent SSD was available inside of the Device Manager. I read on their website that they had some utilities so I wanted to see what configuration options I had access to before moving on to speed test benchmarks.

All Sabrent Rocket utilities can be downloaded from their website at https://www.sabrent.com/downloads/.

Sabrent Sector Size Converter

The Sabrent Sector Size Converter utility allows you to configure the sector size of your Sabrent Rocket SSD. Out of the box, I noticed mine was configured with a 512e sector format, which I promptly changed to 4K.

Screenshot using the Sabrent Sector Size Converter to change SSD from 512e to 4K Sector Size
Sabrent Sector Size Converter v1.0

The change was easy, required a restart and I was good to go! You’ll notice it has a drop down to select which drive you want to modify, which is great if you have more than one SSD in your system.

I did notice one issue… When you have multiple (in my case 4) of these in one system, for some reason the sector size change utility had trouble changing one of them from 512e to 4K. It would appear to be successful, but the drive would stay at 512e after a reboot. Ultimately I removed all of the NVMe sticks with the exception of the problematic one, ran the utility, and the issue was resolved.

Sabrent Rocket Control Panel

Another useful utility that was available for download is the Sabrent Rocket Control Panel.

Screenshot of the Sabrent Rocket Control Panel
Sabrent Rocket Control Panel

The Sabrent Rocket Control Panel provides the following information:

  • Drive Size, Sector Size, Partition Count
  • Serial Number and Drive identifier
  • Feature status (TRIM Support, SMART, Product Name)
  • Drive Temperature
  • Drive Health (Lifespan)

You can also use this app to view S.M.A.R.T. information, flash updated Sabrent firmware, and more!

Now that we have this all configured, let’s move on to testing this SSD out!

Speed Tests and Benchmarks

The system we used to benchmark the Sabrent Rocket 4 2TB NVMe SSD is an HP Z240 SFF (Small Form Factor) workstation.

The specs of the Z240 Workstation:

  • Intel Xeon E3-1240 v5 @ 3.5Ghz
  • 16GB of RAM
  • Samsung EVO 500GB as OS Drive
  • Sabrent Rocket 4 NVMe 2TB SSD as Test Drive

I ran a few tests using both CrystalDiskMark and ATTO Disk Benchmark, and the NVMe SSD performed flawlessly at extreme speeds!

CrystalDiskMark Results

Loading up and benching with CrystalDiskMark, we see the following results:

Screenshot of speedtest and benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark Results

As you can see, the Sabrent Rocket 4 2TB NVMe tested at a read speed of 3175.63MB/sec and write speed of 3019.17MB/sec.

Screenshot of IOPS benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark IOPS Results

Using the Peak Performance profile, we see some amazing IO, with 613,171.14 IOPS read and 521,861.33 IOPS write at RND4K.

While we’re only testing with a PCIe 3.0 system, these numbers are still amazing and in line with what’s advertised.

ATTO Disk Benchmark Results

Switching over to ATTO Disk Benchmark, we test both speed and IOPS.

First, the speed benchmarks with I/O sizes from 4K to 12MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 4K to 12MB

After taking a short cooldown break (we don’t have a heatsink installed), we tested 12MB to 64MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 12MB to 64MB

And now we move on to analyze the IOPS.

First from 4K to 12MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 4K to 12MB

And then after a short break, 12MB to 64MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 12MB to 64MB

Those numbers are insane!

Additional Notes

When you purchase a new Sabrent Rocket 4 SSD it comes with a 1 year standard warranty, however if you register your product within 90 days of purchase, you can extend it to an awesome 5 year warranty.

To register your product, visit https://www.sabrent.com/product-registration/

The process is easy if you have one device; however, it’s very repetitive and takes time if you have multiple, as the steps have to be repeated for each device you have. Sabrent, if you’re listening, a batch registration tool would be nice! 🙂

Remember that after registering your product, you should record your “Registration Unique ID” for future reference and use.

Conclusion

All-in-all I’d definitely recommend the Sabrent Rocket 4 NVMe SSD! It provides extreme performance, is extremely cost-effective, and I don’t see any reason not to buy them.

Just remember that these SSDs (like all consumer SSDs) do not provide power loss protection, meaning you should not use these in enterprise environments (or in a NAS or SAN).

I’m really looking forward to using these in my upcoming blog and YouTube projects.