Jun 17 2020
 

One thing I love doing is mixing technology with sport.

In my free time I’m often hiking, cycling, running, or working out. I regularly use technology to supplement and track my activities. It helps to record, remember, track, and compete with myself.

I use a combination of hardware and software to do so, including watches, phones, and various apps, but today I wanted to put the emphasis on the Snapchat Spectacles.

The Snapchat Spectacles

Picture of Snapchat Spectacles and Charging Case
Snapchat Spectacles

I’ve had a pair of the 1st generation Snapchat Spectacles since they were released (I had to use my US shipping address to bring them over to Canada). Over the years I’ve used them to collect videos and haven’t really done much with them, with the exception of sending snaps to friends.

Thankfully I save everything I record, and over the past year, as I've taken up video editing as a new hobby, I've been able to use some of the old footage to generate some AMAZING videos!

See below for a video I put together of 3 beautiful mountain summits I hiked in one month, first person from the Snapchat Spectacles.

Snapchat Spectacles: 3 Mountain Summits in 31 Days

If you keep reading through to the end of the post there’s another video.

First person view

As you can see, even the first version of the Snapchat Spectacles generates some beautiful HD video, providing a first person view of the wearer's field of vision.

You might say it’s similar to wearing a GoPro, but what I like about the Spectacles is that the camera is mounted beside your eyes, which makes the video capture that much more personal.

My wishlist

What I’d really like is the ability to continuously record HD video non-stop and even possibly record to my mobile device. Even if this couldn’t be accomplished wirelessly and required a wire to my mobile device, I would still be using it all the time.

Another thing that would be nice would be more size options, as the first generation are way too small for my head, LOL! 🙂

Conclusion

Tech is awesome, and I love using tech like this to share personal experiences!

Snapchat Spectacles: Hiking Grotto Mountain September 2017

Snapchat, if you’re listening, I’d love to help with the design of future versions of the Snapchat Spectacles…

Jun 15 2020
 

We all love speed, whether it’s our internet connection or our home network. And as our internet speeds approach gigabits per second, it’s about time our networks hit 10Gb per second…

High speed networking, particularly 10Gig networking, is becoming more cost-effective day by day, and with vendors releasing affordable switches, there hasn't been a better time to upgrade.

Today we’re going 10Gig with the Ubiquiti UniFi US-16-XG switch.

Picture of a Ubiquiti UniFi US-16-XG Switch retail box
Ubiquiti UniFi US-16-XG Switch Box shot

I’ll be discussing my configuration and setup, why you should use this switch for your homelab and/or business, as well as providing a review on the equipment.

Make sure you check out the video below and read the entire post!

Going 10Gig with the Ubiquiti UniFi US-16-XG Network Switch

Let’s get to it!

The back story

Just like the backstory with my original Ubiquiti UniFi Review, I wanted to optimize my network, increase speeds, and remove bottlenecks.

Most of my servers have 10Gig network adapters (through 10GbaseT or SFP+ ports), and I wanted to upgrade my other servers. I always wanted the ability to add more uplinks to allow a single host/server to have redundant connections to my network.

Up until now, I had 2 hosts connected to my Ubiquiti UniFi US-48 switch via its SFP+ ports using SFP+ to 10GbaseT modules. Using both of the 10Gig ports meant no more 10Gig devices could be connected. Also, the converter modules add latency.

The goal

Ultimately I wanted to implement a solution that included a new 10Gb network switch acting as a backbone for the network, with connections to my servers, storage, 10Gig devices, and secondary 1Gb switches.

While not needed, it would be nice to have access to both SFP+ connections, as well as 10GbaseT as I have devices that use both.

At the same time, I wanted something that would be easy to manage, affordable, and compatible with equipment from other vendors.

Picture of Ubiquiti UniFi 16 XG Switch with UDC-1 DAC SFP+ Cables
Ubiquiti UniFi 16 XG Switch with UDC-1 DAC SFP+ Cables

I chose the Ubiquiti UniFi US-16-XG Switch for the task, along with an assortment of cables.

Ubiquiti UniFi US-16-XG Switch

Having already been extremely pleased with the Ubiquiti UniFi product line, I was happy to purchase a unit for internal use, as my company sells Ubiquiti products.

Receiving the product, I was very impressed with the packaging and shipping.

Ubiquiti UniFi US-16-XG Switch Unboxing
Ubiquiti UniFi US-16-XG Switch Unboxing
Picture of Ubiquiti UniFi US-16-XG Switch Unboxing and Package contents
Ubiquiti UniFi US-16-XG Switch Unboxing and Package contents

And here I present the Ubiquiti UniFi 16 XG Switch…

Picture of Ubiquiti UniFi US-16-XG Switch
Ubiquiti UniFi US-16-XG Switch

You’ll notice the trademark UniFi product design. On the front, the UniFi 16 XG switch has 12 x 10Gb SFP+ ports, along with 4 x 10GbaseT ports. All ports can be used at the same time as none are shared.

Picture of the backside of a Ubiquiti UniFi US-16-XG Switch
Ubiquiti UniFi US-16-XG Switch Backside

The backside of the switch has a console port, along with 2 fans, a DC power input, and the AC power input.

Overall, it’s a good looking unit. It has even better looking specs…

Specs

The UniFi 16 XG switch specifications:

  • 12 x 10Gb SFP+ Ports
  • 4 x 10GbaseT Ports
  • 160 Gbps Total Non-Blocking Line Rate
  • 1U Form Factor
  • Layer 2 Switching
  • Fully Managed via UniFi Controller

The SFP+ ports allow you to use a DAC (Direct Attach Cable) for connectivity, or fiber modules. You can also populate them with converters, such as the Ubiquiti 10GBASE-T SFP+ CopperModule.

Picture of the Ubiquiti UniFi 16 XG Switch Ports
Ubiquiti UniFi 16 XG Switch Ports

You can also attach 4 devices to the 10GbaseT ports.

UDC-3 “FiberCable” DAC

I also purchased 2 x Ubiquiti UDC-3 SFP+ DAC cables. These cables provide connectivity between 2 devices with SFP+ ports. They can be purchased in lengths of 1, 2, and 3 meters, with part numbers UDC-1, UDC-2, and UDC-3 respectively.

10Gtek Cable DAC

To test compatibility and have cables from other vendors (in case of any future issues), I also purchased an assortment of 10Gtek SFP+ DAC cables. I specifically chose these as I wanted to have a couple of half meter cables to connect the switches with an aggregated LAG.

UniFi Controller Adoption

To get up and running quickly, I set the US-16-XG up on my workbench, plugged a network cable into one of the 10GbaseT ports, and powered it on.

Picture of Ubiquiti US-16-XG Initial Provisioning on workbench
Ubiquiti US-16-XG Initial Provisioning

Boot-up was quick and it appeared in the UniFi Controller immediately. It required a firmware update before it could be adopted by the controller.

Screenshot of UniFi US-16-XG UniFi Controller Pre-adoption
UniFi US-16-XG UniFi Controller Pre-adoption

After a quick firmware update, I was able to adopt and configure the switch.

Screenshot of UniFi US-16-XG UniFi Configured
UniFi US-16-XG UniFi Configured

The device had a “Test date” of March 2020 on the box, and the UniFi controller reported it as a hardware revision 13.

Configuration and Setup

Implementation, configuration, and setup will be an ongoing process over the next few weeks as I add more storage, servers, and devices to the switch.

The main priority was to test cable compatibility, connect the US-16-XG to my US-48, test throughput, and put my servers directly on the new switch.

I decided to just go ahead and start hooking it up, live, without shutting anything down. I performed the following:

  1. Put the US-16-XG on top of the US-48
  2. Disconnect servers from SFP+ CopperModules on US-48 switch
  3. Plug servers in to 10GbaseT ports on US-16-XG
  4. Remove SFP+ to 10GbaseT CopperModule from US-48 SFP+ ports
  5. Connect both switches with a SFP+ DAC cable

Picture of US-16-XG and US-48 Connected and Configured
US-16-XG and US-48 Connected and Configured

Performing these steps only took a few seconds and everything was up and running. One particular thing I’d like to note is that the port auto-negotiation time on the US-16-XG was extremely quick.

Taking a look at the UniFi Controller view of the US-16-XG, we see the following.

Screenshot of US-16-XG Configured and Online with UniFi Controller
US-16-XG Configured and Online with UniFi Controller

Everything is looking good! Ports auto-detected the correct speed, traffic was being passed, and all was working as expected.

After running like this for a few days, I went ahead and tested the 10Gtek cables which worked perfectly.

To increase redundancy and throughput, I used 2 x 0.5-Meter 10Gtek SFP+ DAC cables and configured an aggregated LAG between the two switches which has also been working perfectly!
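One caveat worth knowing about LAGs: the switch hashes each flow to a single member link, so a single stream still tops out at 10Gig; the aggregate only climbs across multiple flows. A toy illustration of the idea (the CRC32 hash and the addresses below are made up for demonstration and are not UniFi's actual hashing scheme):

```python
import zlib

# Each flow deterministically maps to one LAG member link.
def member_link(src: str, dst: str, links: int = 2) -> int:
    return zlib.crc32(f"{src}->{dst}".encode()) % links

flows = [("10.0.0.5", "10.0.0.10"),
         ("10.0.0.6", "10.0.0.10"),
         ("10.0.0.7", "10.0.0.11")]
for src, dst in flows:
    print(f"{src} -> {dst}: link {member_link(src, dst)}")
```

Because the mapping is deterministic per flow, packets within a stream stay in order, which is why LAG load-balancing works at the flow level rather than per packet.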

In the coming weeks I will be connecting more servers as well as my SAN, so keep checking back for updated posts.

Overall Review

This is a great switch at an amazing price-point to take your business network or homelab network to 10Gig speeds. I highly recommend it!

Use Cases:

  • Small network 10Gig switch
  • 10Gig backbone for numerous other switches
  • SAN switch for small SAN network

What I liked the most:

  • 10Gig speeds
  • Easy setup as always with all the UniFi equipment
  • Beautiful management interface via the UniFi Controller
  • Near silent running
  • Ability to use both SFP+ and 10GbaseT
  • Compatibility with SFP+ DAC Cables

What could be improved:

  • Redundant power supplies
  • Option for more ports
  • Bug with mobile app showing 10Mbps manual speed for 10Gig ports

Additional Resources and Blog Posts:

Manufacturer Product Links

Jun 07 2020
 

This month on June 23rd, HPE is hosting their annual HPE Discover event. This year is a little bit different as COVID-19 has resulted in a change of the usual in-person event, and this year’s event is now being hosted as a virtual experience.

I expect it’ll be the same great content as they have every year; the only difference is you’ll be able to virtually experience it from the comfort of your own home.

I’m especially excited to say that I’ve been invited to be a special VIP Influencer for the event, so I’ll be posting some content on Twitter, LinkedIn, and of course generating some posts on my blog.

HPE Discover Register Now Graphic
Register for HPE Discover Now

Stay tuned, and don’t forget to register at: https://www.hpe.com/us/en/discover.html

The content catalog is now live!

Jun 06 2020
 
Screenshot of NVMe SSD on FreeNAS

Looking at using SSD and NVMe with your FreeNAS or TrueNAS setup and ZFS? There are considerations and optimizations that must be factored in to make sure you’re not wasting all that sweet performance. In this post I’ll be providing you with my own FreeNAS and TrueNAS ZFS optimizations for SSD and NVMe.

This post will contain observations and tweaks I’ve discovered during testing and production of a FreeNAS ZFS pool sitting on NVMe vdevs, which I have since upgraded to TrueNAS Core. I will update it with more information as I use and test the array more.

Screenshot of FreeNAS ZFS NVMe SSD Pool with multiple datasets
FreeNAS/TrueNAS ZFS NVMe SSD Pool with multiple datasets

Considerations

It’s important to note that while your SSD and/or NVMe ZFS pool technically could reach insane speeds, you will probably always be limited by the network access speeds.

With this in mind, to optimize your ZFS SSD and/or NVMe pool, you may be trading off features and functionality to max out your drives. These optimizations may in fact be wasted if you reach the network speed bottleneck.

Some of the features you may be giving up actually help extend the life and endurance of your SSDs, such as compression and deduplication, as they reduce the number of writes performed on each of your vdevs (drives).

You may wish to skip these optimizations if your network is the limiting factor, which will allow you to utilize these features with minimal or no performance degradation for the end client. You should measure your network throughput to establish the baseline of your network bottleneck.
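To make that concrete, here’s a back-of-the-envelope sketch of the baseline math (the NVMe figure is an illustrative consumer-drive spec, and the 95% efficiency factor is a rough allowance for protocol overhead):

```python
# Quick check: is the network or the NVMe pool the likely bottleneck?

def link_throughput_mb_s(link_gbps: float, efficiency: float = 0.95) -> float:
    """Usable throughput of a network link in MB/s (decimal megabytes),
    after a rough protocol-overhead efficiency factor."""
    return link_gbps * 1000 / 8 * efficiency

nvme_read_mb_s = 3000                     # one consumer NVMe drive, seq. read
network_mb_s = link_throughput_mb_s(10)   # 10Gb networking

print(f"10GbE usable: ~{network_mb_s:.0f} MB/s")
print("Bottleneck:", "network" if network_mb_s < nvme_read_mb_s else "pool")
```

With even a single modern NVMe drive outrunning a 10Gb link, the network is usually where the ceiling sits.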

Deploying SSD and NVMe with FreeNAS or TrueNAS

For reference, the environment I deployed FreeNAS with NVMe SSD consists of:

As mentioned above, FreeNAS is virtualized on one of the HPE DL360 Proliant servers and has 8 CPUs and 32GB of RAM. The NVMe drives are provided by VMware ESXi as PCI passthrough devices. There have been no issues with stability in the months I’ve had this solution deployed. It is also still working amazingly since upgrading FreeNAS to TrueNAS Core.

Screenshot of Sabrent Rocket 4 2TB NVMe SSD on FreeNAS
Sabrent Rocket 4 2TB NVMe SSD on FreeNAS

Important notes:

  • VMXNET3 NIC is used on VMs to achieve 10Gb networking
  • Using PCI passthrough, snapshots on FreeNAS VM are disabled (this is fine)
  • NFS VM datastore is used for testing, as the host running the FreeNAS VM has the NFS datastore mounted on itself.

There are a number of considerations that must be factored in when virtualizing FreeNAS and TrueNAS; however, those are beyond the scope of this blog post. I will be creating a separate post on this in the future.

Use Case (Fast and Risky or Slow and Secure)

The use case of your setup will dictate which optimizations you can use, as some of the optimizations in this post increase the risk of data loss (such as disabling sync writes or forgoing RAIDz redundancy).

Fast and Risky

Since SSDs are more reliable and less likely to fail, if you’re using the SSD storage as temporary hot storage, you could simply stripe across multiple vdevs (devices). If a failure occurred, the data would be lost; however, if you were just using this for “staging” or hot data and the risk was acceptable, this is an option to drastically increase speeds.

Example use case for fast and risky

  • VDI Pool for clones
  • VMs that can be restored easily from snapshots
  • Video Editing
  • Temporary high speed data dump storage

The risk can be lowered by replicating the pool or dataset to slower storage on a frequent or regular basis.

Slow and Secure

Using RAIDz-1 or higher will tolerate vdev (drive) failures, but with each level increase, performance is lost to parity calculations.
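As a rough sketch of the capacity/redundancy trade-off (the 4 x 2TB vdev below is a hypothetical example, and ZFS metadata and padding overhead are ignored):

```python
# Usable capacity of a single vdev at different parity levels.

def usable_tb(drives: int, drive_tb: float, parity: int) -> float:
    """parity=0 is a plain stripe, 1 is RAIDz1, 2 is RAIDz2, and so on."""
    return (drives - parity) * drive_tb

for parity, name in [(0, "stripe"), (1, "RAIDz1"), (2, "RAIDz2")]:
    print(f"{name}: {usable_tb(4, 2.0, parity):.0f} TB usable, "
          f"survives {parity} drive failure(s)")
```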

Example use case for slow and secure

  • Regular storage for all VMs
  • Database (SQL)
  • Exchange
  • Main storage

Slow and Secure storage is the type of storage found in most applications used for SAN or NAS storage.

SSD Endurance and Lifetime

Solid state drives have a lifetime that’s typically measured in lifetime writes. If you’re storing sensitive data, you should plan ahead to mitigate the risk of failure when a drive reaches its full lifetime.

Steps to mitigate failures

  • Before putting the stripe or RAIDz pool into production, perform some large bogus writes and stagger the amount of data written to each SSD individually. While this will reduce the life counter on the SSDs, it’ll help you offset and stagger the lifetime of each drive so they don’t die at the same time.
  • If using RAIDz-1 or higher, preemptively replace each SSD before its lifetime is hit. Do this well in advance and stagger it to further increase the difference between the lifetimes of the drives.

Decommissioning the drives preemptively and early doesn’t mean you have to throw them away; this is just to secure the data on the ZFS pool. You can continue to use these drives in other systems with non-critical data, and possibly use each drive well beyond its rated lifetime.
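The staggering idea above can be sketched with some simple arithmetic (the TBW rating and daily write volume below are made-up example numbers, not the spec of any particular drive):

```python
# Staggering wear so four striped/RAIDz drives don't all expire together.

def days_remaining(tbw_rating_tb: float, written_tb: float,
                   daily_writes_tb: float) -> float:
    """Days until the drive's rated lifetime writes are exhausted."""
    return (tbw_rating_tb - written_tb) / daily_writes_tb

# Give each drive a different "head start" of bogus pre-production writes.
pre_writes_tb = [0, 50, 100, 150]
for i, pre in enumerate(pre_writes_tb):
    print(f"drive {i}: ~{days_remaining(3600, pre, 1.0):.0f} days of life left")
```

With 50 TB of stagger between drives and 1 TB/day of writes, replacements land roughly seven weeks apart instead of all in the same week.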

Compression and Deduplication

Using compression and deduplication with ZFS is CPU intensive (and RAM intensive for deduplication).

The CPU usage is negligible when using these features on traditional magnetic storage (spinning platter hard drives), because with traditional hard drives, the drives themselves are the performance bottleneck.

SSDs are a totally different thing, NVMe especially. With storage speeds in the gigabytes per second, the CPU cannot keep up with deduplicating and compressing the data being written, and becomes the bottleneck.

I performed a simple test comparing speeds with compression and dedupe with the same VM running CrystalDiskMark on an NFS VMware datastore running over 10Gb networking. The VM was configured with a single drive on a VMware NVME controller.

NVMe SSD with compression and deduplication

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore with compression and deduplication

NVMe SSD with deduplication only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only
CrystalDiskMark on FreeNAS NFS SSD datastore with deduplication only

NVMe SSD with compression only

Screenshot of benchmark of CrystalDiskMark on FreeNAS NFS SSD datastore with compression only
CrystalDiskMark on FreeNAS NFS SSD datastore with compression only

Now this is really interesting: we actually see a massive speed increase with compression only. This is because I have a server-class CPU with multiple cores and a ton of RAM. With lower performing specs, you may notice a decrease in performance.

NVMe SSD without compression and deduplication

Screenshot of benchmark with CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication
CrystalDiskMark on FreeNAS NFS SSD datastore without compression and deduplication

In my case, the 10Gb networking was the bottleneck on read operations as there was virtually no change. It was a different story for write operations as you can see there is a drastic change in write speeds. Write speeds are greatly increased when writes aren’t being compressed or deduped.

Note that on faster networks, read speeds could and will be affected.

If your network connection to the client application is the limiting factor and the system can keep up with that bottleneck then you will be able to get away with using these features.

Higher throughput with compression and deduplication can be achieved with higher frequency CPUs (more GHz) and more cores (to serve more client connections). Remember that large amounts of RAM are required for deduplication.

Using compression and deduplication may also reduce the writes to your SSD vdevs, prolonging the lifetime and reducing the cost of maintaining the solution.
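A tiny demonstration of that write reduction (zlib stands in here for ZFS’s usual lz4, which isn’t in the Python standard library; the ratio is only meaningful for compressible data like logs or zero-padded VM disks):

```python
import zlib

# Compressible data shrinks dramatically before it ever hits the vdevs,
# which means fewer program/erase cycles on the SSDs.
data = b"repetitive log line, much like VM disk padding\n" * 10_000
compressed = zlib.compress(data, level=1)  # cheap, fast pass

print(f"logical bytes: {len(data)}")
print(f"written bytes: {len(compressed)}")
print(f"ratio: {len(data) / len(compressed):.1f}x")
```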

ZFS ZIL and SLOG

When it comes to writes on a filesystem, there are different kinds.

  • Synchronous – Writes that are only marked as completed and successful once the data has actually been written to the physical media.
  • Asynchronous – Writes that are marked as completed or successful before the data has actually been committed to the physical media.

The type of write performed can be requested by the application or service that’s performing the write, or it can be explicitly set on the file system itself. In FreeNAS (in our example) you can override this by setting the “sync” option on the zpool, dataset, or zvol.

Disabling sync will allow writes to be marked as completed before they actually are, essentially “caching” writes in a buffer in memory. See below for “RAM Caching and Sync Writes”. Setting this to “standard” will perform the type of write requested by the client, and setting it to “always” will result in all writes being synchronous.
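The distinction can be seen at the POSIX level too; this sketch shows the barrier a synchronous write imposes (it’s an analogy for ZFS’s sync behaviour, not ZFS code):

```python
import os
import tempfile

# An async write returns once the data is in OS buffers; a sync write
# isn't "done" until fsync() forces it onto the physical media.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)

os.write(fd, b"payload")   # buffered: acknowledged before it's on disk
os.fsync(fd)               # sync barrier: blocks until the media has it
os.close(fd)

with open(path, "rb") as f:
    print(f.read())        # b'payload'
```

Everything between the write and the fsync is exactly the window where a power loss can eat acknowledged-but-uncommitted data.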

We can speed up and assist writes by using a SLOG for ZIL.

ZIL stands for ZFS Intent Log, and SLOG stands for Separate Log, which is usually stored on a dedicated SLOG device.

By utilizing a SLOG for ZIL, you can have dedicated SSDs acting as your intent log for writes to the zpool. Writes that request a synchronous write will be marked as completed once they have been written to the ZIL on the SLOG device.

Implementing a SLOG that is slower than the combined speed of your ZFS pool will result in a performance loss. Your SLOG should be faster than the pool it’s acting as a ZIL for.

Implementing a SLOG that is faster than the combined speed of your ZFS pool will result in a performance gain on writes, as it essentially acts as a “write cache” for synchronous writes, and may even allow more orderly writes when they are committed to the actual vdevs in the pool.

If using a SLOG for ZIL, it is highly recommended to use SSDs that have PLP (power loss protection), in a mirrored set, to avoid data loss and/or corruption in the event of a power loss, crash, or freeze.
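Sizing a SLOG is easier than it sounds: it only needs to absorb a few transaction-group intervals of incoming synchronous writes (ZFS flushes txgs roughly every 5 seconds by default), so a common rule of thumb sizes it against the ingest rate rather than the pool. A rough sketch of that math:

```python
# SLOG capacity needed to buffer a couple of txg flush intervals.

def slog_size_gb(ingest_gbps: float, txg_seconds: float = 5,
                 intervals: int = 2) -> float:
    """Suggested minimum SLOG capacity in GB for a given ingest rate."""
    return ingest_gbps / 8 * txg_seconds * intervals

print(f"10Gb ingest -> ~{slog_size_gb(10):.1f} GB of SLOG is plenty")
```

This is why small, fast, high-endurance devices make better SLOGs than large ones; capacity beyond a few flush intervals goes unused.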

RAM Caching and Sync Writes

In the event you do not have a SLOG device to provide a ZIL for your zpool, and you have a substantial amount of memory, you can disable sync writes on the pool, which will drastically increase write performance as writes will be buffered in RAM.

Disabling sync on your zpool, dataset, or zvol will tell the client application that all writes have been completed and committed to disk (HD or SSD) before they actually have been. This allows the system to cache writes in system memory.

In the event of a power loss, crash, or freeze, this data will be lost and/or possibly result in corruption.

You would only want to do this if you had the need for fast storage where data loss is acceptable (such as video editing, a VDI clone desktop pool, etc.).

Utilizing a SLOG for ZIL is much better (and safer) than this method; however, I still wanted to cover it for informational purposes, as it does apply to some use cases.

SSD Sector Size

Traditional drives typically used 512-byte physical sectors. Newer hard drives and SSDs use 4K sectors, but often emulate 512-byte logical sectors (called 512e) for compatibility. SSDs specifically sometimes ship with 512e to increase compatibility with operating systems and to allow cloning your old drive to the new SSD during migrations.

When emulating 512-byte logical sectors on an HD or SSD that uses 4K native physical sectors, a 4K write results in 8 operations instead of 1. This increases overhead and can reduce IO and speed, as well as create more wear on the SSD when performing writes.
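The operation count falls straight out of the sector math (illustrative only; real firmware may coalesce aligned writes):

```python
# Logical-sector operations needed for a single write.

def ops_for_write(write_bytes: int, logical_sector: int) -> int:
    return -(-write_bytes // logical_sector)  # ceiling division

print(ops_for_write(4096, 512))   # 512e: 8 operations
print(ops_for_write(4096, 4096))  # 4Kn native: 1 operation
```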

Some HDs and SSDs come with utilities or tools to change the sector size of the drive. I highly recommend changing it to its native sector size.

iSCSI vs NFS

Technically, faster speeds should be possible using iSCSI instead of NFS; however, special care must be taken when using iSCSI.

If you’re using iSCSI and the host that is virtualizing the FreeNAS instance is also mounting the iSCSI VMFS target that it’s presenting, you must unmount this iSCSI volume every time you plan to shut down the FreeNAS instance, or the entire host that is hosting it. Unmounting the iSCSI datastore also means unregistering any VMs that reside on it.

Screenshot of VMware ESXi with FreeNAS NVMe SSD as NFS datastore
VMware ESXi with virtualized FreeNAS as NFS datastore

If you simply shut down the FreeNAS instance that’s hosting the iSCSI datastore, this will result in an unclean unmount of the VMFS volume and could lead to data loss, even if no VMs are running.

NFS provides a cleaner mechanism, as FreeNAS handles the unmount of the underlying filesystem cleanly on shutdown, and to the ESXi hosts it appears as an NFS disconnect. If VMs are not running (and no I/O is occurring) when the FreeNAS instance is shut down, data loss is not a concern.

Jumbo Frames

Since you’re pushing more data, more I/O, and at a faster pace, we need to optimize all layers of the solution as much as possible. To reduce overhead on the networking side of things, if possible, you should implement jumbo frames.

Instead of sending many smaller packets which independently require acknowledgement, you can send fewer, larger packets. This significantly reduces overhead and allows for faster speeds.
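The overhead savings are easy to quantify (assuming 40 bytes of IPv4 + TCP headers per packet, and ignoring the Ethernet frame header for simplicity):

```python
import math

# Packet count and header overhead for a 1 GiB transfer at each MTU.

def packets_and_overhead(payload_bytes: int, mtu: int, headers: int = 40):
    per_packet = mtu - headers                    # usable payload per packet
    packets = math.ceil(payload_bytes / per_packet)
    return packets, packets * headers             # total header bytes

for mtu in (1500, 9000):
    pkts, overhead = packets_and_overhead(1 << 30, mtu)
    print(f"MTU {mtu}: {pkts} packets, {overhead / 1e6:.1f} MB of headers")
```

Jumbo frames cut the packet count (and the per-packet processing on NICs and switches) by roughly a factor of six on this path.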

In my case, my FreeNAS instance will be providing both NAS and SAN services to the network, thus has 2 virtual NICs. On my internal LAN where it’s acting as a NAS (NIC 1), it will be using the default MTU of 1500 byte frames to make sure it can communicate with workstations that are accessing the shares. On my SAN network (NIC 2) where it will be acting as a SAN, it will have a configured MTU of 9000 byte frames. All other devices (SANs, client NICs, and iSCSI initiators) on the SAN network have a matching MTU of 9000.

Additional Notes

Please note that consumer SSDs usually do not have PLP (Power Loss Protection). This means that in the event of a power failure, any data sitting in the write cache on the SSD may be lost. This could put your data at risk. Using enterprise solid state drives remedies this issue, as they often come with PLP.

Conclusion

SSDs are great for storage, whether it be file, block, NFS, or iSCSI! In my opinion, NVMe and all-flash arrays are where the future of storage is going.

I hope this information helps, and if you feel I left anything out, or if anything needs to be corrected, please don’t hesitate to leave a comment!

Jun 01 2020
 

If you’re running a Sophos UTM firewall, you may start noticing websites not loading properly, or presenting an error reporting that a root CA has expired.

The error presented is below:

Sectigo COMODO CA Certificate Untrusted Website Certificate has expired

Webpages that do not present an error may fail to load, or only load partial parts of the page.

Update June 3rd 2020 – There are reports that this issue is also occurring with other vendors’ security solutions as well (such as Palo Alto Firewalls).

The Issue

This is due to some root CA (Certificate Authority) certificates expiring.

Particularly this involves the following Root CA Certificates:

  • AddTrust AB – AddTrust External CA Root
  • The USERTRUST Network – USERTrust RSA Certification Authority
  • The USERTRUST Network – USERTrust ECC Certification Authority

Read more about the particular issue here: https://support.sectigo.com/Com_KnowledgeDetailPage?Id=kA03l00000117LT
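If you want to confirm the timing yourself, the AddTrust External CA Root certificate’s published notAfter date can be parsed with Python’s standard library (the date string below is that certificate’s well-known expiry):

```python
import ssl
import time

# Parse the certificate's GMT "notAfter" string into an epoch timestamp.
expiry = ssl.cert_time_to_seconds("May 30 10:48:38 2020 GMT")
print(time.strftime("%Y-%m-%d %H:%M:%S UTC", time.gmtime(expiry)))
# -> 2020-05-30 10:48:38 UTC, which lines up with when pages started breaking
```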

The Fix

To resolve this, we must first disable 3 of the factory-shipped Root CAs (listed above) on the Sophos UTM, and then upload the new Root CAs.

You’ll need to go to the following links (as referenced on Sectigo’s page above), and download the new Root CAs:

USERTrust RSA Root CA (Updated) – https://crt.sh/?id=1199354

USERTrust ECC Root CA (Updated) – https://crt.sh/?id=2841410

When you go to each of the above pages, click on “Certificate” as shown below, to download the Root CA cert.

Download Updated Root CAs Example

Do this for both certificates and save to your system.

Now we must update and fix the Sophos UTM:

  1. Log on to your Sophos UTM Web Interface
  2. Navigate to “Web Protection”, then “Filtering Options”, then select the “HTTPS CAs” tab.
  3. Browse through the list of “Global Verification CAs” and disable the following certificates:
    1. AddTrust AB – AddTrust External CA Root
    2. The USERTRUST Network – USERTrust RSA Certification Authority
    3. The USERTRUST Network – USERTrust ECC Certification Authority
  4. Scroll up and under “Local Verification CAs”, use the “Upload local CA” to upload the 2 new certificates you just downloaded.
  5. Make sure they are enabled.

After you complete these steps, verify they are in the list.

New USERTrust Root CAs Enabled

After performing these steps you must restart the HTTPS Web filter Scanning services or restart your Sophos UTM.

The issue should now be resolved. Leave a comment and let me know if it worked for you!

May 26 2020
 

So you want to add NVMe storage capability to your HPE Proliant DL360p Gen8 (or other Proliant Gen8 server) and don’t know where to start? Well, I was in the same situation until recently. However, after much research, a little bit of spending, I now have 8TB of NVMe storage in my HPE DL360p Gen8 Server thanks to the IOCREST IO-PEX40152.

Unsupported, you say? Well, there are some of us who like to live life dangerously, and there are also those of us with really cool homelabs. I like to think I’m the latter.

PLEASE NOTE: This is not a supported configuration. You’re doing this at your own risk. Also, note that consumer/prosumer NVME SSDs do not have PLP (Power Loss Prevention) technology. You should always use supported configurations and enterprise grade NVME SSDs in production environments.

DISCLAIMER: If you attempt what I did in this post, you are doing it at your own risk. I won’t be held liable for any damages or issues.

Use Cases

There’s a number of reasons why you’d want to do this. Some of them include:

  • Server Storage
  • VMware Storage
  • VMware vSAN
  • Virtualized Storage (SDS as example)
  • VDI
  • Flash Cache
  • Special applications (database, high IO)

Adding NVMe capability

Well, after all that research I mentioned at the beginning of the post, I installed an IOCREST IO-PEX40152 inside of an HPE Proliant DL360p Gen8 to add NVMe capabilities to the server.

IOCREST IO-PEX40152 with 4 x 2TB Sabrent Rocket 4 NVME

At first I was concerned about dimensions; on paper the card fit, but only just. I bought it anyway, along with 4 x 2TB Sabrent Rocket 4 NVMe SSDs.

The end result?

Picture of an HPE DL360p Gen8 with NVME SSD
HPE DL360p Gen8 with NVME SSD

IMPORTANT: Due to the airflow of the server, I highly recommend disconnecting and removing the fan built in to the IO-PEX40152. The DL360p server will create more than enough airflow and could cause the fan to spin up, generate electricity, and damage the card and NVME SSD.

Also, do not attempt to install the case cover, additional modification is required (see below).

The Fit

Installing the card inside of the PCIe riser was easy, but snug. The metal heatsink actually comes into contact with the metal on the PCIe riser.

Picture of an IO-PEX40152 installed on DL360p PCIe Riser
IO-PEX40152 installed on DL360p PCIe Riser

You’ll notice how the card just barely fits inside of the 1U server. Some effort needs to be put in to get it installed properly.

Picture of an DL360p Gen8 1U Rack Server with IO-PEX40152 Installed
HPE DL360p Gen8 with IO-PEX40152 Installed

There are ribbon cables (and plastic fittings) directly where the end of the card goes, so you need to gently push these down and push cables to the side where there’s a small amount of thin room available.

We can’t put the case back on… Yet!

Unfortunately, just when I thought I was in the clear, I realized the case of the server cannot be installed. The metal bracket and locking mechanism on the case cover needs the space where a portion of the heatsink goes. Attempting to install this will cause it to hit the card.

Picture of the HPE DL360p Gen8 Case Locking Mechanism
HPE DL360p Gen8 Case Locking Mechanism

The above photo shows the locking mechanism protruding out of the case cover. This will hit the card (with the IOCREST IO-PEX40152 heatsink installed). If the heatsink is removed, the case might gently touch the card in its unlocked and recessed position, but from my measurements it clears the card when fully closed and locked.

I had to come up with a temporary fix while I figure out what to do. Flip the lid and weight it down.

Picture of an HPE DL360p Gen8 case cover upside down
HPE DL360p Gen8 case cover upside down

For stability and other tests, I simply put the case cover on upside down and weighed it down with weights. Cooling is working great, and even under high load I haven’t seen the SSDs go above 38 Celsius.

The plan moving forward was to remove the IO-PEX40152 heatsink, and install individual heatsinks on the NVME SSD as well as the PEX PCIe switch chip. This should clear up enough room for the case cover to be installed properly.

The fix

I went on to Amazon and purchased the following items:

4 x GLOTRENDS M.2 NVMe SSD Heatsink for 2280 M.2 SSD

1 x BNTECHGO 4 Pcs 40mm x 40mm x 11mm Black Aluminum Heat Sink Cooling Fin

They arrived within days with Amazon Prime. I started to install them.

Picture of Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Installing GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
Picture of IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME
IOCREST IO-PEX40152 with GLOTRENDS M.2 NVMe SSD Heatsink on Sabrent Rocket 4 NVME

Next, I installed the card in the DL360p Gen8 PCIe riser and slid it back into the server.

You’ll notice it’s a nice fit! I had to compress some of the thermally conductive goo on the PEX chip heatsink, as the heatsink was about 1/16 of an inch too tall. After doing this it fit nicely.

Also, note one of the cable/ribbon connectors by the SAS connections. I re-routed one of the cables between the SAS connectors so they fold and lie under the card instead of pushing straight up into the end of it.

As I mentioned above, the locking mechanism on the case cover may come into contact with the bottom of the IOCREST card when it’s in the unlocked and recessed position. With this setup, do not unlock or open the case while the server is running or plugged in, as it may short the board. I have confirmed that when the cover is closed and locked, it clears the card. To avoid accidents, I may come up with a non-conductive cover for the chips it could hit (to the left of the fan connector on the card in the image).

And with that, we’ve closed the case on this project…

Picture of a HPE DL360p Gen8 Case Closed
HPE DL360p Gen8 Case Closed

One interesting thing to note is that the NVMe SSDs are running around 4-6°C cooler post-modification with the custom heatsinks than with the stock heatsink. I believe this is due to the excellent airflow in the ProLiant DL360 servers.

Conclusion

I’ve been running this configuration for 6 days now, stress-testing along the way, and it’s been working great. With the server running VMware ESXi 6.5 U3, I’m able to pass through the individual NVMe SSDs to virtual machines. Best of all, installing this card did not cause the fans to spin up, which is often the case when using non-HPE PCIe cards.

This is the perfect mod to add NVMe storage to your server, or even to try out technology like VMware vSAN. I have a number of cool projects coming up using this that I’m excited to share.

May 25, 2020
 
Picture of an IOCREST IO-PEX40152 PCIe x16 to Quad M.2 NVMe

Looking to add quad (4) NVMe SSDs to your system but don’t have the M.2 slots or a motherboard that supports bifurcation? The IOCREST IO-PEX40152 quad NVMe PCIe card is the card for you!

The IO-PEX40152 PCIe card allows you to add 4 NVMe SSDs to a computer, workstation, or server that has an available PCIe x16 slot. The card has a built-in PEX PCIe switch chip, so your motherboard does not need to support bifurcation; it can essentially be installed and used in any system with a free x16 slot.

This card is also available under the PART# SI-PEX40152.

In this post I’ll review the IOCREST IO-PEX40152, with information on where to buy, benchmarks, installation, configuration, and more! I’ve also posted tons of pics for your viewing pleasure. I installed this card in an HPE DL360p Gen8 server to add NVMe capabilities.

We’ll be using and reviewing this card populated with 4 x Sabrent Rocket 4 PCIe NVMe SSDs; you can see my individual review of those SSDs here.

Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

Why and How I purchased the card

Originally I purchased this card for a couple of special and interesting projects I’m working on for the blog and my homelab. I needed a card that provided high-density NVMe flash storage but didn’t require bifurcation, as I planned to use it on a motherboard that didn’t support 4/4/4/4 bifurcation.

By choosing this specific card, I could also use it in any other system that had an available x16 PCIe slot.

I considered many other cards (such as some from SuperMicro and Intel), but in the end chose this one as it seemed most likely to work for my application. The SuperMicro and Intel options looked like they were designed to be used in their own systems.

I purchased the IO-PEX40152 from the IOCREST AliExpress store (after verifying it was their genuine online store), which had the most cost-effective price of the sources I found.

They shipped the card with FedEx International Priority, so I received it within a week. Super fast shipping and it was packed perfectly!

Picture of the IOCREST IO-PEX40152 box
IOCREST IO-PEX40152 Box

Where to buy the IO-PEX40152

I found 3 different sources to purchase the IO-PEX40152 from:

  1. IOCREST AliExpress Store – https://www.aliexpress.com/i/4000359673743.html
  2. Amazon.com – https://www.amazon.com/IO-CREST-Non-RAID-Bifurcation-Controller/dp/B083GLR3WL/
  3. Syba USA – Through their network of resellers or distributors at https://www.sybausa.com/index.php?route=information/wheretobuy

Note that Syba USA is selling the IO-PEX40152 as the SI-PEX40152. The card I actually received has branding that identifies it both as an IO-PEX40152 and an SI-PEX40152.

As I mentioned above, I purchased it from the IOCREST AliExpress store for around $299.00 USD. From Amazon, the card was around $317.65 USD.

IO-PEX40152 Specifications

Now let’s talk about the technical specifications of the card.

Picture of the IOCREST IO-PEX40152 Side Shot with cover on
IO-PEX40152 Side Shot

According to the packaging, the IO-PEX40152 features the following:

  • Installation in a PCIe x16 slot
  • Supports PCIe 3.1, 3.0, 2.0
  • Compliant with PCI Express M.2 specification 1.0, 1.2
  • Supports data transfer up to 2.5Gb (250MB/sec), 5Gb (500MB/sec), 8Gb (1GB/sec)
  • Supports 2230, 2242, 2260, 2280 size NGFF SSD
  • Supports four (4) NGFF M.2 M Key sockets
  • 4 screw holes 2230/2242/2260/2280 available to fix NGFF SSD card
  • 4 screw holes available to fix PCB board to heatsink
  • Supports Windows 10 (and 7, 8, 8.1)
  • Supports Windows Server 2019 (and 2008, 2012, 2016)
  • Supports Linux (Kernel version 4.6.4 or above)

While these features and specs are listed on the website and packaging, I’m not sure how accurate some of the statements are (in a good way); I’ll cover that later in the post.
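Those per-lane transfer figures match the standard PCIe rates, and they understate what the card delivers, since each M.2 socket gets four lanes. A quick back-of-the-envelope check (my own math from the published PCIe 3.0 rates, not a measurement from this card):

```shell
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b encoding.
per_lane_mb=$(awk 'BEGIN { printf "%.0f", 8000 / 8 * 128 / 130 }')  # ~985 MB/s per lane
per_drive_mb=$((per_lane_mb * 4))   # each M.2 socket gets 4 lanes
echo "Per-drive PCIe 3.0 x4 ceiling: ~${per_drive_mb} MB/s"
```

That roughly 3.9GB/sec per-socket ceiling explains why real-world numbers land well above the 1GB/sec single-lane figure on the box.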

What’s included in the packaging?

  • 1 x IO-PEX40152 PCIe x16 to 4 x M.2 (M-Key) card
  • 1 x User Manual
  • 1 x M.2 Mounting material
  • 1 x Screwdriver
  • 5 x self-adhesive thermal pad

They also note that contents may vary depending on country and market.

Unboxing, Installation, and Configuration

As mentioned above, our build includes:

  • 1 x IOCREST IO-PEX40152
  • 4 x Sabrent Rocket 4 PCIe NVMe SSDs
Picture of IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
IO-PEX40152 Unboxing with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD
Picture of IO-PEX40152 with 4 x Sabrent Rocket 4 NVMe 2TB SSD

You’ll notice it’s a very sleek looking card. The heatsink is beefy, heavy, and very metal (duh)! The card is printed on a nice black PCB.

Removing the 4 screws to release the heatsink, we see the card and thermal paste pads. You’ll notice the PCIe switch chip.

Picture of the front side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Frontside of card

And the backside of the card.

Picture of the back side of an IOCREST IO-PEX40152
IOCREST IO-PEX40152 Backside of card

NVMe Installation

I started installing the Sabrent Rocket 4 NVMe 2TB SSDs.

Picture of a IO-PEX40152 with 2 SSD populated
IO-PEX40152 with 2 SSD populated
Picture of an IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD
IOCREST IO-PEX40152 PCIe Card loaded with 4 x Sabrent Rocket 4 2TB NVMe SSD

That’s a good looking 8TB of NVMe SSD!

Note that the SSDs will wiggle side to side and have play until the screw is tightened. Do not over-tighten the screw!

Before installing the heatsink cover, make sure you remove the blue plastic film from the heat-transfer material between the NVMe SSDs and the heatsink, and between the PEX chip and the heatsink.

After that, I installed it in the server and was ready to go!

Heatsink and cooling

A quick note on the heatsink and cooling…

While the included heatsink and cooling solution works great, you have the flexibility to run and operate the card without the heatsink and fan if need be (the fan doesn’t cause any warnings if disconnected). This works out well if you want to use your own cooling solution, or need to use the card in a system without much space. The fan can be removed by removing the screws and disconnecting the power connector.

Note that once the NVMe SSDs are installed and the heatsink is affixed, the heatsink will stick to the SSDs if you try to remove it later. If you do need to remove the heatsink, be patient and careful, and separate it slowly to avoid damaging or cracking the NVMe SSDs and the PCIe card itself.

Speedtest and benchmark

Let’s get to one of the best parts of this review, using the card!

Unfortunately, due to circumstances I won’t get into, I only had access to a rack server to test the card. The server was running VMware vSphere with ESXi 6.5 U3.

After shutting down the server, installing the card, and powering back on, the NVMe SSDs appeared as available for PCI passthrough. I enabled passthrough and restarted, then added the 4 individual NVMe drives as PCI passthrough devices to a VM.
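For the curious, here’s a hedged sketch of how the controllers can be located from the ESXi host shell before enabling passthrough (standard esxcli commands; the actual passthrough toggle was done through the vSphere UI, and output varies by ESXi version):

```shell
# List PCI devices on the host and look for the four NVMe controllers
# sitting behind the card's PEX switch.
esxcli hardware pci list | grep -i -B 2 -A 8 nvme

# After enabling passthrough and rebooting, confirm which storage
# adapters the host still owns (passed-through devices won't be claimed).
esxcli storage core adapter list
```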

Picture of IOCREST IO-PEX40152 passthrough with NVMe to VMware guest VM
IO-PEX40152 PCI Passthrough on VMware vSphere and ESXi

Turning on the system, we are presented with the NVMe drives inside of the “Device Manager” on Windows Server 2016.

A screenshot of an IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016
IOCREST IO-PEX40152 presenting 4 Sabrent NVME to Windows Server 2016

Now that was easy! Everything’s working perfectly…

Now we need to go into Disk Manager and create some volumes for quick speed tests and benchmarks.

A screenshot of Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Windows Server 2016 Disk Manager with IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

Again, no problems and very quick!

Let’s load up CrystalDiskMark and test the speed and IOPS!

Screenshot of CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD for speed
CrystalDiskMark testing an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
Screenshot of CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD
CrystalDiskMark testing IOPS on an IOCREST IO-PEX40152 and Sabrent Rocket 4 NVME SSD

What’s interesting is that I was able to achieve much higher speeds using this card in an older system than by installing one of the SSDs directly in a newer HP Z240 workstation. Unfortunately, due to CPU limitations (maxing out the CPU) on the server used above, I could not fully max out or benchmark the IOPS on an individual SSD.

Additional Notes on the IO-PEX40152

Some additional notes I have on the IO-PEX40152:

The card works perfectly with VMware ESXi PCI passthrough when passing it through to a virtualized VM.

According to the specifications, the card supports data transfer up to 1GB/sec; however, I achieved over 3GB/sec using the Sabrent Rocket 4 NVMe SSDs.

While the specifications and features state support for NVMe specs 1.0 and 1.1, I don’t see why it wouldn’t support newer specifications, as it’s simply a PCIe switch with NVMe slots.

Conclusion

This is a fantastic card that you can use reliably in any system with a free x16 slot. Because it has a built-in PCIe switch and doesn’t require PCIe bifurcation, you can use it confidently knowing it will work.

I’m looking forward to buying a couple more of these for some special applications and projects I have lined up, stay tuned for those!

May 25, 2020
 
vSphere Logo Image

When troubleshooting connectivity issues with your vMotion network (or vMotion VLAN), you may notice that you’re unable to ping using the ping or vmkping command on your ESXi and VMware hosts.

This occurs when you’re using the vMotion TCP/IP stack on the vmkernel (vmk) adapters configured for vMotion.

This also applies if you’re using long distance vMotion (LDVM).

Why

The vMotion TCP/IP stack requires special syntax for ping and ICMP tests on the vmk adapters.

A screenshot of vmk adapters, one of which is using the vMotion TCP/IP Stack
VMK using vMotion TCP/IP Stack

Above is an example where a vmk adapter (vmk3) is configured to use the vMotion TCP/IP stack.

How

To “ping” and test your vMotion network that uses the vMotion TCP/IP stack, you’ll need to use the special command below:

esxcli network diag ping -I vmk1 --netstack=vmotion -H ip.add.re.ss

In the command above, change “vmk1” to the vmkernel adapter you want to send the pings from. Additionally, change “ip.add.re.ss” to the IP address of the host you want to ping.

Using this method, you can fully verify network connectivity between the vMotion vmks using the vMotion stack.
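Before testing, it’s worth confirming which netstack your vmkernel adapter is actually bound to. A quick sketch using standard esxcli commands (vmk1 and the placeholder address are examples, as above):

```shell
# List the TCP/IP netstacks defined on the host.
esxcli network ip netstack list

# Show vmk adapters and the netstack each one is bound to.
esxcli network ip interface list

# Then ping across the vMotion stack (substitute your vmk and target IP).
esxcli network diag ping -I vmk1 --netstack=vmotion -H ip.add.re.ss
```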

Additional information and examples can be found at https://kb.vmware.com/s/article/59590.

May 22, 2020
 
A Picture of the 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Today we’re going to be talking about Sabrent’s newest line of NVMe storage products, particularly the 2TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD Solid State Drive or the Sabrent Rocket 4 2TB NVMe stick as I like to call it.

Last week I purchased a quantity of 4 of these for a total of 8TB of NVMe storage to use on an IOCrest IO-PEX40152 Quad NVMe PCIe Card. For the purpose of this review, we’re benchmarking one inside of an HP Z240 Workstation.

While these are targeted for users with a PCIe 4.0 interface, I’ll be using these on PCIe 3 as it’s backwards compatible. I purchased the PCIe 4 option to make sure the investment was future-proofed.

Keep reading for a bunch of pictures, specs, speed tests, benchmarks, information, and more!

A picture of 4 unopened boxes of Sabrent Rocket 4 2TB NVMe sticks
4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Let’s get started with the review!

How and Why I purchased these

I’ve been working on a few special top-secret projects for the blog and YouTube channel, and needed some cost-effective yet high performing NVMe storage.

I needed at least 8TB of NVMe flash and I’m sure as all of you are aware, NVMe isn’t cheap.

After around a month of research I finally decided to pull the trigger and purchase a quantity of 4 x Sabrent Rocket 4 NVMe 2TB SSD. For future projects I’ll be using these in an IOCREST IO-PEX40152 NVME PCIe card.

These NVMe SSDs are targeted for consumers (normal users, gamers, power users, and IT professionals) and are a great fit! Just remember these do not have PLP (power loss protection), which is a feature that isn’t normally found in consumer SSDs.

Specifications

See below for the specifications and features included with the Sabrent Rocket 4 2TB NVMe SSD.

Hardware Specs:

  • Toshiba BiCS4 96L TLC NAND Flash Memory
  • Phison PS5016-E16 PCIe 4.0 x4 NVMe 1.3 SSD Controller
  • Kioxia 3D TLC NAND
  • M.2 2280 Form Factor
  • PCIe 4.0 Speeds
    • Read Speed of 5000MB/sec
    • Write Speed of 4400MB/sec
  • PCIe 3.0 Speeds
    • Read Speed of 3400MB/sec
    • Write Speed of 2750MB/sec
  • 750,000 IOPS on 2TB Model
  • Endurance: 3,600TBW for 2TB, 1,800TBW for 1TB, 850TBW for 500GB
  • Available in 500GB, 1TB, 2TB
  • Made in Taiwan
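To put those endurance ratings in perspective, here’s a quick calculation (my own math, not Sabrent’s) of how many full drive writes per day the 2TB model could sustain over a 5-year registered warranty:

```shell
# Endurance math for the 2TB model using the rated 3,600 TBW.
tbw=3600                              # rated terabytes written
capacity_tb=2
days=$((5 * 365))                     # 5-year (registered) warranty period
full_writes=$((tbw / capacity_tb))    # 1800 complete drive fills
dwpd=$(awk -v w="$full_writes" -v d="$days" 'BEGIN { printf "%.2f", w / d }')
echo "Full drive writes: $full_writes, DWPD over 5 years: $dwpd"
```

Nearly a full drive write per day for five years is far beyond a typical desktop or homelab workload.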

Features:

  • NVMe M.2 2280 Interface for PCIe 4.0 (NVMe 1.3 Compliant)
  • APST, ASPM, L1.2 Power Management Support
  • Includes SMART and TRIM Support
  • ONFi 2.3, ONFi 3.0, ONFi 3.2 and ONFi 4.0 interface
  • Includes Advanced Wear Leveling, Bad Block Management, Error Correction Code, and Over-Provision
  • User Upgradeable Firmware
  • Software Tool to change Block Size

Where and how to buy

One of the perks of owning an IT company is that you can typically purchase product for internal use at cost or at a discount; unfortunately, this was not the case here.

I was unable to find the Sabrent products through any of the standard distribution channels and had to purchase through Amazon. This is unfortunate because I wouldn’t mind being able to sell these units to customers.

Amazon Purchase Links (2TB Model)

The PART#s are as follows for the different sizes:

Product                                            Size     PART# (No Heatsink)      PART# (Heatsink)
2TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD     2TB      SB-ROCKET-NVMe4-2TB      SB-ROCKET-NVMe4-HTSK-2TB
1TB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD     1TB      SB-ROCKET-NVMe4-1TB      SB-ROCKET-NVMe4-HTSK-1TB
500GB Rocket NVMe PCIe 4.0 M.2 2280 Internal SSD   500GB    SB-ROCKET-NVMe4-500      SB-ROCKET-NVMe4-HTSK-500

Sabrent Rocket 4 Part Number Lookup Table

Cost

At the time of writing, the 2TB model on Amazon Canada would set you back $699.99 CAD for a single unit; however, a sale brought it down to $529.99 CAD per unit.

On Amazon USA, the 2TB model was $399.98 USD.

A total quantity of 4 set me back around $2,119.96 CAD on sale, versus $2,799.96 CAD at regular price.
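Using the sale pricing above, the cost per gigabyte works out as follows (a simple illustrative calculation):

```shell
# Price-per-GB at the quoted sale price (CAD).
total_cad=2119.96
total_gb=8000   # 4 x 2TB
awk -v t="$total_cad" -v g="$total_gb" \
    'BEGIN { printf "Roughly $%.2f CAD per GB\n", t / g }'
```

At roughly a quarter per gigabyte, it’s easy to see why this pricing stood out against other high-performance NVMe SSDs.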

If you’re familiar with NVMe pricing, you’ll notice that this pricing is extremely attractive when comparing to other high performance NVMe SSDs.

Unboxing

I have to say I was very impressed with the packaging! Small, sleek, and impressive!

A picture of Sabrent Rocket 4 2TB NVMe sticks metal case packaging
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive Metal Case Packaging

Initially I was surprised how small the boxes were as they fit in the palm of your hand, but then you realize how small the NVMe sticks are, so it makes sense.

Opening the box you are presented with a beautiful metal case containing the instructions, information on the product warranty, and more.

Picture of a Sabrent Rocket 4 case opened
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive in case

And here is the NVMe stick removed from its case.

Picture of a Sabrent Rocket 4 case opened and the NVME SSD removed
2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive removed from case

While some of the packaging may seem unnecessary, after further thought I realized it’s great to have: you can re-use it to keep NVMe drives safe when putting them into storage.

And here’s a beautiful shot of 8TB of NVMe storage.

A picture of 8TB of total storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive
8TB Total NVMe storage across 4 x 2TB Rocket Nvme PCIe 4.0 M.2 2280 Internal SSD Solid State Drive

Now let’s move on to usage!

Installation, Setup, and Configuration

Setting one of these up in my HP Workstation was super easy. You simply populate the NVMe M.2 slot, install the screw, and boot up the system.

Picture of a Sabrent Rocket PCIe4 NVMe 2TB SSD Installed in computer
Sabrent Rocket 4 NVMe 2TB SSD in HP Z240 SFF Workstation

Upon booting, the Sabrent SSD was available inside of the Device Manager. I read on their website that they had some utilities so I wanted to see what configuration options I had access to before moving on to speed test benchmarks.

All Sabrent Rocket utilities can be downloaded from their website at https://www.sabrent.com/downloads/.

Sabrent Sector Size Converter

The Sabrent Sector Size Converter utility allows you to configure the sector size of your Sabrent Rocket SSD. Out of the box, I noticed mine was configured with a 512e sector format, which I promptly changed to 4K.

Screenshot using the Sabrent Sector Size Converter to change SSD from 512e to 4K Sector Size
Sabrent Sector Size Converter v1.0

The change was easy; it required a restart and I was good to go! You’ll notice the utility has a drop-down to select which drive you want to modify, which is great if you have more than one SSD in your system.

I did notice one issue: when you have multiple of these (in my case 4) in one system, the sector size utility had trouble changing one of them from 512e to 4K. It would appear to be successful, but stay at 512e on reboot. Ultimately, I removed all of the NVMe sticks except the problematic one, ran the utility again, and the issue was resolved.
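As an aside, the same 512e-to-4K change can be made outside of Sabrent’s Windows utility. Here’s a hedged sketch using nvme-cli on Linux (an assumption on my part; the post used Sabrent’s tool, and the --lbaf index for 4K varies by drive, so check the listing first):

```shell
# Show the LBA formats the drive supports and which one is in use.
nvme id-ns /dev/nvme0n1 --human-readable | grep "LBA Format"

# DESTRUCTIVE: reformats the namespace to the selected LBA format.
# Verify the correct --lbaf index against the listing above first.
# nvme format /dev/nvme0n1 --lbaf=1
```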

Sabrent Rocket Control Panel

Another useful utility that was available for download is the Sabrent Rocket Control Panel.

Screenshot of the Sabrent Rocket Control Panel
Sabrent Rocket Control Panel

The Sabrent Rocket Control Panel provides the following information:

  • Drive Size, Sector Size, Partition Count
  • Serial Number and Drive identifier
  • Feature status (TRIM Support, SMART, Product Name)
  • Drive Temperature
  • Drive Health (Lifespan)

You can also use this app to view S.M.A.R.T. information, flash updated Sabrent firmware, and more!

Now that we have this all configured, let’s move on to testing this SSD out!

Speed Tests and Benchmarks

The system we used to benchmark the Sabrent Rocket 4 2TB NVMe SSD is an HP Z240 SFF (Small Form Factor) workstation.

The specs of the Z240 Workstation:

  • Intel Xeon E3-1240 v5 @ 3.5Ghz
  • 16GB of RAM
  • Samsung EVO 500GB as OS Drive
  • Sabrent Rocket 4 NVMe 2TB SSD as Test Drive

I ran a few tests using both CrystalDiskMark and ATTO Disk Benchmark, and the NVMe SSD performed flawlessly at extreme speeds!

CrystalDiskMark Results

Loading up and benching with CrystalDiskMark, we see the following results:

Screenshot of speedtest and benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark Results

As you can see, the Sabrent Rocket 4 2TB NVMe tested at a read speed of 3175.63MB/sec and write speed of 3019.17MB/sec.

Screenshot of IOPS benchmark of Sabrent Rocket PCIe 4 2TB SSD
Sabrent Rocket PCIe 4 2TB CrystalDiskMark IOPS Results

Using the Peak Performance profile, we see some amazing IO: 613,171.14 IOPS read and 521,861.33 IOPS write with RND4K.

While we’re only testing on a PCIe 3.0 system, these numbers are still amazing and in line with what’s advertised.

ATTO Disk Benchmark Results

Switching over to ATTO Disk Benchmark, we test both speed and IOPS.

First, the speed benchmarks with I/O sizes from 4K to 12MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 4K to 12MB

After taking a short cooldown break (we don’t have a heatsink installed), we tested 12MB to 64MB.

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark 12MB to 64MB

And now we move on to analyze the IO/s.

First from 4K to 12MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 4K to 12MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 4K to 12MB

And then after a short break, 12MB to 64MB:

Screenshot of ATTO Benchmark of Sabrent Rocket PCIe 4 2TB testing IOPS 12MB to 64MB
Sabrent Rocket PCIe 4 2TB ATTO Benchmark IOPS 12MB to 64MB

Those numbers are insane!

Additional Notes

A new Sabrent Rocket 4 SSD comes with a 1-year standard warranty; however, if you register your product within 90 days of purchase, you can extend it to an awesome 5-year warranty.

To register your product, visit https://www.sabrent.com/product-registration/

The process is easy if you have one device; however, it’s very repetitive and takes time if you have multiple, as the steps must be repeated for each device. Sabrent, if you’re listening, a batch registration tool would be nice! 🙂

Remember that after registering your product, you should record your “Registration Unique ID” for future reference and use.

Conclusion

All-in-all I’d definitely recommend the Sabrent Rocket 4 NVMe SSD! It provides extreme performance, is extremely cost-effective, and I wouldn’t see any reason not to buy them.

Just remember that these SSDs (like all consumer SSDs) do not provide power loss protection, meaning you should not use these in enterprise environments (or in a NAS or SAN).

I’m really looking forward to using these in my upcoming blog and YouTube projects.

May 17, 2020
 
Microsoft Windows Server Logo Image

Today we take it back to basics with a guide on how to create an Active Directory Domain on Windows Server 2019. These instructions are also valid for previous versions of Microsoft Windows Server.

This video will demonstrate and explain the process of installing, configuring, and deploying a Windows Server 2019 instance as a Domain Controller, DNS Server, and DHCP Server and then setting up a standard user.

Check it out and feel free to leave a comment! Scroll down below for more information and details on the guide.

Windows Server 2019: How to Create an Active Directory Domain

Who’s this guide for

No matter if you’re an IT professional who’s just getting started or if you’re a small business owner (on a budget) setting up your first network, this guide is for you!

What’s included in the video

In this guide I will walk you through the following:

  • Installing Windows Server 2019
  • Documenting a new Server installation
  • Configuring Network Settings
  • Installation and configuration of Microsoft Active Directory
  • Promote a server as a new domain controller
  • Installation and configuration of DNS Role
  • Installation and configuration of DHCP Role
  • Setup and configuration of a new user account

What’s required

To get started you’ll need:

How to create an Active Directory Domain (The Video)

Hardware/Software used in this demonstration

  • VMware vSphere
  • HPE DL360p Gen8 Server
  • Microsoft Windows Server 2019
  • pfSense Firewall

Other blog posts referenced in the video

The following blog posts are mentioned in the video: