I’ve had the HPE MSA 2040 set up, configured, and running for about a week now. Thankfully, this weekend I had some time to run some benchmarks. Let’s take a look at the HPE MSA 2040 benchmarks for read, write, and IOPS.
First some info on the setup:
-2 X HPE ProLiant DL360p Gen8 servers (2 X 10-core processors each, 128GB RAM each)
-HPE MSA 2040 Dual Controller – Configured for iSCSI
-HPE MSA 2040 is equipped with 24 X 900GB SAS Dual Port Enterprise Drives
-Each host is directly attached via 2 X 10Gb DAC cables (each server has 1 DAC cable going to controller A, and 1 DAC cable going to controller B)
-2 vDisks are configured, each owned by a separate controller
-Disks 1-12 configured as RAID 5 owned by Controller A (512K Chunk Size Set)
-Disks 13-24 configured as RAID 5 owned by Controller B (512K Chunk Size Set)
-While round robin is configured, only one optimized path exists per host to the datastore I tested (so only one path was actually in use)
-Utilized “VMware I/O Analyzer” (https://labs.vmware.com/flings/io-analyzer), which uses IOMeter for testing
-Running 2 “VMware I/O Analyzer” VMs as worker processes; both workers test the same datastore at the same time
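For context on usable capacity, here's a quick sketch of the RAID 5 math for this layout. This assumes one drive's worth of parity per 12-disk group and no hot spares, which may not match the exact vDisk configuration:

```python
# Rough usable-capacity math for the two 12-disk RAID 5 vDisks described above.
# Assumes one drive of parity per group and no hot spares (an assumption).

drive_gb = 900
disks_per_vdisk = 12
vdisks = 2

usable_per_vdisk_gb = (disks_per_vdisk - 1) * drive_gb  # RAID 5 loses one drive to parity
total_usable_gb = usable_per_vdisk_gb * vdisks

print(usable_per_vdisk_gb)  # 9900
print(total_usable_gb)      # 19800
```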
Sequential Read Speed:
Sequential Write Speed:
See below for IOPS (Max Throughput) testing:
Please note: the MaxIOPS and MaxWriteIOPS workloads were used. These workloads have no randomness, so I’m assuming the cache module answered all the I/O requests, though I could be wrong. Tests were run for 120 seconds. In effect, this is more a test of what the controller itself is capable of handling over a single 10Gb link from the controller to the host.
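As a sanity check on the single-10Gb-link theory, here's some back-of-the-envelope arithmetic (my own rough numbers, not from the test) relating link bandwidth and I/O size to the ceiling on IOPS:

```python
# Back-of-the-envelope IOPS ceiling over one 10Gb iSCSI link.
# Ignores protocol overhead, so real-world numbers will be lower.

link_gbps = 10
link_bytes_per_sec = link_gbps * 1e9 / 8  # 1.25 GB/s raw

for io_size_bytes in (512, 4096, 65536):
    max_iops = link_bytes_per_sec / io_size_bytes
    print(f"{io_size_bytes}B I/Os: ~{max_iops:,.0f} IOPS max")
```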
IOPS Read Testing:
IOPS Write Testing:
-These benchmarks were done by 2 separate worker processes (1 running on each ESXi host) accessing the same datastore.
-I was running a VMware vDP replication in the background (my bad, I know…).
-Sum is the combined throughput of both hosts; Average is the per-host throughput.
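To make the Sum and Average columns concrete, this is how I read them (the throughput figures below are illustrative placeholders, not the actual results):

```python
# Aggregating two I/O Analyzer workers, one per ESXi host.
# The throughput values are made-up placeholders, not measured results.

host_mbps = [800.0, 750.0]  # per-host throughput reported by each worker

combined = sum(host_mbps)             # "Sum": total across both hosts
per_host = combined / len(host_mbps)  # "Average": per-host throughput

print(combined, per_host)  # 1550.0 775.0
```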
Holy crap this is fast! I’m betting the speed limit I’m hitting is the 10Gb interface. I need to get some more paths set up to the SAN!
Hey Stephen, fun to read. I'm waiting for my MSA 2040 with 24 x 900GB 10k SAS to arrive. I chose the 12Gb SAS controllers; I can't wait to run some tests on that one. I will let you know :)
Hi Stephen and Lars,
Just did the same benchmarks!
MSA 2040 12Gb SAS - 24 x 600GB 10k
RAID5 - 7 spindles 600GB 10k
Read 100% - 1733
Write 100% - 1856
Max IOPS - 39633
Max WR IOPS - 33517
RAID 10 - 16 spindles 600GB 10k
Read 100% - 5228
Write 100% - 2824
Max IOPS - 92399
Max WR IOPS - 33338
Glad to hear Mark!
I'm still absolutely loving the HP MSA 2040. Great investment on my side of things!
I'm curious, what cable are you using to direct attach the SAN? I am using 2 X X242 DAC cables (J9285B). I have set the controllers to iSCSI.
I have an MSA 2040 (C8R15A) and am using an NC550SFP adapter to connect to the two controllers, i.e. a dual-port card connected to the first port on each controller. My controllers complain that I am using an unsupported SFP (Error 464).
Any more info you can provide about your hardware config would be appreciated, I'm not getting any definitive response from HP.
I'm using 4 x 487655-B21 cables. I'm noticing that both of our cables are listed in the HP QuickSpecs document for the MSA 2040.
I know that it's supposed to be a requirement to purchase a 4-pack of SFP+ transceivers. In my case, I chose the 4 X 1Gb RJ-45 modules (even though I'm not using them, nor have anything plugged into them). Did you also do this?
Also, do you have the firmware on your unit fully up to date?
Just for the record, I'm using 2 X 700760-B21 (10Gb NICs) for both of my servers.
I'm wondering if it's the number of cables you're using. Have you thought about using 4 cables? Maybe there's a weird requirement for a minimum of 4. For the sake of troubleshooting, can you plug both cables into controller A to see if it changes the behavior?
Please refer to my other blog post at: http://www.stephenwagner.com/?p=791 for more information on my setup.
I didn't purchase a 4-pack, but I didn't buy a bundle either as none were in stock. My unit was put together, bit by bit, so maybe the 4 transceiver minimum was not mandatory.
I only have the first port on each controller filled with the DAC cables. HP asked me to do the same thing you're suggesting, and I will do so tomorrow. I have two more cables and may try to plug one in to see if it makes a difference.
Will let you know.
I'm not sure whether the act of plugging in another DAC cable into controller 1 port 2 eliminated the warning message or rebooting both controllers did it, but the warning is gone.
Glad to hear you resolved the issue. Let us know if you end up finding out if it was the second cable or not. It's good to know that the 4-pack of transceivers isn't needed (as I didn't test this).
Is there any chance that there was a pending firmware update on the unit?
Curious… can anyone post numbers for more randomized workloads?
Thanks for the results, I'm looking for some comparison with my test VSAN setup I'm going through at the moment.
Did you use the standard disk sizes that come with VMware I/O Analyzer? By default it writes to a 100MB VMDK; when I increase this to 100GB, my IOPS figures drop like a lead balloon.
Thanks in advance, I'll post my stats here.
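That drop is consistent with the working set outgrowing the controller cache. A rough sketch of the reasoning (the 4GB-per-controller cache figure is from the MSA 2040 specs as I recall; treat it as an assumption):

```python
# Why a 100MB test VMDK can end up benchmarking the cache while a 100GB
# one benchmarks the disks. Cache size assumed to be 4GB per controller
# (per MSA 2040 specs, as I recall).

cache_gb = 4

def fits_in_cache(working_set_gb, cache_gb=cache_gb):
    """True if the whole working set can be held in controller cache."""
    return working_set_gb <= cache_gb

print(fits_in_cache(0.1))  # True  -> mostly cache hits, inflated IOPS
print(fits_in_cache(100))  # False -> mostly disk-bound, realistic IOPS
```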
Just did the same test on my MSA 2040 SAS (with the new GL200 firmware and the Performance Tiering option).
1 virtual pool per controller.
Pool A:
RAID 1: 2 x 400GB SSD
RAID 5: 10 x 600GB 10k
Pool B:
RAID 1: 2 x 400GB SSD
RAID 5: 9 x 600GB 10k
2 separate workers, one on each ESXi host, each accessing one of the 2 datastores.
ESXi-to-MSA connection is SAS 6Gb (but limited to 4GB/s by the PCIe 2.0 server ports).
Max Write IOPS
Workload Spec Write IOPS
Max read speed :
Workload Spec Read MBPS
Max write speed :
Workload Spec Write MBPS
I ran the test with 100MB IOMeter disks, then with 16GB IOMeter disks - same results.
All I/Os were served by the SSDs.
I'm pretty sure the SSDs are able to provide more IOPS.
With SSDs, the MSA 2040 bottleneck is clearly the controller CPUs (a single-core Intel Gladden Celeron 725C), which were at 98-99% during the tests.
That's an amazing price/performance ratio.
I'm disappointed by the stats part: we only get 15-minute samples, and no recording of controller CPU usage.
I have stumbled across this page a few times while trying to get my cluster performance up and found it a very good read. However, your MTU settings on the MSA are wrong: the MSA only supports packets up to 8900 bytes.
I just checked that document and there was no page 36... Also, I searched jumbo frames and that document mentioned an MTU of 9000 being the maximum for Jumbo Frames.
Did you provide the right document?
I'll check it out, but in my initial configuration of the unit, the documents I referenced mentioned MTUs of 9000, which the unit accepted.
Yeah, I just clicked on the link and it took me to the right doc; the second-to-last page says:
A normal Ethernet frame can contain 1500 bytes whereas a jumbo frame can contain a maximum of 9000 bytes for larger data transfers. The MSA reserves some of this frame size; the current maximum frame size is 1400 for a normal frame and 8900 for a jumbo frame. This frame maximum can change without notification. If you are using jumbo frames, make sure to enable jumbo frames on all network components in the data path.
Yes, that is correct. When setting the jumbo max packet size on devices, the value the user configures is usually the total frame size (which includes overhead).
The max setting on the MSA is 9000 (even though only 8900 bytes may be data; the rest is overhead). All connected devices are set to a max of 9000.
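To tie the numbers together, here's the payload arithmetic for a 9000-byte MTU with the MSA reserving 100 bytes. The IP and ICMP header sizes are standard values, included to show what payload size an end-to-end don't-fragment ping test would use:

```python
# Jumbo frame payload arithmetic for the MSA 2040 MTU discussion above.
# 9000 is the MTU configured on devices; the MSA reserves 100 bytes of it.

mtu = 9000
msa_reserved = 100
msa_max_data = mtu - msa_reserved  # 8900, matching the HP documentation

# For an end-to-end jumbo frame test with a don't-fragment ping
# (e.g. vmkping -d on ESXi), the ICMP payload is the MTU minus the
# IP header (20 bytes) and ICMP header (8 bytes):
ping_payload = mtu - 20 - 8  # 8972

print(msa_max_data, ping_payload)  # 8900 8972
```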