HPE MSA 2040 Benchmark – Read, Write, and IOPS

I’ve had the HPE MSA 2040 set up, configured, and running for about a week now. Thankfully, this weekend I had some time to run some benchmarks. Let’s take a look at the HPE MSA 2040 benchmarks for read, write, and IOPS.

First some info on the setup:

-2 X HPE ProLiant DL360p Gen8 Servers (2 X 10-core processors each, 128GB RAM each)

-HPE MSA 2040 Dual Controller – Configured for iSCSI

-HPE MSA 2040 is equipped with 24 X 900GB SAS Dual Port Enterprise Drives

-Each host is directly attached via 2 X 10Gb DAC cables (each server has 1 DAC cable going to controller A and 1 DAC cable going to controller B)

-2 vDisks are configured, each owned by a separate controller

-Disks 1-12 configured as RAID 5 owned by Controller A (512K Chunk Size Set)

-Disks 13-24 configured as RAID 5 owned by Controller B (512K Chunk Size Set)

-While round robin is configured, only one optimized path exists (only one path is being used) for each host to the datastore I tested

-Utilized “VMware I/O Analyzer” (https://labs.vmware.com/flings/io-analyzer), which uses IOMeter for testing

-Running 2 “VMware I/O Analyzer” VMs as worker processes. Both workers are testing at the same time, against the same datastore. (A quick arithmetic sketch of this layout follows below.)
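
Before getting to the numbers, here's a minimal sketch (my addition, not from the original post) that works through the arithmetic implied by the layout above: the usable capacity of each 12-disk RAID 5 vdisk, and the raw line rate of the single active 10Gb path each host is using. The drive size and link speed come from the list; everything else is standard RAID and unit math.

```python
# Rough arithmetic for the layout above: two 12-disk RAID 5 vdisks of 900GB
# drives, and one active 10Gb iSCSI path per host (round robin is configured,
# but only one optimized path is in use per host).

DRIVE_GB = 900          # per-drive capacity from the parts list above
DISKS_PER_VDISK = 12    # disks 1-12 on controller A, disks 13-24 on controller B
LINK_GBPS = 10          # one active DAC path per host

# RAID 5 keeps (n - 1) disks' worth of data capacity; one disk's worth goes to parity.
usable_per_vdisk_gb = (DISKS_PER_VDISK - 1) * DRIVE_GB

# 10 Gb/s divided by 8 bits per byte gives the raw line rate in MB/s
# (before iSCSI/TCP overhead, so real-world throughput will be lower).
line_rate_mb_s = LINK_GBPS * 1000 / 8

print(f"Usable capacity per RAID 5 vdisk: ~{usable_per_vdisk_gb} GB")
print(f"Raw line rate of one 10Gb path:   ~{line_rate_mb_s:.0f} MB/s")
```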

Sequential Read Speed:

Max Read: 1480.28 MB/sec

Sequential Write Speed:

Max Write: 1313.38 MB/sec

See below for the IOPS testing (using the Max IOPS workloads):

Please note: the Max IOPS and Max Write IOPS workloads were used. These workloads have no randomness, so I’m assuming the cache module answered all of the I/O requests, though I could be wrong. Tests were run for 120 seconds. In effect, this is more a test of what the controller itself can handle over a single 10Gb link from controller to host.
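
To put those IOPS figures in context, here's a quick sketch (my addition). The 512-byte transfer size is an assumption based on the "0.5k" workload names a commenter reports further down; it isn't stated in the post itself.

```python
# Convert an IOPS result into bandwidth to show why the Max IOPS workloads
# stress the controller's request handling rather than the 10Gb link.
# ASSUMPTION: the Max IOPS / Max Write IOPS workloads use 512-byte transfers
# (suggested by the "0.5k_100%Read" workload names in the comments below).

BLOCK_BYTES = 512

def iops_to_mb_per_sec(iops: float, block_bytes: int = BLOCK_BYTES) -> float:
    """Bandwidth implied by an IOPS figure at a given transfer size."""
    return iops * block_bytes / 1_000_000

read_iops = 70679.91
write_iops = 29452.35

print(f"Read:  {iops_to_mb_per_sec(read_iops):.1f} MB/s")   # ~36 MB/s
print(f"Write: {iops_to_mb_per_sec(write_iops):.1f} MB/s")  # ~15 MB/s
# Both are tiny compared to a 10Gb link (~1250 MB/s raw), so the limiting
# factor here is how many requests the controller can service, not bandwidth.
```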

IOPS Read Testing:

Max Read IOPS: 70679.91 IOPS

IOPS Write Testing:

Max Write IOPS: 29452.35 IOPS

PLEASE NOTE:

-These benchmarks were done by 2 separate worker processes (1 running on each ESXi host) accessing the same datastore.

-I was running a VMware vDP replication in the background (my bad, I know…).

-Sum is the combined throughput of both hosts; Average is the per-host throughput. (A quick conversion sketch follows below.)
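
As a quick aside (my addition, not part of the original results), here's how the combined figures above break down per host and convert to Gb/s, which is what the conclusion below compares against the 10Gb interfaces:

```python
# Sum is the combined throughput of both hosts; dividing by the number of
# workers gives the per-host figure, which can then be compared against the
# ~1250 MB/s raw line rate of a single 10Gb path.

WORKERS = 2  # one I/O Analyzer worker VM per ESXi host

def per_host(sum_mb_s: float, workers: int = WORKERS) -> tuple[float, float]:
    """Return (per-host MB/s, per-host Gb/s) from a combined throughput figure."""
    mb_s = sum_mb_s / workers
    return mb_s, mb_s * 8 / 1000

for label, combined in [("Max Read", 1480.28), ("Max Write", 1313.38)]:
    mb_s, gb_s = per_host(combined)
    print(f"{label}: {mb_s:.0f} MB/s per host (~{gb_s:.1f} Gb/s on a 10Gb path)")
```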

Conclusion:

Holy crap this is fast! I’m betting the speed limit I’m hitting is the 10Gb interface. I need to get some more paths set up to the SAN!

Cheers

Stephen Wagner

Stephen Wagner is President of Digitally Accurate Inc., an IT Consulting, IT Services, and IT Solutions company. He is also a VMware vExpert, NVIDIA NGCA Advisor, and HPE Influencer, and specializes in a number of technologies including virtualization and VDI.

Comments

  • Hey Stephen, fun to read. I'm waiting for my MSA 2040 with 24x 900GB 10k SAS drives to arrive. I chose the 12Gb SAS controllers; I can't wait to run some tests on that one. I will let you know :)

    Greetings, Lars

  • Hi Stephen and Lars,

    Just did the same benchmarks!
    MSA 2040 12Gb SAS - 24x 600GB 10k

    Datastore 1:
    RAID5 - 7 spindles 600GB 10k
    Combined speeds:
    Read 100% - 1733
    Write 100% - 1856
    Max IOPS - 39633
    Max WR IOPS - 33517

    Datastore 2:
    RAID 10 - 16 spindles 600GB 10k
    Combined speeds:
    Read 100% - 5228
    Write 100% - 2824
    Max IOPS - 92399
    Max WR IOPS - 33338

    cheers, Mark

  • Glad to hear Mark!

    I'm still absolutely loving the HP MSA 2040. Great investment on my side of things!

    Cheers,
    Stephen

  • Hi Stephen,

    I'm curious, what cable are you using to direct attach the SAN? I am using 2 X X242 DAC cables (J9285B). I have set the controllers to iSCSI.
    I have an MSA 2040 (C8R15A) and am using an NC550SFP adapter to connect to the two controllers, i.e. a dual-port card connected to the first port on each controller. My controllers complain that I am using an unsupported SFP (Error 464).

    Any more info you can provide about your hardware config would be appreciated; I'm not getting any definitive response from HP.

    Thanks.

    J

  • Hi Julian,

    I'm using 4 x 487655-B21 cables. I notice that both of our cables are listed in the HP QuickSpecs document for the MSA 2040.

    I know that it's supposed to be a requirement to purchase a 4-pack of SFP+ transceivers. In my case, I chose the 4 X 1Gb RJ-45 modules (even though I'm not using them, nor have anything plugged into them). Did you also do this?

    Also, do you have the firmware on your unit fully up to date?

    Just for the record, I'm using 2 X 700760-B21 (10Gb NICs) for both of my servers.

    I'm wondering if it's the number of cables you're using. Have you thought about using 4 cables? Maybe it's a weird requirement to have a minimum of 4. For the sake of troubleshooting, can you plug both cables into controller A to see if it changes the behavior?

    Please refer to my other blog post at: http://www.stephenwagner.com/?p=791 for more information on my setup.

    Stephen

  • Stephen,

    I didn't purchase a 4-pack, but I didn't buy a bundle either as none were in stock. My unit was put together bit by bit, so maybe the 4-transceiver minimum was not mandatory.
    I only have the first port in each unit filled with the DAC cables. HP asked me to do the same thing you are suggesting, and I will do so tomorrow. I have two more cables and may try to plug one in to see if it makes a difference.
    Will let you know.

    J

  • I'm not sure whether the act of plugging in another DAC cable into controller 1 port 2 eliminated the warning message or rebooting both controllers did it, but the warning is gone.

    J

  • Hi Julian,

    Glad to hear you resolved the issue. Let us know if you end up finding out if it was the second cable or not. It's good to know that the 4-pack of transceivers isn't needed (as I didn't test this).

    Is there any chance that there was a pending firmware update on the unit?

    Stephen

  • Hi Stephen,

    Thanks for the results; I'm looking for a comparison with the test VSAN setup I'm working through at the moment.

    Did you use the standard disk sizes that come with VMware I/O Analyzer? By default it writes to a 100MB VMDK; when I increase this to 100GB my IOPS figures drop like a lead balloon.

    Thanks in advance, I'll post my stats here.

  • Hi,
    Just done the same test on my MSA 2040 SAS (with the new GL200 firmware & performance tiering option).
    1 virtual pool per controller.
    Pool A:
    RAID 1 - 2x SSD 400GB
    RAID 5 - 10x 600GB 10k
    Pool B:
    RAID 1 - 2x SSD 400GB
    RAID 5 - 9x 600GB 10k

    2 separate workers, one on each ESXi host, each one accessing one of the 2 datastores.
    ESXi-to-MSA connection is SAS 6Gb (but limited to 4GB/s due to the PCIe 2.0 server ports).
    ==>

    Max IOPS (Read IOPS per workload):
    0.5k_100%Read_0%Random 73929.6
    0.5k_100%Read_0%Random 70168.72
    SUM 144098.32

    Max Write IOPS (Write IOPS per workload):
    0.5k_0%Read_0%Random 29555.92
    0.5k_0%Read_0%Random 29199.32
    SUM 58755.24

    Max read speed (Read MBPS per workload):
    512k_100%Read_0%Random 1300.56
    512k_100%Read_0%Random 1731.45
    SUM 3032.01

    Max write speed (Write MBPS per workload):
    512k_0%Read_0%Random 2135.21
    512k_0%Read_0%Random 1983.35
    SUM 4118.56

    I ran the test with 100MB IOMeter disks, then 16GB IOMeter disks; same results.
    All I/Os were served by the SSDs.
    I'm pretty sure the SSDs are able to provide more IOPS.
    With SSDs, the MSA 2040 bottleneck is clearly the controller CPUs (a single-core Intel Gladden Celeron 725C), which were at 98-99% during the tests.
    That's an amazing price/performance ratio.
    I'm disappointed by the stats part: we only get 15-minute samples, and no recording of controller CPU usage.

  • Hi Stephen,

    I just checked that document and there was no page 36... Also, I searched for jumbo frames, and that document mentioned an MTU of 9000 being the maximum for jumbo frames.

    Did you provide the right document?

    I'll check it out, but in my initial configuration of the unit, the documents I referenced mentioned MTUs of 9000, which the unit accepted.

    Stephen

  • Yeah, I just clicked on the link and it took me to the right doco; the second-last page says:

    Jumbo frames
    A normal Ethernet frame can contain 1500 bytes whereas a jumbo frame can contain a maximum of 9000 bytes for larger data transfers. The MSA reserves some of this frame size; the current maximum frame size is 1400 for a normal frame and 8900 for a jumbo frame. This frame maximum can change without notification. If you are using jumbo frames, make sure to enable jumbo frames on all network components in the data path.

  • Hi Stephen,

    Yes, that is correct. When setting the jumbo frame size on devices, the value the user configures is usually the maximum frame size (which includes overhead).

    The max setting on the MSA is 9000 (even though only 8900 bytes may be data; the rest is overhead). All connected devices are set to a max of 9000. A quick sketch of this arithmetic follows below.
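
To make the frame-size arithmetic above concrete, here's a minimal sketch (my addition, not from the MSA documentation) based only on the figures quoted in the comment above: the MSA appears to reserve roughly 100 bytes of the configured frame size, leaving 1400 or 8900 bytes for data.

```python
# Per the quoted MSA documentation: a normal frame can contain 1500 bytes and
# a jumbo frame 9000 bytes, but the MSA reserves part of that, leaving 1400
# and 8900 bytes respectively for data. That implies ~100 bytes reserved.

MSA_RESERVED_BYTES = 9000 - 8900  # consistent with 1500 - 1400 for normal frames

def msa_usable_payload(configured_frame_size: int) -> int:
    """Approximate data bytes per frame after the MSA's reservation."""
    return configured_frame_size - MSA_RESERVED_BYTES

print(msa_usable_payload(1500))  # 1400 (normal frame)
print(msa_usable_payload(9000))  # 8900 (jumbo frame)
```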
