HPE MSA 2040 – Dual Controller SAN

In the last few months, my company (Digitally Accurate Inc.) and our sister company (Wagner Consulting Services) have been working on a number of cool new projects. As a result, we needed to purchase more servers and implement an enterprise-grade SAN. This is how we got started with the HPE MSA 2040 SAN (formerly known as the HP MSA 2040 SAN), specifically a fully loaded HPE MSA 2040 Dual Controller SAN unit.

The Purchase

For the server, we purchased another HPE Proliant DL360p Gen8 (with 2 x 10-core processors and 128GB of RAM, exactly the same as our existing server), however I won’t be getting into that in this blog post.

Now for storage, we decided to pull the trigger and purchase an HPE MSA 2040 Dual Controller SAN. We purchased it as a CTO (Configure to Order), and loaded it up with 4 x 1Gb iSCSI RJ45 SFP+ modules (there’s a minimum order requirement of one 4-pack of SFPs), and 24 x HPE 900GB 2.5-inch 10k RPM SAS Dual Port Enterprise drives. Even though we have the four 1Gb iSCSI modules, we aren’t using them to connect to the SAN. We also placed an order for 4 x 10Gb DAC cables.

To connect the SAN to the servers, we purchased 2 X HPE Dual Port 10Gb Server SFP+ NICs, one for each server. The SAN will connect to each server with 2 X 10Gb DAC cables, one going to Controller A, and one going to Controller B.

HPE MSA 2040 Configuration

I must say that configuration was an absolute breeze. As always, using Intelligent Provisioning on the DL360p, we had ESXi up and running in no time, installed to the on-board 8GB microSD card.

I’m completely new to the MSA 2040 SAN and had never played with or configured one before. After turning it on, I immediately went to HPE’s website and downloaded the latest firmware for both the drives and the controllers themselves. It’s a well-known requirement that to enable iSCSI on the unit, the controllers have to be running the latest firmware version.

Turning on the unit, I noticed the management NIC on the controllers quickly grabbed an IP from my DHCP server. Logging in, I found the web interface extremely easy to use. Right away I went to the firmware upgrade section and uploaded the appropriate firmware file for the 24 x 900GB drives. The firmware took seconds to flash. I then restarted the entire storage unit (a proper shutdown, of course) to make sure the drives came back up on the flashed firmware.

While you can update the controller firmware with the web interface, I chose not to, since HPE provides a Windows executable that connects to the management interface and updates both controllers. Even though I didn’t have the unit configured yet, the process it performs is very interesting: with a Dual Controller MSA 2040, you can do live controller firmware updates (as in no downtime). The firmware update utility first updates Controller A. If you have a multipath (MPIO) configuration where your hosts are configured to use both controllers, all I/O is passed to the other controller while the firmware update takes place. When it is complete, I/O resumes on that controller and the firmware update then takes place on the other controller. This allows you to do online firmware updates with absolutely ZERO downtime. Very neat! PLEASE REMEMBER, this does not apply to drive firmware updates. When you update the hard drive firmware, there can be ZERO I/O occurring. You’d want to make sure all your connected hosts are offline, and that no software connection exists to the SAN.
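The rolling-update sequence described above can be sketched as a toy simulation. This is illustrative Python only, not the HPE utility's actual logic; the controller names and log messages are placeholders:

```python
# Toy walkthrough of a rolling (zero-downtime) dual-controller firmware update:
# for each controller in turn, I/O fails over to its partner, the target is
# flashed, and I/O resumes on it before moving to the next controller.
def rolling_update(controllers):
    log = []
    for target in controllers:
        others = [c for c in controllers if c != target]
        log.append(f"fail I/O over to {' and '.join(others)}")
        log.append(f"flash firmware on {target}")
        log.append(f"resume I/O on {target}")
    return log

for step in rolling_update(["A", "B"]):
    print(step)
```

At every point in the sequence at least one controller is serving I/O, which is why hosts configured for MPIO never see downtime.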

Anyways, the firmware update completed successfully. Now it was time to configure the unit and start playing. I read through a couple of quick documents on where to get started; if I did this right the first time, I wouldn’t have to bother doing it again.

I used the wizards to first configure the actual storage, and then handle provisioning and mapping to the hosts. When deploying a SAN, you should always write down and create a map of your Storage Area Network topology. It helps when it comes time to configure, and really helps reduce mistakes in the configuration. I quickly jotted down the IP configuration for the various ports on each controller, the IPs I was going to assign to the NICs on the servers, and drew out a quick diagram of how things would connect.

Since the MSA 2040 is a Dual Controller SAN, you want to make sure each host can directly access both controllers. Therefore, in my configuration with a 2-port NIC, port 1 on the NIC connects to a port on Controller A of the SAN, while port 2 connects to Controller B. When you do this and configure all the software properly (VMware in my case), you can create a configuration that allows load balancing and fault tolerance. Keep in mind that in the Active/Active design of the MSA 2040, each controller has ownership of its configured vDisks. Most I/O will go only through the controller that owns the vDisk, but in the event that controller goes down, ownership will jump over to the other controller and I/O will proceed uninterrupted until you resolve the fault.
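The ownership and failover behaviour described above can be illustrated with a small sketch. This is toy Python, not vendor code; the controller names and the simple "owner first, partner on failure" policy are simplifying assumptions:

```python
# Toy model of MPIO path selection against a dual-controller, Active/Active SAN.
# The path to the vDisk's owning controller is the optimized (preferred) path;
# the path to the partner controller is only used when the owner is down.

def select_path(vdisk_owner, healthy_controllers):
    """Return the controller to send I/O through for a given vDisk."""
    if vdisk_owner in healthy_controllers:
        return vdisk_owner                          # optimized path via the owner
    partners = sorted(c for c in healthy_controllers if c != vdisk_owner)
    if partners:
        return partners[0]                          # non-optimized failover path
    raise RuntimeError("all paths dead: no healthy controllers")

# Normal operation: a vDisk owned by Controller A uses the path to A.
print(select_path("A", {"A", "B"}))   # -> A
# Controller A fails: I/O continues uninterrupted via Controller B.
print(select_path("A", {"B"}))        # -> B
```

This is why each host needs a physical path to both controllers: without the second cable, the failover branch has nowhere to go.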

For the first part, I ran the configuration wizard and set the various environment settings. This includes time, management port settings, unit names, friendly names, and most importantly, host connection settings. I configured all the host ports for iSCSI and set the applicable IP addresses from the SAN topology document I mentioned above. Although the host ports can sit on the same subnet, it is best practice to use multiple subnets.
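As an example of the multiple-subnet best practice, here is a hypothetical addressing plan. The subnets and addresses below are made up for illustration; they are not the ones used in this setup:

```python
import ipaddress

# Hypothetical plan: two iSCSI subnets, with each controller's host ports
# alternating between them so every controller is reachable on both subnets.
subnets = [ipaddress.ip_network("10.0.10.0/24"),
           ipaddress.ip_network("10.0.20.0/24")]

plan = {}
for controller in ("A", "B"):
    for port in range(1, 5):                    # MSA 2040: 4 host ports per controller
        net = subnets[(port - 1) % 2]           # odd ports -> subnet 1, even -> subnet 2
        host_octet = (100 if controller == "A" else 200) + port
        plan[f"{controller}{port}"] = str(net.network_address + host_octet)

for port, ip in sorted(plan.items()):
    print(port, ip)
```

Writing the plan out like this before touching the wizard is essentially a programmatic version of the topology document mentioned above.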

Jumping into the storage provisioning wizard, I decided to create 2 separate RAID 5 arrays. The first array contains disks 1 to 12 (and while I have controller ownership set to auto, it will be assigned to Controller A), and the second array contains disks 13 to 24 (again, ownership is set to auto, but it will be assigned to Controller B). After this, I assigned the LUN numbers, and then mapped the LUNs to all ports on the MSA 2040, ultimately allowing access to both iSCSI targets (and RAID volumes) from any port.
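For reference, a quick back-of-the-envelope calculation of the usable capacity of this layout (raw RAID 5 arithmetic only; real formatted capacity will be lower):

```python
# Two RAID 5 arrays of twelve 900GB drives each. RAID 5 spends one drive's
# worth of capacity per array on parity, leaving (n - 1) drives usable.
DRIVE_GB = 900
DISKS_PER_ARRAY = 12
ARRAYS = 2

usable_per_array = (DISKS_PER_ARRAY - 1) * DRIVE_GB
total_usable = usable_per_array * ARRAYS
print(f"Per array: {usable_per_array} GB, total: {total_usable} GB")
# Per array: 9900 GB, total: 19800 GB
```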

I’m now sitting here thinking “This was too easy”. And it turns out it was just that easy! The RAID volumes started to initialize.

VMware vSphere Configuration

At this point, I jumped on to my vSphere demo environment and configured the distributed switches for iSCSI. I mapped the various uplinks to the various port groups and confirmed that there was hardware link connectivity. I jumped into the software iSCSI initiator, typed in the discovery IP, and BAM! The iSCSI initiator found all available paths, and both RAID volumes I had configured. I did this for the other host as well, connected to the iSCSI targets, formatted the volumes as VMFS, and I was done!

I’m still shocked that such a high-performance and powerful unit was this easy to configure and get running. I’ve had it running for 24 hours now and have had no problems. This DESTROYS my old storage configuration in performance; thankfully, I can keep my old setup for a vDP (VMware Data Protection) instance.

HPE MSA 2040 Pictures

I’ve attached some pics below. I have to apologize for how rough the images/setup look. Keep in mind this is a test demo environment for showcasing the technologies and their capabilities.

HPE MSA 2040 SAN – Front Image
HP MSA 2040 – Side Image
HPE MSA 2040 SAN with drives – Front Right Image
HP MSA 2040 Rear Power Supply and iSCSI Controllers
HPE MSA 2040 Dual Controller – Rear Image
HP MSA 2040 Dual Controller SAN – Rear Image
HP Proliant DL 360p Gen8
HP MSA 2040 Dual Controller SAN
HPE MSA 2040 – With Power
HP MSA 2040 – Side shot with power on
HP Proliant DL360p Gen8 – UID LED on
HP Proliant DL360p Gen8
HP MSA 2040 Dual Controller SAN
VMWare vSphere

Update: HPE has updated the MSA product line, and the 2040 has now been replaced by the HPE MSA 2050 Dual Controller SAN. There are now also SSD cache models such as the HPE MSA 2052 Dual Controller SAN.

Stephen Wagner

Stephen Wagner is President of Digitally Accurate Inc., an IT Consulting, IT Services and IT Solutions company. He is also a VMware vExpert, NVIDIA NGCA Advisor, and HPE Influencer, and specializes in a number of technologies including virtualization and VDI.

Comments

  • Congrats on the new SAN Stephen. It's pretty nice, but it better be for ~30K. Yikes!

    Do you have any performance numbers?

    Ash

  • Hi Ash,

    No performance numbers yet, but let's just say using the performance monitor on the web interface on the MSA, I've seen it hit 1GB/sec!

    Copying large files (from and to the same vmdk) is hitting over 400MB/sec, which is absolutely insane, especially copying from and to the same location.

    During boot storms, there's absolutely no delays, and the VMs are EXTREMELY responsive even under extremely high load.

    In my configuration I only have 2 links from each host to the MSA, and because of LUN ownership only 1 link is optimized (and used); the other is for failover and redundancy.

    We have some demo applications (including a full demo SAP instance) running on it, and it just purrs.

    Love it!

    I'll try to get something up and posted soon.

    Stephen

  • Hi, very useful post...
    I have decided to build a new VMware environment in my lab with 2x HP DL360p Gen8 (2 CPUs with 8 cores, 48GB RAM, SDHC card for ESXi 5.5, 8x 1Gb NICs) + 1 HP MSA 2040 dual controller with 9x 600GB SAS drives + 8x 1Gb iSCSI SFP transceivers.
    I'm planning to configure the MSA with 3 vdisks: the 1st vdisk as RAID 5 with physical disks 1 to 3, the 2nd vdisk as RAID 5 with disks 4 to 6, the 3rd vdisk as RAID 1 with disks 7 to 8, and disk 9 as a global spare.
    Then for each vdisk I'll create one volume with the entire capacity of the vdisk, and for each volume I'll create one LUN per VM. The 3rd vdisk I would like to use for the replica of any VM.

    Does my configuration look OK to you?

    Thanks a lot
    Bye
    Fabio

  • Hello Fabio,

    That should work!

    I'm curious about what you mean by: "The 3rd vdisk I would like to use for the replica of any VM." What are you planning on using it for? I just don't know what you mean by replica.

    Keep in mind that the more disks you have in a RAID volume, the more speed you get. With only three disks per vdisk, you may be somewhat limited on speed.

    Have you thought about creating 2 volumes with 4 disks each, and then 1 global spare? This may increase your performance a bit.

    Stephen

  • Sorry, replica means the replication of VMs, for example with software such as vSphere Replication or Trilead VM Explorer Pro. With this scenario, if any VM fails, I can use the replicated VM.
    Is it a good choice to use the 3rd vdisk for this, or is it better to use secondary storage?

  • Hi Fabio,

    Thanks for clearing that up. Actually that would work great the way you originally planned it. Just make sure you provision enough storage.

    One other thing I want to mention (I found this out after I provisioned everything), if you want to use the Storage snapshot capabilities, you'll need to leave free space on the RAID volumes. I tried to snap my storage the other day and was unable to.

    Let me know if you have any other questions and I'll do my best to help out! I'm pretty new to the unit myself, but it's very simple and easy to configure. Super powerful device!

    Stephen

  • Hi Fabio,

    Storage snapshot capabilities are built in to the MSA 2040. A standard MSA 2040 supports up to 64 snapshots. An additional license can be purchased that allows up to 512 snapshots.

  • Good, very good...
    When the MSA arrives in my lab, I'll start to play with it...

    Thanks a lot

  • Very interesting article.

    This is exactly the setup I was about to order for my company, but the local HP folks said that such a setup may not work and that I should consider a 10G Ethernet switch for iSCSI traffic. Could you please share a photo that shows the servers directly connected to the MSA through the DAC cables? That might convince them!

    Regards

  • Hello Satish,

    It is indeed a supported configuration. I was extremely nervous too before ordering, since there is no mention of it in the QuickSpecs or in articles on the internet. However, since I'm an HP Partner, I have access to internal HP resources to verify the configuration and confirm that it is indeed supported.

    Just so you know I used these Part#s:
    665249-B21 - HEWLETT PACKARD : HP Ethernet 10Gb 2-port 560SFP+ Adapter (Server NICs)
    487655-B21 - HEWLETT PACKARD : HP BladeSystem c-Class Small Form-Factor Pluggable 3m 10GbE Copper Cable

    I know that the cable (the second item) mentions that it is for BladeSystem, but you can safely ignore that.

    Please Note: the MSA2040 does need to be purchased with at least 1-pack of SFP+ modules (due to HP ordering rules). I chose the 4-pack of the 1Gb RJ45 iSCSI SFP+ modules. Note that I have these in the unit, but am NOT using them at all. I have the servers connected only via the DAC cables (each server has 1 DAC cable to Controller A, and 1 DAC cable going to Controller B).

    I do have a picture I can send you, let me know if you want me to e-mail you. I can also answer any questions you may have if you want me to e-mail you.

    Thanks,
    Stephen

  • Hey Stephen,

    Great post. I've also got the MSA 2040 with the 900GB SAS drives, connected through dual 12Gb SAS controllers to 2x HP DL380 G8 ESXi hosts. It's a fantastic and fast system. Better than the old Dell MD1000 I had ;)
    Setup was indeed a breeze. I was torn between the FC and the SAS controllers and chose SAS. I only have 2 ESXi hosts with VMware Essentials Plus, so 3 hosts max.

    During the data migration from our old data server to the MSA with robocopy, I saw the data copied at around 700MB/s (for large files, and during the weekend).

    Backup will be done with Veeam, going to disk and then to tape. The backup server is connected directly to MSA port 4 with a 6Gb SAS HBA, and an LTO-5 tape library is connected to the backup server, also with a 6Gb HBA.

    I'm very pleased with my setup.

  • Hi Lars,

    Glad to hear it's working out so well for you! I'm still surprised how awesome this SAN is. I literally set it up (with barely any experience), and it's just been sitting there working, with no issues since! Love the performance as well. And nice touch with the LTO-5 tape library. I fell in love with the HP MSL tape libraries a few years back; the performance on those is always pretty impressive!

    I'm hoping I'll be rolling these out for clients soon when it's time to migrate to a virtualized infrastructure!

    Cheers,
    Stephen

  • Hi Stephen,

    Thanks for sharing. About to purchase the 2040 and this configuration could appeal to us.

    I take it you have 2 x 10Gb DAC links per controller (and 2 x unused 1Gb iSCSI SFPs per controller)?

    So with the 10Gb links between your hosts and SAN, are vMotion speeds super fast?

    Our current SAN backbone is a 4Gb FC and we were planning to go to 6Gb SAS but this may be the better option as SAS will limit us hosts-wise in the future.

    Regards,
    Steve

  • Hello Steve,

    That is correct (re: the 2 x 10Gb DACs and 2 x unused 1Gb SFP+s)...

    The configuration is working great for me and I'm having absolutely no complaints. Being a small business I was worried about spending this much money on this config, but it has exceeded my expectations and has turned out to be a great investment.

    For vMotion, each of my servers actually has a dual-port 10GBase-T NIC inside it. I just connected a 3-foot Cat6 cable from server to server and dedicated it to vMotion. vMotion over 10Gb is awesome; it's INCREDIBLY fast. When doing updates to my hosts, it moves 10-15 VMs (each with 6-12GB of RAM allocated) to the other host in less than 40 seconds total.

    As for Storage vMotion, I have no complaints. It sure as heck beats my old configuration. I don't have any numbers on storage vMotion, but it is fast!

    And to your comment about SAS: originally I looked at using SAS, but ended up not touching it because of the same issue; I was concerned about adding more hosts in the future. Also, as I'm sure you're aware, iSCSI provides more flexibility for hosts, connectivity, expansion, etc... In the far, far future, I want the ability to re-purpose this unit when it comes time to perform my own infrastructure refresh.

    Keep in mind that with the 2040, the only thing that's going to slow you down is your disks and your RAID level. The controllers can handle SSD drives and SSD speeds, but remember: you want fast disks, you want lots of disks, and you want a fast RAID level.
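    To put some rough numbers behind that, here's a back-of-the-envelope random-I/O estimate. The per-disk IOPS figure and the RAID write penalty are generic rules of thumb, not MSA 2040 measurements:

    ```python
    # Rough random-I/O estimate showing why disk count and RAID level dominate.
    # 10k RPM SAS is often ballparked around 140 IOPS/disk, and RAID 5 has a
    # write penalty of 4 (read data, read parity, write data, write parity).
    def array_iops(disks, iops_per_disk, write_penalty, read_fraction):
        raw = disks * iops_per_disk
        # Effective IOPS once writes are amplified by the RAID penalty.
        return raw / (read_fraction + (1 - read_fraction) * write_penalty)

    # Twelve-disk RAID 5 array at a 70/30 read/write mix:
    print(round(array_iops(12, 140, 4, 0.70)))   # ~884 effective IOPS
    ```

    Doubling the disk count roughly doubles the result, while moving to a RAID level with a lower write penalty (e.g. RAID 10's penalty of 2) raises it further, which is exactly the "fast disks, lots of disks, fast RAID level" advice above.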

    Let me know if you have any questions!

    Cheers,
    Stephen
