Feb 14, 2017
 

Years ago, HPe released the GL200 firmware for their MSA 2040 SAN, which allowed users to provision and use virtual disk groups (and virtual volumes). This firmware came with a whole bunch of features, such as Read Cache, performance tiering, thin provisioning of virtual disk group based volumes, and the ability to allocate and commission new virtual disk groups as required.

(Please Note: With virtual disk groups, you cannot add a single disk to an already created disk group. You must either create another disk group (best practice is to create it with the same number of disks, the same RAID type, and the same disk type), or migrate your data, delete the disk group, and re-create it.)
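If you end up expanding this way from the MSA's CLI rather than the SMU web interface, the commands look roughly like the sketch below. This is only an illustration based on the v3 (GL200+) CLI; the disk range, RAID level, and pool assignment are examples for a layout like mine, so verify the exact syntax against the CLI reference guide for your firmware before running anything.

    # List the available disks and their enclosure.slot identifiers
    show disks

    # Add a second virtual disk group (disks 1.13-1.24 here as an example),
    # matching the RAID level and disk count of the existing group
    add disk-group type virtual disks 1.13-1.24 level raid5 pool b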

The biggest thing with virtual storage was that volumes created on virtual disk groups could span multiple disk groups and provide access to different types of data over disks offering different performance capabilities. Essentially, via an automated process internal to the MSA 2040, the SAN would place highly used data (hot data) on faster media such as SSD based disk groups, and place regularly/seldom used data (cold data) on slower types of media such as Enterprise SAS disks or archival MDL SAS disks.

(Please Note: Using the performance tier requires the purchase of a performance tiering license, or it is bundled if you purchase an HPe MSA 2042, which additionally comes with SSD drives for use with “Read Cache” or the “Performance tier”.)

 

When the firmware was first released, I had no real urge to try it out since I have 24 x 900GB SAS disks (only one type of storage), and of course everything was running great, so why change it? With that being said, I had wanted and planned to one day kill off my linear storage groups and implement virtual disk groups. The key reasons for me were thin provisioning (the MSA 2040 supports the “DELETE” VAAI function) and virtual based snapshots (in my environment, I require over-commitment of the volume). As a side note, as of ESXi 6.5, ESXi now regularly unmaps unused blocks when using the VMFS-6 filesystem (if left enabled), which is great for SANs using thin provisioning that support the “DELETE” VAAI function.
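For anyone curious about that automatic unmap behaviour, it can be checked and adjusted per datastore from an ESXi 6.5 host. The datastore name below (“Datastore01”) is just a placeholder for your own VMFS-6 volume.

    # Show the automatic space reclamation (unmap) settings for a VMFS-6 datastore
    esxcli storage vmfs reclaim config get --volume-label=Datastore01

    # Adjust the reclamation priority ("none" disables automatic unmap, "low" is the default)
    esxcli storage vmfs reclaim config set --volume-label=Datastore01 --reclaim-priority=low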

My environment consisted of two linear disk groups: 12 disks in RAID5 owned by controller A, and 12 disks in RAID5 owned by controller B (24 disks total). Two weekends ago, I went ahead and migrated all my VMs to the other datastore (on the other volume), deleted the first linear disk group, created a virtual disk group, then migrated all the VMs back, deleted my second linear disk group, and created a second virtual disk group.

Overall the process was very easy and fast. No downtime is required for this operation if you’re licensed for Storage vMotion in your vSphere environment.

During testing, I’ve noticed absolutely no performance loss using virtual versus linear, except for some functions that utilize the VAAI primitives, which of course run faster on the virtual disk groups since the work is offloaded to the SAN. This was a major concern for me, as linear block based storage is accessed more directly than virtual disk groups, which add an extra level of software involvement between the controllers and disks (block based access vs file based access for the iSCSI targets being provided by the controllers).
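If you want to confirm which VAAI primitives a LUN actually reports (including Delete, i.e. UNMAP), you can check from an ESXi host. The naa identifier below is a placeholder; substitute the device identifier of one of your MSA volumes.

    # Show the hardware acceleration (VAAI) status for a specific device;
    # look at the ATS, Clone, Zero, and Delete status values
    esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # Or list the VAAI status for all attached devices
    esxcli storage core device vaai status get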

Unfortunately, since I have no SSDs and no extra room for disks, I won’t be able to try the performance tiering, but I’m looking forward to it in the future.

I highly recommend implementing virtual disk groups on your HPe MSA 2040 SAN!

  4 Responses to “HPe MSA 2040 – The switch from linear disk groups, to virtual disk groups…”

  1. Hi. I’ve a new MSA2040 with a StorageWorks D2700 connected to it via 2x DACs. I’ve then got the 2040 connected to 2x HPE 8/8 Brocades in a FC setup and some ESXi hosts connected to the switches via HBAs. I’m experimenting with how best to use the storage trays. I only have 30x 600G SAS 10k disks between the two, so I played with 24x in the 2040 and the rest in the 2700, and then 15x in the 2040 and 15x in the 2700, but I can’t tell which makes the most sense. My final experiment will be to go back to 24x in the 2040 and 6x in the 2700 and set up 2x virtual disks, but with the first VD using odd numbered drives starting in the 2040 and ending in the 2700, and the second VD using the even drives. I thought this would provide the best use of the drives while also spreading the spindles across the 2040 and the 2700. What would your opinion be on this?

  2. Hi “New MSA2040 D2700 owner”,

    As for the design of your implementation, you also need to take into account the type of workloads you will be putting on the MSA and the various disk groups you create.

    First, I wouldn’t recommend alternating disk placement in the disk groups (your comment about even/odd numbering). I would have the disks physically grouped together so they can be easily identified. While the MSA may allow you to do it your way, I’d highly recommend against it. The last thing you want is someone accidentally pulling the wrong disk in the event of a failure when restoring the ordering of a disk group (please reference the MSA 2040 documentation on restoring order after a disk failure replacement).

    The SAN will have the fastest access to the disks directly inside the MSA 2040, so I would reserve those for disk groups serving high I/O and high bandwidth applications. If you plan on adding any SSD disks in the future, I would reserve the disk slots in the MSA for those SSD disks.

    There is a limit to how many disks can be added to a disk group depending on the RAID level chosen; you must also take this into account.

    Finally, if possible, I would recommend spreading the disk groups over the different storage pools. This helps with performance, as one controller owns and serves one pool while the other controller owns and serves the other pool. In the event of a cable or controller failure, the surviving controller will take ownership (depending on the type of failure) and/or allow access to the other storage pool.
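    A quick way to sanity check how your disk groups and pools end up split between the controllers is from the MSA CLI. This is based on the v3 (GL200+) CLI, so confirm the commands against the CLI reference guide for your firmware:

        # Show both virtual pools (A and B) and which controller owns each
        show pools

        # Show each disk group and the pool it belongs to
        show disk-groups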

    I hope this helps… If you can provide more details, I’ll do my best to answer any questions or provide advice.

    Cheers,
    Stephen

  3. Hi Stephen. I went back to consecutive disks in RAID5, RAID6 and RAID10 to see what speeds I could get, but with an 8Gb HBA, 2x HPE SAN 8/8 switches and the round robin path selection policy in ESXi 6, I’m only getting around 400MB/s read and write. Any tips on where I could look for throttling or sub-optimal connections? I’ve got one DL380 G9 with 1x dual port Emulex 8Gb card, each port connected to a host port on either SAN switch. Each SAN switch has 2x connections to each controller on the MSA2040 (controller A & B port 1 to SAN switch 1, controller A & B port 2 to SAN switch 2.)

  4. Hi “MSA2040 D2700 owner”,

    When disk groups are created, there is an initialization period, and after it completes, a virtual disk scrub is initiated. Are you waiting until these tasks are completed before testing speeds?

    I’m not that familiar with FC (I’ve only worked with iSCSI), but I do believe you should be getting faster speeds. Have you read the best practice documents that HPe provides on configuration? They cover absolutely everything.

    If you’re using ESXi (vSphere), you should check to make sure that all “optimized” paths are being used, and that “non-optimized” paths are active but don’t have I/O going over them. You also need to check your configuration of LUNs with the best practice document to make sure everything is configured properly.
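    To check this from an ESXi host, the commands below will show the path selection policy and path states for a LUN. The naa identifier is a placeholder; substitute the device identifier of one of your MSA LUNs.

        # Show the path selection policy (e.g. VMW_PSP_RR for round robin)
        # and the working paths for the device
        esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

        # List every path to the device along with its state, adapter, and target
        esxcli storage core path list -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx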

    I hope this helps!

    Cheers
