Nov 21, 2015
 
HP MSA2040 Dual Controller SAN with 10Gb DAC SFP+ cables

I’d say 50% of all e-mails and comments I’ve received from the blog in the last 12 months or so have been from readers requesting pictures or proof of the HPE MSA 2040 Dual Controller SAN being connected to servers via 10Gb DAC cables. This should also apply to the newer generation HPE MSA 2050 Dual Controller SAN.

I decided to finally post the pics publicly! Let me know if you have any questions. In the pictures you’ll see the SAN connected to 2 X HPE Proliant DL360p Gen8 servers via 4 X HPE 10Gb Direct Attach Cables (DACs).

Connection of SAN from Servers

Connection of DAC Cables from SAN to Servers

See below for a video with host connectivity:

Nov 17, 2015
 

I recently had a reader reach out to me for assistance with an issue they were having with a VMware implementation. They were experiencing problems uploading files and performing I/O on Linux-based virtual machines.

Originally it was believed to be a networking issue, since the performance problems were only one way (when uploading/writing to storage) and weren’t experienced with all virtual machines. Other behaviour noticed included slow upload speeds to the vSphere client file browser and slow physical-to-virtual migrations.

After troubleshooting and exploring the issue with them, we found that cache was not enabled on the RAID array backing the storage for the vSphere implementation.

Please note that in virtual environments with storage backed by RAID arrays, RAID cache is a must (for performance), and battery-backed RAID cache is a must (for protection and data integrity). Write caching lets the controller acknowledge writes immediately, optimize and reorder them as they are processed, and commit them to multiple disks at once. Observed performance increases dramatically because the ESXi hosts and virtual machines aren’t waiting for each write operation to commit to disk before proceeding to the next.

You’ll notice that under Windows virtual machines this issue often isn’t observed on writes, since Windows typically caches file transfers to RAM and then flushes them to disk. This can give the impression that there are no storage issues at all while troubleshooting (making one believe it’s related to the Linux VMs, the ESXi hosts themselves, or some odd networking issue).

 

Again, I cannot stress enough that you should have a battery-backed cache module, or a capacitor-backed flash module, providing the cache function.

If you do enable cache without battery (or capacitor) backing, corruption can occur on the RAID array if there is a power failure or if the RAID controller freezes. Battery-backed cache allows the cached writes to be committed to disk on the next restart of the storage unit or storage controller, protecting your data.
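
If you’re on HPE Smart Array controllers (like the ones that back most of the setups on this blog), a quick way to check cache and battery health is the Smart Storage Administrator CLI. Here’s a minimal Python sketch; it assumes the CLI is installed and on the PATH (the binary is ssacli, hpssacli, or hpacucli depending on controller generation), and it only runs read-only “show” commands.

    #!/usr/bin/env python3
    """Quick check of HPE Smart Array write-cache and battery/capacitor status.

    A minimal sketch, not the author's tooling. Adjust SSACLI to hpssacli or
    hpacucli on older controller generations.
    """
    import subprocess

    SSACLI = "ssacli"  # assumption: the HPE Smart Storage Administrator CLI is installed

    def show(args):
        """Run a read-only ssacli query and print its output."""
        result = subprocess.run([SSACLI] + args, capture_output=True,
                                text=True, check=True)
        print(result.stdout)

    # Controller status, cache status, and battery/capacitor status in one shot.
    show(["ctrl", "all", "show", "status"])

    # Detailed configuration, which includes the cache ratio and whether
    # caching is enabled on each logical drive.
    show(["ctrl", "all", "show", "config", "detail"])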

Jun 07, 2014
 

I’ve had the HPE MSA 2040 set up, configured, and running for about a week now. Thankfully this weekend I had some time to run some benchmarks. Let’s take a look at the HPE MSA 2040 benchmarks for read, write, and IOPS.

First some info on the setup:

-2 X HPE Proliant DL360p Gen8 Servers (2 X 10 Core processors each, 128GB RAM each)

-HPE MSA 2040 Dual Controller – Configured for iSCSI

-HPE MSA 2040 is equipped with 24 X 900GB SAS Dual Port Enterprise Drives

-Each host is directly attached via 2 X 10Gb DAC cables (each server has 1 DAC cable going to controller A, and 1 DAC cable going to controller B)

-2 vDisks are configured, each owned by a separate controller

-Disks 1-12 configured as RAID 5 owned by Controller A (512K Chunk Size Set)

-Disks 13-24 configured as RAID 5 owned by Controller B (512K Chunk Size Set)

-While round robin is configured, only one optimized path exists (only one path is being used) for each host to the datastore I tested

-Utilized “VMware I/O Analyzer” (https://labs.vmware.com/flings/io-analyzer), which uses IOMeter for testing

-Running 2 “VMware I/O Analyzer” VMs as worker processes, both testing the same datastore at the same time

Sequential Read Speed:

Max Read: 1480.28 MB/sec

Sequential Write Speed:

Max Write: 1313.38 MB/sec

See below for IOPS (Max Throughput) testing:

Please note: The MaxIOPS and MaxWriteIOPS workloads were used. These workloads don’t have any randomness, so I’m assuming the cache module answered all the I/O requests (however, I could be wrong). Tests were run for 120 seconds. What this means is that this is more a test of what the controller itself can handle over a single 10Gb link from the controller to the host.

IOPS Read Testing:

Max Read IOPS: 70,679.91 IOPS

IOPS Write Testing:

Max Write IOPS: 29,452.35 IOPS

PLEASE NOTE:

-These benchmarks were done by 2 separate worker processes (1 running on each ESXi host) accessing the same datastore.

-I was running a VMware vDP replication in the background (my bad, I know…).

-“Sum” is the combined throughput of both hosts; “Average” is the per-host throughput (see the quick calculation below).
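
For a rough sanity check on these numbers, here’s a small Python snippet relating throughput to IOPS. The block sizes used (256 KB for the throughput workloads, 512 bytes for the IOPS workloads) are my assumptions about the I/O Analyzer profiles, not values taken from the test files.

    # Relate MB/sec and IOPS for a given block size; block sizes are assumptions.
    def to_iops(mb_per_sec, block_kb):
        return mb_per_sec * 1024 / block_kb

    def to_mb_per_sec(iops, block_kb):
        return iops * block_kb / 1024

    # 1480.28 MB/sec of (assumed) 256 KB sequential reads is roughly this many IOPS:
    print(round(to_iops(1480.28, 256)))            # ~5921 IOPS

    # 70,679.91 read IOPS at (assumed) 512-byte blocks is only ~34.5 MB/sec of payload,
    # which is why a cache-friendly small-block test says more about the controller
    # and the 10Gb path than about the disks themselves:
    print(round(to_mb_per_sec(70679.91, 0.5), 1))  # ~34.5 MB/sec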

Conclusion:

Holy crap this is fast! I’m betting the speed limit I’m hitting is the 10Gb interface. I need to get some more paths set up to the SAN!

Cheers

Jun 07, 2014
 

Over the years I’ve come across numerous posts, blogs, articles, and how-to guides that provide information on when to use iSCSI port binding, and they’ve all been wrong! Here, I’ll explain when to use iSCSI port binding, and why!

This post and information applies to all versions of VMware vSphere including 5, 5.5, 6, 6.5, 6.7, and 7.0.

See below for a video version of the blog post:

VMWare vSphere iSCSI Port Binding – When to use iSCSI Port Binding, and why!

What does iSCSI port binding do?

iSCSI port binding binds the software iSCSI initiator on an ESXi host to specific VMkernel (vmk) interfaces, and configures things so that multipathing (MPIO) works in situations where multiple vmk interfaces reside on the same subnet.

In normal circumstances without port binding, if you have multiple VMkernel interfaces on the same subnet (multihomed), the ESXi host will simply choose one and won’t use both for transmitting packets, traffic, and data. iSCSI port binding forces the iSCSI initiator to use each bound adapter for both transmitting and receiving iSCSI packets.

In most simple SAN environments, there are two different types of setups/configurations.

  1. Multiple Subnet – Numerous paths to a storage device on a SAN, each path residing on separate subnets. These paths are isolated from each other and usually involve multiple switches.
  2. Single Subnet – Numerous paths to a storage device on a SAN, each path is on the same subnet. These paths usually go through 1-2 switches, with all interfaces on the SAN and the hosts residing on the same subnet.

IT professionals should be aware of the issues that occur when you have a host that is multi-homed with multiple NICs on the same subnet.

In a typical scenario with Windows or Linux, if you have multiple adapters residing on the same subnet you’ll have issues with broadcasts and packet transmission, and in most cases you have no control over which NIC initiates which communication because of the way the routing table is handled. In most cases all outbound connections will be initiated through the first NIC installed in the system, or whichever one the primary route in the routing table points at.

When to use iSCSI port binding

This is where iSCSI port binding comes into play. If you have an ESXi host with multiple vmk adapters sitting on the same subnet, you can bind the software iSCSI initiator’s vmk adapters to the physical NICs (vmnics). This allows multiple iSCSI connections on multiple NICs residing on the same subnet to transmit and handle the traffic properly (see the sketch after the rule of thumb below for the CLI equivalent).

So the general rule of thumb is:

  • One subnet, iSCSI port binding is the way to go!
  • Two or more subnets (multiple subnets), do not use iSCSI Port Binding! It’s just not needed since all vmknics are residing on different subnets.
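
For reference, here is a minimal Python sketch of the CLI side of port binding, assuming it’s run in the ESXi Shell (which ships with a Python interpreter); the adapter and vmk names are examples and will differ in your environment. The same esxcli commands can of course be run directly.

    #!/usr/bin/env python3
    """Bind VMkernel ports to the software iSCSI adapter (single-subnet design).

    A sketch only; adapter and vmk names below are example assumptions.
    """
    import subprocess

    ISCSI_ADAPTER = "vmhba33"       # your software iSCSI adapter name
    VMK_PORTS = ["vmk1", "vmk2"]    # the vmk interfaces on the shared subnet

    def esxcli(*args):
        subprocess.run(["esxcli"] + list(args), check=True)

    for vmk in VMK_PORTS:
        # Bind each VMkernel port to the software iSCSI initiator.
        esxcli("iscsi", "networkportal", "add",
               "--adapter", ISCSI_ADAPTER, "--nic", vmk)

    # Verify the bindings afterwards.
    esxcli("iscsi", "networkportal", "list", "--adapter", ISCSI_ADAPTER)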

Additional Information

Here are two links to VMware documentation explaining this in more detail:

For more information on configuring a vSphere Distributed Switch for iSCSI MPIO, click here!

And a final troubleshooting note: If you configure iSCSI Port Binding and notice that one of your interfaces is showing as “Not Used” and the other as “Last Used”, this is most likely due to either a physical cabling/switching issue (where one of the bound interfaces can’t connect to the iSCSI target), or you haven’t configured permissions on your SAN to allow a connection from that IP address.

Jun 07, 2014
 

When I first started really getting in to multipath iSCSI for vSphere, I had two major hurdles that really took a lot of time and research to figure out before deploying.

  1. How to configure a vSphere Distributed Switch for iSCSI multipath MPIO with each path being on different subnets (a.k.a. each interface on separate subnets)
  2. Whether or not I should use iSCSI Port Binding

There are so many articles on the internet, and most of them are wrong.

In this article I’ll be getting in to the first point, how to configure a vSphere Distributed Switch for iSCSI multipath MPIO with multiple subnets. You can find information on the second point here, when to use iSCSI Port Binding, and why.

I’ll start off by saying that when using multiple subnets on multiple isolated networks for SAN connectivity, you DO NOT use iSCSI Port Binding.

Configure the vSphere Distributed Switch (vDS) for iSCSI MPIO

Configuring a standard (non-distributed) vSphere Standard Switch is easy, but we want to do this right, right? Configuring a vSphere Distributed Switch lets you roll the configuration out to multiple hosts, making provisioning easier, and it also makes the configuration easier to manage and maintain. In my opinion, in a full vSphere rollout there’s no reason to use vSphere Standard Switches. Everything should be distributed!

The setup

My configuration consists of two hosts connecting to an iSCSI device over 3 different paths, each with its own subnet. Each host has multiple NICs, and the storage device has multiple NICs as well.

As always, I plan the deployment on paper before touching anything. When getting ready for deployment, you should write down:

  • Which subnets you will use
  • Which IP addresses you’ll assign to the SAN and hosts (a small sketch that generates this example’s addressing plan follows the lists below)
  • A map of what connects to what. When you start rolling this out, it’s good to have that picture in your mind and on paper; if you lose track, it helps you get back on track and avoid mistakes.

For this example, let’s assume that we have 3 connections (I know it’s an odd number):

Subnets to be used:

  • 10.0.1.X
  • 10.0.2.X
  • 10.0.3.X

SAN Device IP Assignment:

  • 10.0.1.1 (NIC 1)
  • 10.0.2.1 (NIC 2)
  • 10.0.3.1 (NIC 3)

Host 1 IP Assignment:

  • 10.0.1.2 (NIC 1)
  • 10.0.2.2 (NIC 2)
  • 10.0.3.2 (NIC 3)

Host 2 IP Assignment:

  • 10.0.1.3 (NIC 1)
  • 10.0.2.3 (NIC 2)
  • 10.0.3.3 (NIC 3)
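
If you like keeping the addressing plan in code as well as on paper, here’s a tiny Python sketch that generates this example’s layout; the subnets and host count match the example above, so adjust them for your own rollout.

    # Generate the example addressing plan: one isolated subnet per path,
    # SAN NICs end in .1, host NICs end in .2, .3, ...
    SUBNETS = ["10.0.1", "10.0.2", "10.0.3"]
    SAN_OCTET = 1
    HOST_OCTETS = {"Host 1": 2, "Host 2": 3}

    plan = {"SAN": [f"{s}.{SAN_OCTET}" for s in SUBNETS]}
    for host, octet in HOST_OCTETS.items():
        plan[host] = [f"{s}.{octet}" for s in SUBNETS]

    for device, addresses in plan.items():
        print(device, addresses)
    # SAN ['10.0.1.1', '10.0.2.1', '10.0.3.1']
    # Host 1 ['10.0.1.2', '10.0.2.2', '10.0.3.2']
    # Host 2 ['10.0.1.3', '10.0.2.3', '10.0.3.3']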

So now we know where everything is going to sit, and its addresses. It’s now time to configure a vSphere Distributed Switch and roll it out to the hosts.

Instructions

Let’s begin!

  1. We’ll start off by going in to the vSphere client and creating a new vSphere Distributed Switch. You can name this switch whatever you want; I’ll use “iSCSI-vDS” for this example. Going through the wizard you can assign a name. Stop when you get to “Add Hosts and Physical Adapters”; on this page we will choose “Add Later”. Also, when it asks us to create a default port group, we will un-check the box and NOT create one.
  2. Now we need to create some “Port Groups”. Essentially we will be creating a Port Group for each subnet and NIC for the storage configuration. In this example case, we have 3 subnets, and 3 NICs per host, so we will be creating 3 port groups. Go ahead and right click on the new vSphere distributed switch we created (“iSCSI-vDS” in my example), and create a new port group. I’ll be naming my first one “iSCSI-01”, second will be called “iSCSI-02”, and so on. You can go ahead and create one for each subnet. After these are created, we’ll end up with this:
  3. After we have this set up, we now need to do some VERY important configuration. By default, each port group will have all uplinks configured as Active, which we DO NOT want. Essentially, we will be assigning only one active uplink per port group. Each port group will be on its own subnet, so we need to make sure that only the applicable uplink is active and the remainder are moved to the “Unused Uplinks” section. This can be done by right clicking on each port group and going to “Teaming and Failover” underneath “Policies”. You’ll need to select the uplinks that shouldn’t be used and, using the “Move Down” button, move them down to “Unused Uplinks”. Below you’ll see some screenshots from the iSCSI-02 and iSCSI-03 port groups we’ve created in this example:

    You’ll notice that the iSCSI-02 port group only has the iSCSI-02 uplink marked as active, and the iSCSI-03 port group only has the iSCSI-03 uplink marked as active. The same applies to iSCSI-01, and to any other links you have (more port groups if you have more links). Please ignore the entry for “iSCSI-04”; I created this for something else, so pretend it isn’t there. If you do have 4 subnets and 4 NICs, then you would have a 4th port group.
  4. Now we need to add the vSphere Distributed Switch to the hosts. Right click on the “iSCSI-vDS” distributed switch we created and select “Add Host”. Select ONLY the hosts, and DO NOT select any of the physical adapters. A box will appear mentioning you haven’t selected any physical adapters; simply hit “Yes” to “Do you want to continue adding the hosts”. For the rest of the wizard just keep hitting “Next”; we don’t need to change anything. Example below:

    So here we are now, we have a vSphere Distributed Switch created, we have the port groups created, we’ve configured the port groups, and the vDS is attached to our hosts… Now we need to create vmks (vmkernel interfaces) in each port group, and then attach physical adapters to the port groups.
  5. Head over to the Configuration tab inside of your ESXi host, and go to “Networking”. You’ll notice the newly created vSphere Distributed Switch is now inside the window; expand it. You’ll need to perform these steps on each of your ESXi hosts. Essentially what we are doing is creating a vmk on each port group, on each host. Click on “Manage Virtual Adapters” and click “Add”. We’ll select “New Virtual Adapter”, then on the next screen our only option will be “VMkernel”; click Next. In the “Select port group” option, select the applicable port group. You’ll need to do this multiple times, as we need to create a VMkernel interface for each port group (a vmk on iSCSI-01, a vmk on iSCSI-02, etc…) on each host; click Next. Since this is the vmk we are creating on the first port group (iSCSI-01) on the first host, we’ll assign the IP address as 10.0.1.2, fill in the subnet box, and finish the wizard. Create another vmk for the second port group (iSCSI-02); since it’s the first host it’ll have an IP of 10.0.2.2, and then do the same for the 3rd port group with an IP of 10.0.3.2. After you do this for the first host, you’ll need to do it again for the second host, only the IPs will be different since it’s a different host (in this example the second host would have a vmk on each port group: iSCSI-01 – 10.0.1.3, iSCSI-02 – 10.0.2.3, iSCSI-03 – 10.0.3.3). Here’s an example of iSCSI-02 and iSCSI-03 on ESXi host 1. Of course there’s also an iSCSI-01, but I cut it from the screenshot.
  6. Now we need to “Manage Physical Adapters” and attach the physical adapters to the individual port groups. Essentially this maps each physical NIC to the separate per-subnet port groups we’ve created for storage in the vDS. We’ll need to do this on both hosts. Inside of the “Manage Physical Adapters” box, you’ll see each port group on the left hand side; click on “<Click to Add NIC>”. In everyone’s environment the vmnic you add will be different. You should know which physical adapter you want to map to which subnet/port group. I’ve removed the vmnic number from the below screenshot just in case… and to make sure you think about this one…

As mentioned above, you need to do this on both hosts for the applicable vmnics. You’ll want to assign all 3 (even though I’ve only assigned 2 in the above screenshot).

Voila! You’re done! Now all you need to do is go in to your iSCSI initiator and add the IPs of the iSCSI target to the Dynamic Discovery tab on each host. Rescan the adapter, add the VMFS datastores, and you’re done.
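
Here’s a minimal sketch of what that last step looks like from the command line, assuming it’s run in the ESXi Shell on each host; the adapter name is an example and the target IPs are the example values from this post, so substitute your own.

    #!/usr/bin/env python3
    """Add the SAN's iSCSI portals to dynamic discovery and rescan (sketch)."""
    import subprocess

    ISCSI_ADAPTER = "vmhba33"                          # software iSCSI adapter (example)
    TARGET_IPS = ["10.0.1.1", "10.0.2.1", "10.0.3.1"]  # SAN interfaces from this example

    def esxcli(*args):
        subprocess.run(["esxcli"] + list(args), check=True)

    for ip in TARGET_IPS:
        # Equivalent of adding an entry on the Dynamic Discovery tab.
        esxcli("iscsi", "adapter", "discovery", "sendtarget", "add",
               "--adapter", ISCSI_ADAPTER, "--address", ip)

    # Rescan the adapter so the new paths and VMFS datastores show up.
    esxcli("storage", "core", "adapter", "rescan", "--adapter", ISCSI_ADAPTER)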

If you have any questions or comments, or feel this can be done in a better way, drop a comment on this article. Happy Virtualizing!

Jumbo Frames

There is one additional step if you are using jumbo frames. Please note that to use jumbo frames, all NICs, physical switches, and the storage device itself need to be configured to support them. On the VMware side of things, you need to apply the following settings (a CLI sketch follows the list):

  1. Under “Inventory” and “Networking”, Right Click on the newly created Distributed Switch. Under the “Properties” tab, select “Advanced” on the left hand side. Change the MTU to the applicable frame size. In my scenario this is 9000.
  2. Under “Inventory” and “Hosts and Clusters”, click on the “Configuration Tab”, then “vSphere Distributed Switch”. Expand the newly created “Distributed Switch”, select “Manage Virtual Adapters”. Select a vmk interface, and click “edit”. Change the MTU to the applicable size, in my case this is 9000. You’ll need to do this for each vmk interface on each physical host.

Good luck with your setup and configuration!

Jun 07, 2014
 

So, you have:

2 X HP Proliant DL360p Gen8 Servers with 2 X 10 Core Processors

1 X MSA 2040 SAN – With Dual Controllers

 

And you want more visibility, more functionality, and, most importantly, more “Insight” into your systems where the hardware meets the software. This is where HP Insight Control for VMware comes into play.

This package is amazing for providing information and “Insight” into all your equipment, including servers and storage units. It allows you to update firmware, monitor and manage servers, monitor and manage storage arrays, and rapidly deploy new datastores and manage existing ones. It makes all this information and functionality available through the vSphere management interfaces, which is just fantastic.

 

I was browsing the downloads area on HP’s website for the MSA 2040, and the website suggested I download “Insight Control for VMware”, so I figured why not! After getting this package installed, I instantly saw the value.

HP Insight Control for VMware allows you to access server health, management, and control, along with storage health, management, and control. It supports HP servers with iLO, and fully supports the MSA 2040 SAN.

Installation was a breeze; it was installed within seconds. I chose to install it directly on to my demo vSphere 5.5 vCenter server. Barely any configuration is required; the installation process was actually just a few clicks of “Next”. Once installed, you simply have to configure iLO credentials and then add your storage system if you have a compatible SAN. Even adding your SAN is super easy, and it lets you choose whether you want Insight Control to have full access to the SAN (which allows it to create and manage datastores), or Read Only access, which only allows it to pull information from the unit.

 

And for those of you concerned about port conflicts, it uses:

3500, 3501, 3502, 3503, 3504, 3505, 3506, 3507, 3508, 3509, 3510, 3511, 3512, 3513, and 8090.

 

Insight Control for VMware is available through both the software client and the web client. As far as I’m concerned it’s a “must have” if you’re running HP equipment in your vSphere environment!

HP Insight Control Firmware Management Page

HP Insight Control for VMware on the Software Client for vSphere

HP Insight Control for VMware showing iSCSI initiator paths

HP Insight Control for VMware Web Client

HP Insight Control for VMware Overview in the Web Client

 

May 28, 2014
 

In the last few months, my company (Digitally Accurate Inc.) and our sister company (Wagner Consulting Services), have been working on a number of new cool projects. As a result of this, we needed to purchase more servers, and implement an enterprise grade SAN. This is how we got started with the HPE MSA 2040 SAN (formerly known as the HP MSA 2040 SAN), specifically a fully loaded HPE MSA 2040 Dual Controller SAN unit.

The Purchase

For the server, we purchased another HPE Proliant DL360p Gen8 (with 2 X 10 Core processors and 128GB of RAM, exactly the same as our existing server), however I won’t be getting into that in this blog post.

Now for storage, we decided to pull the trigger and purchase an HPE MSA 2040 Dual Controller SAN. We purchased it as a CTO (Configure To Order) and loaded it up with 4 X 1Gb iSCSI RJ45 SFP+ modules (there’s a minimum requirement of one 4-pack of SFPs) and 24 X HPE 900GB 2.5-inch 10K RPM SAS Dual Port Enterprise drives. Even though we have the 4 X 1Gb iSCSI modules, we aren’t using them to connect to the SAN. We also placed an order for 4 X 10Gb DAC cables.

To connect the SAN to the servers, we purchased 2 X HPE Dual Port 10Gb Server SFP+ NICs, one for each server. The SAN will connect to each server with 2 X 10Gb DAC cables, one going to Controller A, and one going to Controller B.

HPE MSA 2040 Configuration

I must say that configuration was an absolute breeze. As always, using Intelligent Provisioning on the DL360p, we had ESXi up and running in no time, installed to the on-board 8GB microSD card.

I’m completely new to the MSA 2040 SAN and had actually never played with or configured one. After turning it on, I immediately went to HPE’s website and downloaded the latest firmware for both the drives and the controllers. It’s a well-known fact that to enable iSCSI on the unit, you have to have the controllers running the latest firmware version.

Turning on the unit, I noticed the management NIC on the controllers quickly grabbed an IP from my DHCP server. Logging in, I found the web interface extremely easy to use. Right away I went to the firmware upgrade section, and uploaded the appropriate firmware file for the 24 X 900GB drives. The firmware took seconds to flash. I went ahead and restarted the entire storage unit to make sure that the drives were restarted with the flashed firmware (a proper shutdown of course).

While you can update the controller firmware with the web interface, I chose not to do this as HPE provides a Windows executable that will connect to the management interface and update both controllers. Even though I didn’t have the unit configured yet, it’s a very interesting process that occurs. You can do live controller firmware updates with a Dual Controller MSA 2040 (as in no downtime). The way this works is, the firmware update utility first updates Controller A. If you have a multipath (MPIO) configuration where your hosts are configured to use both controllers, all I/O is passed to the other controller while the firmware update takes place. When it is complete, I/O resumes on that controller and the firmware update then takes place on the other controller. This allows you to do online firmware updates that will result in absolutely ZERO downtime. Very neat! PLEASE REMEMBER, this does not apply to drive firmware updates. When you update the hard drive firmware, there can be ZERO I/O occurring. You’d want to make sure all your connected hosts are offline, and no software connection exists to the SAN.

Anyways, the firmware update completed successfully. Now it was time to configure the unit and start playing. I read through a couple quick documents on where to get started. If I did this right the first time, I wouldn’t have to bother doing it again.

I used the wizards available to first configure the actual storage, and then provision and map it to the hosts. When deploying a SAN, you should always write down and create a map of your Storage Area Network topology. It helps when it comes time to configure, and really helps reduce mistakes in the configuration. I quickly jotted down the IP configuration for the various ports on each controller and the IPs I was going to assign to the NICs on the servers, and drew out a quick diagram of how everything would connect.

Since the MSA 2040 is a Dual Controller SAN, you want to make sure that each host can directly access both controllers. Therefore, in my configuration with a 2-port NIC in each server, port 1 on the NIC connects to a port on controller A of the SAN, while port 2 connects to controller B. When you do this and configure all the software properly (VMware in my case), you end up with a configuration that provides both load balancing and fault tolerance. Keep in mind that in the Active/Active design of the MSA 2040, each controller has ownership of its configured vDisk. Most I/O will go only to the controller that owns that vDisk, but if that controller goes down, the vDisk fails over to the other controller and I/O proceeds uninterrupted until you resolve the fault.

For the first part, I had to run the configuration wizard and set the various environment settings. This includes time, management port settings, unit names, friendly names, and, most importantly, host connection settings. I configured all the host ports for iSCSI and set the applicable IP addresses that I had planned in my SAN topology document mentioned above. Although the host ports can sit on the same subnets, it is best practice to use multiple subnets.

Jumping into the storage provisioning wizard, I decided to create 2 separate RAID 5 arrays. The first array contains disks 1 to 12 (and while I have controller ownership set to auto, it will be assigned to controller A), and the second array contains disks 13 to 24 (again ownership is set to auto, but it will be assigned to controller B). After this, I assigned the LUN numbers and then mapped the LUNs to all ports on the MSA 2040, ultimately allowing access to both iSCSI targets (and RAID volumes) from any port.

I’m now sitting here thinking “This was too easy”. And it turns out it was just that easy! The RAID volumes started to initialize.

VMware vSphere Configuration

At this point, I jumped on to my vSphere demo environment and configured the vSphere Distributed Switches for iSCSI. I mapped the various uplinks to the various port groups and confirmed that there was hardware link connectivity. I jumped into the software iSCSI initiator, typed in the discovery IP, and BAM! The iSCSI initiator found all available paths and both RAID disks I had configured. I did this for the other host as well, connected to the iSCSI targets, formatted the volumes as VMFS, and I was done!
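
For anyone doing the same from the command line, here’s a rough sketch of verifying the discovered devices and setting Round Robin on them (ESXi Shell, Python); the device identifier is a placeholder, and you’d pull your real naa.* IDs from the device list first.

    #!/usr/bin/env python3
    """Check the claimed devices and switch a LUN to Round Robin (sketch)."""
    import subprocess

    def esxcli(*args):
        subprocess.run(["esxcli"] + list(args), check=True)

    # List claimed devices along with their current path selection policy.
    esxcli("storage", "nmp", "device", "list")

    # Switch a datastore's device to Round Robin (repeat per device).
    DEVICE = "naa.xxxxxxxxxxxxxxxx"   # placeholder - use your MSA volume's real ID
    esxcli("storage", "nmp", "device", "set",
           "--device", DEVICE, "--psp", "VMW_PSP_RR")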

I’m still shocked that such a high performance and powerful unit was this easy to configure and get running. I’ve had it running for 24 hours now and have had no problems. This DESTROYS my old storage configuration in performance; thankfully I can keep my old setup for a vDP (vSphere Data Protection) instance.

HPE MSA 2040 Pictures

I’ve attached some pics below. I have to apologize for how ghetto the images/setup is. Keep in mind this is a test demo environment for showcasing the technologies and their capabilities.

HPE MSA 2040 SAN – Front Image
HPE MSA 2040 – Side Image
HPE MSA 2040 SAN with drives – Front Right Image
HPE MSA 2040 – Rear Power Supply and iSCSI Controllers
HPE MSA 2040 Dual Controller – Rear Image
HPE MSA 2040 Dual Controller SAN – Rear Image
HPE Proliant DL360p Gen8 with HPE MSA 2040 Dual Controller SAN
HPE MSA 2040 – With Power
HPE MSA 2040 – Side shot with power on
HPE Proliant DL360p Gen8 – UID LED on
HPE Proliant DL360p Gen8, HPE MSA 2040 Dual Controller SAN, VMware vSphere

Update: HPE has updated the MSA product line, and the 2040 has now been replaced by the HPE MSA 2050 Dual Controller SAN. There are also SSD cache models now, such as the HPE MSA 2052 Dual Controller SAN.

Apr 12, 2014
 

Recently I decided it was time to beef up my storage link between my demonstration vSphere environment and my storage system. My existing setup included a single HP DL360p Gen8, connected to a Synology DS1813+ via NFS.

I went out and purchased the appropriate (and compatible) HP 4 x 1Gb Server NIC (Broadcom based, 4 ports), and connected the Synology device directly to the new server NIC (all 4 ports). I went ahead and configured an iSCSI target using a File LUN with ALUA (advanced LUN features). I configured the NICs on both the vSphere side and the Synology side, and enabled jumbo frames of 9000 bytes.

I connected to the iSCSI LUN and created a VMFS volume. I then configured Round Robin MPIO on the vSphere side of things (as always, I made sure to enable “Multiple iSCSI initiators” on the Synology side).

I started to migrate some VMs over to the iSCSI LUN. At first I noticed it was going extremely slowly. I confirmed that traffic was being passed across all NICs (and also verified that all paths were active). After the migration completed, I decided to shut down the VMs and restart them to compare boot times. Booting from the iSCSI LUN was absolutely horrible; the VMs took forever to boot up. Keep in mind I’m very familiar with vSphere (my company is a VMware partner), so I know how to properly configure Round Robin, iSCSI, and MPIO.

I then decided to tweak some settings on the ESXi side of things. I configured the Round Robin policy to IOPS=1, which helped a bit. I then changed the RR policy to bytes=8800, which, after numerous other tweaks, I determined achieved the highest performance to the storage system over iSCSI (a CLI sketch of these tweaks is below).
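
For reference, these are the esxcli equivalents of those Round Robin tweaks, wrapped in a minimal Python sketch for the ESXi Shell; the device ID is a placeholder, defaults differ between ESXi releases, and the values are things to test in your own environment rather than universal recommendations.

    #!/usr/bin/env python3
    """Round Robin path-switching tweaks (IOPS=1 / bytes=8800) via esxcli (sketch)."""
    import subprocess

    DEVICE = "naa.xxxxxxxxxxxxxxxx"   # placeholder - your iSCSI LUN's real ID

    def rr_config(*args):
        subprocess.run(["esxcli", "storage", "nmp", "psp", "roundrobin",
                        "deviceconfig", "set", "--device", DEVICE] + list(args),
                       check=True)

    # Switch paths after every single I/O:
    rr_config("--type", "iops", "--iops", "1")

    # ...or switch paths every 8800 bytes, the setting that worked best here:
    rr_config("--type", "bytes", "--bytes", "8800")

    # Confirm the active policy for the device:
    subprocess.run(["esxcli", "storage", "nmp", "psp", "roundrobin",
                    "deviceconfig", "get", "--device", DEVICE], check=True)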

This config was used for a couple of weeks, but ultimately I was very unsatisfied with the performance. I know it’s not very accurate, but looking at the Synology resource monitor, each gigabit link over iSCSI was only achieving 10-15MB/sec under high load (single contiguous copies), when each link should have been capable of 100MB/sec and higher. The combined LAN throughput as reported by the Synology device across all 4 gigabit links never exceeded 80MB/sec, and file transfers inside of the virtual machines couldn’t get higher than 20MB/sec.
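
To put those numbers in perspective, here’s the quick math on what a single gigabit link should be able to carry; the overhead factor is a rough estimate on my part, not a measurement.

    # Theoretical vs. practical throughput of one gigabit link.
    line_rate_MBps = 1_000_000_000 / 8 / 1_000_000   # 125 MB/sec raw
    # Ethernet + IP + TCP + iSCSI headers eat several percent; ~110-118 MB/sec
    # of payload is a reasonable ceiling (estimate, not a measured value).
    practical_MBps = line_rate_MBps * 0.9
    print(round(line_rate_MBps), round(practical_MBps))   # 125 112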

I have a VMware vDP (vSphere Data Protection) test VM configured, which includes a performance analyzer inside of the configuration interface. I decided to use this to run some tests (I’m too lazy to configure a real I/O throughput test, since I know I won’t be continuing to use iSCSI on the Synology with the horrible performance I’m getting). The performance analyzer tests run for 30-60 minutes and measure reads and writes in MB/sec and seeks in seeks per second. I tested 3 different datastores.

Synology DS1813+ NFS over 1 X Gigabit link (1500MTU):

  • Read 81.2MB/sec, Write 79.8MB/sec, 961.6 Seeks/sec

Synology DS1813+ iSCSI over 4 x Gigabit links configured in MPIO Round Robin BYTES=8800 (9000MTU):

  • Read 36.9MB/sec, Write 41.1MB/sec, 399.0 Seeks/sec

Custom built 8 year old computer running Linux MD Raid 5 running NFS with 1 X Gigabit NIC (1500MTU):

  • Read 94.2MB/sec, Write 97.9MB/sec, 1431.7 Seeks/sec

Can someone say WTF?!?!?!?! As you can see, it appears there is a major performance hit with the DS1813+ using 4 Gigabit MPIO iSCSI with Round Robin. It’s half the speed of a single link 1 X Gigabit NFS connection. Keep in mind I purchased the extra memory module for my DS1813+ so it has 4GB of memory.

I’m kind of choked I spent the money on the extra server NIC (it was over $500.00), and I’m also surprised that my custom-built NFS server from 8 years ago (with 4-year-old drives) and only 5 drives is performing better than my 8-drive DS1813+. All drives used in both the Synology and the custom-built NFS box are Seagate Barracuda 7200RPM drives (the custom box has 5 X 1TB drives configured in RAID 5, the Synology has 8 X 3TB drives configured in RAID 5).

I won’t be using iSCSI or iSCSI MPIO again with the DS1813+, and I actually plan on retiring it as my main datastore for vSphere. I’ve finally decided to bite the bullet and purchase an HP MSA 2040 (Dual Controller with 4 X 10Gb SFP+ ports) to provide storage for my vSphere test/demo environment. I’ll keep the Synology DS1813+ online as an NFS vDP backup datastore.

Feel free to comment and let me know how your experience with the Synology devices using iSCSI MPIO is/was. I’m curious to see if others are experiencing the same results.

UPDATE – June 6th, 2014

The other day, I finally had time to play around and do some testing. I created a new FileIO iSCSI target, connected it to my vSphere test environment, and configured round robin. Doing some tests on the newly created datastore, the iSCSI connections kept disconnecting. It got to the point where it wasn’t usable.

I scratched that, and tried something else.

I deleted the existing RAID volume, created a new RAID 5 volume, and dedicated it to a Block I/O iSCSI target. I connected it to my vSphere test environment and configured Round Robin MPIO.

At first all was going smoothly until, again, connection drops started occurring. Logging in to the DSM, absolutely no errors were being reported and everything looked fine. Yet I was at a point where all connections from the ESXi host were down.

I shut down the ESXi host, and then shut down and restarted the DS1813+. I waited for it to come back up, however it wouldn’t. I let it sit there and waited 2 hours for the IP to finally become pingable. I tried to connect to the web interface, however it would only load portions of the page over extended amounts of time (it took 4 hours to load the interface). Once inside, it was EXTREMELY slow. Yet it was reporting that all was fine, everything was up, and the disks were fine as well.

I booted the ESXi host and tried to connect, however it couldn’t establish a connection to the iSCSI targets. Finally the Synology unit became unresponsive.

Since I only had a few test VMs loaded on the Synology device, I decided to just go ahead and do a factory reset on the unit (I noticed new firmware was available as of that day). I downloaded the firmware, and started the factory reset (which again, took forever since the web interface was crawling along).

After restarting the unit, it was not responsive. I waited a couple hours and again, the web interface finally responded but was extremely slow. It took a couple hours to get through the setup page, and a couple more hours for the unit to boot.

Something was wrong, so I restarted the unit yet again, and again, and again.

This time the alarm light was illuminated on the unit, and one of the drive lights wouldn’t come on. Again, extreme unresponsiveness. I finally got access to the web interface, and it was reporting the temperature of one of the drives as critical, but it said the drive was still functioning and all drives were OK. I shut off the unit, removed the drive, and restarted it again; all of a sudden it was extremely responsive.

I hooked the removed drive up to another computer and confirmed that it had indeed failed.

I replaced the drive with a new one (same model), and did three tests. One with NFS, one with FileIO iSCSI, and one with BlockIO iSCSI. All of a sudden the unit was working fine, and there was absolutely NO iSCSI connections dropping. I tested the iSCSI targets under load for some time, and noticed considerable performance increases with iSCSI, and no connection drops.

Here are some thoughts:

  • Two possible things fixed the connection drops: either the drive was acting up all along (and replacing it fixed it), or the new version of DSM fixed the iSCSI connection drops.
  • While performance has increased with FileIO to around ~120-160MB/sec from ~50MB/sec, I’m still not even close to maxing out the 4 X 1Gb interfaces.
  • I also noticed a significant performance increase with NFS, so I’m leaning towards the drive having been acting up since day one (seeks per second increased threefold after replacing the drive and testing NFS). I/O wait has been significantly reduced.
  • Why did the Synology unit just freeze up once this drive really started dying? It should have been marked as failed instead of causing the entire Synology unit not to function.
  • Why didn’t the drive get marked as failed at all? I regularly performed SMART tests, and checked drive health, there was absolutely no errors. Even when the unit was at a standstill, it still reported the drive as working fine.

Either way, the iSCSI connection drops aren’t occurring anymore, and performance with iSCSI is significantly better. However, I wish I could hit 200MB+/sec.

At this point it is usable for iSCSI using FileIO, however I was disappointed with BlockIO performance (BlockIO should be faster, no idea why it isn’t).

For now, I have an NFS datastore configured (using this for vDP backup), although I will be creating another FileIO iSCSI target and will do some more testing.

Update – August 16, 2019: Please see these additional posts regarding performance and optimization:

Jul 08, 2013
 

Recently I needed to upgrade and replace my storage system which provides basic SMB dump file services, iSCSI, and NFS to my internal network and vSphere cluster. As most of you know, in the past I have traditionally created and configured my own storage systems. For the most part this has worked fantastic, especially with the NFS and iSCSI target services being provided and built in to the Linux OS (iSCSI thanks to Lio-Target).

A few reasons for the upgrade…

  1. I need more storage
  2. I need a pre-packaged product that comes with a warranty.

Taking care of the storage size was easy (buy more drives); however, I needed to find a pre-packaged product that fit my requirements for performance, capabilities, stability, support, and of course warranty. iSCSI and NFS support was an absolute must!

Some time ago, when I first started working with Lio-Target before it was incorporated and merged into the Linux kernel, I noticed that the parent company, Rising Tide Systems, mentioned they also provided the target for numerous NAS and SAN devices available on the market, Synology being one of them. I never thought anything of this, as back then I wasn’t interested in purchasing a pre-packaged product, until my search for a new storage system began.

Upon researching, I found that Synology released their 2013 line of products. These products had a focus on vSphere compatibility, performance, and redundant network connections (either through Trunking/Link aggregation, or MPIO iSCSI connections).

The device that caught my attention for my purpose was the DS1813+.

Synology DS1813+

Synology DS1813+ Specifications:

  • CPU Frequency : Dual Core 2.13GHz
  • Floating Point
  • Memory : DDR3 2GB (Expandable, up to 4GB)
  • Internal HDD/SSD : 3.5″ or 2.5″ SATA(II) X 8 (Hard drive not included)
  • Max Internal Capacity : 32TB (8 X 4TB HDD) (Capacity may vary by RAID types) (See All Supported HDD)
  • Hot Swappable HDD
  • External HDD Interface : USB 3.0 Port X 2, USB 2.0 Port X 4, eSATA Port X 2
  • Size (HxWxD) : 157 X 340 X 233 mm
  • Weight : 5.21kg
  • LAN : Gigabit X 4
  • Link Aggregation
  • Wake on LAN/WAN
  • System Fan : 120x120mm X2
  • Easy Replacement System Fan
  • Wireless Support (dongle)
  • Noise Level : 24.1 dB(A)
  • Power Recovery
  • Power Supply : 250W
  • AC Input Power Voltage : 100V to 240V AC
  • Power Frequency : 50/60 Hz, Single Phase
  • Power Consumption : 75.19W (Access); 34.12W (HDD Hibernation);
  • Operating Temperature : 5°C to 35°C (40°F to 95°F)
  • Storage Temperature : -10°C to 70°C (15°F to 155°F)
  • Relative Humidity : 5% to 95% RH
  • Maximum Operating Altitude : 6,500 feet
  • Certification : FCC Class B, CE Class B, BSMI Class B
  • Warranty : 3 Years

This puppy has 4 gigabit LAN ports and 8 SATA bays. There are tons of reviews on the internet praising Synology and their DSM operating system (based on embedded Linux), so I decided to live dangerously and went ahead and placed an order for this device, along with 8 X Seagate 3TB Barracuda drives.

Unfortunately, it’s extremely difficult to get your hands on a DS1813+ in Canada (I’m not sure why). After numerous orders placed and cancelled with numerous companies, I finally found a distributor who was able to get me one. I’ll just say the wait was totally worth it. I also purchased the 2GB RAM add-on, so I had it available when the DS1813+ arrived.

I was hoping to take a bunch of pictures, and do thorough testing with the unit before throwing it in to production, however right from the get go, it was extremely easy to configure and use, so right away I had it running in production. Sorry for the lack of pics! 🙂

I did, however, get a chance to set up the 8 drives in RAID 5 and configure an iSCSI block-based target. The performance was fantastic, with no problems whatsoever. Even when maxing out one gigabit connection, the resources of the unit were barely touched.

I’m VERY impressed with the DSM operating system. Everything is clearly spelled out, and you have very detailed control of the device and all services. Configuration of SMB shares, iSCSI targets, and NFS exports is extremely simple, yet allows you to configure advanced features.

After testing out the iSCSI performance, I decided to get the unit ready for production. I created 2 shared folders and exported these via NFS to my ESXi hosts. It was very simple and quick, and the ESXi hosts had absolutely no problems connecting to the exports.
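
For reference, mounting NFS exports like these on ESXi can also be done from the command line; here’s a minimal sketch with made-up IPs, export paths, and datastore names (not the values used in this post).

    #!/usr/bin/env python3
    """Mount Synology NFS exports on an ESXi host (sketch, example values)."""
    import subprocess

    NAS_IP = "192.168.1.50"                      # example Synology address
    EXPORTS = {
        "DS1813-NFS-01": "/volume1/vmware01",    # datastore name -> example export path
        "DS1813-NFS-02": "/volume1/vmware02",
    }

    for datastore, share in EXPORTS.items():
        subprocess.run(["esxcli", "storage", "nfs", "add",
                        "--host", NAS_IP,
                        "--share", share,
                        "--volume-name", datastore], check=True)

    # List the mounts to confirm they came up as expected.
    subprocess.run(["esxcli", "storage", "nfs", "list"], check=True)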

One thing that really blew me away about this unit is the performance. Immediately after configuring the NFS exports, mounting them, and using Storage vMotion to migrate 14 live virtual machines to the DS1813+, I noticed MASSIVE performance gains. The performance gains were so large, it put my old custom storage system to shame. And this is really interesting, considering my old storage system, while custom, is actually spec’d way higher than the Synology unit (CPU, RAM, and the SATA controller). I’m assuming the DS1813+ has numerous kernel optimizations for storage, and at the same time doesn’t have the overhead of a full Linux distribution. This also means it’s more stable, since you don’t have tons of applications running in the background that you don’t need.

After migrating the VMs, I noticed that the virtual machines were running way faster and were way more responsive. I’m assuming this is due to increased IOPS.

Either way, I’m extremely happy with the device and fully recommend it. I’ll be posting more blog articles later detailing the configuration of services such as iSCSI and NFS, among other things. I’m already planning on picking up an additional DS1513+ (5-bay unit) to act as a storage server for VM backups, which I perform using ghettoVCB.

Update – August 16, 2019: Please see these additional posts regarding performance and optimization:

Nov 22, 2012
 

Just something I wanted to share in case anyone else ran in to this issue…

At a specific client we have 2 X MSA60 units attached via Smart Array P800 controllers to 2 X DL360 G6 servers. This combination of servers, controllers, and storage units was purchased just after they were originally released by HP.

I’m writing about a specific condition in which, after a drive fails in RAID 5, numerous (and I mean over 70,000) event log entries appear in the event viewer during the rebuild stating: “Surface analysis has repaired an inconsistent stripe on logical drive 1 connected to array controller P800 located in server slot 2. This repair was conducted by updating the parity data to match the data drive contents.”

 

On one of these arrays, shortly after a successful rebuild (while the event viewer was still spitting out these errors), another drive failed. At this point the RAID array went offline, and the entire array and all of its contents were unrecoverable. Keep in mind this occurred after the rebuild, while a surface scan was in progress. In this specific case we rebuilt the array, restored from backup, and all was good. After mentioning this to HP support techs, they said it was safe to ignore these messages as they were informational (I didn’t feel this was the case). After creating the new RAID array on this specific unit, we never saw these messages on that unit again.

On the other MSA60 unit, however, we regularly received these messages (we always keep the firmware of the MSA60 unit and the P800 controller up to date). Numerous times I asked HP support about them, and they said we could safely ignore them. Recently, during a power outage, the P800 controller flagged its cache batteries as failed; at the same time a drive failed, and we were yet again presented with these errors after the rebuild. After getting the drive replaced, I contacted HP again and finally insisted that they investigate the event log errors. This time, new errors about parity were presenting themselves in the event viewer.

After being put on hold for some time, they came back and said that these errors were probably caused by the RAID array having been created with a very early firmware version. They recommended deleting the logical drive and re-creating it with the latest firmware to avoid any data loss. I specifically asked if there was a chance that the array could fail due to these errors and the fact that it was created with an early firmware version, and they confirmed it. I went ahead, created backups, deleted the array, re-created it, and restored the backup, and the errors are no longer present.

 

I just wanted to create this blog post, as I see numerous people searching for the meaning of these errors, and I wanted to shed some light and maybe help a few of you avoid future catastrophic problems!