HPE MSA 2040 Dual Controller SAN – 10Gb iSCSI DAC Connection to HPE Server Pictures

I'd say 50% of all the e-mails and comments I've received from the blog in the last 12 months or so have been from readers requesting pictures (or proof) of the HPE MSA 2040 Dual Controller SAN being connected to servers via 10Gb DAC cables. This should also apply to the newer generation HPE MSA 2050 Dual Controller SAN.

I've decided to finally post the pics publicly! Let me know if you have any questions. In the pictures you'll see the SAN connected to 2 x HPE ProLiant DL360p Gen8 servers via 4 x HPE 10Gb DAC (Direct Attach Cable) connections.

Connection of SAN from Servers

Connection of DAC Cables from SAN to Servers

See below for a video showing host connectivity:

Stephen Wagner

Stephen Wagner is President of Digitally Accurate Inc., an IT Consulting, IT Services and IT Solutions company. He is also a VMware vExpert, NVIDIA NGCA Advisor, and HPE Influencer, and specializes in a number of technologies including virtualization and VDI.

Comments

  • Stephen

    Would you mind posting the model/part numbers of the 10Gb adapters you used in the system, please?

    Kind Regards

    Igor

  • Hi Igor,

    I used these for the NICs and Cables:

    HP Ethernet 10Gb 2-port 560SFP+ Adapter (665249-B21)
    HP BladeSystem c-Class Small Form-Factor Pluggable 3m 10GbE Copper Cable (487655-B21)

    Cheers

  • Hi Stephen

    I'm pretty impressed with your build!!

    Have you had any issues with this 10Gb adapter on VMware?

    Thanks

  • Hi Igor,

    Thanks bud! Actually, I'll be honest, I set this SAN up with the DAC cables, and it's been running solid non-stop for probably almost 2 years now!

    The speed is AWESOME, the setup works great. And I've had absolutely no issues (literally no issues).

    I also designed and sold a similar solution (Gen9 instead of Gen8 Servers) to a client that deals with big-data, and it's been rock solid there too...

    The MSA 2040 is a beautiful SAN and just absolutely rocks with VMware. I wish I could sell more of these puppies! The SFP+ DAC cables are awesome too for keeping the price down...

  • Just to clarify for everyone -

    This is an MSA 2040 directly connected to a DL server via the 10Gb DAC with no 10Gb switch in place... Correct?

  • Hi Charles,

    That is correct. The MSA 2040 is actually connected to two (2) DL360p Gen8 servers using 10Gb DAC cables, and NO switch...

  • Hi Stephen,

    I have the same sort of setup, but instead of 1 MSA I've got two boxes. The idea is to sync the devices to have full DR in place from a storage perspective. The replication license is in place and works over iSCSI, but I'm not managing to make it work over FC! Do you know if this is possible after all? Having two MSAs connected back-to-back over FC?

    Thanks for your help, and nice work!

  • Hi Stephen,
    I have these on hand:
    3 x HP DL380 Gen9 E5-2640v3 servers with 3 x HP Ethernet 10Gb 2-port 530T Adapters 657128-001 (RJ-45)
    1 x HP MSA 2040 SAN controller.

    I'm going to wire them up physically.

    Question: how do you connect your 2 HP servers to your network, to let end users access the guests?

    Thanks.

  • Hi Jon,

    Each of my servers has the following:

    4-port 1Gb RJ45 NIC - Each port is configured for a different network and provides VM access to the network (end users accessing VMs)

    2-port 10Gb RJ45 NIC - 1 port is configured and dedicated to vMotion, so VMs can be vMotioned at 10Gb speeds from host to host. The other port is unused

    2-port 10Gb SFP+ NIC - Each port has an SFP+ DAC cable running to one of the controllers on my MSA 2040 SAN, so each server has a connection to each controller on the SAN
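
    If it helps, here's roughly how the dedicated vMotion port is set up on each host. This is just a minimal sketch using esxcli; the names and IP (vSwitch1, vmnic4, vmk1, 10.0.0.11) are placeholders, not my actual values:

      # Create a vSwitch for vMotion and attach one of the 10Gb RJ45 ports as its uplink
      esxcli network vswitch standard add --vswitch-name=vSwitch1
      esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch1
      esxcli network vswitch standard portgroup add --portgroup-name=vMotion --vswitch-name=vSwitch1

      # Create a vmkernel interface on that port group, assign an IP, and tag it for vMotion
      esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion
      esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.0.0.11 --netmask=255.255.255.0 --type=static
      esxcli network ip interface tag add -i vmk1 -t VMotion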

    Hope this answers your question!

    Cheers

  • Stephen -

    Does the MSA 2040 see the host servers as it would in an iSCSI configuration?

    Speaking of how the LUNs are allocated to the host - can you map multiple LUNs with tiering in place?

    What I am getting at is: other than not being able to connect more than a few DL360s, is there any disadvantage to using DAC vs a 10Gb switch?

  • Hi Charles,

    This is actually an iSCSI configuration. As for mapping multiple LUNs with tiering, I'm not quite sure what you mean, could you expand on that?

    Essentially, it's as simple as this:
    -DAC is cheaper than fiber
    -DAC with up to 4 hosts (each host with 2 connections, 1 to each controller) removes the need (and cost) of a 10Gb switch
    -DAC (this needs to be confirmed) could actually be used to connect to an SFP+ 10Gb switch as well, future-proofing configurations for end users starting small and adding more hosts later.

    Disadvantages to using DAC direct to hosts (no switches):
    -While DAC in my config provides path redundancy, more redundancy can be achieved with more NICs and if switches were used. You could still use DAC; you'd just require switches.
    -With redundant paths (and no switches), you can have a max of 4 hosts directly attached using DAC.

    A big thing to keep in mind when using DAC: all it is, is a different type of cable (fiber vs. RJ45 vs. DAC). Everything else is virtually the same (networking, iSCSI, etc...). I mainly recommend DAC for companies wanting to keep the budget minimized when they have a low host count. Even then, if you got some 10Gb switches with SFP+ ports, in theory you could still use those DAC cables, only connecting them to the switches instead of the hosts.
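
    Just to illustrate that last point, the ESXi side of the iSCSI setup is identical whether you're direct-attached with DAC or going through a switch. A minimal sketch with esxcli (the adapter name vmhba33 and the controller IPs 10.0.1.100 / 10.0.2.100 are placeholders, not my actual values):

      # Enable the software iSCSI adapter, add one port on each MSA controller
      # as a dynamic send target, then rescan to pick up the LUNs
      esxcli iscsi software set --enabled=true
      esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.1.100
      esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.0.2.100
      esxcli storage core adapter rescan --adapter=vmhba33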

    Hope this helps!

    Cheers,
    Stephen

    Hi Stephen,

    Good day to you!

    I have a customer based out in Australia who wants the configuration to be like this:

    3 servers (2 host servers and 1 backup server, each with 2 x 560SFP+ 10Gb adapters)

    All 4 x 10GbE ports per server are to be connected to two network switches for redundancy: 6 connections per switch from the servers, using DAC cables

    2 x MSA 2040 (1 with 1Gb RJ-45 based iSCSI, and 1 with 10Gb iSCSI)

    The 10Gb iSCSI unit is to be connected to the switches using DAC cables (dual controller, with 4 connections to the switches)

    The 1Gb iSCSI MSA 2040 is to be connected to the network switches with Cat6 cables

    There are another 5 switches which are to be connected to these 2 network switches

    My question is: with so many DAC connections mixed with Cat6 connections to the switches, I am finding it hard to propose the right switch to them. Or do you think this config can be further improved?

    Thank you so much in advance Stephen!

    Looking forward to hearing from you,

    Regards,
    Mufi

  • Hi Mufi,

    Thanks for reaching out!

    It sounds like you've done your homework and have a good design in order.

    Please note that if you have two separate networks (one group of switches for communication from servers to controller A, and one group of switches for communication from servers to controller B), make sure to set them up on different subnets. When using iSCSI you want to make sure that no incorrect paths are mapped. If you use the same subnet on both networks, the host might believe paths exist which do not.

    Also, let's say you create two subnets (one for controller A and one for controller B): if you're virtualizing, since the hosts have multiple NICs on the same subnet (multiple on the Controller A subnet, and multiple on the Controller B subnet), you'll need to configure iSCSI port binding. This ensures that each NIC on a subnet is actively being used to connect, receive, and transmit.

    For iSCSI Port Binding, please see this post I authored: http://www.stephenwagner.com/?p=799
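
    At its core, the binding itself comes down to a couple of commands (a sketch only; vmk2/vmk3 and vmhba33 are placeholder names, and this assumes two vmkernel ports sitting on the same subnet):

      # Bind each vmkernel NIC on the shared subnet to the software iSCSI
      # adapter, then list the bindings to verify
      esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
      esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk3
      esxcli iscsi networkportal list --adapter=vmhba33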

    I would also recommend finding a suitable switch model to use for the backbone of the SAN networks for both controller groups (you want high switching bandwidth capacity and reliability).

    As for your final question regarding improving the configuration: if you were to simplify it at all, you may lose redundancies.

    The only thing I am curious about is why your second MSA 2040 unit is using 1Gb RJ-45 instead of 10Gb SFP+ DAC. If you were to keep everything 10Gb DAC (both units connected via 10Gb DAC), you would be able to deploy SFP+ switches as the main backbone of your SAN network, and from that point run DAC, SFP+, or fiber to other switches to provide other types of connectivity to the solution.

    Cheers,
    Stephen

    And a few additional notes, Mufi, in addition to my last post. A few things came to mind that I've seen in some HPE docs:

    http://h20195.www2.hp.com/V2/getpdf.aspx/4AA4-6892ENW.pdf?ver=Rev%206

    -HPE recommends no more than 8 paths to a single host, as anything more may put additional stress on the OS/host and may delay recovery times if a path goes down (page 20). As an example of how quickly paths add up: two host ports that can each reach four controller ports already gives eight paths per volume.

    -Please see page 38 of the PDF for subnet configuration examples. It's very similar to the configuration you're setting up.

  • Hi Stephen,

    Thank you so much for your reply!

    Your advice is much appreciated!

    The customer has explicitly asked us to configure the second MSA 2040 with 1Gb iSCSI;

    I recommended the same to them, to use 10Gb DAC connections, but they still insist on 1Gb RJ-45 based iSCSI;

    For now, for the network switches, I am planning to propose the HPE FlexFabric 5820X 24XG SFP+ Switch (JC102B) to them, with multiple 10Gb connections (both from the servers and the MSA) going into the switch. For the 1Gb RJ-45 connectivity, I am proposing RJ-45 based transceivers on the switch to establish the connections with the MSA 2040 and the other switches with RJ-45 connections.

    I would like to hear your opinion on this.

    Thank you so much once again for the materials and your valuable advice!

    Thank you.

    Regards,
    Mufi
