Nov 21, 2015
 
HP MSA2040 Dual Controller SAN with 10Gb DAC SFP+ cables

I’d say 50% of all the e-mails and comments I’ve received from the blog in the last 12 months or so have been from readers requesting pictures or proof of the HPE MSA 2040 Dual Controller SAN being connected to servers via 10Gb DAC cables. This should also apply to the newer-generation HPE MSA 2050 Dual Controller SAN.

I decided to finally post the pics publicly! Let me know if you have any questions. In the pictures you’ll see the SAN connected to 2 x HPE ProLiant DL360p Gen8 servers via 4 x HPE 10Gb DAC (Direct Attach Cable) cables.

Connection of SAN from Servers

Connection of DAC Cables from SAN to Servers

See below for a video with host connectivity:

  46 Responses to “HPE MSA 2040 Dual Controller SAN – 10Gb iSCSI DAC Connection to HPE Server Pictures”

  1. Stephen

    Would you mind posting which model / part number of 10Gb adapters you used in this system, please?

    Kind Regards

    Igor

  2. Hi Igor,

    I used these for the NICs and Cables:

    HP Ethernet 10Gb 2-port 560SFP+ Adapter (665249-B21)
    HP BladeSystem c-Class Small Form-Factor Pluggable 3m 10GbE Copper Cable (487655-B21)

    Cheers

  3. Hi Stephen

    I’m pretty impressed with your build!!

    Have you had any issues with this 10Gb adapter on VMware?

    Thanks

  4. Hi Igor,

    Thanks bud! Actually, I’ll be honest: I set this SAN up with the DAC cables, and it’s been running solid non-stop ever since, for probably almost 2 years now!

    The speed is AWESOME, the setup works great. And I’ve had absolutely no issues (literally no issues).

    I also designed and sold a similar solution (Gen9 instead of Gen8 Servers) to a client that deals with big-data, and it’s been rock solid there too…

    The MSA 2040 is a beautiful SAN and just absolutely rocks with VMware. I wish I could sell more of these puppies! The SFP+ DAC cables are awesome for keeping the price down, too…

  5. Just to clarify for everyone –

    This is an MSA 2040 directly connected to DL servers via 10Gb DAC with no 10Gb switch in place… correct?

  6. Hi Charles,

    That is correct. The MSA 2040 is actually connected to two (2) DL360p Gen8 servers using 10Gb DAC cables, and NO switch…

  7. Hi Stephen,

    I have the same sort of setup, but instead of 1 MSA I’ve got two boxes. The idea is to sync the devices to have full DR in place from a storage perspective. The replication license is in place and works over iSCSI, but I’m not managing to make it work over FC! Do you know if this is possible at all, having two MSAs connected back-to-back over FC?

    Thanks for your help, and nice work!

  8. Hi Stephen,
    I have these on hand:
    3 x HP DL380 Gen9 E5-2640v3 servers with 3 x HP Ethernet 10Gb 2-port 530T Adapters 657128-001 (RJ-45)
    1 x HP MSA 2040 SAN.

    I’m going to wire them up physically.

    Question: how do you connect your 2 HP servers to your network, so end users can access the guests?

    Thanks.

  9. Hi Jon,

    Each of my servers has the following:

    4-port 1Gb RJ45 NIC – each port is configured for a different network and provides VM access to the network (end users accessing VMs).

    2-port 10Gb RJ45 NIC – 1 port is configured and dedicated to vMotion, so VMs can be vMotioned at 10Gb speeds from host to host. The other port is unused.

    2-port 10Gb SFP+ NIC – each port has an SFP+ DAC cable running to one of the controllers on my MSA 2040 SAN, so each server has a connection to each controller on the SAN.
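
    If it helps to visualize, here’s a rough Python sketch of that per-host port layout (the labels are illustrative only, not pulled from my actual config):

        # Rough per-host port map for the layout described above (illustrative only).
        host_ports = {
            "4-port 1Gb RJ45 NIC": ["VM/network access (one network per port)"] * 4,
            "2-port 10Gb RJ45 NIC": ["vMotion (dedicated)", "unused"],
            "2-port 10Gb SFP+ NIC": ["DAC to MSA 2040 Controller A",
                                     "DAC to MSA 2040 Controller B"],
        }

        for nic, ports in host_ports.items():
            for i, purpose in enumerate(ports, start=1):
                print(f"{nic} - port {i}: {purpose}")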

    Hope this answers your question!

    Cheers

  10. Stephen –

    Does the MSA 2040 see the host servers as it would in an iSCSI configuration?

    Speaking of how the LUNs are allocated to the host – can you map multiple LUNs with tiering in place?

    What I’m getting at is: other than not being able to connect more than a few DL360s, is there any disadvantage to using DAC vs. a 10Gb switch?

  11. Hi Charles,

    This is actually an iSCSI configuration. As for mapping multiple LUNs with tiering, I’m not quite sure what you mean – could you expand on that?

    Essentially, it’s as simple as this:
    -DAC is cheaper than fiber
    -DAC with up to 4 hosts (each host with 2 connections, 1 to each controller) removes the need (and cost) of a 10Gb switch
    -DAC (this needs to be confirmed) could actually be used to connect to an SFP+ 10Gb switch as well, future-proofing configurations for end users who start small and add more hosts later.

    Disadvantages to using DAC direct to hosts (no switches):
    -While DAC in my config provides path redundancy, more redundancy could be achieved with more NICs and with switches. You could still use DAC in that case; you’d just also need switches.
    -With redundant paths (and no switches), you can have a max of 4 hosts directly attached using DAC.

    A big thing to keep in mind when using DAC: all it is, is a different type of cable (fiber vs. RJ45 vs. DAC). Everything else is virtually the same (networking, iSCSI, etc.). My big recommendation for DAC is for companies that want to keep the budget minimized and have a low host count. Even then, if you get some 10Gb switches with SFP+ ports, in theory you could still use those DAC cables; you’d just connect them to the switches instead of directly to the hosts.
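
    If it helps, here’s a quick Python sketch of what the direct-attach cabling looks like. It assumes the 4-host-port-per-controller 10Gb iSCSI config of the MSA 2040, which is where the 4-host ceiling comes from; it’s illustrative only:

        # Direct-attach DAC plan: each host gets one DAC to Controller A and one to
        # Controller B, using matching port numbers. With 4 host ports per controller,
        # that caps out at 4 directly attached hosts.
        def direct_attach_plan(num_hosts):
            if not 1 <= num_hosts <= 4:
                raise ValueError("A dual-controller MSA 2040 fits 1-4 direct-attached hosts")
            return [(f"Host {h}", f"Controller A port {h}", f"Controller B port {h}")
                    for h in range(1, num_hosts + 1)]

        for host, a_port, b_port in direct_attach_plan(2):
            print(f"{host}: NIC1 <-> {a_port}, NIC2 <-> {b_port}")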

    Hope this helps!

    Cheers,
    Stephen

  12. Hi Stephen,

    Good day to you!

    I have a customer based out in Australia who wants the configuration to be like this:

    3 servers (2 host servers and 1 backup server, each with 2 x 560SFP+ 10Gb adapters)

    All 4 x 10GbE ports per server are to be connected to two network switches for redundancy; 6 connections per switch from the servers, using DAC cables.

    2 x MSA 2040 (1 with 1Gb RJ-45 based iSCSI and 1 with 10Gb iSCSI)

    The 10Gb iSCSI unit should be connected to the switches using DAC cables (dual controller with 4 connections to the switches).

    The 1Gb iSCSI MSA 2040 should be connected to the network switches with Cat6 cables.

    There are another 5 switches which are to be connected to these 2 network switches.

    My question is: with so many DAC connections mixed with Cat6 connections to the switches, I am finding it hard to propose the right switch to them. Or do you think this config can be further improved?

    Thank you so much in advance, Stephen!

    Looking forward to hearing from you,

    Regards,
    Mufi

  13. Hi Mufi,

    Thanks for reaching out!

    It sounds like you’ve done your homework and have a good design in order.

    Please note that if you have two separate networks (one group of switches for server communication to controller A, and one group of switches for server communication to controller B), make sure to set them up on different subnets. When using iSCSI, you want to make sure that no incorrect paths are mapped; if you use the same subnet on both networks, the host might believe paths exist which do not.

    Also, let’s say you create two subnets (one for controller A and one for controller B): if you’re virtualizing, and the hosts have multiple NICs on the same subnet (multiple on the Controller A subnet, and multiple on the Controller B subnet), you’ll need to configure iSCSI port binding. This ensures that each NIC on that subnet is actively being used to connect, receive, and transmit.
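
    As a quick sanity check on that, here’s a hypothetical little Python snippet (just the standard ipaddress module, nothing MSA- or vSphere-specific) that groups a host’s vmkernel ports by subnet: any subnet with more than one vmk on it is a candidate for iSCSI port binding, and you’d want to make sure the Controller A and Controller B networks don’t end up sharing a subnet. The addresses are made up for illustration:

        import ipaddress
        from collections import defaultdict

        # Example vmkernel ports on one host (illustrative addresses only).
        vmk_ports = {
            "vmk1": "10.0.1.11/24",   # Controller A network
            "vmk2": "10.0.1.12/24",   # also Controller A network -> port binding needed
            "vmk3": "10.0.2.11/24",   # Controller B network
        }

        by_subnet = defaultdict(list)
        for vmk, cidr in vmk_ports.items():
            by_subnet[ipaddress.ip_interface(cidr).network].append(vmk)

        for subnet, vmks in by_subnet.items():
            if len(vmks) > 1:
                print(f"{subnet}: {vmks} share a subnet -> configure iSCSI port binding")
            else:
                print(f"{subnet}: {vmks[0]} is alone on its subnet -> no port binding needed")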

    For iSCSI Port Binding, please see this post I authored: http://www.stephenwagner.com/?p=799

    I would also recommend finding a suitable switch model to use for the backbone of the SAN networks for both controller groups (you want high switching capacity and reliability).

    As for your final question regarding improving the configuration: if you were to simplify it at all, you might lose redundancy.

    The only thing I’m curious about is why your second MSA 2040 unit is using 1Gb RJ-45 instead of 10Gb SFP+ DAC. If you were to keep everything 10Gb DAC (both units connected via 10Gb DAC), then you would be able to deploy SFP+ switches as the main backbone of your SAN network, and from that point run DAC, SFP+, or fiber to other switches to provide other types of connectivity to the solution.

    Cheers,
    Stephen

  14. And a few additional notes, Mufi, in addition to my last post. A few things came to mind that I’ve seen in some HPE docs:

    http://h20195.www2.hp.com/V2/getpdf.aspx/4AA4-6892ENW.pdf?ver=Rev%206

    -HPE recommends no more than 8 paths to a single host, as anything more may put additional stress on the OS/host and may delay recovery times if a path goes down (page 20).

    -Please read the example on page 38 of the PDF for subnet configuration examples. It’s very similar to the configuration you’re setting up.

  15. Hi Stephen,

    Thank you so much of your reply!

    Your advice is much appreciated!

    The customer has explicitly asked us to configure the second MSA 2040 with 1Gb iSCSI.

    I recommended the same to them, to go with a 10Gb DAC connection, but they still insist on 1Gb RJ-45 based iSCSI.

    As for the network switches, I am planning to propose the HPE FlexFabric 5820X 24XG SFP+ Switch (JC102B) to them, with multiple 10Gb connections (both from the servers and the MSA) going into the switch. For the 1Gb RJ-45 connectivity, I am proposing RJ-45 based transceivers on the switch to establish the connection with the MSA 2040 and the other switches that use RJ-45 connections.

    I would like to hear your opinion on this.

    Thank you so much once again for the materials and your valuable advice!

    Thank you.

    Regards,
    Mufi

  16. No worries, feel free to reach out if you ever need anything!

    As for the switch, you should contact HPE and find out if the switch is compatible with the HPE DAC cables. It should be, but I would confirm just to make sure you won’t have any issues. The switches may or may not be listed on the QuickSpecs datasheet for the MSA 2040; you could also check the QuickSpecs on the switch as well.

    I’m betting they would work though!

    Just make sure that you DO use the DAC cable part numbers specified in the QuickSpecs of the MSA 2040. I know that the MSA 2040 is picky about the DAC cables.

    Cheers!

  17. Hi Stephen,

    Thank you so much! God bless!

    Regards,
    Mufi

  18. Hi Stephen,

    Could you share the configuration that must be performed on both the server and the MSA SAN after building this lab?

    Thank You!

  19. Hi Teddy,

    Take a look at some of my other blog posts… I have 3-4 dedicated to the MSA 2040 and the configuration. You should be able to find what you are looking for!

    If you take a look and don’t find it, let me know what you need help with.

    Cheers,
    Stephen

  20. Hi,
    do you know if this solution works with 3 servers directly attached via DAC to the first 3 ports on each controller, with the fourth port on each controller left empty?
    Thank you

  21. Hi Bob,

    Yes, that would work. Just make sure you connect them in order (for example: Server 1 connected to port 1 on Controller A and port 1 on Controller B; Server 2 connected to A2 and B2; Server 3 connected to A3 and B3).

    Cheers

  22. Hi,
    I have the following scenario; please suggest the best option:

    3 x ESXi servers need to connect directly to MSA 2040 storage with 10Gb iSCSI. They don’t have any 10Gb switch and are using only a 1Gb switch. The customer also has a DR MSA 2040 storage unit and wants to replicate using 1Gb iSCSI. Can the two remaining ports on each storage controller be populated with 1Gb RJ-45 transceivers and configured for replication? If so, what is the part number of the 1Gb transceiver I can use in place of the 10Gb SFP?

  23. Hi Stephen,

    I would like to share a similar experience with pretty much this exact config. I set up this config for a client about 2 years ago and had questions regarding the subnetting for the iSCSI ports on the MSA. According to HP docs, HP recommends subnetting vertically across controllers A & B, where port 1 on controller A and port 1 on controller B would be on the same subnet. In my experience, I found that I needed to assign each port on each controller to its own subnet and mirror that on my hosts.

    Could you please elaborate on how you achieved redundancy in your setup?

  24. Hi Craig,

    Thanks for sharing your experience, and great question!

    Keep in mind that there is no right or wrong way to configure the subnets. The configuration and design have to be specific to the environment the solution architect is designing for.

    Numerous factors play into how it should be configured:
    -Are the hosts directly attached?
    -Are switches being used between the hosts and the SAN?
    -If switches are being used, how many?
    -How many redundant networks are there?
    -Single SAN network vs. multiple SAN networks (separate networks for redundancy)
    -The design of fault tolerance, and at which point in the network fault tolerance is configured
    -The discovery path of LUNs in MPIO configurations

    For example, in a host direct-attached configuration (server directly connected to the SAN with dual links, one to each controller), I would recommend using different subnets for each port. This includes different subnets for each port on each host (2 hosts with a dual-controller SAN would result in 4 different subnets being configured). This avoids having to configure iSCSI port binding in vSphere, and also avoids any potential problems with the ESXi host detecting paths that don’t really exist. These “phantom” paths can occur because multiple connections are configured on the same subnet but aren’t in fact physically present on the same subnet; without a switch, each NIC can’t reach all the IPs on that subnet, which are of course being advertised by the dynamic iSCSI discovery process. Using different subnets allows discovery to function correctly.

    For example, in an environment with a single switch (the solution architect doesn’t care about the possibility of a single switch failure), all hosts and SAN ports would connect to a single switch. In this case, it would be best to use a single subnet so that each NIC on each server has full access to all ports on the SAN. iSCSI port binding MUST be used in this situation! This increases performance since the hosts have multiple paths to the owning controller (as well as the other controller); however, if the switch fails, the storage network goes offline.

    For example, in an environment with multiple switches (the solution architect wants multiple switches for redundancy), you would configure multiple subnets. In this case, each controller and host would have a presence on ALL subnets, on both switches. If a switch were to fail, storage traffic would be unaffected since it still has the remaining switch; failover would occur.

    Essentially, with that last example, you could have 2 switches, or even 4, for redundancy, as long as each host and each controller has access to all the switches and all the subnets.

    In my specific environment I was trying to keep costs low while maintaining redundancy. I elected to go the directly attached route. In my case, 2 hosts, with dual controllers on the SAN, required 4 separate subnets to be configured. Each host has a connection to each controller, and all connections from all hosts are on different subnets. This allows the path detection to function correctly, and also provides redundancy in the event of a DAC cable or controller failure.
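
    To make that concrete, here’s a rough Python sketch of what that four-subnet addressing plan could look like for two direct-attached hosts (the subnets and addresses are examples only, not my production values):

        import ipaddress

        # 2 hosts x 2 controllers = 4 direct-attached links, each on its own subnet
        # (example /24s; adjust to suit the environment).
        links = [
            ("Host 1 vmk1", "Controller A port 1", ipaddress.ip_network("10.10.1.0/24")),
            ("Host 1 vmk2", "Controller B port 1", ipaddress.ip_network("10.10.2.0/24")),
            ("Host 2 vmk1", "Controller A port 2", ipaddress.ip_network("10.10.3.0/24")),
            ("Host 2 vmk2", "Controller B port 2", ipaddress.ip_network("10.10.4.0/24")),
        ]

        for host_nic, san_port, net in links:
            san_ip, host_ip = list(net.hosts())[:2]   # first two usable addresses
            print(f"{san_port}: {san_ip}  <->  {host_nic}: {host_ip}  (subnet {net})")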

    It all depends on what you’re trying to achieve. I hope this helps!

    Let me know if you have any questions!

    Cheers,
    Stephen

  25. Hi there,
    We are looking at this type of configuration. The question is: does the OS see the physical disks, or do LUNs need to be created and presented to the OS?

    Cheers
    Shane

  26. Hi Shane,

    The disks are NOT presented. You DO need to create LUNs on the MSA and then present them to the host.

    Stephen

  27. Can I use FC cables instead of DAC with the same NICs?

  28. Hi Erika,

    If you use FC cables, then you’ll need to use FC adapters on your hosts. However, the SAN controllers support both FC/iSCSI as long as you configure the ports to the required mode.

    Stephen

  29. Hey Stephen,

    I’ve been working on this issue for about a week now. I purchased two 10Gb cards and installed them in my servers, which I have connected to my SAN. I am trying to set the IP addresses on those two cards. I can see them in VMware 6.0 and in my iLO (DL360 Gen8), but I have not figured out how to change the IP addresses. Any help you can provide will be appreciated.

  30. Hello Stephen,

    How do you set the IP addresses for the network adapters?

  31. Hi Cha,

    The IP addresses are not set on the card itself, but on the vmk (VMkernel network adapter).

    Depending on whether you’re using a vSphere Distributed Switch or a standard vSwitch, you’ll need to modify the vmk adapter’s IP configuration to set the IPs.
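
    If you want to script it rather than clicking through the vSphere client, here’s a rough pyVmomi sketch of updating a vmk adapter’s IP. The hostname, credentials, vmk name, and addresses are placeholders, and it assumes you’re connecting straight to a standalone ESXi host:

        import ssl
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        # Connect directly to a standalone ESXi host (placeholder credentials).
        si = SmartConnect(host="esxi01.example.local", user="root", pwd="password",
                          sslContext=ssl._create_unverified_context())
        try:
            # For a direct host connection: Datacenter -> ComputeResource -> HostSystem.
            host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]

            # Give vmk1 a static IP on the iSCSI subnet (example values).
            spec = vim.host.VirtualNic.Specification()
            spec.ip = vim.host.IpConfig(dhcp=False, ipAddress="10.0.1.11",
                                        subnetMask="255.255.255.0")
            host.configManager.networkSystem.UpdateVirtualNic("vmk1", spec)
        finally:
            Disconnect(si)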

    Cheers,
    Stephen

  32. Dear Stephen, excuse me; my case is not quite the same, but maybe you can help me or point me to a resource or tool. I have an HP P2000 G3 FC/iSCSI with 12 SATA HDDs, and 3 of them were left with the amber light; my system is now compromised and paralyzed. My question is: if I replace the problem HDDs, is there a way to recover my data (RAID 5+)?

  33. Hi Alessandro,

    If you confirm that the drives were in fact failed (and not functioning), you’ll either need to restore from your backup/DR system, or you’ll need to reach out to a company that specializes in RAID recovery.

    Hope this helps,
    Stephen

  34. Hello Stephen

    First off, great post. I have a question for you, if you have any insight for me. I have a 6-host VMware server environment that is currently connected to a storage array via 10GBase-T, but I am looking to upgrade my storage array to the MSA 2052, and my findings are that it does not support 10GBase-T. If that is true, what would be my options for getting my SAN network to 10 gig with an MSA 2052? I have a Cisco 10GBase-T switch handling all the SAN traffic.

  35. Hi Tavin,

    You’d use either 10Gb DAC cables or 10Gb fiber from the SAN to your switch, and then use 10GBase-T RJ45 to your servers.

    Personally, I’d use DAC because it’s cheap and works great! Make sure you use the appropriate part numbers provided in the QuickSpecs and documentation, as the MSA storage units only accept certified/approved SFP+ modules.

    Hope that helps!

    Stephen

  36. Thank you for the quick response. That’s what I thought, so the journey continues. Thank you so much…

  37. A great guide! Do you know of any reason why this couldn’t be used with Hyper-V instead of VMware?

  38. Yes, it should work in most cases regardless of the software on the servers! 🙂

  39. I know this post is pretty old, but I was really impressed with your setup and did something very similar with an MSA 2052. Same DAC cards going to (2) Gen8 servers. The only difference with my setup is that I am running Server 2012 R2 Hyper-V instead of VMware. I really need to upgrade these hosts to 2016 at the very least.

    With this setup, all of a sudden I’ve been having random crashes of the Hyper-V hosts. Firmware and drivers are all up to date. I keep getting the same bugcheck 0x0EF, but no relevant log entry.

    Wondering if you have come across this with any of your clients that have this setup?

    Thanks,

    Phil

  40. Hi Phil,

    Sorry but I haven’t heard about this or seen this happen. This solution has been rock solid.

    Have you opened the BSOD minidump to find out what’s causing the bugcheck? It should at least point you to software, a driver, or hardware.

    Cheers
    Stephen

  41. Hi There,

    Good day. I’m having an issue when I install a 2nd controller into the MSA 2040: I can’t log in to the system management page, and it shows the message “loading remote replication information”. After removing the 2nd controller, the system works normally, but with a “missing controller” error.

    thank you and happy new year to you.

    Cheers
    William

  42. Hi Stephen,
    I have an HPE MSA 2040 SAN Storage, iSCSI model.
    How do I connect the storage directly to an HP ProLiant DL380 Gen9 server, and which adapter card model is compatible with the iSCSI model of the MSA storage?

    Best Regards

  43. I want to do this same setup, but I’m confused about what to put in the VMware VMkernel network adapters, etc. Currently I have 12 x 1Gbps ports coming out of each host, and each group of 1Gbps ports has a specific purpose, like data, management, vMotion, iSCSI, etc.

    Would this need to be consolidated onto 1 x 10Gb NIC / VMkernel adapter? Is this even possible, and what is the configuration on the VMware 6.5 side?

    Currently I have a 1Gbps connection from the host to the network, and this means we only get a 50MB/s connection to the datastore. It is killing me; I don’t understand it. I’ve just bought some DAC cables to trial this setup.

  44. Hi Sulayman,

    There’s actually flexibility here. If your host only has 1 x 10Gb NIC, then you’d put all services over it. However, I always strongly recommend separating iSCSI (and SAN) traffic from data traffic.

    In my case, each server has 2 x 10Gb SFP+ DAC cables, one going to each controller (2 controllers) on the SAN.

    Additionally, I also have a 10GBase-T connection from each server dedicated to vMotion, VM traffic, and management.

    In this video, I was using 4 x 1Gb for VM traffic, vMotion, and management, but I have since upgraded to 10Gb.

    Stephen

  45. Okay, I’m starting to understand better. So iSCSI needs to be separated onto one 10Gb SFP+ DAC, and the other 10Gb SFP+ DAC will be just data?

    Here are some screenshots of my setup; I think it will make more sense if you can see it:
    https://prntscr.com/rlfggs (servers, san and switch)

    https://prntscr.com/rlfk63 (vmkernel adapters)
    https://prntscr.com/rlfjve (virtual switch 1)
    https://prntscr.com/rlfk02 (virtual switch 2)
    https://prntscr.com/rlfjfv (storage devices)
    https://prntscr.com/rlfj7g (storage paths)

    I’m the IT manager at a high school, and I’ve nearly lost all my hair.

  46. Hi Sulayman,

    The need is determined during the solution design phase. Or, if you’re changing an existing solution, the need is dictated by whatever is available (either logistically or cost-effectively).

    If you’re looking for assistance in designing, troubleshooting, or going in depth with your solution, don’t hesitate to reach out as I’m available to provide consulting services. More information can be found at https://www.stephenwagner.com/hire-stephen-wagner-it-services/.

    Thanks,
    Stephen
