Over the years I’ve come across numerous posts, blogs, articles, and how-to guides that provide information on when to use iSCSI port binding, and they’ve all been wrong! Here, I’ll explain when to use iSCSI port binding, and why!
This post and information applies to all versions of VMware vSphere including 5, 5.5, 6, 6.5, 6.7, and 7.0.
See below for a video version of the blog post:
iSCSI port binding binds the software iSCSI initiator on an ESXi host to one or more vmkernel (vmk) network interfaces, configuring them to allow multipathing (MPIO) in a situation where the vmknics reside on the same subnet.
In normal circumstances without port binding, if you have multiple vmkernel interfaces on the same subnet (multihomed), the ESXi host would simply choose one and would not use both for the transmission of packets, traffic, and data. iSCSI port binding forces the iSCSI initiator to use each bound adapter for both transmitting and receiving iSCSI packets.
In most simple SAN environments, there are two different types of setups/configurations.
IT professionals should be aware of the issues that occur when you have a host that is multihomed, with multiple NICs on the same subnet.
In a typical scenario with Windows or Linux, if you have multiple adapters residing on the same subnet you’ll have issues with broadcasts and the transmission of packets, and in most cases you have absolutely no control over which NIC a communication is initiated over, due to the way the routing table is handled. In most cases all outbound connections will be initiated through the first NIC installed in the system, or whichever one holds the primary route in the routing table.
This is where iSCSI Port Binding comes in to play. If you have an ESXi host that has multiple vmk adapters sitting on the same subnet, you can bind the software iSCSI initiators (vmk adapters) to the physical NICs (vmnics). This allows multiple iSCSI connections on multiple NICs residing on the same subnet to transmit and handle the traffic properly.
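As a rough sketch, the binding itself can be done from the ESXi CLI. The adapter name (vmhba64) and vmkernel interface names (vmk1, vmk2) below are placeholders for illustration; check yours with the list commands first:

```shell
# Hypothetical names: vmhba64 (software iSCSI adapter), vmk1/vmk2 (iSCSI vmkernel interfaces).
# Find your actual names first:
esxcli iscsi adapter list
esxcli network ip interface list

# Bind each vmkernel interface to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2

# Verify the bindings
esxcli iscsi networkportal list --adapter=vmhba64
```

The same configuration can also be done in the vSphere Client under the software iSCSI adapter's Network Port Binding settings.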
So the general rule of thumb is:
Here are two links to VMware documentation explaining this in more detail:
For more information on configuring a vSphere Distributed Switch for iSCSI MPIO, click here!
And a final troubleshooting note: If you configure iSCSI Port Binding and notice that one of your interfaces is showing as “Not Used” and the other as “Last Used”, this is most likely due to either a physical cabling/switching issue (where one of the bound interfaces can’t connect to the iSCSI target), or you haven’t configured permissions on your SAN to allow a connection from that IP address.
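A quick way to check this from the ESXi shell (the interface names and target IP below are placeholders for your own environment):

```shell
# List iSCSI sessions and multipath state to see which paths are actually active
esxcli iscsi session list
esxcli storage nmp path list

# Test reachability from EACH bound vmkernel interface to the target portal.
# 192.168.1.50 is a placeholder for your SAN's iSCSI portal IP.
vmkping -I vmk1 192.168.1.50
vmkping -I vmk2 192.168.1.50
```

If the vmkping from one bound interface fails, that points to the cabling/switching issue or missing SAN host permission described above.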
Do you have a good article that I can follow to configure a proper MPIO and iSCSI port binding?
In the past I followed this article: http://www.virtualtothecore.com/en/howto-configure-a-small-redundant-iscsi-infrastructure-for-vmware/ with 2 HP servers and one QNAP with 4 NICs.
Now I have 3 HP Gen8 servers and one MSA 2040. Can I follow the same old article?
I have only one subnet 192.168.1.x
Thanks a lot for your support.
Sorry for the delayed response! (It's Stampede week here in Calgary, busy time of the year!)
Do you only have 1 switch between the MSA 2040 and the 3 HP Servers, or multiple switches? Also, are you using standard switches, or vSphere Distributed Switches?
I briefly took a look at that guide and for the most part it looks good, however I might configure my vSphere switches slightly different. And as always, I always recommend using multiple subnets (and avoid using iSCSI port binding).
Let me know and I'll see what I can come up with for you, or any advice I may have.
I have 2 HP 1910 24-port Gb managed switches.
I can only use the standard VMware switch because I have the Essentials Plus license.
If I used multiple subnets, I would need to use VLANs, because the two HP switches also carry the normal traffic of the VMs and the other clients of the network with IPs in 192.168.1.x.
Thanks a lot for your support, you are great...
For the installation of ESXi on the HP DL360p Gen8 I will use an HP 32GB SDHC card. Is it a good choice for the security and stability of the system?
I'll start off with the easiest question: The DL360p Gen8 works great with SDHC cards for installing ESXi onto. I've used both the SD card and the internal USB thumb drive option, and both work great!
So if you do only use one subnet, you can use that guide you originally posted, however instead of creating multiple switches and binding them to the same NICs, I would instead create only one, configure your VLAN, and then create multiple vmkernel (vmk) interfaces on that single switch (each with their own IP on the network). Then after this you would simply go into the iSCSI initiator settings and enable iSCSI port binding on each vmk interface.
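To sketch that out from the CLI (the port group, uplink, and adapter names below are assumptions for illustration): each iSCSI port group needs exactly one active uplink, with the other uplink set to unused, before the port binding will be accepted as compliant.

```shell
# Hypothetical names: port groups iSCSI-1/iSCSI-2, uplinks vmnic2/vmnic3,
# vmkernel interfaces vmk1/vmk2, software iSCSI adapter vmhba64.

# Give each iSCSI port group exactly one active uplink (compliant teaming policy)
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=iSCSI-1 --active-uplinks=vmnic2
esxcli network vswitch standard portgroup policy failover set \
  --portgroup-name=iSCSI-2 --active-uplinks=vmnic3

# Then bind both vmkernel interfaces to the software iSCSI adapter
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
```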
Keep in mind, that if you were to use both switches (with different subnets), then you would have added redundancy to your configuration in case one of the switches ever failed. This is just a consideration.
Hope this helps,
Thanks for your reply, I'll do the configuration with one subnet, 192.168.1.x.
Next week all the products arrive in my lab and then I'll write you my idea of configuration.
Thanks a lot for your support.
See you soon.
Hi, finally the MSA 2040 has arrived in my lab.
Dual controller, 8 x 1Gb iSCSI ports, 7 SFF 600GB SAS HDDs.
I did this configuration:
I have created 2 VDISKs. The first with SAS 1-2-3 in RAID5.
The second with SAS 4-5-6 in RAID5, and SAS 7 as a global spare.
The first VDISK is mapped to controller A and the second VDISK is mapped to controller B.
Each VDISK has one volume of the entire capacity, mapped on each port of the controller (A1, A2, A3, A4 and B1, B2, B3, B4).
Is this a good configuration?
Thanks a lot for your support
That should work great. When you created the Vdisks, did you choose Auto for the owning controller? If not, I would advise changing it.
Other than that you should be good!
OK, I changed it...
I have chosen Vdisk1 - controller A
Vdisk2 - controller B
So just to confirm, when you created the volumes, when it asked for a controller ownership, you chose "Auto", correct?
No. Right now I have manually selected Controller A and Controller B.
Tomorrow I'll change the ownership to Auto.
I'm looking at redesigning our iSCSI network to include 2 switches and put them on 2 /24 subnets.
I understand that I should not do port binding in this case. But I wanted to confirm whether that holds true if your host has 4 NICs for iSCSI. I was looking at putting vmnic7 and vmnic6 on subnet 10.0.1.x and vmnic5 and vmnic4 on subnet 10.0.2.x. Would you do port binding on the NICs within the same subnet?
Thanks for your help.
Let me see what I can find out, but I'm assuming that you would have to have port binding enabled, however it may result in some erroneous routes/paths which you may have to mark as "inactive" or "disabled" manually. This may or may not be the case, but I'm pretty sure in your case you would need to use iSCSI port binding.
Let me see what I can find out and I'll get back to you!
Got the information faster than I thought I would! haha
Essentially you WILL use iSCSI port binding. Make sure that each pair of NICs on a single subnet is configured on its own vSwitch (or Distributed Switch). DO NOT use the same vSwitch (or vDS) for different subnets.
Once the server NICs are on their own vSwitch (or vDS), with only NICs on the same subnet sharing a vSwitch, you can configure iSCSI port binding!
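For example, one of the two subnets might be laid out like this from the CLI (all switch, port group, interface names, and IPs below are placeholders; repeat the same pattern on a second vSwitch for the second subnet):

```shell
# Hypothetical: vSwitch-iSCSI-A carries vmnic6/vmnic7 and its vmkernel interfaces on 10.0.1.x
esxcli network vswitch standard add --vswitch-name=vSwitch-iSCSI-A
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI-A --uplink-name=vmnic6
esxcli network vswitch standard uplink add --vswitch-name=vSwitch-iSCSI-A --uplink-name=vmnic7

# One port group (and one vmkernel interface) per NIC, each with a single active uplink
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch-iSCSI-A --portgroup-name=iSCSI-A1
esxcli network vswitch standard portgroup policy failover set --portgroup-name=iSCSI-A1 --active-uplinks=vmnic6
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=iSCSI-A1
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=10.0.1.11 --netmask=255.255.255.0 --type=static
```

Then bind vmk2 (and its partner interface on the same vSwitch) to the software iSCSI adapter as usual.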
Let me know if you have any questions!
I have 4 NICs in 1 host, and 8 NICs in the SAN, in a test environment.
The NICs in the host are each on a unique subnet
and SAN ip
With port binding enabled, I get good IO, ~414MB read; if I disable it, I get 127MB read?
I am not sure where I am going wrong here, I just removed the NICs from the software iSCSI adapter, do I also have to split them up into separate vSwitches?
Just curious, how do you have everything wired? Do you have separate physical switches? To confirm, is the host directly attached to the SAN?
If you are using multiple subnets, you should have your vSwitches specially configured.