


Name: Stephen Wagner

Age: 27

Location: Calgary, Alberta (Canada)

Occupation: President of Digitally Accurate Inc. (also operating as D.A. Consulting)

Interests:

-Computers (Windows, Linux, OSX)

-Wireless Technologies (Device hacking, reverse engineering, long range links, open source hardware)

-Single Board Computers (SBCs, SBC Development)

-Mountain Biking

-Electronica (House, Hard House)

Background:

(Not Completed)

 


Sep 30, 2014
 

Recently, a new type of error I hadn’t seen before showed up on one of the servers I maintain and manage.

 

Event ID: 513

Source: CAPI2

Event:

Cryptographic Services failed while processing the OnIdentity() call in the System Writer Object.

Details:
AddLegacyDriverFiles: Unable to back up image of binary EraserUtilRebootDrv.

System Error:
The system cannot find the file specified.

 

Also, after further investigation I noticed that when Windows Server Backup was running, snapshots on the C: volume sometimes wouldn’t “grow in time” and so were automatically deleted.

It was difficult to find anything on the internet regarding this, as in my case the error reported “The system cannot find the file specified”, whereas all the other cases I found were due to security permissions. On the bright side, I was able to identify the software that this file belonged to: Symantec Endpoint Protection.
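One hedged way to trace a binary like this back to its owner: the driver name comes straight from the event text, and querying the Service Control Manager for a service of the same name (an assumption that happened to hold here) reveals which product installed it:

```shell
REM Windows-only sketch: ask the Service Control Manager about the driver
REM named in the CAPI2 event. "sc qc" prints BINARY_PATH_NAME, which points
REM at the installing product's folder.
sc query EraserUtilRebootDrv
sc qc EraserUtilRebootDrv
```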

Ultimately, I found a fix. Please ONLY attempt this if you are receiving “The system cannot find the file specified”. If you are seeing any “Access Denied” messages under System Error, your issue is caused by something else.

 

To fix:

1) Uninstall Symantec Endpoint Protection.

2) Restart the server.

3) Disable VSS snapshots for the C: volume (NOTE: this will delete all existing snapshots for the drive).

4) Re-install Symantec Endpoint Protection.

5) Re-enable VSS snapshots for the C: volume.
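For step 3, a minimal command-line sketch, assuming you’d rather not click through the Shadow Copies GUI (standard Windows commands, run from an elevated prompt):

```shell
REM Windows-only sketch for step 3: inspect and then delete the existing
REM shadow copies on C:. Note this deletes ALL existing snapshots for the
REM volume, exactly as the warning in step 3 says.
vssadmin list shadows /for=C:
vssadmin delete shadows /for=C: /all
```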

 

When this issue occurred, I was seeing the event many times every hour. It’s been 4 days since I applied this fix and the error has completely disappeared; I’m back to a 100% clean event log!

Aug 14, 2014
 

So I purchased a Surface Pro 3 from the new Microsoft Store that opened up in Calgary, Alberta today. I purchased the 512GB i7 version with 8GB of RAM.

The unit is slick, beautiful, and totally has a purpose, however there is one major problem I encountered: overheating!

 

First it synced my apps from my Microsoft account; upon installing 20 (Metro) apps, the unit overheated and I was presented with a black background screen showing a circle and a thermometer icon. The unit had to cool down for a while before it allowed me to power it on. I wasn’t even using the device; the 20 “apps” were simply installing in the background.

 

I put the unit in my server room (air conditioned to 18 degrees), and then proceeded to configure the Surface, install applications, and install all the Windows and firmware updates. Since installing the firmware updates the unit has not overheated, however it still burns my hand when running ONLY Microsoft Outlook.

Here is a screenshot of the temperatures when running only Microsoft Outlook.

SurfacePro3-Overheat

This specific unit is too hot for me to use. It’s too hot to even hold just to read e-mails, and the sound of the fan racing non-stop (even when idling) is driving me absolutely insane. I’ve decided to return the unit for a refund until these issues are resolved.

Is anyone else noticing overheating issues with their i7 version of the Microsoft Surface Pro 3?

UPDATE: I found this thread on Microsoft’s “Answers” forum – http://answers.microsoft.com/en-us/surface/forum/surfpro3-surfusingpro/excessively-loud-fan-constant-overheating-during/1efa253a-f7f2-486b-a891-5633738b8532

Jun 07, 2014
 

Well, I’ve had the HP MSA 2040 setup, configured, and running for about a week now. Thankfully this weekend I had some time to hit some benchmarks.

 

First some info on the setup:

-2 X HP Proliant DL360p Gen8 Servers (2 X 10 Core processors each, 128GB RAM each)

-HP MSA 2040 Dual Controller – Configured for iSCSI

-HP MSA 2040 is equipped with 24 X 900GB SAS Dual Port Enterprise Drives

-Each host is directly attached via 2 X 10Gb DAC cables (Each server has 1 DAC cable going to controller A, and Each server has 1 DAC cable going to controller B)

-2 vDisks are configured, each owned by a separate controller

-Disks 1-12 configured as RAID 5 owned by Controller A (512K Chunk Size Set)

-Disks 13-24 configured as RAID 5 owned by Controller B (512K Chunk Size Set)

-While round robin is configured, only one optimized path exists (only one path is being used) for each host to the datastore I tested

-Utilized “VMWare I/O Analyzer” (https://labs.vmware.com/flings/io-analyzer) which uses IOMeter for testing

-Running 2 “VMWare I/O Analyzer” VMs as worker processes. Both workers are testing at the same time, testing the same datastore.

 

Sequential Read Speed:

MSA2040-Read

Max Read: 1480.28MB/sec

 

Sequential Write Speed:

MSA2040-Write

Max Write: 1313.38MB/sec

 

See below for IOPS (Max Throughput) testing:

Please note: The MaxIOPS and MaxWriteIOPS workloads were used. These workloads don’t have any randomness, so I’m assuming the cache module answered all the I/O requests, however I could be wrong. Tests were run for 120 seconds. What this means is that this is more of a test of what the controller is capable of handling itself over a single 10Gb link from the controller to the host.

 

IOPS Read Testing:

MSA2040-MaxIOPS

Max Read IOPS: 70679.91 IOPS

 

IOPS Write Testing:

MSA2040-WriteOPS

Max Write IOPS: 29452.35 IOPS

 

PLEASE NOTE:

-These benchmarks were done by 2 separate worker processes (1 running on each ESXi host) accessing the same datastore.

-I was running a VMWare vDP replication in the background (My bad, I know…).

-Sum is combined throughput of both hosts, Average is per host throughput.

 

Conclusion:

Holy crap this is fast! I’m betting the speed limit I’m hitting is the 10Gb interface. I need to get some more paths setup to the SAN!
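A quick back-of-envelope check of that guess: a 10Gb link tops out around 1250MB/sec before protocol overhead, and with each host using a single optimized path, the combined ceiling of this test is roughly double that. The 1480MB/sec combined read figure sits well inside that territory:

```shell
# Rough ceiling of a 10Gb link, ignoring protocol overhead:
# 10,000 Mb/s divided by 8 bits per byte = 1250 MB/s per link.
echo "Per-link ceiling: $((10000 / 8)) MB/s"

# Two hosts, one optimized path each -> combined ceiling of the test:
echo "Combined ceiling: $((2 * 10000 / 8)) MB/s"
```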

Cheers

 

Jun 07, 2014
 

Recently, while doing some semi-related research on the internet, I’ve come across numerous how-to and informational articles explaining how to configure iSCSI MPIO, and advising readers to incorrectly use iSCSI port binding. I felt I needed to whip up a post explaining why and when you should use iSCSI port binding. I’m surprised there aren’t more blog posts on the internet explaining this!

 

iSCSI port binding binds an iSCSI initiator on an ESXi host to a vmkernel NIC (vmknic), and configures it to allow multipathing in a situation where multiple vmknics reside on the same subnet. In normal circumstances, if you have multiple vmkernel interfaces on the same subnet, the ESXi host will simply choose one and not use both.

 

Let’s start off by mentioning that in most simple SAN environments, there are two different types of setups/configurations.

1) Multiple Subnet – Numerous paths to a storage device on a SAN, each path residing on separate subnets. These paths are isolated from each other and usually involve multiple switches.

2) Single Subnet – Numerous paths to a storage device on a SAN, each path is on the same subnet. These paths usually go through 1-2 switches, with all interfaces on the SAN and the hosts residing on the same subnet.

 

A lot of you I.T. professionals know the issues that occur when a host is multi-homed. In typical scenarios with Windows and Linux, if you have multiple adapters residing on the same subnet, you’ll have issues with broadcasts, and in most cases you have absolutely no control over which NIC communications are initiated on, due to the way the routing table is handled. In most cases, all outbound connections will be initiated through the first NIC installed in the system, or whichever one the primary route in the routing table points at.

 

This is where iSCSI Port Binding comes in to play. If you have an ESXi host that has vmks sitting on the same subnet, you can bind the iSCSI initiators to the physical NICs. This allows multiple iSCSI connections on multiple NICs residing on the same subnet.

So the general rule of thumb is:

-1 subnet, iSCSI port binding is the way to go!

-2+ subnets, DON’T USE ISCSI PORT BINDING! It’s just not needed since all vmknics are residing on different subnets.
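In the single-subnet case, the binding itself is done per software iSCSI adapter from the ESXi shell. A hedged sketch using esxcli (the adapter name vmhba33 and the vmk numbers are placeholders; yours will differ):

```shell
# Bind two vmkernel interfaces (on the same subnet) to the software iSCSI
# adapter, so both NICs carry iSCSI traffic. Example names only.
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk2

# Verify the bindings:
esxcli iscsi networkportal list --adapter vmhba33
```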

 

Here are two links to VMWare documentation explaining this in more detail:

http://kb.vmware.com/kb/2010877
http://kb.vmware.com/kb/2038869

 

Jun 07, 2014
 

When I first started really getting in to multipath iSCSI for vSphere, I had two major hurdles that really took a lot of time and research to figure out before deploying. So many articles on the internet, and most were wrong.

1) How to configure a vSphere Distributed Switch for iSCSI multipath MPIO with each path being on different subnets (a.k.a. each interface on separate subnets)

2) Whether or not I should use iSCSI Port Binding

 

In this article I’ll only be getting in to the first point. I’ll be creating another article soon to go in to detail with the second, but I will say right now in this type of configuration (using multiple subnets on multiple isolated networks), you DO NOT use iSCSI Port Binding.

 

Configuring a standard (non-distributed) vSphere Standard Switch is easy, but we want to do this right, right? Configuring a vSphere Distributed Switch allows you to roll it out to multiple hosts, making configuration and provisioning easier. It also makes the configuration easier to manage and maintain. In my opinion, in a full vSphere rollout, there’s no reason to use vSphere Standard Switches. Everything should be distributed!

My configuration consists of two hosts connecting to an iSCSI device over 3 different paths, each with its own subnet. Each host has multiple NICs, and the storage device has multiple NICs as well.

As always, I plan the deployment on paper before touching anything. When getting ready for deployment, you should write down:

-Which subnets you will use

-Choose IP addresses for your SAN and hosts

-I always draw a map that explains what’s connecting to what. When you start rolling this out, it’s good to have that image in your mind and on paper. If you lose track it helps to get back on track and avoid mistakes.

 

For this example, let’s assume that we have 3 connections (I know it’s an odd number):

Subnets to be used:

10.0.1.X

10.0.2.X

10.0.3.X

SAN Device IP Assignment:

10.0.1.1 (NIC 1)

10.0.2.1 (NIC 2)

10.0.3.1 (NIC 3)

Host 1 IP Assignment:

10.0.1.2 (NIC 1)

10.0.2.2 (NIC 2)

10.0.3.2 (NIC 3)

Host 2 IP Assignment:

10.0.1.3 (NIC 1)

10.0.2.3 (NIC 2)

10.0.3.3 (NIC 3)
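The pattern in the plan above is regular enough to generate: the third octet is the subnet number, and the fourth octet is the device number (1 for the SAN, 2 for host 1, 3 for host 2). A throwaway sketch:

```shell
# Generate the addressing plan above: 3 subnets x 3 devices.
# Device 1 is the SAN, device 2 is host 1, device 3 is host 2.
for subnet in 1 2 3; do
  for device in 1 2 3; do
    echo "10.0.${subnet}.${device}"
  done
done
```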

 

So now we know where everything is going to sit, and its addresses. It’s now time to configure a vSphere Distributed Switch and roll it out to the hosts.

1) We’ll start off by going into the vSphere client and creating a new vSphere Distributed Switch. You can name this switch whatever you want; I’ll use “iSCSI-vDS” for this example. Going through the wizard you can assign a name. Stop when you get to “Add Hosts and Physical Adapters”; on this page we will choose “Add Later”. Also, when it asks us to create a default port group, we will un-check the box and NOT create one.

2) Now we need to create some “Port Groups”. Essentially, we will be creating a port group for each subnet and NIC in the storage configuration. In this example we have 3 subnets and 3 NICs per host, so we will be creating 3 port groups. Go ahead and right-click on the new vSphere Distributed Switch we created (“iSCSI-vDS” in my example), and create a new port group. I’ll be naming my first one “iSCSI-01”, the second will be called “iSCSI-02”, and so on. Go ahead and create one for each subnet. After these are created, we’ll end up with this:

vSphere Distributed Switch Configuration for MPIO multiple subnets

 

3) After we have this set up, we now need to do some VERY important configuration. Right now, by default, each port group has all uplinks configured as Active, which we DO NOT want. Essentially, we will be assigning only one active uplink per port group. Each port group will be on its own subnet, so we need to make sure that only the applicable uplink is active, and the remainder are thrown into the “Unused Uplinks” section. This can be achieved by right-clicking on each port group and going to “Teaming and Failover” underneath “Policies”. You’ll need to select the applicable uplinks and, using the “Move Down” button, move them down to “Unused Uplinks”. Below you’ll see some screenshots from the iSCSI-02 and iSCSI-03 port groups we’ve created in this example:

PortGroup-iSCSI02

PortGroup-iSCSI03

You’ll notice that the iSCSI-02 port group only has the iSCSI-02 uplink marked as active. Likewise, the iSCSI-03 port group only has the iSCSI-03 uplink marked as active. The same applies to iSCSI-01, and any other links you have (more if you have more links). Please ignore the entry for “iSCSI-04”; I created it for something else, so pretend it isn’t there. If you do have 4 subnets and 4 NICs, then you would have a 4th port group.

4) Now we need to add the vSphere Distributed Switch to the hosts. Right-click on the “iSCSI-vDS” Distributed Switch we created and select “Add Host”. Select ONLY the hosts, and DO NOT select any of the physical adapters. A box will appear mentioning you haven’t selected any physical adapters; simply hit “Yes” to “Do you want to continue adding the hosts”. For the rest of the wizard just keep hitting “Next”; we don’t need to change anything. Example below:

AddVDStoHost

So here we are now, we have a vSphere Distributed Switch created, we have the port groups created, we’ve configured the port groups, and the vDS is attached to our hosts… Now we need to create vmks (vmkernel interfaces) in each port group, and then attach physical adapters to the port groups.

5) Head over to the Configuration tab inside of your ESXi host, and go to “Networking”. You’ll notice the newly created vSphere Distributed Switch is now inside the window. Expand it. You’ll need to perform these steps on each of your ESXi hosts. Essentially what we are doing, is creating a vmk on each port group, on each host.

Click on “Manage Virtual Adapters” and click “Add”. We’ll select “New Virtual Adapter”, then on the next screen our only option will be “VMKernel”; click Next. In the “Select port group” option, select the applicable port group. You’ll need to do this multiple times, as we need to create a vmkernel interface for each port group (a vmk on iSCSI-01, a vmk on iSCSI-02, etc.) on each host; click Next.

Since this is the first port group (iSCSI-01) vmk we are creating on the first host, we’ll assign the IP address as 10.0.1.2, fill in the subnet box, and finish the wizard. Create another vmk for the second port group (iSCSI-02), since it’s the first host it’ll have an IP of 10.0.2.2, and then again for the 3rd port group with an IP of 10.0.3.2.

After you do this for the first host, you’ll need to do it again for the second host; only the IPs will be different since it’s a different host (in this example the second host would have 3 vmks, one on each port group: iSCSI01 – 10.0.1.3, iSCSI02 – 10.0.2.3, iSCSI03 – 10.0.3.3).
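For reference, vmk creation and addressing can also be scripted from the ESXi shell. A sketch of the equivalent commands for the first host (the vmk name is an example; note that the -p port-group flag shown applies to standard switches, while a distributed switch needs a dvport ID instead, so the GUI steps above are simpler for a vDS):

```shell
# Create a vmkernel interface and give it a static IP (host 1, subnet 1).
# vmk1 is an example name; --portgroup-name works for standard switches.
esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-01
esxcli network ip interface ipv4 set --interface-name vmk1 \
  --type static --ipv4 10.0.1.2 --netmask 255.255.255.0
```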

Here’s an example of iSCSI02 and iSCSI03 on ESXi host 1. Of course there’s also an iSCSI-01, but I cut it from the screenshot.

Screen1

6) Now we need to “Manage Physical Adapters” and attach the physical adapters to the individual port groups. Essentially, this maps each physical NIC to the separate subnet port groups we’ve created for storage on the vDS. We’ll need to do this on both hosts. Inside the “Manage Physical Adapters” box, you’ll see each port group on the left-hand side; click on “<Click to Add NIC>”. The vmnic you add will be different in everyone’s environment. You should know which physical adapter you want to map to each subnet/port group. I’ve removed the vmnic number from the below screenshot just in case... and to make sure you think about this one...

ManagePhysical

 

As mentioned above, you need to do this on both hosts for the applicable vmnics. You’ll want to assign all 3 (even though I’ve only assigned 2 in the above screenshot).

 

Voila! You’re done! Now all you need to do is go into your iSCSI initiator and add the IPs of the iSCSI targets to the dynamic discovery tab on each host. Rescan the adapter, add the VMFS datastores, and you’re set.

If you have any questions or comments, or feel this can be done in a better way, drop a comment on this article. Happy Virtualizing!

 

Additional Note – Jumbo Frames

There is one additional step if you are using jumbo frames. Please note that to use jumbo frames, all NICs, physical switches, and the storage device itself need to be configured to support them. On the VMWare side of things, you need to apply the following settings:

1) Under “Inventory” and “Networking”, Right Click on the newly created Distributed Switch. Under the “Properties” tab, select “Advanced” on the left hand side. Change the MTU to the applicable frame size. In my scenario this is 9000.

2) Under “Inventory” and “Hosts and Clusters”, click on the “Configuration Tab”, then “vSphere Distributed Switch”. Expand the newly created “Distributed Switch”, select “Manage Virtual Adapters”. Select a vmk interface, and click “edit”. Change the MTU to the applicable size, in my case this is 9000. You’ll need to do this for each vmk interface on each physical host.
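Once the MTU is set end to end, you can sanity-check it from the ESXi shell with vmkping; the target IP below is the SAN port from this example. The payload size is 8972 because the 9000-byte MTU has to leave room for 28 bytes of headers:

```shell
# Verify jumbo frames end to end: -d sets "don't fragment", and
# 8972 = 9000 - 20 (IP header) - 8 (ICMP header). If this fails but a
# default-size ping works, something in the path isn't passing jumbo frames.
vmkping -d -s 8972 10.0.1.1
```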

 

Jun 07, 2014
 

So, you have:

2 X HP Proliant DL360p Gen8 Servers with 2 X 10 Core Processors

1 X MSA 2040 SAN – With Dual Controllers

 

And you want more visibility, functionality, and most importantly “Insight” into your systems where the hardware meets the software. This is where HP Insight Control for VMWare comes into play.

This package is amazing for providing information and “Insight” in to all your equipment, including servers and storage units. It allows you to update firmware, monitor and manage servers, monitor and manage storage arrays, and rapidly deploy new data stores and manage existing ones. It makes all this information and functionality available via the vSphere management interfaces, which is just fantastic.

 

I was browsing the downloads area on HP’s website for the MSA 2040, and the website told me I should download “Insight Control for VMWare”, so I figured, why not! After getting this package installed, I instantly saw the value.

HP Insight Control for VMWare allows you to access server health, management, and control, along with storage health, management, and control. It supports HP servers with iLO, and fully supports the MSA 2040 SAN.

Installation was a breeze; it was done within seconds. I chose to install it directly on to my demo vSphere 5.5 vCenter server. Barely any configuration is required; the installation process was actually just a few clicks of “Next”. Once installed, you simply have to configure iLO credentials and then add your storage system, if you have a compatible SAN. Even adding your SAN is super easy, and it allows you to choose whether you want Insight Control to have full access to the SAN (which allows you to create and manage datastores), or Read Only access, which only allows it to pull information from the unit.

 

And for those of you concerned about port conflicts, it uses:

3500 through 3513, and 8090.

 

Insight Control for VMWare is available through both the software client and the web client. As far as I’m concerned, it’s a “must have” if you’re running HP equipment in your vSphere environment!

HP Insight Control Firmware Management Page

HP Insight Control Firmware Management Page

HP Insight Control for VMWare on Software Client for vSphere

HP Insight Control for VMWare on Software Client for vSphere

HP Insight Control for VMWare showing iSCSI initiator paths

HP Insight Control for VMWare showing iSCSI initiator paths

HP Insight Control for VMWare Web Client

HP Insight Control for VMWare Web Client

HP Insight Control for VMWare Overview in Web Client

HP Insight Control for VMWare Overview in Web Client

 

May 28, 2014
 

In the last few months, my company (Digitally Accurate Inc.) and our sister company (Wagner Consulting Services), have been working on a number of new cool projects. As a result of this, we needed to purchase more servers, and implement an enterprise grade SAN.

 

For the server, we just purchased another HP Proliant DL360p Gen8 (with 2 X 10 Core Processors and 128GB of RAM, the exact same as our existing server), however I won’t be going into that in this blog post.

 

Now for storage, we decided to pull the trigger and purchase an HP MSA 2040 Dual Controller SAN. We purchased it as a CTO (Configure to Order), and loaded it up with 4 X 1Gb iSCSI RJ45 SFP+ modules (there’s a minimum requirement of 1 4-pack SFP), and 24 X HP 900Gb 2.5inch 10k RPM SAS Dual Port Enterprise drives. Even though we have the 4 1Gb iSCSI modules, we aren’t using them to connect to the SAN. We also placed an order for 4 X 10Gb DAC cables.

 

To connect the SAN to the servers, we purchased 2 X HP Dual Port 10Gb Server SFP+ NICs, one for each server. The SAN will connect to each server with 2 X 10Gb DAC cables, one going to Controller A, and one going to Controller B.

 

I must say that configuration was an absolute breeze. As always, using intelligent provisioning on the DL360p, we had ESXi up and running in seconds with it installed to the onboard 8GB micro-sd card.

 

I’m completely new to the MSA 2040 SAN and have actually never played with, or configured one. After turning it on, I immediately went to HPs website and downloaded the latest firmware for both the drives, and the controllers themselves. It’s a well known fact that to enable iSCSI on the unit, you have to have the controllers running the latest firmware version.

 

Turning on the unit, I noticed the management NIC on the controllers quickly grabbed an IP from my DHCP server. Logging in, I found the web interface extremely easy to use. Right away I went to the firmware upgrade section, and uploaded the appropriate firmware file for the 24 X 900GB drives. The firmware took seconds to flash. I went ahead and restarted the entire storage unit to make sure that the drives were restarted with the flashed firmware (a proper shutdown of course).

 

While you can update the controller firmware with the web interface, I chose not to do this as HP provides a Windows executable that will connect to the management interface and update both controllers. Even though I didn’t have the unit configured yet, it’s a very interesting process that occurs. You can do live controller firmware updates with a Dual Controller MSA 2040 (as in no downtime). The way this works is, the firmware update utility first updates Controller A. If you have a multipath configuration where your hosts are configured to use both controllers, all I/O is passed to the other controller while the firmware update takes place. When it is complete, I/O resumes on that controller and the firmware update then takes place on the other controller. This allows you to do online firmware updates that will result in absolutely ZERO downtime. Very neat! PLEASE REMEMBER, this does not apply to drive firmware updates. When you update the hard drive firmware, there can be ZERO I/O occurring. You’d want to make sure all your connected hosts are offline, and no software connection exists to the SAN.

 

Anyways, the firmware update completed successfully. Now it was time to configure the unit and start playing. I read through a couple quick documents on where to get started. If I did this right the first time, I wouldn’t have to bother doing it again.

 

I used the wizards available to first configure the actual storage, and then handle provisioning and mapping to the hosts. When deploying a SAN, you should always write down and create a map of your Storage Area Network topology. It helps when it comes time to configure, and really helps with reducing mistakes in the configuration. I quickly jotted down the IP configuration for the various ports on each controller, the IPs I was going to assign to the NICs on the servers, and drew out a quick diagram of how things would connect.

 

Since the MSA 2040 is a Dual Controller SAN, you want to make sure that each host can directly access both controllers. Therefore, in my configuration with a 2-port NIC, port 1 on the NIC connects to a port on controller A of the SAN, while port 2 connects to controller B. When you do this and configure all the software properly (VMWare in my case), you can create a configuration that allows load balancing and fault tolerance. Keep in mind that in the Active/Active design of the MSA 2040, each controller has ownership of its configured vDisks. Most I/O will go only through the controller that owns the vDisk, but in the event that controller goes down, ownership will jump over to the other controller and I/O will proceed uninterrupted until you resolve the fault.

 

For the first part, I had to run the configuration wizard and set the various environment settings. This includes time, management port settings, unit names, friendly names, and most importantly host connection settings. I configured all the host ports for iSCSI and set the applicable IP addresses that I had created in my SAN topology document mentioned above. Although the host ports can sit on the same subnets, it is best practice to use multiple subnets.

 

Jumping into the storage provisioning wizard, I decided to create 2 separate RAID 5 arrays. The first array contains disks 1 to 12 (and while I have controller ownership set to auto, it will be assigned to controller A), and the second array contains disks 13 to 24 (again, ownership is set to auto, but it will be assigned to controller B). After this, I assigned the LUN numbers, and then mapped the LUNs to all ports on the MSA 2040, ultimately allowing access to both iSCSI targets (and RAID volumes) from any port.

 

I’m now sitting here thinking “This was too easy”. And it turns out it was just that easy! The RAID volumes started to initialize.

 

At this point, I jumped on to my vSphere demo environment and configured the distributed iSCSI switches. I mapped the various uplinks to the various port groups, and confirmed that there was hardware link connectivity. I jumped into the software iSCSI initiator, typed in the discovery IP, and BAM! The iSCSI initiator found all available paths and both RAID disks I had configured. I did this for the other host as well, connected to the iSCSI target, formatted the volumes as VMFS, and I was done!

 

I’m still shocked that such a high-performance and powerful unit was this easy to configure and get running. I’ve had it running for 24 hours now and have had no problems. This DESTROYS my old storage configuration in performance; thankfully I can keep my old setup for a vDP (VMWare Data Protection) instance.

 

I’ve attached some pics below. I have to apologize for how ghetto the images/setup is. Keep in mind this is a test demo environment for showcasing the technologies and their capabilities.

 

HP MSA 2040 SAN - Front Image

HP MSA 2040 SAN – Front Image

HP MSA 2040 - Side Image

HP MSA 2040 – Side Image

HP MSA 2040 SAN with drives - Front Right Image

HP MSA 2040 SAN with drives – Front Right Image

HP MSA 2040 Rear Power Supply and iSCSI Controllers

HP MSA 2040 Rear Power Supply and iSCSI Controllers

HP MSA 2040 Dual Controller - Rear Image

HP MSA 2040 Dual Controller – Rear Image

HP MSA 2040 Dual Controller SAN - Rear Image

HP MSA 2040 Dual Controller SAN – Rear Image

HP Proliant DL 360p Gen8 HP MSA 2040 Dual Controller SAN

HP Proliant DL 360p Gen8
HP MSA 2040 Dual Controller SAN

HP MSA 2040 - With Power

HP MSA 2040 – With Power

HP MSA 2040 - Side shot with power on

HP MSA 2040 – Side shot with power on

HP Proliant DL360p Gen8 - UID LED on

HP Proliant DL360p Gen8 – UID LED on

HP Proliant DL360p Gen8 HP MSA 2040 Dual Controller SAN VMWare vSphere

HP Proliant DL360p Gen8
HP MSA 2040 Dual Controller SAN
VMWare vSphere

Apr 12, 2014
 

Recently I decided it was time to beef up my storage link between my demonstration vSphere environment and my storage system. My existing setup included a single HP DL360p Gen8, connected to a Synology DS1813+ via NFS.

I went out and purchased the appropriate (and compatible) HP 4 X 1Gb server NIC (Broadcom based, 4 ports), and connected the Synology device directly to the new server NIC (all 4 ports). I went ahead and configured an iSCSI target using a File LUN with ALUA (advanced LUN features). I configured the NICs on both the vSphere side and the Synology side, and enabled jumbo frames of 9000 bytes.

I connected to the iSCSI LUN and created a VMFS volume. I then configured Round Robin MPIO on the vSphere side of things (as always, I made sure to enable “Multiple iSCSI initiators” on the Synology side).

I started to migrate some VMs over to the iSCSI LUN. At first I noticed it was going extremely slow. I confirmed that traffic was being passed across all NICs (also verified that all paths were active). After the migration completed I decided to shut down the VMs and restart to compare boot times. Booting from the iSCSI LUN was absolutely horrible, the VMs took forever to boot up. Keep in mind I’m very familiar with vSphere (my company is a VMWare partner), so I know how to properly configure Round Robin, iSCSI, and MPIO.

I then decided to tweak some settings on the ESXi side of things. I configured the Round Robin policy to IOPS=1, which helped a bit. I then changed the RR policy to bytes=8800, which, after numerous other tweaks, I determined achieved the highest performance to the storage system over iSCSI.
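For reference, those path-selection tweaks are made per device with esxcli; the naa identifier below is a placeholder for your LUN’s device ID:

```shell
# Switch the Round Robin policy to rotate paths every I/O (the IOPS=1 tweak).
# naa.xxxxxxxxxxxxxxxx is a placeholder device ID; find yours with
# "esxcli storage nmp device list".
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1

# Or rotate after a byte count instead (the bytes=8800 setting above):
esxcli storage nmp psp roundrobin deviceconfig set \
  --device naa.xxxxxxxxxxxxxxxx --type bytes --bytes 8800
```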

This config was used for a couple of weeks, but ultimately I was very unsatisfied with the performance. I know it’s not very accurate, but looking at the Synology resource monitor, each gigabit link over iSCSI was only achieving 10-15MB/sec under high load (single contiguous copies) that should have resulted in 100MB/sec and higher per link. The combined LAN throughput as reported by the Synology device across all 4 gigabit links never exceeded 80MB/sec. File transfers inside of the virtual machines couldn’t get higher than 20MB/sec.

I have a VMWare vDP (VMWare Data Protection) test VM configured, which includes a performance analyzer inside of the configuration interface. I decided to use this to test some specs (I’m too lazy to configure a real I/O throughput test, since I know I won’t be continuing to use iSCSI on the Synology with the horrible performance I’m getting). The performance analyzer tests run for 30-60 minutes, and measure reads and writes in MB/sec and seeks per second. I tested 3 different datastores.

 

Synology DS1813+ NFS over 1 X Gigabit link (1500MTU):

Read 81.2MB/sec, Write 79.8MB/sec, 961.6 Seeks/sec

Synology DS1813+ iSCSI over 4 x Gigabit links configured in MPIO Round Robin BYTES=8800 (9000MTU):

Read 36.9MB/sec, Write 41.1MB/sec, 399.0 Seeks/sec

Custom built 8 year old computer running Linux MD Raid 5 running NFS with 1 X Gigabit NIC (1500MTU):

Read 94.2MB/sec, Write 97.9MB/sec, 1431.7 Seeks/sec

 

Can someone say WTF?!?!?!?! As you can see, it appears there is a major performance hit with the DS1813+ using 4 Gigabit MPIO iSCSI with Round Robin. It’s half the speed of a single link 1 X Gigabit NFS connection. Keep in mind I purchased the extra memory module for my DS1813+ so it has 4GB of memory.

I’m kind of choked I spent the money on the extra server NIC (it was over $500.00). I’m also surprised that my custom-built NFS server from 8 years ago (with 4-year-old drives) and only 5 drives is performing better than my 8-drive DS1813+. All drives used in both the Synology and the custom-built NFS box are Seagate Barracuda 7200RPM drives (the custom box has 5 X 1TB drives configured in RAID 5; the Synology has 8 X 3TB drives configured in RAID 5).

I won’t be using iSCSI or iSCSI MPIO again with the DS1813+, and I actually plan on retiring it as my main datastore for vSphere. I’ve finally decided to bite the bullet and purchase an HP MSA2024 (Dual Controller with 4 x 10Gb SFP+ ports) to provide storage for my vSphere test/demo environment. I’ll keep the Synology DS1813+ online as an NFS vDP backup datastore.

Feel free to comment and let me know how your experience with the Synology devices using iSCSI MPIO is/was. I’m curious to see if others are experiencing the same results.

 

UPDATE – June 6th, 2014

The other day, I finally had time to play around and do some testing. I created a new FileIO iSCSI target, connected it to my vSphere test environment, and configured round robin. While running some tests on the newly created datastore, the iSCSI connections kept disconnecting, to the point where the datastore wasn’t usable.

I scratched that, and tried something else.

I deleted the existing RAID volume, created a new RAID 5 volume, and dedicated it to a Block I/O iSCSI target. I connected it to my vSphere test environment and configured round robin MPIO.
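For reference, this is roughly how Round Robin gets configured on the ESXi host side; the naa identifier below is a placeholder, so substitute your own LUN’s device ID:

```shell
# Set the path selection policy for the Synology LUN to Round Robin.
# naa.xxxxxxxxxxxxxxxx is a placeholder -- find your LUN's real ID with:
#   esxcli storage nmp device list
esxcli storage nmp device set \
    --device naa.xxxxxxxxxxxxxxxx \
    --psp VMW_PSP_RR

# Switch the Round Robin policy from the default (1000 IOPS per path)
# to rotating paths every 8800 bytes, matching my earlier testing.
esxcli storage nmp psp roundrobin deviceconfig set \
    --device naa.xxxxxxxxxxxxxxxx \
    --type bytes \
    --bytes 8800
```

These commands are run from the ESXi shell (or via SSH) on each host that mounts the target.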

At first all was going smoothly until, once again, the connections started dropping. Logging in to the DSM, absolutely no errors were being reported and everything appeared fine. Yet all connections from the ESXi host were down.

I shut down the ESXi host, then shut down and restarted the DS1813+. I waited for it to come back up, but it wouldn’t. I let it sit there and waited 2 hours for the IP to finally become pingable. I tried to connect to the web interface, however it would only load portions of the page over extended periods (it took 4 hours to load the interface). Once inside, it was EXTREMELY slow. Yet it reported that everything was fine, everything was up, and the disks were healthy as well.

I booted the ESXi host and tried to connect, but it couldn’t reach the iSCSI targets. Finally the Synology unit became unresponsive.

Since I only had a few test VMs loaded on the Synology device, I decided to just go ahead and do a factory reset on the unit (I noticed new firmware had become available that day). I downloaded the firmware and started the factory reset (which, again, took forever since the web interface was crawling along).

After restarting, the unit was unresponsive. I waited a couple of hours and, again, the web interface finally responded but was extremely slow. It took a couple of hours to get through the setup page, and a couple more hours for the unit to boot.

Something was wrong, so I restarted the unit yet again, and again, and again.

This time, the alarm light was illuminated on the unit, and one of the drive lights wouldn’t come on. Again, extreme unresponsiveness. I finally got access to the web interface, which reported the temperature of one of the drives as critical, but claimed the drive was still functioning and all drives were OK. I shut off the unit, removed the suspect drive, and restarted it; all of a sudden it was extremely responsive.

I hooked the removed drive up to another computer and confirmed that it had indeed failed.

I replaced the drive with a new one (same model) and ran three tests: one with NFS, one with FileIO iSCSI, and one with BlockIO iSCSI. All of a sudden the unit was working fine, with absolutely no iSCSI connections dropping. I tested the iSCSI targets under load for some time and noticed considerable performance increases with iSCSI, and no connection drops.

Here are some thoughts:
-Two things could have fixed the connection drops: either the drive was acting up all along, or the new version of DSM fixed the iSCSI connection drops.

-While FileIO performance has increased to around ~120-160MB/sec from ~50MB/sec, I’m still not even close to maxing out the 4 x 1Gb interfaces.

-I also noticed a significant performance increase with NFS, so I’m leaning towards the drive having been acting up since day one (seeks per second increased threefold after replacing the drive and re-testing NFS). I/O wait has been significantly reduced.

-Why did the Synology unit freeze up once this drive really started dying? The drive should have been marked as failed instead of taking the entire Synology unit down with it.

-Why didn’t the drive get marked as failed at all? I regularly performed SMART tests and checked drive health, and there were absolutely no errors. Even when the unit was at a standstill, it still reported the drive as working fine.

Either way, the iSCSI connection drops aren’t occurring anymore, and iSCSI performance is significantly better. However, I wish I could hit 200MB+/sec.

At this point the unit is usable for iSCSI using FileIO; however, I was disappointed with BlockIO performance (BlockIO should be faster, and I have no idea why it isn’t).

For now, I have an NFS datastore configured (using this for vDP backup), although I will be creating another FileIO iSCSI target and will do some more testing.

Apr 11 2014
 

Earlier today I was doing some work in my demonstration vSphere environment, when I had to modify some settings on one of my VMs that is set up with the latest virtual hardware version (which means you can only edit its settings inside of the vSphere Web Client).

To my surprise, immediately upon logging in I received an error: “ManagedObjectReference: type = Datastore, value = datastore-XXXX, serverGuid = XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX refers to a managed object that no longer exists or has never existed”. Also, after clicking OK, I noticed that a lot of the information presented inside the vSphere Web Client was inaccurate. Some virtual machines were reported as sitting on different datastores (they had been, weeks ago, but had since been moved), and some virtual machines were reported as powered off when in fact they were on and running.

Symptoms:

-Errors about missing datastores when logging on to the vSphere Web Client.

-Virtual machines reported as powered off even though they are running.

-VMs shown in the vSphere Web Client as being stored on a different datastore than they actually are.

-Disconnecting and reconnecting hosts has no effect on the issue.

 

This freaked me out; it was a true “Uhh Ohh” moment. Something was corrupt. Keep in mind that ALL information in the (desktop) vSphere Client was correct and accurate; it was only the vSphere Web Client that was having issues.

 

Anyways, I tried a bunch of things and spent hours working on the problem. FINALLY I came up with a fix. If you are running into this issue, PLEASE take a snapshot of your vCenter server before attempting the fix, so that you can roll back if you screw anything up (which I had to do multiple times, lol).

The Fix:

1) Stop the “VMware vCenter Inventory Service”.

2) Delete the “data” folder inside of “Program Files\VMware\Infrastructure\Inventory Service”.

3) Open a Command Prompt with elevated privileges. Change your working directory to “Program Files\VMware\Infrastructure\Inventory Service\scripts”.

4) Run “createDB.bat”. This will reset and create a new Inventory Service database.

5) Run “is-change-sso.bat https://computername.domain.com:7444/lookupservice/sdk “administrator@vSphere.local” “SSO_PASSWORD””. Change computername.domain.com to the FQDN of your vCenter server, and change SSO_PASSWORD to your Single Sign-On admin password.

6) Start the “VMware vCenter Inventory Service”. At this point, if you try to log on to the vSphere Web Client, it will error with: “Client is not authenticated to VMware Inventory Service”. We’ve already won half the battle.

7) We now need to register the vCenter Server with the newly reset Inventory Service. In the elevated Command Prompt (opened above), change the working directory to “Program Files\VMware\Infrastructure\VirtualCenter Server\isregtool”.

8) Run “register-is.bat https://computername.domain.com:443/sdk https://computername.domain.com:10443 https://computername.domain.com:7444/lookupservice/sdk”. Change computername.domain.com to your FQDN for your vCenter server.

9) Restart the “VMware VirtualCenter Server” service. This will also restart the Management Web services.
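Put together, the whole procedure looks something like this from an elevated Command Prompt. The C:\ paths assume a default install, and vcenter.domain.com / SSO_PASSWORD are placeholders for your own FQDN and Single Sign-On password:

```shell
:: Steps 1-2: stop the Inventory Service and clear its data folder.
net stop "VMware vCenter Inventory Service"
rmdir /s /q "C:\Program Files\VMware\Infrastructure\Inventory Service\data"

:: Steps 3-5: rebuild the database and re-point the service at SSO.
cd /d "C:\Program Files\VMware\Infrastructure\Inventory Service\scripts"
createDB.bat
is-change-sso.bat https://vcenter.domain.com:7444/lookupservice/sdk "administrator@vSphere.local" "SSO_PASSWORD"

:: Step 6: bring the Inventory Service back up.
net start "VMware vCenter Inventory Service"

:: Steps 7-8: re-register vCenter with the reset Inventory Service.
cd /d "C:\Program Files\VMware\Infrastructure\VirtualCenter Server\isregtool"
register-is.bat https://vcenter.domain.com:443/sdk https://vcenter.domain.com:10443 https://vcenter.domain.com:7444/lookupservice/sdk

:: Step 9: restart the vCenter Server service.
net stop "VMware VirtualCenter Server"
net start "VMware VirtualCenter Server"
```

Don’t run this blindly; it’s just the numbered steps above condensed, and the snapshot warning still applies.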

 

BAM, it’s fixed! I went ahead and restarted the entire server that vCenter was running on. After that, all was good and everything looked great inside the vSphere Web Client. I’m actually noticing it runs WAY faster and isn’t as glitchy as it was before.

Happy Virtualizing! :)