Nov 21, 2015
 
HP MSA2040 Dual Controller SAN with 10Gb DAC SFP+ cables

I’d say 50% of all the e-mails and comments I’ve received from the blog in the last 12 months or so have been from readers requesting pictures or proof of the HPE MSA 2040 Dual Controller SAN being connected to servers via 10Gb DAC cables. This should also apply to the newer generation HPE MSA 2050 Dual Controller SAN.

I decided to finally post the pics publicly! Let me know if you have any questions. In the pictures you’ll see the SAN connected to 2 X HPE ProLiant DL360p Gen8 servers via 4 X HPE 10Gb DAC (Direct Attach Cable) cables.

Connection of SAN from Servers

Connection of DAC Cables from SAN to Servers
See below for a video demonstrating host connectivity:

Nov 17, 2015
 

I decided to whip up a post about an issue that I have been running into more and more as of late.

Typically, the situation goes as follows: a customer has an environment where industrial machines are running Windows CE Embedded computers as controllers. These systems are typically configured to either host files or grab files off the network. They tend to be dated, and IT staff are unable to get the Windows CE based machines to connect to network shares on Windows servers running SMB version 2 or later (i.e. Windows Server 2008 and newer).


This is due to authentication and protocol incompatibilities. Over the years, Windows file sharing (SMB, to be precise) has come a long way, with numerous enhancements to security and authentication mechanisms. Older Windows CE clients generally speak only SMB 1.0 with legacy authentication, which newer versions of Windows Server increasingly restrict.

In most cases I’ve seen, companies either give up or hire someone who manages to resolve it, but the resolution never gets documented.


The solution I have come up with could be considered somewhat controversial (given that Windows XP has reached its end of life), but I’ve found a way.

In my experience, I have been able to provide a file sharing solution by implementing a Windows XP based “proxy” machine (calling it a proxy by name, not by actual usage). Configuring a Windows XP machine, enabling the “guest” account on it, and configuring file shares allows users on the network to dump files onto these “proxy” shares, which in turn are browsable and accessible to the Windows CE machines. The Windows XP machine can be joined to the domain to allow seamless authentication with other network users and computers, and it also maintains its own local user database.

The guest account needs to be enabled because the Windows CE machines typically browse and perform the initial file sharing handshake as “guest”. You’ll also need a local user account configured on the Windows XP machine; this is the account the Windows CE machine will actually use to authenticate against the share and access it.

Please note, you may also have to go into the Local Security Policy and allow guest access to file shares and network browsing on the Windows XP machine.

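If you want to confirm from the network side that the XP “proxy” share really does accept the legacy authentication these CE-era devices use, a quick test script can help. Below is a minimal sketch using the third-party pysmb Python library; the host name, IP address, account, and password in it are hypothetical placeholders for your environment, and forcing NTLMv1 only roughly approximates what an old CE client negotiates:

```python
# Minimal sketch: check that a share accepts a legacy (NTLMv1) logon,
# roughly approximating a Windows CE era client. Requires the
# third-party "pysmb" package (pip install pysmb).
# "XPPROXY", 192.168.1.50 and "cncuser" are hypothetical placeholders.
from smb.SMBConnection import SMBConnection

conn = SMBConnection(
    username="cncuser",     # the local account created on the XP machine
    password="password",
    my_name="TESTCLIENT",   # NetBIOS name of the machine running this test
    remote_name="XPPROXY",  # NetBIOS name of the XP "proxy" machine
    use_ntlm_v2=False,      # force legacy NTLMv1 authentication
)

if conn.connect("192.168.1.50", 139):  # NetBIOS session service port
    print("Connected with legacy authentication")
    for share in conn.listShares():
        print("Visible share:", share.name)
    conn.close()
else:
    print("Connection refused - re-check guest and Local Security Policy settings")
```

If this script can connect and list shares with NTLMv2 disabled, the CE controllers generally can too.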

As always, since Windows XP has reached its end of life, no more security updates are available. You’ll want to make sure you have other security measures in place to mitigate any concerns that could arise from having an active XP OS running on the network. If anyone else has a better solution or can comment further on this, please do! I’ve had to deal with this issue multiple times for CNC machines with older CE based controllers, as well as handheld Windows CE devices that require network share access.

Nov 17, 2015
 

I recently had a reader reach out to me for assistance with an issue they were having with a VMware implementation. They were experiencing issues uploading files and performing I/O on Linux based virtual machines.

Originally it was believed this was a networking issue, since the performance problems were one way only (when uploading/writing to storage) and weren’t experienced with all virtual machines. Other behaviours noticed were slow upload speeds to the vSphere client file browser and slow Physical-to-Virtual (P2V) migrations.

After troubleshooting and exploring the issue with them, we noticed that the write cache was not enabled on the RAID array providing the storage for the vSphere implementation.

Please note that in virtual environments with storage backed by RAID arrays, RAID write cache is a must (for performance reasons), and battery-backed RAID cache is a must (for protection and data integrity). Caching allows write operations to be acknowledged immediately and then committed to multiple disks at once, often optimizing the write procedures as they are processed. This dramatically increases observed performance, since the ESXi hosts and virtual machines aren’t waiting for each write operation to commit before proceeding to the next.

You’ll notice that under Windows virtual machines this issue won’t be observed on writes, since Windows VMs typically cache file transfers in RAM and then write to disk in the background. This can give the impression that there are no storage issues at all when troubleshooting, making one believe the problem is related to the Linux VMs, the ESXi hosts themselves, or some odd networking issue.

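If you ever need to demonstrate this difference, it’s easy to measure from inside a Linux VM. Below is a rough, hypothetical Python sketch (not from the original troubleshooting session) that times the same workload twice: once buffered through the OS cache, and once forcing every write to actually commit to disk. On an array with no write cache, the fsync’d run will be dramatically slower:

```python
import os
import time

# Toy benchmark: write the same 32 MB twice.
# The buffered run shows the RAM-cached behaviour that makes storage
# look "fine" in casual testing; the fsync'd run exposes the true
# commit latency hitting the RAID array on every write.
BLOCK = 64 * 1024           # 64 KB per write
COUNT = 512                 # 512 x 64 KB = 32 MB total
DATA = os.urandom(BLOCK)

def timed_write(path, sync_every_write):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, DATA)
        if sync_every_write:
            os.fsync(fd)    # force this write to commit to disk now
    os.fsync(fd)            # flush anything still sitting in OS cache
    os.close(fd)
    os.unlink(path)
    return time.time() - start

print("buffered: %.2f s" % timed_write("testfile.bin", False))
print("fsync'd:  %.2f s" % timed_write("testfile.bin", True))
```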

Again, I cannot stress enough that you should have a battery-backed cache module, or a capacitor-backed flash module, providing the cache functions.

If you do enable write cache without backing it with a battery, corruption can occur on the RAID array if there is a power failure or if the RAID controller freezes. Battery-backed cache allows the cached write procedures to be committed to disk on the next restart of the storage unit/controller, thus providing protection.

Nov 16, 2015
 

After upgrading to Windows 10, I immediately noticed that my 3 display setup no longer worked. It was powered by two NVidia graphics cards (a GeForce GT 640 and a GeForce GTX 550 Ti).

For some time, I couldn’t find anything on the internet explaining why I had lost my multi-display setup. Finally I came across a forum post that pointed to this NVidia Support KB article: http://nvidia.custhelp.com/app/answers/detail/a_id/3707/~/windows-10-will-not-load-the-nvidia-display-driver-for-my-older-graphics-card

Essentially, Fermi based GPUs utilize WDDM 1.3 mode, whereas the newer Kepler and Maxwell architectures support WDDM 2.0. Windows 10 is not able to load multiple display drivers that use different WDDM versions at the same time.

For a really long time I waited, and no updates restored the functionality, until September when I performed a driver update and out of nowhere the displays started to work. I assumed the issue had been fixed permanently; however, after updating once again, I lost the capability. In that case I reverted to the previous driver.

I’m not sure if they updated the Fermi driver to support WDDM 2.0; I just know it started working. Then, after a short while, another driver update stopped it from working again. Once more, a driver rollback fixed the issue.


I recently upgraded to the latest build of Windows 10, completely lost the displays once again, and this time also lost the ability to roll back the driver.

It was time to find out exactly which driver version WORKS with the Fermi, Kepler, and Maxwell architectures all at once.

After playing around, I found the WORKING NVidia driver version to be: 358.50

Load this version up, and you’ll be good to go! Hope it saves you some time!
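
If you want to double-check which driver version Windows actually loaded for each card after installing, you can query WMI. Here’s a small, hypothetical Python convenience sketch (running the wmic command directly works just as well):

```python
import subprocess

# List every display adapter and the driver version Windows loaded for
# it, via WMI (the wmic utility ships with Windows 10). After installing
# 358.50, both NVidia cards should report the same DriverVersion.
output = subprocess.check_output(
    ["wmic", "path", "Win32_VideoController",
     "get", "Name,DriverVersion", "/format:list"],
    text=True,
)

for line in output.splitlines():
    line = line.strip()
    if line:              # wmic pads its list output with blank lines
        print(line)       # e.g. "Name=NVIDIA GeForce GT 640"
```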