Aug 27, 2018
 
Right side of MSA 2040

So, what happens in a worst-case scenario where your backup system fails, you don’t have any VM snapshots, and the last thing standing between you and complete data loss is your SAN storage system’s LUN snapshots?

Well, first you fire whoever purchased and implemented the backup system, and then you start restoring the VM (or VMs) from your SAN LUN snapshots.

While I’ve never had to do this in the past (all the disaster recovery solutions I’ve designed and sold have been tested and work as intended), I’ve always been curious what the process would be like. Today I decided to try it out and develop a procedure for restoring a VM from a SAN storage LUN snapshot.

For this test I pretended a VM was corrupt on my VMware vSphere cluster and then restored it to a previous state from a LUN snapshot on my HPE MSA 2040 Dual Controller SAN (the process is identical on the HPE MSA 2050 and MSA 2052).

To accomplish the restore, we’ll need to create a host mapping on the SAN that presents the LUN snapshot to the hosts on a new, unused LUN number. We then add and mount the VMFS volume (residing on the snapshot) to the host(s) while assigning it a new signature, and finally vMotion the VM from the snapshot’s VMFS volume back to the original datastore.

Important Notes (Read first):

  • When mounting a VMFS volume from a SAN snapshot, you MUST RE-SIGNATURE THE SNAPSHOT VMFS volume. Not doing so can cause the ESXi hosts to confuse the snapshot copy with your original production volume.
  • The snapshot cannot be mapped as read-only; VMFS volumes must be marked as writable in order to be mounted on ESXi hosts.
  • You must follow the proper procedure to gracefully dismount and detach the VMFS volume and storage device before removing the snapshot’s host mapping on the SAN.
  • We use Storage vMotion to perform a high-speed move and recovery of the VM. If you’re not licensed for Storage vMotion, you can use the datastore file browser and copy/move from the snapshot VMFS volume to the live production VMFS volume, however this may be slower.
  • During this entire process you do not touch, modify, or change any settings on your existing active production LUNs (or LUN numbers).
  • Restoring a VM from a SAN LUN snapshot will restore a crash consistent copy of the VM. The VM when recovered will believe a system crash occurred and power was lost. This is NOT a graceful application consistent backup and restore.
  • Please read your SAN documentation for the procedure to access SAN snapshots, and create host mappings. With the MSA 2040 I can do this live during production, however your SAN may be different and your hosts may need to be powered off and disconnected while SAN configuration changes are made.
  • Pro tip: You can also power on and initialize the VM from the snapshot before initiating the storage vMotion. This will allow you to get production services back online while you’re moving the VM from the snapshot to production VMFS volumes.
  • I’m not responsible if you damage, corrupt, or cause any damage or issues to your environment if you follow these procedures.

We are assuming that you have already either deleted the damaged VM, or removed it from your inventory and renamed its folder on the live VMFS datastore (for example, renaming the folder from “SRV01” to “SRV01.bad”). If you kept the damaged VM’s folder, make sure you have enough free space on the datastore for the restored VM as well.
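
If you’re doing that rename from the ESXi shell rather than the datastore browser, it’s just a move on the VMFS path. A quick sketch, where the datastore and VM names are placeholders for your own:

# Rename the damaged VM's folder on the live datastore (names are examples)
mv /vmfs/volumes/Datastore01/SRV01 /vmfs/volumes/Datastore01/SRV01.bad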

Procedure:

Mount the VMFS volume on the LUN snapshot to the ESXi host(s)
  1. Identify the VM you want to recover, write it down.
  2. Identify the datastore that the VM resides on, write it down.
  3. Identify the SAN and identify the LUN number that the VMFS datastore resides on, write it down.
  4. Identify the LUN Snapshot unique name/id/number and write it down, confirm the timestamp to make sure it will contain a valid recovery point.
  5. Log on to the SAN and create a host mapping to present the snapshot (you recorded above) to the hosts using a new and unused LUN number.
  6. Log on to your ESXi host and navigate to configuration, then storage adapters.
  7. Select the iSCSI initiator and click the “Rescan Storage Adapters” button to rescan all iSCSI LUNs.

    VMware ESXi Host Rescan Storage Adapter

  8. Ensure both check boxes are checked and hit “Ok”, then wait for the scan to complete (as shown in the “Recent Tasks” window).

    VMware ESXi Host Rescan Storage Adapter Window for VMFS Volume and Devices

  9. Now navigate to the “Datastores” tab under configuration, and click on the “Create a new Datastore” button as shown below.

    VMware ESXi Host Add Datastore Window

  10. Continue with “VMFS” selected and proceed to the next step.
  11. In the next window, you’ll see your existing datastores, as well as your new datastore (from the snapshot). You can leave the “Datastore name” as is since this value will be ignored. In this window you’re going to select the new VMFS datastore from the snapshot. Make sure you confirm this by looking at the LUN number, as well as the value under “SnapshotVolume”. It is critical that you select the snapshot in this window (it should be the new LUN number you added above).
  12. Select next and continue.
  13. On the next window, “Mount Option”, change the radio button to “Assign a new signature”. This is critical! It assigns a new signature to differentiate the snapshot copy from your existing production datastore so that the ESXi hosts don’t confuse the two.
  14. Continue with the wizard and complete the mount process. At this point ESXi will re-signature the VMFS volume and rename it along the lines of “snap-XXXXXXXX-OriginalVolumeName”.
  15. You can now browse the VMFS datastore residing on the LUN snapshot and do anything you’d normally be able to do with a regular datastore. (If you prefer the command line, a hedged esxcli sketch of the same rescan and re-signature steps is included right after this list.)
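
If you’d rather do the rescan and re-signature from the ESXi shell instead of the wizard, here is a minimal sketch using esxcli. This assumes SSH access to the host; the volume label “Datastore01” is only a placeholder for your own datastore name, so double-check the snapshot list output before resignaturing anything.

# Rescan all storage adapters so the newly mapped snapshot LUN is discovered
esxcli storage core adapter rescan --all

# List VMFS volumes that ESXi detects as unresolved snapshot/replica copies
esxcli storage vmfs snapshot list

# Re-signature the snapshot copy, referencing it by its original volume label
esxcli storage vmfs snapshot resignature -l "Datastore01"
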
Copy/Move/vMotion the VM from the snapshot VMFS volume to your production VMFS volume

Note: The next steps are only for those licensed for Storage vMotion. If you aren’t, you’ll need to use the copy or move function in the datastore file browser to copy or move the VMs to your live production VMFS datastores:

  1. Now we’ll go to the vCenter/ESXi host storage area in the web client, and using the “Files” tab, we’ll browse the snapshot’s VMFS datastore that we just mounted.
  2. Locate the folder for the VM(s) you want to recover, open the folder, right click on the vmx file for the VM, and select “Register VM”. Repeat this for any of the VMs you want to recover from the snapshot. Complete the wizard for each VM you register and add it to a host. (A hedged vim-cmd sketch for this registration step follows this list.)
  3. Go back to your “Hosts and VMs” view; you’ll now see the VMs have been added.
  4. Select and right click on the VM you want to move from the snapshot datastore to your production live datastore, and select “Migrate”.
  5. In the vMotion migrate wizard, select “Change Storage only”.
  6. Continue through the wizard, and Storage vMotion the VM from the snapshot VMFS volume to your production VMFS volume. Wait for the vMotion to complete.
  7. After the storage vMotion is complete, boot the VM and confirm everything is functioning.
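
If you don’t have vCenter handy (or simply prefer the shell), the VM registration in step 2 can also be done directly on an ESXi host with vim-cmd. A minimal sketch; the datastore and VM names below are placeholders, so adjust the path to match your re-signatured snapshot volume:

# Register the VM from the snapshot datastore (path is an example placeholder)
vim-cmd solo/registervm /vmfs/volumes/snap-XXXXXXXX-Datastore01/SRV01/SRV01.vmx

# Confirm the VM was registered and note its VM ID
vim-cmd vmsvc/getallvms
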
Gracefully unmount, detach, and remove the snapshot VMFS from the ESXi host, and then remove the host mapping from the SAN
  1. On each of your ESXi hosts that have access to the SAN, go to the “Datastores” section under the ESXi host’s configuration, right click on the snapshot VMFS datastore, and select “Unmount”. You’ll need to repeat this on each ESXi host that may have automounted the snapshot’s VMFS volume.
  2. On each of your ESXi hosts that have access to the SAN, go to the “Storage Devices” section under the ESXi host’s configuration and identify (by LUN number) the “disk” that is the snapshot LUN. Select and highlight the snapshot LUN disk, select “All Actions”, and select “Detach”. Repeat this on each host. (A hedged esxcli sketch of the unmount and detach steps follows this list.)
  3. Double check and confirm that the snapshot VMFS datastore (and disk object) have been unmounted and detached from each ESXi host.
  4. You can now log in to your SAN and remove the host mapping for the snapshot LUN. We will no longer present the snapshot LUN to any of the hosts.
  5. Back on the ESXi hosts, navigate to “Storage Adapters”, select the iSCSI initiator adapter, and click “Rescan Storage Adapters”. Repeat this for each ESXi host.

    VMware ESXi Host Rescan Storage Adapter

  6. You’re done!
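
If you have many hosts to clean up, the unmount and detach steps above can also be scripted per host with esxcli. A minimal sketch, assuming the re-signatured volume label and the snapshot LUN’s naa identifier from your own environment (the values below are placeholders):

# Unmount the snapshot datastore by its (re-signatured) volume label
esxcli storage filesystem unmount -l "snap-XXXXXXXX-Datastore01"

# Detach the underlying device (use the naa identifier of the snapshot LUN)
esxcli storage core device set --state=off -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# After the host mapping has been removed on the SAN, rescan the adapters
esxcli storage core adapter rescan --all
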
Aug 26, 2018
 
Fedora Logo

One of the coolest things I love about running VMware Horizon View and VDI is that you can repurpose old computers, laptops, or even netbooks into perfect VDI clients running Linux! This is extremely easy to do and gives life to old hardware you may have lying around (and we all know there’s nothing wrong with that).

I generally use Fedora and the VMware Horizon View Linux client to accomplish this. See below to see how I do it!

 

Quick Guide

  1. Download the Fedora Workstation install or netboot ISO from here.
  2. Burn it to a DVD/CD if you have a DVD/CD drive, or you can write it to a USB stick using this method here.
  3. Install Fedora on to your laptop/notebook/netbook using the workstation install.
  4. Update your Fedora Linux install using the following command
    dnf -y upgrade
  5. Install the prerequisites for the VMware Horizon View Linux client using these commands
    dnf -y install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
    dnf -y install gstreamer-plugins-ugly gstreamer-plugins-bad gstreamer-ffmpeg xine-lib-extras-freeworld libssl* libcrypto* openssl-devel libpng12 systemd-devel libffi-devel
    
  6. To fix an issue with package versions and dependencies, run the following commands
    ln -s /usr/lib64/libudev.so.1 /usr/lib64/libudev.so.0
    ln -s /usr/lib64/libffi.so.6 /usr/lib64/libffi.so.5
  7. Download the VMware Horizon View Linux client from here
  8. Make the VMware bundle executable and then run the installer using these commands (your file name may be different depending on build version number)
    chmod 777 VMware-Horizon-Client-4.8.0-8518891.x64.bundle
    sudo ./VMware-Horizon-Client-4.8.0-8518891.x64.bundle
  9. Complete the installation wizard
  10. You’re done!

To run the client, you can find it in the GUI applications list as “VMware Horizon Client”, or you can launch it by running “vmware-view”.
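
If you want to skip the connection dialog, the client can also be pointed at your Connection Server straight from the terminal. A minimal sketch, assuming the command-line options of your client version (run “vmware-view --help” to confirm the exact flags, and substitute your own server, user, and domain):

# Launch the Horizon client preconnected to a Connection Server (values are placeholders)
vmware-view --serverURL=https://view.example.com --userName=jdoe --domainName=EXAMPLE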

VMware Horizon View on Linux in action

Here is a VMware Horizon View Linux client running on an HP Mini 220 Netbook

Additional Notes:

-If you’re comfortable, instead of the workstation install, you can install the Fedora LXQt Desktop spin, which is a lightweight desktop environment perfect for low performance hardware or netbooks. More information and the download link for Fedora LXQt Desktop Spin can be found here: https://spins.fedoraproject.org/en/lxqt/

-If you installed Fedora Workstation and would like to install the LXQt window manager afterwards, you can do so by running the following command (after installing, at login prompt, click on the gear to change window managers):

dnf install @lxqt-desktop-environment

-Some of the prerequisites above in the guide may not be required, however I have installed them anyways for compatibility.

Aug 25, 2018
 
Fedora Logo

After doing a fresh install or upgrade of Fedora Linux (Fedora 28 in my case, but this applies to other versions as well), you may notice that the system gets stuck during boot on a black screen with a white cursor. The cursor will not move and there will be no drive activity.

This issue occurs with GNOME on my old HP Mini 210 Netbook every time I do a fresh install of Fedora on it (or upgrade it).

Follow the process below to temporarily boot and then permanently fix it.

Temporary fix

To get the system to boot:

  1. Power on the computer, and carefully wait for the GRUB bootloader to appear (the boot selection screen).
  2. When the GRUB bootloader appears, press the “e” key to edit the highlighted (default) boot entry.
  3. Scroll down until you get to the line starting with “linux16”, then use your right arrow key and scroll right until you get to the end of the kernel options (while scrolling right, you may scroll multiple lines down which is fine and expected). The line should finally end with “rhgb” and “quiet”.
  4. Remove “rhgb” and “quiet”, and then add “nomodeset=0”
  5. Press “CTRL+x” to boot the system.
  6. The system should now boot.

FYI: “rhgb” is the kernel switch/option for the Red Hat graphical boot, and “quiet” makes the system boot messages more quiet (who would have guessed).

Permanent Fix

To permanently resolve the issue:

  1. Once the system has booted, log in.
  2. Open a terminal window (Applications -> Terminal, or press the “Start” button and type terminal).
  3. Use your favorite text editor and edit the file “/etc/default/grub” (I use nano, which can be installed by running “dnf install nano”):
    nano /etc/default/grub
  4. Locate the line with the variable “GRUB_CMDLINE_LINUX”, and add “nomodeset=0” to the variables. Feel free to remove “rhgb” and “quiet” if you’d like a text-based boot. Here’s an example of my line after editing (yours will look different):
    GRUB_CMDLINE_LINUX="resume=/dev/mapper/fedora_da--netbook01-swap rd.lvm.lv=fedora_da-netbook01/root rd.lvm.lv=fedora_da-netbook01/swap nomodeset=0"
  5. Save the file and exit the text editor (CTRL+X to quit, then press “y” and Enter to save). (If you’d rather not hand-edit the grub files at all, see the grubby sketch after this list.)
  6. At the bash prompt, execute the following command to regenerate the grub.cfg file on the /boot partition from your new default file:
    grub2-mkconfig -o /boot/grub2/grub.cfg
  7. Restart your system, it should now boot!
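
On Fedora you can also let grubby edit the kernel arguments for you, instead of modifying /etc/default/grub and regenerating grub.cfg by hand. A minimal sketch, assuming grubby is installed (it is by default on Fedora) and that you want the change applied to every installed kernel:

# Add the nomodeset option to all kernel entries
grubby --update-kernel=ALL --args="nomodeset=0"

# Optionally drop the graphical/quiet boot options as well
grubby --update-kernel=ALL --remove-args="rhgb quiet"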

Please Note: Always make sure you have a full system backup before modifying any system files!

Aug 25, 2018
 
Fedora Logo

A fun fact that a lot of users still aren’t aware of is that you can create Fedora and CentOS bootable media (a bootable USB stick) from the DVD/CD ISO image using the Linux dd command.

So if you have an existing and running Linux install, you can use this method to quickly write an ISO file to a USB stick!

 

Here’s How!

  1. Get your USB stick handy, make sure it’s big enough to store the ISO file you want to download.
  2. Download your preferred ISO DVD or CD Image for Installation from CentOS or Fedora.
  3. Connect your USB stick, open a terminal session and run the following command to identify the device name of the USB stick (mine was sdb for /dev/sdb):
    [root@StephenW-X1 ~]# dmesg | grep removable
    [  171.890670] sd 1:0:0:0: [sdb] Attached SCSI removable disk
  4. Issue the following command to write the ISO image to the USB stick. Change the input filename and output device name to reflect your own (a slightly more careful variant with progress output is sketched after this list).
    [root@StephenW-X1 Downloads]# dd if=Fedora-Workstation-netinst-x86_64-28-1.1.iso of=/dev/sdb
    1193984+0 records in
    1193984+0 records out
    611319808 bytes (611 MB, 583 MiB) copied, 13.6777 s, 44.7 MB/s
  5. You’re done!
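
If you want progress output and a guaranteed flush to the stick before you unplug it, a slightly more careful variant of the same dd command looks like the sketch below. This assumes GNU coreutils dd (standard on Fedora and CentOS); lsblk is simply another way to double-check the device name, and /dev/sdb is only an example:

# Double-check which device is the USB stick before writing anything
lsblk -o NAME,SIZE,TRAN,MODEL

# Write the ISO with a larger block size, live progress, and a final flush to disk
dd if=Fedora-Workstation-netinst-x86_64-28-1.1.iso of=/dev/sdb bs=4M status=progress conv=fsync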

Please Note:

  • Choosing the wrong /dev/sd[x] device can cause you to write the ISO file over your hard drive, or another hard drive in your system. Make sure you select the right device name. If you’re unsure, don’t run the command.
  • You can also use the “Fedora Media Writer for Windows” available here: https://getfedora.org/fmw/FedoraMediaWriter-win32-4.1.1.exe to write an ISO image to USB if you’re running Windows.
Aug 22, 2018
 

HPE Moonshot

I had the pleasure of playing with a fully loaded HPE Moonshot 1500 Chassis and an HPE Edgeline EL4000 Converged Edge System last month during my visit to HPE headquarters in Toronto, Ontario. I like to think of this thing as the answer for high-density anything and everything!

HPE Moonshot 1500 Chassis

I’ve known about the HPE Moonshot portfolio for some time, however I didn’t understand how mammoth one of these chassis is until I saw one performing in real life.

HPE Moonshot 1500 Chassis with 45 Cartridges

The HPE Moonshot 1500 Chassis supports up to 45 cartridges, and up to 4 SoC (System on Chip) OS instances per cartridge, for a total of 180 OS instances in a 4.3U form factor (plan on 5U of rack space for a single 1500 chassis, or 13U for 3 chassis). The chassis also supports up to 2 switches and 2 uplink modules in addition to the 45 cartridges.

Prime uses for HPE Moonshot 1500 (remember, high-density everything):

  • VDI (Virtual Desktop Infrastructure via VMware or Microsoft)
  • HDI (Hosted Desktop Infrastructure via Citrix Provisioning Server)
  • Server consolidation and Virtualization
  • SDDC (Software Defined Data Center)
  • HPC (High Performance Computing, both Virtualized and Non-Virtualized workloads)
  • Energy Efficient Compute
  • EUC (End User Computing – Software defined end user desktops without virtualization)
  • Video Transcoding
  • Analytics and Interpretation
  • IoT and AI
  • Custom workloads

As you can see, you can virtually load up whatever you’d like on it that requires a CPU (HPE Moonshot can run both x86 and ARM architectures depending on which cartridges are utilized).

The chassis is monitored and managed via the HPE Moonshot 1500 Chassis Management module and the HPE Moonshot Provisioning Manager.

HPE Edgeline EL4000 Converged Edge System

The HPE Edgeline EL4000 was designed (you probably guessed it) for the edge. Whether it be the enterprise edge, media edge, or IoT edge, the EL4000 is a perfect fit.

HPE Edgeline EL4000 Converged Edge System

This bad boy supports up to 4 HPE Proliant server cartridges (m510 or m710x) as compute nodes in a 1U package. It also supports up to 4 PCIe cards or 4 PXIe modules, assignable to any of the compute modules.

Prime uses for the HPE Edgeline EL4000:

  • Edge Computing (AI, IoT EDGE)
  • ROBO (Remote Office Branch Office)
  • Server Consolidation and Virtualization (ROBO)
  • VDI (Virtual Desktop Infrastructure)
  • HDI (Hosted Desktop Infrastructure)
  • Video Transcoding
  • Industrial applications (Machine monitoring, Condition Monitoring)
  • Edgeline data analytics
  • Industrial/Manufacturing Quality Control and Quality Assurance (Video Analytics and Interpretation)
  • SMB Applications

The EL4000 has iLO (Integrated Lights-Out) built in, which provides management and monitoring. This unit also supports GPU accelerator/compute cards such as the Nvidia P4 graphics accelerator (specifically an Nvidia Tesla P4 8GB computational PCIe card).

HPE Moonshot Cartridges

With the flexibility of different cartridges, along with Moonshot being software defined, you can highly customize whatever workload you may be running.

HPE Proliant m800 Moonshot Cartridge Front View

HPE Proliant m800 Moonshot Cartridge Side View

The following cartridges are currently available for the HPE Moonshot platform:

  • HPE Proliant m710p – Server or Desktop Virtualization, includes Intel Iris Pro P6300 graphics for VDI deployments (supported by VMware vSphere for vDGA passthrough and vSGA) or video transcoding.
  • HPE Proliant m710x – Server or Desktop Virtualization, includes Intel Iris Pro P580 graphics for VDI deployments (supported by VMware vSphere for vDGA passthrough and vSGA) or video transcoding.
  • HPE Proliant m700p – Designed for high-performance Citrix Mobile Workspaces (high-density EUC) for 4 desktops per cartridge with AMD Radeon HD 8000 graphics.
  • HPE Proliant m510 – Features the Xeon D processor targeting high performance, AI, analytics, machine learning, and IoT workloads.

As you can see, there is quite a bit of flexibility in the cartridges you can roll out. I get really excited when I think of VDI with Moonshot, simply because the Intel Iris Pro P580 and P6300 are fully supported on VMware’s HCL for vDGA and vSGA graphics for vSphere 6.5 and 6.7.

There are also retired/discontinued cartridges (such as the HPE Proliant m800) which are beyond the scope of this blog post.

HPE Moonshot Networking

On the HPE Moonshot 1500 Chassis, networking is handled inside of the chassis via 1 or 2 network switch modules and uplink modules. You’ll then connect the uplinks from the uplink modules to your physical network. You can connect to your network via QSFP+ or SFP+ connections using DAC (direct attached cables) or fiber cables with transceivers, at speeds of 40Gb or 10Gb.

The Moonshot 1500 chassis supports the following switch modules:

  • Moonshot-45Gc Switch – 1Gb Switch connectivity for m510, m510-16c, m710x cartridges and works with the Moonshot 6 x SFP+ Uplink Module
  • Moonshot-45XGc Switch – 1Gb or 10Gb Switch connectivity for m510, m510-16c, m710x cartridges and works with the Moonshot 16 x SFP+ Uplink Module or the 4 QSFP+ Uplink Module
  • Moonshot-180XGc Switch – 1Gb or 10Gb Switch connectivity for m510, m510-16c, m710x cartridges, and 1Gb Switch connectivity for m700p and works with the Moonshot 16 x SFP+ Uplink Module or the 4 QSFP+ Uplink Module

On the HPE Edgeline EL4000, networking is handled via a 2 x 10Gb SFP+ switched version, or an 8 x 10Gb QSFP+ pass-through version. The unit also has a dedicated 1Gb RJ45 port for HPE iLO connectivity.

HPE Moonshot Storage

Each cartridge can contain its own dedicated storage of up to 2TB. This is perfect for an HPE StoreVirtual VSA deployment or even basic direct attached storage. You can also connect HPE Moonshot to an HPE 3PAR SAN or an HPE Apollo 4500 storage system via the 10Gb network fabric.

There’s a few options as to how you can plan your storage deployment with Moonshot:

  • DAS – Direct Attached Storage (in cartridge)
  • HPE 3PAR SAN or HPE Apollo 4500 Storage System
  • iSCSI/NFS (May or may not be supported depending on your workload)
  • VMware vSAN (May or may not be supported/certified)

As you can see, there’s quite a few options and possibilities as far as your storage deployment goes.

HPE Moonshot Pictures

Here’s some additional photos of the unit.

HPE Moonshot at the HPE Center of Excellence

HPE Moonshot 1500 Chassis opened and running

HPE Moonshot 1500 Chassis with Cartridges

And remember, if you’re interested in the HPE Moonshot product or any other products or solutions in HPE’s portfolio, please don’t hesitate to reach out to me or my company (Digitally Accurate Inc.) for more information as we are an HPE partner and design/configure/sell HPE solutions!