Stephen Wagner

Name: Stephen Wagner

Age: 30

Location: Calgary, Alberta (Canada)

Occupation: President of Digitally Accurate Inc. (also operating as D.A. Consulting)


-Computers (Windows, Linux, OSX)

-Wireless Technologies (Device hacking, reverse engineering, long range links, open source hardware)

-Single Board Computers (SBCs, SBC Development)

-Mobile Platforms (Windows 10 Mobile)

-Mountain Biking

-Electronica (House, Hard House)




-Disaster Recovery

-HP Hardware (Proliant Servers, SANs)

-Storage (iSCSI, SAN, NFS, NAS)


Microsoft Lumia 950XL (Dual SIM) (Insider Fast Build Flight)

Nokia Lumia 1020 (Insider Fast Build Flight) (Backup Phone)

Microsoft Surface Pro

Lenovo X1 Carbon 2015

Mini Biography:

Born and raised in Calgary, Alberta (3rd generation), I’ve also had the pleasure of living in North Vancouver and Maple Ridge in British Columbia before moving back home to Calgary.

I started out very young with technology. My father owned numerous computer/I.T. companies over the years, doing everything from business systems to Point of Sale solutions. Growing up, I had access to business hardware, software, and pretty much all the fun stuff that comes with it.

My first memories go back to the days of Windows for Workgroups 3.11, NT 3.51, and UnixWare, around the age of 7. It was around this time that my passion for computers and technology really started. I still remember how amazed I was when the NT 4.0 beta was released. I was fully building computers at this age (and my father was getting me to help build systems and roll them out to clients).

My main obsession was networking. I found the premise of connecting computers to a large network and sharing information fascinating, whether it was talking and chatting, sharing files and data, or playing multiplayer games. Around the age of 10 I was already wrapping IPX/SPX inside of TCP to play LAN games over the internet against other players, and had developed a thorough understanding of network technologies.

Shortly after this, my fascination with Linux started. I picked up my first copy of Red Hat Linux 5.1 when I was 11 years old, from a book purchased at a bookstore. And so began my dive into the Linux world (I can’t tell you how many nights were spent doing live FTP installs over dial-up connections to get the latest versions of Linux).

Over the years I learned about domains, Active Directory, and clustering, and got more interested in business applications, infrastructure, and internet services.

Ages 13 through 17 were spent learning about internet services, wireless services (and wireless hacking), bettering my knowledge of Linux, and more business infrastructure services. At 16 I could fully configure a Windows 2000 Advanced Server with all of the add-ons/roles enabled and configured (AD, Exchange, FTP, Telnet, Remote Installation Services, DHCP, DNS, Clustering, etc.).

Around 17 years old I really started to dive into Single Board Computing (using alternative architectures such as ARM and MIPS) and wireless technologies. I started off with a Soekris Net4801 board, doing remote installations of Red Hat Linux using PXE and NFS. I also picked up specialized wireless cards and antennas (learning about promiscuous mode on wireless cards, long range wireless links, and customization of wireless technologies to permit long range links). Over the years I also made it a hobby to install/hack Linux on everything I could (proprietary firewalls, the Xbox, wireless access points, etc.). I can’t verify this, but I think I was one of the first people to get the Red Hat 9 distribution running on the first-generation Xbox.

Upon graduating high school in 2004, I was initially pre-accepted into Electrical Engineering, but decided against going to university immediately so that I could get a job and save up beforehand.

My first big jump into business came in 2004, when I was brought onboard with a major homebuilder as an I.T. Specialist.

During my time there, within 2 years I was promoted numerous times, first to I.T. Coordinator and finally to I.T. Manager. I managed over 80 workstations deployed across two offices (in Calgary and Edmonton) and over 20 sales centers split between the two cities. The technologies I implemented, managed, and supported included Windows Server 2003, Citrix, Terminal Services, SQL Server, and a number of Line of Business applications specific to the industry.

During this time, I fell in love with business and business I.T. infrastructure. After 2 years I resigned from my position to start my own business (at the age of 19).

For over 10 years I have provided I.T. Infrastructure services for the SMB and enterprise markets, specializing in Virtualization, Line of Business applications, ERP/CRM solutions, Storage, Security, and I.T. Management.

Experienced with:
Infrastructure design, implementation and support
Virtualization (vSphere 5.x, vSphere 6.x, iSCSI, HP MSA 2040 SAN)
Disaster Recovery (Tape Libraries, Disk Backup, special requirement backups for databases, etc...)
Wireless Point to Point (Linking buildings)
Wireless Point to Multi-Point (Warehouse wireless)
Linking multi-sites using fiber
Line of Business Implementation and Integration (Accounting, Estimating, Purchasing, Scheduling, etc...)
Remote Access Solutions (Linking multiple branch offices to HQ, or remote access for employees from home)
Mobile Devices (Windows Phone, Windows Mobile, Apple iPhone, BlackBerry, Blackberry Enterprise Server)
Workstation Management Technologies
IT Workflow and IT Support Development/Management
Line of Business Applications (Directional Drilling, Home Sales, Shop Management)

Software Implementation, Integration, and Support Experience:
Microsoft Windows Server 2000, 2003, 2008, 2008R2, 2012, 2012 R2
Microsoft Windows Small Business Server (SBS2008, SBS2011)
Microsoft Exchange Server (2007, 2010, 2013, 2016)
Microsoft SQL Server (2005, 2008, 2008R2)
RedHat Enterprise Linux
Symantec Endpoint Protection (and Protection Suite)
Sophos Unified Threat Management (Sophos UTM)
Simply Accounting
Sage BusinessVision
Timberline Accounting and Estimating
BuilderMT Construction Management
Zybertech HomeFront
Lio-Target iSCSI
Embedded Linux development
Halliburton Landmark Compass, WellPlan, EDM
Shoptech E2

Get in Touch:


Jun 01 2017

Today I’m writing about something we all hate: limited or no cell phone reception. There are pictures below, so please scroll down and check them out!

We’ve all lived in a house or area with no reception at some point in our lives. In my current house, I’ve had no or limited reception for the past 2 years. I regularly miss calls (the phone won’t ring, and I’ll receive a voicemail notification 2 hours later), or people will send me text messages (SMS) that I won’t receive for hours. Sometimes if someone sends multiple SMS messages, I’ll completely lose reception for 15-minute intervals (phone completely unusable).

This has been extremely frustrating as I use my phone a lot, and while I do have an office line, people tend to call your mobile when they want to get in touch ASAP. It became an even larger problem when clients started texting me for work emergencies. While I always stress to call the office, they text more and more often.

Recently, to make the problem worse, I switched from a Microsoft Lumia 950XL to a Samsung Galaxy S8+. When I received my new S8+, my phone wouldn’t ring at all, and I could only occasionally make an outbound call.


For these reception issues, there are typically 4 ways to resolve them:

  1. WiFi Calling
    1. Routes calls, SMS/MMS (texting), and cell services through a traditional WiFi access point. Unfortunately, Canadian carriers only recently started to implement this, and you’ll need a supported carrier-branded phone. WiFi calling usually won’t work if you’re using an unlocked phone or one purchased directly from the manufacturer (you’ll need to buy a phone directly from your provider).
    2. Provides easy handoffs from Wifi calling to the native cell towers.
    3. Unfortunately, if you’re in a low reception area, your phone will continue to scan and struggle to connect to cell towers (even though it’s sitting in standby). This will consume battery power.
    4. Easy as it requires no special hardware except a phone and carrier that supports the technology.
  2. Femtocell/microcell/picocell
    1. This is a little device that looks similar to your wireless router or wireless access point.
    2. Connects to your provider using your internet connection. The device is essentially a mini cell tower that your phone will connect to using its normal cellular technologies.
    3. These are popular in the United States, with multiple carriers providing options; however, my provider in Canada doesn’t sell or use these, and I don’t believe any Canadian providers carry them.
    4. Easy as it requires only a single small box similar to your wifi router, and a carrier that supports it.
  3. Cell Amplifier / Cell Booster
    1. A device with two antennas, one indoor and one outdoor. The outdoor antenna is installed facing the closest cell tower, and the indoor antenna is installed inside your house. This boosts and amplifies the signal coming in and going out.
    2. This option is more difficult as it requires mounting an antenna either outdoors (for best reception) or inside of a window. Also cabling must be laid to the booster which must be a specified distance away from the outside antenna. This can be overwhelming and challenging for some.
    3. Most expensive option if you don’t move.
  4. Move to a new house
    1. Most expensive option
    2. Chances are it may not correct your reception issue, or may even make it worse
    3. New neighbors might be crazy


In my scenario, I decided to purchase a Wilson Electronics weBoost Home 4G Cell Phone Booster Kit. With my lack of experience with boosters, I decided on the most cost-effective option that supported LTE, a refurbished unit. I figured if it worked, I could upgrade in the future to a brand new unit one model higher.


Please see the links below for information: the Canada online store, and the manufacturer website with information on products.

The model I purchased:

Refurbished Part#: 470101R

New Part#: 470101F

weBoost Home 4G Product Page (United States Web Site) (Canada Web Site)


Well, after a few weeks the booster finally showed up! Everything was packed nicely, and I was pleasantly surprised by the quality of the materials (antennas, cables) and the unit itself. Even though my specific unit was a refurbished model, it looked great and you wouldn’t have been able to tell.

The unit comes with mounting supplies for different mounting options. I could either mount it on a pole (such as the plumbing exhaust port on the roof), against the side of the house, or use the window mounting option (a neat little mount that affixes with suction cups).

I already was aware of the location of two towers in my area and had previously used cell surveying utilities to find areas where reception was available. If you purchase a cell booster, you can either follow the instructions for finding the best placement with cell service, or you can use apps on your phone to find the best placement.

Here’s some pictures from unboxing and testing. Please click on the image to see a larger version of the image:

weBoost Home 4G 470101 Cell Booster Kit

weBoost Home 4G 470101 Cell Booster Unboxed

weBoost Home 4G 470101 Cell Booster Refurbished

weBoost Home 4G 470101 Cell Booster Outside Antenna mounted on Window

weBoost Home 4G 470101 Cell Booster

weBoost Home 4G 470101 Cell Booster Inside Antenna

weBoost Home 4G 470101 Cell Booster Turned on with full Green LED lights (operational)


And BAM! That was it; literally on the first test it worked great. Full bars in the basement with my main carrier! I tried a few other locations and found that at an alternative location, my other cell provider (I have 2 phones with two providers) started to function as well!


See below for reception before and after:


As you can see there was a vast improvement! I tested it with phone calls, texts, MMS messages, and data, and it all worked fantastic! All lights on the booster were green (orange and/or red lights mean adjustments are needed).

Now that testing was complete, I decided to finish the install to make it look neat and tidy and hide all the wires.

I decided to leave it on the window mount since it was working so well (this avoided having to get on the roof or drill into the house). Underneath the window I have a cool-air intake, so I was able to fish the antenna wire through the ventilation duct down to the basement and keep everything neat and tidy.

The pics below are of the final install:

Installed weBoost Home 4G 470101 Cell Booster

Installed weBoost Home 4G 470101 Cell Booster

Installed weBoost Home 4G 470101 Cell Booster Inside Antenna


The entire process was extremely easy and I’m very happy with the result. I’d highly recommend this to anyone with reception issues. This should be able to help as long as there is faint reception. Please note, if you’re in an area with absolutely no reception, then a booster will not function as there is nothing to boost.

You’ll probably need two people, both for testing the signal and adjusting the antenna, as well as fishing cable through your house. Most of the time required for my install was associated with running the wiring.

For testing signal strength, I used the “LTE Discovery” app on Android.

Feb 18 2017

This is an issue that affects quite a few people, and numerous forum threads can be found on the internet by those searching for the solution.

This can occur both when taking manual snapshots of virtual machines when one chooses “Quiesce guest filesystem”, or when using snapshot based backup applications such as vSphere Data Protection (vSphere vDP).


For the last couple days, one of my test VMs (Windows Server 2012 R2) has been experiencing this issue and the snapshot has been failing with the following errors:

An error occurred while taking a snapshot: Failed to quiesce the virtual machine.
An error occurred while saving the snapshot: Failed to quiesce the virtual machine.

As always with standard troubleshooting, I restarted the VM, checked for VSS provider errors, and ensured that the Windows services involved with snapshots were in their correct state and configuration. Unfortunately this had no effect, and everything was configured the way it should be.

I also tried to re-install VMware Tools, which had no effect.

PLEASE NOTE: If you experience this issue, you should confirm the services are in their correct state and configuration, as outlined in VMware KB 1007696.


The Surprise Fix:

In the days leading up to the failure, when things were running properly, I did notice that quiesced snapshots for that VM were taking a long time to process, but they were still functioning correctly before the failure.

This morning during troubleshooting, I went ahead and deleted all the Windows Volume Shadow Copies internal to the virtual machine itself. These are the shadow copies that the Windows guest operating system takes of its own filesystem (completely unrelated to VMware).

To my surprise after doing this, not only was I able to create a quiesced snapshot, but the snapshot processed almost instantly (200x faster than previously when it was functioning).

I’m assuming the shadow copies were causing a high load for the VMware snapshot to process, and a timeout was being hit on snapshot creation, which caused the issue. While Windows volume shadow copies are unrelated to VMware snapshots, they both utilize the same VSS (Volume Shadow Copy Service) system inside of Windows to function and process. One must also keep in mind that the Windows volume shadow copies will of course be part of a VMware snapshot.

PLEASE NOTE: Deleting your Windows Volume Shadow copies will delete your Windows volume snapshots inside of the virtual machine. You will lose the ability to restore files and folders from previous volume shadow copy snapshots. Be aware of what this means and what you are doing before attempting this fix.
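For reference, here is a sketch of how the guest-internal shadow copies can be listed and removed from an elevated Windows Command Prompt inside the VM, using the vssadmin utility (syntax per Microsoft's documentation; as the note above says, this is destructive):

```shell
rem Run inside the Windows guest, from an elevated command prompt.
rem List the guest's existing volume shadow copies first:
vssadmin list shadows

rem Delete ALL of them (destructive: removes Previous Versions restore points):
vssadmin delete shadows /all
```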

Feb 14 2017

Years ago, HPe released the GL200 firmware for their HPe MSA 2040 SAN that allowed users to provision and use virtual disk groups (and virtual volumes). This firmware came with a whole bunch of features such as Read Cache, performance tiering, thin provisioning of virtual disk group based volumes, and being able to allocate and commission new virtual disk groups as required.

(Please Note: On virtual disk groups, you cannot add a single disk to an already created disk group, you must either create another disk group (best practice to create with the same number of disks, same RAID type, and same disk type), or migrate data, delete and re-create the disk group.)

The biggest thing with virtual storage, was the fact that volumes created on virtual disk groups, could span across multiple disk groups and provide access to different types of data, over different disks that offered different performance capabilities. Essentially, via an automated process internal to the MSA 2040, the SAN would place highly used data (hot data) on faster media such as SSD based disk groups, and place regularly/seldom used data (cold data) on slower types of media such as Enterprise SAS disks, or archival MDL SAS disks.

(Please Note: Using the performance tier either requires the purchase of a performance tiering license, or is bundled if you purchase an HPe MSA 2042, which additionally comes with SSD drives for use with “Read Cache” or “Performance tier”.)


When the firmware was first released, I had no urge to try it out since I have 24 x 900GB SAS disks (only one type of storage), and of course everything was running great, so why change it? That being said, I had wanted and planned to one day kill off my linear storage groups and implement virtual disk groups. The key reasons for me were thin provisioning (the MSA 2040 supports the “DELETE” VAAI function) and virtual based snapshots (in my environment, I require over-commitment of the volume). As a side note, as of ESXi 6.5, ESXi now regularly unmaps unused blocks when using the VMFS-6 filesystem (if left enabled), which is great for thin provisioned SANs that support the “DELETE” VAAI function.

My environment consisted of 2 linear disk groups: 12 disks in RAID 5 owned by controller A, and 12 disks in RAID 5 owned by controller B (24 disks total). Two weekends ago, I went ahead and migrated all my VMs to the other datastore (on the other volume), deleted the first linear disk group, and created a virtual disk group in its place; I then migrated all the VMs back, deleted my second linear volume, and created a second virtual disk group.

Overall the process was very easy and fast. No downtime is required for this operation if you’re licensed for Storage vMotion in your vSphere environment.

During testing, I’ve noticed absolutely no performance loss using virtual versus linear, except for some functions that utilize the VAAI storage providers, which of course run faster on the virtual disk groups since the work is offloaded to the SAN. Performance was a major concern for me, as linear storage is accessed more directly than virtual disk groups, which add an extra level of software involvement between the controllers and disks.

Unfortunately since I have no SSDs and no extra room for disks, I won’t be able to try the performance tiering, but I’m looking forward to it in the future.

I highly recommend implementing virtual disk groups on your HPe MSA 2040 SAN!

Feb 08 2017

When running vSphere 6.5 with a VMFS-6 datastore, we now have access to automatic LUN reclaim (which unmaps unused blocks on your LUN), very handy for thin provisioned storage LUNs.

Essentially, when you unmap blocks, it “tells” the storage that unused blocks (deleted or moved data) aren’t being used anymore and can be unmapped, which decreases the allocated size on the storage layer. Your storage LUN must support VAAI and the “Delete” function.

Most of you will have noticed that storage reclaim in the vSphere client has two settings for priority: none or low.

For those of you who feel daring or want to spice life up a bit, you can increase the priority through the esxcli command. While I can’t recommend this (obviously VMware chose to hide these options due to performance considerations), you can follow these instructions to change the priority higher.


To view current settings:

esxcli storage vmfs reclaim config get --volume-label=DATASTORENAME

To set reclaim priority to medium:

esxcli storage vmfs reclaim config set --volume-label=DATASTORENAME --reclaim-priority=medium

To set reclaim priority to high:

esxcli storage vmfs reclaim config set --volume-label=DATASTORENAME --reclaim-priority=high


You can confirm these settings took effect by running the command to view settings above, or view the datastore in the storage section of the vSphere client. While the vSphere client will reflect the higher priority setting, if you change it lower and then want to change it back higher, you’ll need to use the esxcli command to bring it up to a higher priority again.
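If you have several datastores to change, the set-and-verify cycle above can be wrapped in a small loop. This is a sketch to run on the ESXi host over SSH; the datastore names are placeholders:

```shell
# Sketch: raise the unmap/reclaim priority on multiple VMFS-6 datastores
# and read each setting back to confirm. Datastore names are placeholders.
for ds in Datastore1 Datastore2; do
  esxcli storage vmfs reclaim config set --volume-label="$ds" --reclaim-priority=high
  esxcli storage vmfs reclaim config get --volume-label="$ds"
done
```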

Feb 07 2017

With vSphere 6.5 came VMFS 6, and with VMFS 6 came the auto unmap feature. This is a great feature, and very handy for those of you using thin provisioning on your datastores hosted on storage that supports VAAI.

I noticed something interesting when running the manual unmap command for the first time. It isn’t well documented, but I thought I’d share for those of you who are doing a manual LUN unmap for the first time.


Automatic unmap (auto space reclamation) is on, but you want to speed it up, or you have a large chunk of blocks you want unmapped immediately and don’t want to wait for the auto feature.


I wasn’t noticing any unmaps occurring automatically, and I wanted to free up some space on the SAN, so I decided to run the command manually to forcefully run the unmap:

esxcli storage vmfs unmap --volume-label=DATASTORENAME --reclaim-unit=200

After kicking it off, I noticed it wasn’t completing as fast as I thought it should. I decided to enable SSH on the host and took a look at the /var/log/hostd.log file. To my surprise, it wasn’t stopping at a 200 block reclaim; it just kept cycling, running over and over (repeatedly doing 200 blocks):

2017-02-07T14:12:37.365Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:37.978Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:38.585Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:39.191Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:39.808Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:40.426Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:41.050Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:41.659Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:42.275Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-9XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX
2017-02-07T14:12:42.886Z info hostd[XXXXXXXX] [Originator@XXXX sub=Libs opID=esxcli-fb-XXXX user=root] Unmap: Async Unmapped 200 blocks from volume XXXXXXXX-XXXXXXXX-XXXX-XXXXXXXXX

That’s just a small segment of the logs, but essentially it just kept repeating the unmap/reclaim over and over in 200 block segments. I waited hours and tried to issue a “CTRL+C” to stop it, but it kept running.

I left it to run overnight and it did eventually finish while I was sleeping. I’m assuming it attempted to unmap everything it could across the entire datastore. Initially I thought this command would only unmap the specified block size.

When running this command, it will continue to cycle in the block size specified until it goes through the entire LUN. Be aware of this when you’re planning on running the command.

Essentially, I would advise not to manually run the unmap command unless you’re prepared to unmap and reclaim ALL your unused allocated space on your VMFS 6 datastore. In my case I did this because I had 4TB of deleted data that I wanted to unmap immediately, and didn’t want to wait for the automatic unmap.

I thought this may have been occurring because the automatic unmap function was on, so I tried it again after disabling auto unmap. The behavior was the same and it just kept running.


If you are tempted to run the unmap function, keep in mind it will continue to scan the entire volume (regardless of what block count you set). With this being said, if you are firm on running this, choose a larger block count (200 or higher), since smaller blocks will take forever (I tested with a block size of 1, and after analyzing the logs and rate of unmaps, it would have taken over 3 months to complete on a 9TB array).
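To put rough numbers on that, here's a back-of-the-envelope sketch. The iteration rate and space figures below are illustrative assumptions (the log excerpt earlier shows roughly 1-2 iterations per second), not exact measurements:

```shell
# VMFS-6 uses 1 MB blocks; each unmap iteration frees reclaim_unit blocks.
reclaim_unit=200          # blocks per iteration (the value passed to esxcli)
iters_per_sec=2           # assumed rate (hostd.log showed roughly 1-2/sec)
space_to_reclaim_gb=4096  # assumed ~4 TB of deleted data to unmap

mb_per_sec=$(( reclaim_unit * iters_per_sec ))
eta_hours=$(( space_to_reclaim_gb * 1024 / mb_per_sec / 3600 ))
echo "throughput: ${mb_per_sec} MB/s, ETA: ~${eta_hours} hours"
```

At the same iteration rate, a reclaim unit of 1 moves only a couple of MB per second, which lines up with why a block size of 1 would take months on a 9TB array.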

Feb 06 2017

I had a nasty little surprise with one of my clients this afternoon. Two days ago I updated their Sophos UTM (UTM220) to version 9.410-6 without any issues.

However, today I started to receive notifications that services were crashing (specifically ACC device agent).

After receiving a few of these, I logged in to check it out. There were no immediately visible errors on the UTM itself, but after some further digging, I noticed these event logs in the “System Messages” log file:

2017:02:06-17:09:32 mail partitioncleaner[7918]: automatic cleaning for partition /tmp started (inodes: 0/100 blocks: 100/85)

2017:02:06-17:09:32 mail partitioncleaner[7918]: stopping deletion: can’t delete more files

Looks like a potential storage problem? Yes it was, but slightly more complicated.

I enabled SSH on the UTM and issued the “df” command (shows volume usage), and found that the /tmp volume was 100% full.

Doing an “ls” and “ls -hl”, I found there were 25+ files around 235MB in size called “AV-malware-names-XXXX-XXXXXX”.
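The diagnosis boils down to a couple of shell commands. Here is the same sequence reproduced against a scratch directory, so it can be run safely anywhere; the file names and sizes are tiny stand-ins for the real ~235MB AV-malware-names files on the UTM's /tmp:

```shell
# Simulate the /tmp diagnosis on a scratch directory (stand-in for the UTM).
scratch=$(mktemp -d)
for i in 1 2 3; do
  # Tiny stand-ins for the ~235 MB AV-malware-names-XXXX-XXXXXX files.
  head -c 1024 /dev/zero > "$scratch/AV-malware-names-0000-0000$i"
done

df -h "$scratch" | tail -n 1   # volume usage, as `df` showed /tmp at 100% full
ls -lhS "$scratch"             # largest files first, as with `ls -hl`
count=$(ls "$scratch" | grep -c '^AV-malware-names-')
echo "offending files: $count"
rm -rf "$scratch"
```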

Restarting the unit clears those files; however, they come back shortly after (I noticed it would add one every 5-10 minutes).

After some further digging (still haven’t heard back from Sophos on the support case), I came across some other users experiencing the same issues. While no one found a permanent resolution, they did mention this had to do with the Avira AV engine or possibly the dual scan engine.

Checking the UTM, I noticed that we had the E-Mail scanning configured for dual scan.

Solution (temporary workaround):

I went ahead and configured the E-Mail scanner (the only scanner I had that was using dual scan) to use single scan only. I then restarted the UTM. In my environment the default setting for single scanning is set to “Sophos”.

I am now sitting here with 30 minutes of uptime and absolutely no “AV-malware-names-XXXX-XXXXXX” files created.

I will post an update when I hear back from Sophos support.

Hope this helps someone else!


Update (after original post):

I heard back from Sophos support, this is a known bug in 9.410. The current official workaround is to change to single scan and use the AVIRA engine instead of the Sophos engine.

Update #2:

Received notification this morning of a new firmware update available (Version: 9.411003 – Maintenance Release). While I haven’t installed it, it appears from the Bugfixes notes that it was released to fix this issue:

Fix [NUTM-6804]: [AWS] Update breaks HVM standalone installations
Fix [NUTM-6747]: [Email] SAVI scanner coredumps permanently in MailProxy after update to 9.410
Fix [NUTM-6802]: [Web] New coredumps from httpproxy after update to v9.410

Update #3:

I noticed that this bug was interrupting some mailflow on my Sophos UTM, as well as for some of my clients. As an emergency measure, I went ahead and installed 9.411-3.

Things were fine for around 10 hours until I started to receive notification of the HTTP proxy failing and requiring restart. Logging in to the UTM, it was very unresponsive, sometimes completely unresponsive for around 10 minutes. Web browsing was not functioning at all on the internal network behind the UTM.

This issue still hasn’t been resolved. Hopefully we see a stable working fix sometime soon.

Jan 27 2017

Greetings everyone!

I had my first predicted disk failure occur on my HPe MSA 2040. As always, it was a breeze contacting HPe support to get the drive replaced (since my unit has a 4 hour response warranty).

However, with this being my first drive swap, I came across something worth mentioning. Typically in RAID arrays, when a disk fails you simply swap out the failed disk and it starts rebuilding; this is NOT the case if you have an HPe MSA 2040 that’s fully loaded with no spares configured.

If you have global spares, the moment a disk fails, it will automatically rebuild on to available configured spares.

If you don’t have any global spares (my case), the replacement disk is marked as unused and available. You must set this disk as a spare in the SMU for the rebuild to start.

One additional note: if you do have spares and a disk fails, when you replace the failed disk it will not automatically rebuild back from the spare. You must force-fail (pull out) the spare disk for the rebuild to start on the freshly replaced disk. Always confirm current redundancy levels and activity before forcefully failing any disks!

As per HPe’s MSA 1040/2040 Best Practices document:


Dec 08 2016

So you just completed your migration from an earlier version of vSphere up to vSphere 6.5 (particularly the vCenter 6.5 Virtual Appliance). When trying to log in to the vSphere web client, you receive numerous “The VMware enhanced authentication plugin has updated its SSL certificate in Firefox. Please restart Firefox.” messages. You’ll usually see 2 of these messages in a row on each page load.

You’ll also note that the “Enhanced Authentication Plugin” doesn’t function after the install (it won’t pull your Active Directory authentication information).

To resolve this:

Uninstall all vSphere plugins from your workstation. I went ahead and uninstalled all vSphere related software on my workstation, including the deprecated vSphere C# client application, all authentication plugins, etc. These are all old.

Open up your web browser and point to your vCenter server (https://vCENTERSERVERNAME), and download the “Trusted root CA certificates” from VMCA (VMware certificate authority).

Extract the downloaded ZIP file and navigate through the extracted contents to the Windows certs. These root CA certificates need to be installed into the “Trusted Root Certification Authorities” store on your system. Make sure you skip the “Certificate Revocation List” file, which ends in “.r0”.

To install them, right-click a certificate, choose “Install Certificate”, choose “Local Machine”, accept the UAC prompt, then choose “Place all certificates in the following store”, browse, select “Trusted Root Certification Authorities”, and finish. Repeat for each of the certificates. Your workstation will now “trust” all certificates issued by your VMware Certificate Authority (VMCA).
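If there are several certificate files to import, the same result can be scripted from an elevated Command Prompt with certutil instead of clicking through the wizard for each one. This is just a sketch: the folder path is hypothetical (adjust it to wherever you extracted the ZIP), and on my download the certificates ended in “.0” while the CRL ended in “.r0”.

```shell
REM Run from an elevated Command Prompt. Imports every ".0" certificate
REM file in the extracted folder into the Local Machine "Trusted Root
REM Certification Authorities" store (the ".r0" CRL file is not matched).
cd C:\Temp\certs\win
for %f in (*.0) do certutil -addstore -f Root "%f"
```

If you put this in a batch file rather than typing it interactively, remember to double the percent signs (%%f).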

You can now re-open your web browser, download the “Enhanced Authentication Plugin” from your vCenter instance, and install. After restarting your computer, the plugin should function and the messages will no longer appear.

Leave a comment!

Dec 07 2016

Well, I’m writing this post minutes after completing my first vSphere 6.0 to vSphere 6.5 upgrade, and as always with VMware products it went extremely smoothly (although, as with any upgrade, there were minor hiccups).

Thankfully, with the evolution of virtualization technology, an upgrade such as the move to vSphere 6.5 is a massive change to your infrastructure, yet the process is extremely simplified, can be easily rolled out, and in the event of problems has very simple, clear paths to revert and re-attempt. Failed upgrades usually aren’t catastrophic, and often don’t even affect production environments.

Whenever I do these vSphere upgrades, I find it funny how you’re making such massive changes to your infrastructure with each click and step, yet the thought process and understanding behind it is so simple and easy to follow. Essentially, after one of these upgrades you look back and think: “Wow, for the little amount of work I did, I sure did accomplish a lot”. It’s just one of the beauties of virtualization, especially holding true with VMware products.

To top it all off you can complete the entire upgrade/migration without even powering off any of your virtual machines. You could do this live, during business hours, in a production environment… How cool is that!


Just to provide some insight into my environment, here’s a list of the hardware and configuration:

-2 X HPe Proliant DL360p Gen8 Servers (each with dual processors and 128GB RAM, no local storage)

-1 X HPe MSA2040 Dual Controller SAN (each host has multiple connections to the SAN via 10Gb DAC iSCSI, 1 connection to each of the dual controllers)

-VMware vCenter Server 6.0 running on a Windows virtual machine (Windows Server 2008 R2)

-VMware Update Manager (Running on the same server as the vCenter Server)

-VMware Data Protection (2 x VMware vDP Appliances, one as a backup server, one as a replication target)

-VMware ESXi 6.0 installed on to SD-cards in the servers (using HPe Customized ESXi installation)


One of the main reasons why I was so quick to adopt and migrate to vSphere 6.5 was that I was extremely interested in the prospect of migrating a Windows-based vCenter instance to the new vCenter 6.5 appliance. This is handy as it simplifies the environment, reduces licensing costs and requirements, and reduces the time and effort spent on server administration and maintenance.

First and foremost, following the recommended upgrade path (you have to perform the upgrades and migrations for the separate modules/systems in a specific order), I had to upgrade my vDP appliances first. For vDP to support vCenter 6.5, you must upgrade your vDP appliances to 6.1.3. As with all vDP upgrades, you shut down the appliance, mark all the data disks as dependent, take a snapshot, mount the upgrade ISO, and then boot and initiate the upgrade from the appliance web interface. After you complete the upgrade and confirm the appliance is functioning, you shut down the appliance, remove the snapshot, mark the data disks as independent again (virtual disk 2 and up only, not the first virtual disk), and your upgrade is done.

A note on a problem I dealt with during the upgrade process for vDP to version 6.1.3 (appliance does not detect mounted ISO image) can be found here:


Moving on to vCenter! VMware did a great job with this. You load up the VMware Migration Assistant tool on your source vCenter server, load up the migration/installation application on a separate computer (the workstation you’re using), and it does the rest. After prepping the destination vCenter appliance, it exports the data from the source server, copies it to the destination server, shuts down the source VM, and then imports the data to the destination appliance and takes over the role. It’s the coolest thing ever watching this happen live. Upon restart, you’ve completed your vCenter Server migration.

A note on a problem I dealt with during the migration process (which involved exporting VMware Update Manager from the source server) can be found here:


And as for the final step, it’s now time to upgrade your ESXi hosts to version 6.5. As always, this is an easy task with VMware Update Manager, and can be easily and quickly rolled out to multiple ESXi hosts (thanks to vMotion and DRS). After downloading your ESXi installation ISO (in my case I use the HPe customized image), you upload it into your new VMware Update Manager instance, add it to an upgrade baseline, and then attach the baseline to your hosts. To push this upgrade out, simply select the cluster or a specific host (depending on whether you want to roll out to a single host or multiple at once), and remediate! After a couple of restarts the upgrade is done.
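For a standalone host, or if Update Manager isn’t available, the same upgrade can also be applied over SSH with esxcli and the offline depot ZIP. This is a sketch only: the datastore path and image profile name below are hypothetical, so list the profiles contained in your depot first and substitute the one that matches your image.

```shell
# Put the host into maintenance mode first (evacuate or power off VMs)
esxcli system maintenanceMode set --enable true

# List the image profiles contained in the offline depot ZIP
esxcli software sources profile list -d /vmfs/volumes/datastore1/ESXi-6.5.0-depot.zip

# Apply the chosen profile (profile name here is hypothetical)
esxcli software profile update -d /vmfs/volumes/datastore1/ESXi-6.5.0-depot.zip -p HPE-ESXi-6.5.0-Custom

# Reboot to complete the upgrade
reboot
```

Note that `esxcli software profile update` (rather than `install`) preserves VIBs not present in the new image; `-d` requires the full path to the depot on a datastore.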

A note on a problem I dealt with during ESXi 6.5 upgrade (conflicting VIBs marking image as incompatible when deploying HPe customized image) can be found here:


After all of the above, the entire environment is now running on vSphere 6.5! Don’t forget to take a backup before and after the upgrade, and also upgrade your VM hardware versions to 6.5 (VM compatibility version), and upgrade VMware tools on all your VMs.

Make sure to visit https://YOURVCENTERSERVER to download the VMware Certificate Authority (VMCA) root certificates, and add them to the “Trusted Root Certification Authorities” on your workstation so you can validate all the SSL certs that vCenter uses. Also, note that the vSphere C# client (the windows application) has been deprecated, and you now must use the vSphere Web Client, or the new HTML5 web client.

Happy Virtualizing! Leave a comment!

Dec 07 2016

After successfully completing the migration from vCenter 6.0 (on Windows) to the vCenter 6.5 Appliance, all I had remaining was to upgrade my ESXi hosts to ESXi 6.5.

In my test environment, I run 2 x HPe Proliant DL360p Gen8 servers. I also have always used the HPe customized ESXi image for installs and upgrades.

It was easy enough to download the customized HPe installation image from VMware’s website. I then loaded it into VMware Update Manager on the vCenter appliance, created a baseline, and was prepared to upgrade the hosts.

I successfully upgraded one of my hosts without any issues; however, after scanning my second host, it reported the upgrade as incompatible and stated: “The upgrade contains the following set of conflicting VIBs: Mellanox_bootbank_net.XXXXversionnumbersXXXX. Remove the conflicting VIBs or use Image Builder to create a custom ISO.”

I checked the host to see if I was even using the Mellanox drivers, and thankfully I wasn’t and could safely remove them. If you are using the drivers that are causing the conflict, DO NOT REMOVE them, as doing so could disconnect all network interfaces from your host. In my case, since they were not being used, uninstalling them would not affect the system.

I SSH’ed in to the host and ran the following commands:

esxcli software vib list | grep Mell (lists the installed VIBs and filters for Mellanox, showing which VIB package contains the Mellanox driver; in my case it returned “net-mst”)

esxcli network nic list (verifies which drivers the network interfaces on the host are actually using)

esxcli software vib remove -n net-mst (removes the VIB that contains the problematic driver)

After doing this, I restarted the host, scanned for upgrades, and successfully applied the new vCenter 6.5 ESXi Customized HPe image.

Leave a comment!