Stephen



Name: Stephen Wagner

Age: 25

Location: Calgary, Alberta (Canada)

Occupation: President of Digitally Accurate Inc. (also operating as D.A. Consulting)

Interests:

-Computers (Windows, Linux, OSX)

-Wireless Technologies (Device hacking, reverse engineering, long range links, open source hardware)

-Single Board Computers (SBCs, SBC Development)

-Mountain Biking

-Electronica (House, Hard House)

Background:

(Not Completed)

 


Apr 12, 2014
 

Recently I decided it was time to beef up my storage link between my demonstration vSphere environment and my storage system. My existing setup included a single HP DL360p Gen8, connected to a Synology DS1813+ via NFS.

I went out and purchased the appropriate (and compatible) HP quad-port 1Gb server NIC (Broadcom based), and connected the Synology device directly to the new server NIC (all 4 ports). I went ahead and configured an iSCSI target using a File LUN with ALUA (advanced LUN features), configured the NICs on both the vSphere and Synology sides, and enabled jumbo frames of 9000 bytes.
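For reference, here's roughly the equivalent from the ESXi command line (a sketch; vSwitch1, vmk1, and vmhba33 are placeholder names for the iSCSI vSwitch, one of the VMkernel ports, and the software iSCSI adapter, so repeat the per-port lines for each port):

    # Enable jumbo frames on the vSwitch and on each iSCSI VMkernel port
    esxcli network vswitch standard set -v vSwitch1 -m 9000
    esxcli network ip interface set -i vmk1 -m 9000
    # Bind each VMkernel port to the software iSCSI adapter for MPIO
    esxcli iscsi networkportal add -A vmhba33 -n vmk1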

I connected to the iSCSI LUN and created a VMFS volume. I then configured Round Robin MPIO on the vSphere side of things (as always, I made sure to enable “Multiple iSCSI initiators” on the Synology side).
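If you prefer the CLI for the Round Robin part, the path selection policy is set per device; something like this (naa.xxxx is a placeholder for your LUN's device identifier):

    # Find the device identifier for the iSCSI LUN
    esxcli storage nmp device list
    # Set Round Robin as the path selection policy for that device
    esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR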

I started to migrate some VMs over to the iSCSI LUN. At first I noticed it was going extremely slow. I confirmed that traffic was being passed across all NICs (I also verified that all paths were active). After the migration completed, I decided to shut down the VMs and restart them to compare boot times. Booting from the iSCSI LUN was absolutely horrible; the VMs took forever to boot up. Keep in mind I’m very familiar with vSphere (my company is a VMware partner), so I know how to properly configure Round Robin, iSCSI, and MPIO.

I then decided to tweak some settings on the ESXi side of things. I configured the Round Robin policy to IOPS=1, which helped a bit. I then changed the RR policy to bytes=8800, which, after numerous other tweaks, I determined achieved the highest performance to the storage system over iSCSI.
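For anyone wanting to replicate those tweaks, the Round Robin parameters are also set per device via esxcli; a sketch (same placeholder naa identifier as above):

    # Change paths after every single IO
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx -t iops -I 1
    # Or change paths every 8800 bytes (the setting I ended up on)
    esxcli storage nmp psp roundrobin deviceconfig set -d naa.xxxxxxxxxxxxxxxx -t bytes -B 8800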

This config was used for a couple of weeks, but ultimately I was very unsatisfied with the performance. I know it’s not very accurate, but looking at the Synology resource monitor, each gigabit link over iSCSI was only achieving 10-15MB/sec under high load (single contiguous copies), where each link should have been capable of 100MB/sec and higher. The combined LAN throughput as reported by the Synology device across all 4 gigabit links never exceeded 80MB/sec. File transfers inside of the virtual machines couldn’t get higher than 20MB/sec.

I have a VMware vDP (VMware Data Protection) test VM configured, which includes a performance analyzer inside of the configuration interface. I decided to use this to test some specs (I’m too lazy to actually configure a real IO/throughput test since I know I won’t be continuing to use iSCSI on the Synology with the horrible performance I’m getting). The performance analyzer tests run for 30-60 minutes, and measure writes and reads in MB/sec, and seeks per second. I tested 3 different datastores.

 

Synology DS1813+ NFS over 1 x Gigabit link (1500 MTU):

Read 81.2MB/sec, Write 79.8MB/sec, 961.6 Seeks/sec

Synology DS1813+ iSCSI over 4 x Gigabit links configured in MPIO Round Robin BYTES=8800 (9000 MTU):

Read 36.9MB/sec, Write 41.1MB/sec, 399.0 Seeks/sec

Custom-built 8-year-old computer running Linux MD RAID 5 with NFS over 1 x Gigabit NIC (1500 MTU):

Read 94.2MB/sec, Write 97.9MB/sec, 1431.7 Seeks/sec

 

Can someone say WTF?!?!?!?! As you can see, it appears there is a major performance hit with the DS1813+ using 4 Gigabit MPIO iSCSI with Round Robin. It’s half the speed of a single link 1 X Gigabit NFS connection. Keep in mind I purchased the extra memory module for my DS1813+ so it has 4GB of memory.

I’m kind of choked I spent the money on the extra server NIC (as it was over $500.00). I’m also surprised that my custom-built NFS server from 8 years ago (drives are 4 years old) with 5 drives is performing better than my 8-drive DS1813+. All drives used in both the Synology and the custom-built NFS box are Seagate Barracuda 7200RPM drives (the custom box has 5 x 1TB drives configured in RAID 5, the Synology has 8 x 3TB drives configured in RAID 5).

I won’t be using iSCSI or iSCSI MPIO again with the DS1813+ and actually plan on retiring it as my main datastore for vSphere. I’ve finally decided to bite the bullet and purchase an HP MSA2024 (Dual Controller with 4 X 10Gb SFP+ ports) to provide storage for my vSphere test/demo environment. I’ll keep the Synology DS1813+ online as an NFS vDP backup datastore.

Feel free to comment and let me know how your experience with the Synology devices using iSCSI MPIO is/was. I’m curious to see if others are experiencing the same results.

Apr 11, 2014
 

Earlier today I was doing some work in my demonstration vSphere environment, when I had to modify some settings on one of my VMs that is set up with the latest virtual hardware version (which means you can only edit its settings inside of the vSphere Web Client).

To my surprise, when logging in, I immediately received an error: “ManagedObjectReference: type = Datastore, value = datastore-XXXX, serverGuid = XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX refers to a managed object that no longer exists or has never existed”. Also, after clicking OK, I noticed that lots of the information being presented inside of the vSphere Web Client was inaccurate. Some virtual machines were reported as sitting on different datastores (they had been weeks ago, but have since been moved). It was also reporting that some virtual machines were off, when in fact they were on and running.

Symptoms:

-Errors about missing datastores when logging on to the vSphere Web Client.

-Virtual Machines reported as off (powered off) even though they were running.

-Viewing VMs in the vSphere client reports them stored on a different datastore than they actually are.

-Disconnecting and (re)connecting hosts has no effect on the issue.

 

This freaked me out; it was a true “Uhh Ohh” moment. Something was corrupt. Keep in mind that ALL information in the vSphere client was correct and accurate; it was only the vSphere Web Client that was having issues.

 

Anyways, I tried a bunch of things to fix it, and spent hours working on the problem. FINALLY I came up with a fix. If you are running into this issue, PLEASE take a snapshot of your vCenter Server before attempting to fix it, so that you can roll back if you screw anything up (which I had to do multiple times, lol).

The Fix:

1) Stop the “VMware vCenter Inventory Service”.

2) Delete the “data” folder inside of “Program Files\VMware\Infrastructure\Inventory Service”.

3) Open a Command Prompt with elevated privileges. Change your working directory to “Program Files\VMware\Infrastructure\Inventory Service\scripts”.

4) Run “createDB.bat”; this will reset and create a fresh Inventory Service database.

5) Run “is-change-sso.bat https://computername.domain.com:7444/lookupservice/sdk "administrator@vSphere.local" "SSO_PASSWORD"”. Change computername.domain.com to the FQDN of your vCenter server, and change SSO_PASSWORD to your Single Sign-On admin password.

6) Start the “VMware vCenter Inventory Service”. At this point, if you try to log on to the vSphere Web Client, it will error with: “Client is not authenticated to VMware Inventory Service”. We’ve already won half the battle.

7) We now need to register the vCenter Server with the newly reset Inventory Service. In the elevated Command Prompt (that we opened above), change the working directory to: “Program Files\VMware\Infrastructure\VirtualCenter Server\isregtool”.

8) Run “register-is.bat https://computername.domain.com:443/sdk https://computername.domain.com:10443 https://computername.domain.com:7444/lookupservice/sdk”. Change computername.domain.com to your FQDN for your vCenter server.

9) Restart the “VMware VirtualCenter Server” service. This will also restart the Management Web services.
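For convenience, here’s the whole sequence in one shot (run from an elevated Command Prompt; vcenter.example.com and SSO_PASSWORD are placeholders, and the paths assume a default install on C:):

    net stop "VMware vCenter Inventory Service"
    rd /s /q "C:\Program Files\VMware\Infrastructure\Inventory Service\data"
    cd /d "C:\Program Files\VMware\Infrastructure\Inventory Service\scripts"
    createDB.bat
    is-change-sso.bat https://vcenter.example.com:7444/lookupservice/sdk "administrator@vSphere.local" "SSO_PASSWORD"
    net start "VMware vCenter Inventory Service"
    cd /d "C:\Program Files\VMware\Infrastructure\VirtualCenter Server\isregtool"
    register-is.bat https://vcenter.example.com:443/sdk https://vcenter.example.com:10443 https://vcenter.example.com:7444/lookupservice/sdk
    net stop "VMware VirtualCenter Server"
    net start "VMware VirtualCenter Server"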

 

BAM, it’s fixed! I went ahead and restarted the entire server that the vCenter server was running on. After this, all was good, and everything looked great inside of the vSphere Web Client. I’m actually noticing it’s running WAY faster, and isn’t as glitchy as it was before.

Happy Virtualizing! :)

Nov 14, 2013
 

So you’re running SBS 2011, and recently you notice (or an end user reports) that when trying to log in to your SBS 2011 Remote Web Workplace (RWW) you receive:

404 – File or directory not found.

The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable.

Screenshot below:

File or directory not found – SBS 2011 Remote Web Workplace

 

You check your server, all is good. You test internally, and all is good. Absolutely no errors! What’s going on?

Well, as Microsoft pushes out updates to its Internet Explorer web browser (and with users upgrading to Windows 8 or Windows 8.1), compatibility with the Remote Web Workplace gets broken and/or lost.

To fix this, you need to add your RWW site to your Internet Explorer Compatibility list:

1) Open Internet Explorer, and go to your Remote Web Workplace login page. (DO NOT LOG IN YET)
2) Press the “Alt” key, which brings up the Internet Explorer menus.
3) Drop down “Tools” and then go to “Compatibility View Settings”.
4) Your internet domain should be in the “Add this website” box; just press the “Add” button, then hit Close.
5) Close out of Internet Explorer, and then go back in and try getting on remotely.

Note: If you clear your internet history, you will lose the above settings and have to reset them!

And BAM! It should now work without any problems whatsoever!

Sep 27, 2013
 

Nokia Canada has this awesome Facebook contest running right now where if you describe the new Lumia 1020 in one word, you can win a Lumia 1020 prize pack which includes:

-Nokia Lumia 1020

-Nokia Camera Grip

-JBL PlayUp Portable Wireless Speaker

-3 Month Nokia Music+ Subscription

To enter, “Like” the Nokia Canada Facebook page here: https://www.facebook.com/nokiacanada and fill out the contest application form here: https://www.facebook.com/nokiacanada?sk=app_331406573613073

The good news? I already won one of the prize packs!!! Yesterday was my birthday and around 3:00PM I received a message from Nokia Canada telling me I won one of the prize packs! Woohoo!

I’m TOTALLY excited about the phone. I’ve been harassing my Rogers reps for a while wondering when I can pre-order a 1020. This is just icing on the cake baby!

Once I get my hands on it, I’ll be writing an in-depth review on the device. I already have the feeling I’m going to love it!

 

Win Lumia 1020

Aug 23, 2013
 

Most of you have heard about Shaw’s announcements in the past regarding their new Fiber to the Curb or Fiber to the Premise offerings; however, for some reason there are no pictures or documented customers actually claiming to have this service.

Well, I can officially say that one of my clients now has the Fiber to the Premise offering for businesses.

This all started out with me being brought on board to provide them with Managed Services. One of the main problems we’d been having was with the current internet connection (I’m not going to mention who provides it) and how horrible the speeds and reliability were. One of my first initiatives was to see if there were any alternatives. Unfortunately, due to their location (the Foothills Industrial Area), Shaw coax was not available. I sourced out numerous other providers, and we were just about to switch to a wireless internet service provider, until I decided to call Shaw one last time a week before we pulled the trigger.

To my surprise, they mentioned they had just launched their Fiber offering for small businesses. The offering provided their basic coax internet service tiers and pricing, however it was delivered over fiber. This is EXTREMELY attractive due to the reliability and pricing! We had the option to go all the way to the Business Internet 250 package. Higher products were available, however these were way more expensive, included SLAs, and just weren’t what we needed. My client opted for the Business Internet 100 package.

This morning the Shaw guys showed up, quickly brought the fiber in to the office, mounted the equipment, and we were up and running in no time (and as always they were EXTREMELY friendly, clean, and took care in setting everything up). I love Shaw, for those of you who don’t know…

Anyways, here’s some pics! I’ll update this post in a week or two with average speeds.

Shaw Fiber Drop

The above picture is of the first device the fiber plugs in to. I don’t know its exact purpose, but I believe it provides Shaw’s coax network over the fiber line. The coax cable then went to a Shaw Home Phone cable modem for 2 phone lines. I believe the device also repeats, and provides a fiber connection to the Shaw fiber modem as pictured below.

Shaw FTTP Fiber Modem

Jul 8, 2013
 

Recently I needed to upgrade and replace my storage system which provides basic SMB dump file services, iSCSI, and NFS to my internal network and vSphere cluster. As most of you know, in the past I have traditionally created and configured my own storage systems. For the most part this has worked fantastic, especially with the NFS and iSCSI target services being provided and built in to the Linux OS (iSCSI thanks to Lio-Target).

A few reasons for the upgrade: 1) I need more storage, and 2) I need a pre-packaged product that comes with warranty. Taking care of the storage size was easy (buy more drives), however I needed to find a pre-packaged product that fits my requirements for performance, capabilities, stability, support, and of course warranty. iSCSI and NFS support was an absolute must!

Some time ago, when I first started working with Lio-Target before it was incorporated and merged into the Linux kernel, I noticed that the parent company, Rising Tide Systems, mentioned they also provided the target for numerous NAS and SAN devices available on the market, Synology being one of them. I never thought anything of this, as back then I wasn’t interested in purchasing a pre-packaged product, until my search for a new storage system.

Upon researching, I found that Synology released their 2013 line of products. These products had a focus on vSphere compatibility, performance, and redundant network connections (either through Trunking/Link aggregation, or MPIO iSCSI connections).

The device that caught my attention for my purpose was the DS1813+.

Synology DS1813+

Synology DS1813+ Specifications:

  • CPU Frequency : Dual Core 2.13GHz
  • Floating Point
  • Memory : DDR3 2GB (Expandable, up to 4GB)
  • Internal HDD/SSD : 3.5″ or 2.5″ SATA(II) X 8 (Hard drive not included)
  • Max Internal Capacity : 32TB (8 X 4TB HDD) (Capacity may vary by RAID types) (See All Supported HDD)
  • Hot Swappable HDD
  • External HDD Interface : USB 3.0 Port X 2, USB 2.0 Port X 4, eSATA Port X 2
  • Size (HxWxD) : 157 X 340 X 233 mm
  • Weight : 5.21kg
  • LAN : Gigabit X 4
  • Link Aggregation
  • Wake on LAN/WAN
  • System Fan : 120x120mm X2
  • Easy Replacement System Fan
  • Wireless Support (dongle)
  • Noise Level : 24.1 dB(A)
  • Power Recovery
  • Power Supply : 250W
  • AC Input Power Voltage : 100V to 240V AC
  • Power Frequency : 50/60 Hz, Single Phase
  • Power Consumption : 75.19W (Access); 34.12W (HDD Hibernation);
  • Operating Temperature : 5°C to 35°C (40°F to 95°F)
  • Storage Temperature : -10°C to 70°C (15°F to 155°F)
  • Relative Humidity : 5% to 95% RH
  • Maximum Operating Altitude : 6,500 feet
  • Certification : FCC Class B, CE Class B, BSMI Class B
  • Warranty : 3 Years

 

This puppy has 4 gigabit LAN ports, and 8 SATA bays. There are tons of reviews on the internet praising Synology and their DSM operating system (based on embedded Linux), so I decided to live dangerously and went ahead and placed an order for this device, along with 8 x Seagate 3TB Barracuda drives.

Unfortunately, it’s extremely difficult to get your hands on a DS1813+ in Canada (I’m not sure why). After numerous orders placed and cancelled with numerous companies, I finally found a distributor who was able to get me one. I’ll just say the wait was totally worth it. Initially I also purchased the 2GB RAM add-on as well, so I had this available when the DS1813+ arrived.

I was hoping to take a bunch of pictures and do thorough testing with the unit before throwing it into production; however, right from the get-go it was extremely easy to configure and use, so right away I had it running in production. Sorry for the lack of pics! :)

I did however get a chance to set up the 8 drives in RAID 5, and configured an iSCSI block-based target. The performance was fantastic, with no problems whatsoever. Even maxing out one gigabit connection, the resources of the unit were barely touched.

I’m VERY impressed with the DSM operating system. Everything is clearly spelled out, and you have very detailed control of the device and all services. Configuration of SMB shares, iSCSI targets, and NFS exports is extremely simple, yet allows you to configure advanced features.

After testing out the iSCSI performance, I decided to get the unit ready for production. I created 2 shared folders, and exported these via NFS to my ESXi hosts. It was very simple, quick, and the ESXi hosts had absolutely no problems connecting to the exports.
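If you’d rather mount the exports from the CLI on each host, it’s a one-liner (a sketch; the IP, export path, and datastore name below are placeholders):

    # Mount the Synology NFS export as a datastore on this ESXi host
    esxcli storage nfs add -H 192.168.1.50 -s /volume1/vmstore -v Synology-NFS01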

One thing that really blew me away about this unit is the performance. Immediately after configuring the NFS exports, mounting them, and using Storage vMotion to migrate 14 live virtual machines to the DS1813+, I noticed MASSIVE performance gains. The performance gains were so large, it put my old custom storage system to shame. And this is really interesting, considering my old storage system, while custom, is actually spec’d way higher than the Synology unit (CPU, RAM, and the SATA controller). I’m assuming the DS1813+ has numerous kernel optimizations for storage, and at the same time does not have the overhead of a full Linux distribution. This also means it’s more stable, since you don’t have tons of applications running in the background that you don’t need.

After migrating the VMs, I noticed that the virtual machines were running way faster and were way more responsive. I’m assuming this is due to increased IOPS.

Either way I’m extremely happy with the device and fully recommend it. I’ll be posting more blog articles later detailing configuration of services in detail such as iSCSI, NFS, and some other things. I’m already planning on picking up an additional DS1513+ (5 bay unit) to act as a storage server for VM backups which I perform using GhettoVCB.

Nice job Synology :)

Jun 13, 2013
 

As most of you know, I’m a huge fan of the Microsoft Surface Pro tablet. I’ve been using it since day 1 of the release and absolutely love it. This thing has become such a valuable tool in my life, if anything were to happen to it, I’d replace it in a flash.

Since I’ve had mine, I’ve had numerous clients ask about it. After demo’ing the device, most have actually gone out and pulled the trigger. They all compare it to their various old tablets, and say hands down the Surface Pro is #1.

Recently one of my clients, Larry Wellspring at Synterra Technologies Ltd. (a leading seismic consulting company located here in Calgary, Alberta, and a long-time client of mine), was thinking of purchasing one so he didn’t have to lug around his high-performance laptop. One of the most important questions he had was whether the specs of the device could handle the seismic software applications he and his business use. Since the Surface Pro is essentially a high-performing computer in the form factor of a tablet, I said chances are it would work. He went out and bought one.

For the most part, most applications worked right off the bat. However we had a few issues with Omni 3D from Gedco. The application would install fine, however we were receiving errors when launching the application:

The application was unable to start correctly (0xc0150002). Click OK to close the application.

We tried contacting Omni 3D support, however they mentioned running Omni 3D on Windows 8 was unsupported and untested, especially on a tablet. They mentioned they couldn’t recall ever getting Omni 3D to run on a tablet. Well, we wanted to make history! :)

Trying different compatibility configurations had no effect. Ultimately, I researched the error and noticed it had something to do with C++ runtimes. Although none of the posts had a solution to our problem, it at least pointed us in the right direction. I noticed we already had the 64-bit and 32-bit C++ 2010 runtimes installed (I believe a different application installed these), so first and foremost, I re-installed these. It had no effect. I then decided to try installing the C++ 2008 runtimes. In our case, we had installed the 64-bit version of Omni 3D, so I installed the 64-bit version of the Microsoft Visual C++ 2008 Runtime components available here.

After installing this, we went to open up Omni 3D and it worked!

Keep in mind that this should not only work and apply to Surface Pro tablets, but to anyone trying to install Omni 3D on Windows 8.

May 31, 2013
 

Back in February, I was approached by a company that had multiple offices. They wanted my company to come in and implement a system that allowed them to share information, share files, communicate, use their line of business applications, and be easily manageable.

The first thing that always comes to mind is Microsoft Small Business Server 2011. However, what made this environment interesting is that they had two branch offices in addition to their headquarters all in different cities. One of their branch offices had 8+ users working out of it, and one only had a couple, with their main headquarters having 5+ users.

Usually when administrators think of SBS, they think of a single-server (two servers with the premium add-on) solution that provides a small business of up to 75 users with a stable, enterprise-feature-packed IT infrastructure.

SBS 2011 Includes:

Windows Server 2008 R2 Standard

Exchange Server 2010

Microsoft SharePoint Foundation 2010

Microsoft SQL Server 2008 R2 Express

Windows Server Update Services

(And an additional Server 2008 R2 license with Microsoft SQL Server 2008 R2 Standard if the premium add-on is purchased)

 

Essentially this is all a small business typically needs, even if they have powerful line of business applications.

One misconception about Windows Small Business Server is the limitation of having a single domain controller. IT professionals often think that you cannot have any more domain controllers in an SBS environment. This actually isn’t true. SBS does allow multiple domain controllers, as long as there is a single forest, and not multiple domains. You can have a backup domain controller, and you can have multiple RODCs (Read Only Domain Controller), as long as the primary Active Directory roles stay with the SBS primary domain controller. You can have as many global catalogs as you’d like! As long as you pay for the proper licenses of all the additional servers :)

This is where this came in handy. While I’ve known about this for some time, this was the first time I was attempting to put something like this into production.

 

The plan was to setup SBS 2011 Premium at the HQ, along with a second server at the HQ hosting their SQL, line of business applications, and Remote Desktop Services (formerly Terminal Services) applications. Their HQ would be sitting behind an Astaro Security Gateway 220 (Sophos UTM).

The SBS 2011 Premium (2 Servers) setup at the HQ office will provide:

-Active Directory services

-DHCP and DNS Services

-Printing and file services (to the HQ and all branch offices)

-Microsoft Exchange

-“My Documents” and “Desktop” redirection for client computers/users

-SQL DB services for LoBs

-Remote Desktop Services (Terminal Services) to push applications out in to the field

 

The first branch office will have a Windows Server 2008 R2 server, promoted to a Read Only Domain Controller (RODC), sitting behind an Astaro Security Gateway 110. The Astaro Security Gateways will establish a site-to-site branch VPN between the two offices and route the appropriate subnets. At the first branch office, there are issues with connectivity (they’re in the middle of nowhere), so they will have two internet connections from two separate ISPs (one line-of-sight long-range wireless backhaul, and one basic ADSL connection), for which the ASG 110 will provide load balancing and fault tolerance.

The RODC at the first branch office will provide:

-Active Directory services for (cached) user logon and authentication

-Printing and file services (for both HQ and branch offices)

-DHCP and DNS services

-”My Documents” and “Desktop” redirection for client computers/users.

-WSUS replica server (replicates approvals and updates from WSUS on the SBS server at the main office).

-Exchange access (via the VPN connection)

Users at the first branch office will be accessing file shares located both on their local RODC and on the HQ server in Calgary. The main wireless backhaul has more than enough bandwidth to support SMB shares over the VPN connection. After testing, it turns out the backup ADSL connection also handles this fairly well for the types of files they will be accessing.

 

The second branch office will have an Astaro RED device (Remote Ethernet Device). The Astaro/Sophos RED devices act as a remote Ethernet port for your Astaro Security Gateways. Once configured, it’s as if the ASG at the HQ has an Ethernet cable running to the branch office. It’s similar to a VPN, however (I could be wrong) I think it uses EoIP (Ethernet over IP). The second branch doesn’t require a domain controller due to the small number of users. As far as this branch office goes, this is the last we’ll talk about it, as there’s no special configuration required for these guys.

The second branch office will have the following services:

-DHCP (via the ASG 220 in Calgary)

-DNS (via the main HQ SBS server)

-File and print services (via the HQ SBS server and other branch server)

-“My Documents” and “Desktop” redirection (over the WAN via the HQ SBS server)

-Exchange access (via the Astaro RED device)

 

For all the servers, we chose HP hardware as always! The main SBS server, along with the RODC, were brand new HP ProLiant ML350p Gen8s. The second server at the HQ (running the premium add-on) is a re-purposed HP ML110 G7. I always configure iLO on all servers (especially remote servers) just so I can troubleshoot issues in the event of an emergency if the OS is down.

 

So now that we’ve gone through the plan, I’ll explain how this was all implemented.

  1. Configure and setup a typical SBS 2011 environment. I’m going to assume you already know how to do this. You’ll need to install the OS. Run through the SBS configuration wizards, enable all the proper firewall rules, configure users, install applicable server applications, etc…
  2. Configure the premium add-on. Install the Remote Desktop Services role (please note that you’ll need to purchase RDS CALs, as they aren’t included with SBS). You can skip this step if you don’t plan on using RDS or the premium server at the main site.
  3. Configure all the Astaro devices. Configure a Router to Router VPN connection. Create the applicable firewall rules to allow traffic. You probably know this, but make sure both networks have their own subnet and are routing the separate subnets properly.
  4. Install Windows Server 2008 R2 on to the target RODC box. (Please note: in my case, I had to purchase an additional Server 2008 license since I was already using the premium add-on at the HQ site. If you purchase the premium add-on but aren’t using it at your main office, you can use this license at the remote site.)
  5. Make sure the VPN is working and the servers can communicate with each other.
  6. Promote the target RODC to a read only domain controller. You can launch the famous dcpromo. Make sure you check the “Read Only domain controller” option when you promote the server (if you’d rather script the promotion, see the unattended sketch after this list).
  7. You now have a working environment.
  8. Join computers using the SBS connect wizard. (DO NOT LOG ON AS THE REMOTE USERS UNTIL YOU READ THIS ENTIRE DOCUMENT)
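For step 6, if you’d rather script the promotion than click through dcpromo, an unattended answer file along these lines does the trick (a sketch; the domain, site, account, and password values are all placeholders):

    ; rodc.txt -- run with: dcpromo /unattend:rodc.txt
    [DCInstall]
    ReplicaOrNewDomain=ReadOnlyReplica
    ReplicaDomainDNSName=example.local
    SiteName=Branch01
    InstallDNS=Yes
    ConfirmGc=Yes
    UserDomain=example.local
    UserName=EXAMPLE\administrator
    Password=*
    SafeModeAdminPassword=YourDSRMPassword
    RebootOnCompletion=Yes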

I did all the above steps at my office and configured the servers before deploying them at the client site.

You essentially have a basic working network. Now to get to the tricky stuff! The tricky stuff is enabling folder redirection at the branch site to their own server (instead of the SBS server), and getting them their own WSUS replica server.

 

Now to the fancy stuff!

1. Installing WSUS on the RODC using the add role feature in Windows Server: You have to remember that RODCs are exactly what they say: READ ONLY (as far as Active Directory goes)! Installing WSUS on an RODC will fail off the bat. It will report that access is denied when trying to create certain security groups. You’ll have to manually create these two groups in Active Directory on your primary SBS server to get it to work:

  • SQLServer2005MSFTEUser$RODCSERVERNAME$Microsoft##SSEE
  • SQLServer2005MSSQLUser$RODCSERVERNAME$Microsoft##SSEE

Replace RODCSERVERNAME with the computer name of your RODC server. You’ll actually notice that two similar groups already exist (with a different server name); these existing groups belong to the main WSUS server from the Windows SBS WSUS install. After creating these groups, the install will succeed. After this is complete, follow through the WSUS configuration wizard to configure it as a replica of your primary SBS WSUS server.
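If you’d rather create them from the command line than through Active Directory Users and Computers, dsadd works; a sketch (RODC01 and the DN suffix are placeholders, and I’m assuming domain local security groups to match the existing pair):

    dsadd group "CN=SQLServer2005MSFTEUser$RODC01$Microsoft##SSEE,CN=Users,DC=example,DC=local" -secgrp yes -scope l
    dsadd group "CN=SQLServer2005MSSQLUser$RODC01$Microsoft##SSEE,CN=Users,DC=example,DC=local" -secgrp yes -scope l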

2. One BIG thing to keep in mind is that with RODCs you need to configure which accounts (both user and computer) are allowed to be “cached”. Cached credentials allow the RODC to authenticate computers and users in the event the primary domain controller is down. If you do not configure this, then if the internet goes down or the primary domain controller isn’t available, no one will be able to log in to their computers or access network resources at the branch site. When you promoted the server to an RODC, two groups were created in Active Directory: “Allowed RODC Password Replication Group” and “Denied RODC Password Replication Group”. You can’t just select and add users to these groups; you also need to add the computers they use, since computers have their own “computer account” in Active Directory.

To overcome this, create two new security groups. One group will be for users of the branch office, the other for computers of the branch office. Make sure to add the applicable users and computers as members of these security groups. Now go to the “Allowed RODC Password Replication Group” and add those two new security groups to it. This will allow remote users and remote computers to authenticate using cached credentials. PLEASE NOTE: DO NOT CACHE YOUR ADMINISTRATIVE ACCOUNT!!! Instead, create a separate administrative account for that remote office and cache that.
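From the command line, the group plumbing looks roughly like this (a sketch; the group names and DNs are hypothetical, the SBS-style OU paths doubly so):

    rem Create the two branch security groups
    dsadd group "CN=Branch01 Cached Users,OU=SBSUsers,OU=Users,OU=MyBusiness,DC=example,DC=local" -secgrp yes -scope g
    dsadd group "CN=Branch01 Cached Computers,OU=SBSComputers,OU=Computers,OU=MyBusiness,DC=example,DC=local" -secgrp yes -scope g
    rem Nest them in the allowed replication group so their passwords get cached on the RODC
    dsmod group "CN=Allowed RODC Password Replication Group,CN=Users,DC=example,DC=local" -addmbr "CN=Branch01 Cached Users,OU=SBSUsers,OU=Users,OU=MyBusiness,DC=example,DC=local" "CN=Branch01 Cached Computers,OU=SBSComputers,OU=Computers,OU=MyBusiness,DC=example,DC=local"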

3. One of the sweet things about SBS is all the pre-configured Group Policy objects that enable the automatic configuration of the WSUS server, folder redirection, and a bunch of other great stuff. Keep in mind that with the configuration above left as-is, the computers in the branch office will use the folder redirection settings and WSUS settings from the main office. Remote users’ folder redirection locations (whatever you have selected; in my case My Documents and Desktop) will be stored on the main HQ server. If you’re alright with this and not concerned about the size of the user folders, you can leave it. What I needed to do (for simple disaster recovery purposes) was have the branch office users’ redirected folders stored on their own local branch server. We also need the branch computers to connect to the local branch WSUS server (we don’t want each computer pulling updates over the VPN connection, as this would use up tons of bandwidth). What’s really neat is that when users open applications via RemoteApp (over RDS) and export files to their desktop inside of RemoteApp, those files are immediately available on their computer desktop, since the RDS server uses these same GPOs.

To do this, we’ll need to duplicate and modify a couple of the default GPOs, and also create some OU (Organizational Unit) containers inside of Active Directory so we can apply the new GPOs to them.

First, under “SBSComputers” create an OU called “Branch01Comps” (or call it whatever you want). Then under “SBSUsers” create an OU called “Branch01Users”. Now keep in mind you want to have this fully configured before any users log on for the first time. All of this configuration should be done AFTER the computer is joined (using the SBS connect) to the domain and AFTER the users are configured, but BEFORE the user logs in for the first time. Move the branch office computer accounts to the new Branch office computers OU, and move the Branch office user accounts to the Branch office users OU.

Now open up the Group Policy Management Console. You want to duplicate 2 GPOs: “Update Services Common Settings Policy” (rename the duplicate to “Branch Update Services Common Settings Policy” or something), and “Small Business Server Folder Redirection Policy” (rename the duplicate to “Branch Folder Redirection” or something).

Link the new duplicated Update Services policy to the branch computers OU we just created, and link the new duplicated folder redirection policy to the branch users OU we just created.

Modify the duplicated server update policy to reflect the address of the new branch WSUS replica server. Computers at the branch office will now pull updates from that server.
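Under the hood, the only meaningful change in the duplicated GPO is the update server URL. Once the policy applies, you can sanity-check it on a branch client (branchdc.example.local and port 8530 are assumptions; check which port your WSUS replica actually listens on):

    reg query "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate" /v WUServer
    rem Expected output once the branch GPO applies:
    rem     WUServer    REG_SZ    http://branchdc.example.local:8530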

As for folder redirection, it’s a bit tricky. You’ll need to create a share (with full share access to all users), and then set special file permissions on the folder that you shared (info available at http://technet.microsoft.com/en-us/library/cc736916%28v=ws.10%29.aspx). On top of that, you’ll need a way to actually create the per-user folders under that share. I did this by going into Active Directory, opening each remote user, and setting their profile variable to the file share. When I hit Apply, this would create a folder with their username (with the applicable permissions) under that share; after this was done, I would undo the variable setting, and the directory created would stay. Repeat this for each remote user at that specific branch office. You’ll also need to do this each time they bring on new staff, and you’ll need to add all new computers and users to the appropriate OUs and security groups we created above.
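A sketch of the share and NTFS permissions for that redirection root (the path, share name, and rights follow the general guidance in the TechNet article linked above; adjust to taste):

    rem Create the redirection root and share it out
    md D:\BranchRedirects
    net share Redirects$=D:\BranchRedirects /grant:everyone,FULL
    rem Strip inherited ACEs, then let users create only their own folders
    icacls D:\BranchRedirects /inheritance:r
    icacls D:\BranchRedirects /grant:r "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F"
    icacls D:\BranchRedirects /grant "Authenticated Users:(RD,AD,X,RA)"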

FINALLY, you can now go into the GPO you duplicated for branch folder redirection. Modify the GPO to reflect the new storage path for the redirection objects you want (just a matter of changing the server name).

4. Configure Active Directory Sites and Services. You’ll need to go into Active Directory Sites and Services and configure sites for each subnet you have (your main HQ subnet, branch 1 subnet, and branch 2 subnet), and assign the applicable domain controller to those sites. In my case, I created 3 sites, configured the HQ subnet and second branch to authenticate against the main SBS PDC, and configured the first branch (with their own RODC) to authenticate against their own RODC. Essentially, this tells the computers which domain controller they should be authenticating against.

 

And you’re done! (I don’t think I’ve forgotten anything.) A few things to remember: whenever adding new users and/or computers to the branch, ALWAYS join using the SBS wizard, add the computer to the branch OU, add the user to the branch OU, create the user’s master redirection folder using the profile variable in the AD user object, and separately add both the user and computer accounts as members of the security groups we created for caching credentials.

And remember, always always always test your configuration before throwing it out into production. In my case, I got it running first try without any problems, but I let it run as a test environment for over a month before deploying to production!

 

We’ve had this environment running for months now and it’s working great. What’s even cooler is how well the Astaro Security Gateway (Sophos UTM) is handling the multiple WAN connections during failures; it’s super slick!

Mar 15, 2013
 

Well, I woke up 10 minutes ago to find that my Nokia Lumia 900 had notified me that new Windows Phone updates were available. The notification is for Windows Phone OS version 7.10.8860.142 on my Canadian Rogers-branded Nokia Lumia 900.

7.10.8860.142
I’ve tried using Google to find out what the update includes, however information is limited. After installing this update, another new update was also available and automatically started to install: OS version 7.10.8862.144. Right now I’m just finishing up the second.

Windows Phone 7.10.8862.144

You’ll notice how the Cancel button is usable on the first update, while it’s not on the second. I’d bet money on the fact that the second update is actually a firmware update rather than a software update (or maybe both).

I’m thinking one of these updates contains bug fixes for the live tiles and other fixes, while the other may fix the Bluetooth Sharing app. Let me know if any of you notice any additional fixes/features. Happy Updating! I’ll update when I find things out and finish the updates.

UPDATE: Just finished installing both updates. Bluetooth sharing still does not work (it says I have to do an update on the phone, however no more updates are available). Can’t confirm whether this fixes the Live Tiles “issue” (I’ve never had the issue, so I can’t comment).