Hosting and Your House

[sc:general-category ]As some may be aware, every four years in Ontario the government sends out a new assessment of the value of our homes, which our property taxes are then based on.  I recently received mine and was reviewing it when I started to think about my options, as I had been considering buying a new house.

While this was rattling around in my head, I was also having a conversation with a colleague about where to host a WordPress installation.  The two collided in a sudden flash of insight, and the following metaphor was born:

Ownership

For the longest time the only option you had for where you placed your systems was in your own datacenter.  Sometimes this was the classic “under the desk” hosting, but in all cases, you owned and controlled your systems.

This is just like owning a house.  It’s yours to do with as you like (barring certain bylaws and homeowner associations, of course 😉).

In many ways I think we now undervalue this in IT.  Owning a home is still pretty much everyone’s dream, but these days in IT the trend is the exact opposite.

Co-habitation

Once the IT industry matured a little, it turned out there was money to be made in renting out extra space in a datacenter to others.  This business model eventually morphed into whole companies that just rented out space and didn’t really have any significant footprint in their own datacenters.

This still makes sense in many cases, as your business may not be close to a major internet trunk, and placing servers closer to users is always a good idea.

This is kind of like buying a condo.  You own your own servers, but you pay monthly maintenance fees and the condo association limits what you can do.

Renting

Shared hosting really took off when virtualization became a big thing.  Before that you could still rent a server from a provider, but it was expensive, and if you needed console access it was difficult.  Virtualization changed all that: vendors could load up big servers with lots of VMs and give users direct access to pretty much the entire system.

This is like renting an apartment.  You don’t own anything and if you don’t pay your rent you get kicked out on the street!

Communal Living

The final push these days is beyond shared hosting and into the cloud.  Don’t even rent servers; just consume services provided by others that you have no control over.  This is the ultimate extension of the move away from ownership in IT.

It’s like living at the YMCA: you don’t get any privacy or any say in how things are run.  If you don’t like it, leave.


The “Best” Online Provider

[sc:internet-category ]A friend the other day asked me what I considered the “best” online provider.  We’re not talking about an ISP, but instead a web-based company that provides the services you use every day.  This question is really several questions rolled up into one, so let’s break it down, shall we?

First, what are the qualifiers for best?

  • Has multiple service offerings (single-service sites like Twitter and Facebook don’t count)
  • Owns the components it is offering (sorry Yahoo)
  • Quality of services
  • Privacy policies
  • Transparency
  • Uptime
  • Company Foundations

Let’s tackle them one at a time…

Multiple Service Offerings

Any online provider these days gives you basic e-mail, but what about the other things you want: search, storage space, maps, maybe some office-style applications?

The two big names here are Microsoft and Google.  I’m not that familiar with Apple’s offerings, but they too have some of these covered.  Services like Yahoo fit in here too, as their sheer breadth of services boggles the mind.

Microsoft has come on strong in this space in the last little while, and Google has made some “interesting” choices (like their recent move to drop IE8 support).  I’d have to call it a tie between the two these days; they both offer a lot of services for free (half a point each).

Owns the components it is offering

This category is here mostly to trim the field from the above.  At the end of the day, Microsoft and Google invest huge amounts into their services and build them from the ground up.  Many of the competing providers just can’t do that.  I’ll give Apple an honorable mention here, as they do invest in their own services, sometimes to the detriment of their users 😉

I’m going to give this one to Google (one point for Google); Microsoft partners with others, like Nokia, to deliver some of its services, but it is very close either way.

At this point I’m only going to focus on the two companies that really made it through the first two rounds, Microsoft and Google.

Quality of Services

Both MS and Google take service quality seriously, and as little as a month ago I would have handed this to Google just because they keep moving their products forward much faster than Microsoft.  However, with the introduction of Outlook.com and a series of updates to most of their online properties, I’m going to call this one a tie instead (half a point each).

Privacy Policies

And here’s where it gets interesting: Microsoft and Google have very different opinions about your privacy.  Google looks at users as a source of revenue and uses any and all information it has about you to sell ads.  It’s Google’s Achilles heel; they have only one source of revenue, and it is their taskmaster.  The recent consolidation of all their privacy policies across all their services shows how focused they are on this aspect of their business.

While Microsoft isn’t a saint, in comparison they look pretty good.  So I’m giving this one to Microsoft (one point for MS).  In the end it’s the lesser evil.

Transparency

See above; Google is crippled in this area in the same way.  To protect its revenue stream, it is very secretive about just about everything.

Microsoft has always played it close to the vest, but they have nothing on Google in this area.

This one goes to MS as well (one point for MS).

Uptime

Doing a quick search, I didn’t find anyone tracking uptime for the major providers, but I think it’s safe to say that both have pretty good uptime numbers.  On a personal note, I seem to hear about more Google outages than Microsoft ones, but that isn’t definitive.

I’m going with a tie on this one (half a point each).

Company Foundations

This is kind of a catch-all.  Both companies are massive enterprises that make boatloads of money each year, but there is a difference between the two.  Microsoft has a diverse portfolio of products and services that make them money: Windows, Office, Exchange, SQL, Xbox, etc. are all big moneymakers, and if any one of them failed, Microsoft would continue to exist.

Google, on the other hand, is a one-trick pony; virtually all of their profits come from their advertising business.  If that were ever to falter, the rest could not continue.

This one goes to Microsoft (one point for MS).

Summary

And the tally is… Microsoft: 4.5, Google: 2.5

There you go: for me at least, Microsoft is the best online provider.  But of course that’s only my opinion 😉


A Week of Upgrades [4/4]: Hyper-V Backup Script

[sc:windows-category ]The first three parts of this series were all about updating the software I run at home; this last part is instead about a backup script I modified for the VM servers.

So first of all, let me walk you through the manual method I have been using to make backups of the Hyper-V VMs:

  1. Connect a USB drive to my workstation
  2. Mount a TrueCrypt volume on the USB drive and share it from the workstation
  3. Connect to the Hyper-V server through RDP
  4. Shut down the VM (either through Hyper-V Manager or, for the Linux boxes, through a console logon)
  5. Open an Explorer window and copy the VHD to a temporary directory on the server
  6. Restart the VM
  7. Copy the backup VHD across the network using Explorer
  8. Rinse, wash, repeat for each VM

This is not a fast process, and it has multiple steps that can take a long time to complete (mostly the copies), which makes it prone to taking longer than it should, as I start working on something else and forget to check the progress.

After poking around the net a bit I found this article with a script that backs up a VM through PowerShell.  It’s not too bad, but it does have a few things that needed changing for my own use:

First off, the script is in two parts: a calling script (backup.ps1) and the core script that actually does the backup (function.ps1).  I modified backup.ps1 in two ways:

$script_dir = "D:\VMBackup"

## Date stamp the backups so each run lands in its own directory.
$bdate = get-date -format "MMM dd yyyy"

$guest = "VM01"
$dest_server = "\\desktop\k\$guest\$bdate"
. "$script_dir\function.ps1"

$guest = "VM02"
$dest_server = "\\desktop\k\$guest\$bdate"
. "$script_dir\function.ps1"

The first change is to set $bdate using the get-date cmdlet so I don’t have to manually edit the script each time I want to run it.

The second change is to move setting the $dest_server variables from function.ps1 to backup.ps1 so each VM can be stored in a different directory.

Function.ps1 had some more significant changes:

##
## Create a backup of the VM defined in the variable $guest
##

write-host "Backing up $guest..."
$temp_dir = "d:\VMBackup"
$VM_Service = get-wmiobject -namespace root\virtualization Msvm_VirtualSystemManagementService

$VM = gwmi -namespace root\virtualization -query "select * from msvm_computersystem where elementname='$guest'"

$VMReturnState = $VM.EnabledState
$VMName = $VM.ElementName

## If the VM is running (2), paused (32768) or starting (32770), save its state.
if (($VM.EnabledState -eq 2) -or ($VM.EnabledState -eq 32768) -or ($VM.EnabledState -eq 32770))
{
    write-host "Saving the state of $VMName"
    $VM.RequestStateChange(32769)
}

## Wait until the VM reports saved (32769) or off (3).
while (!($VM.EnabledState -eq 32769) -and !($VM.EnabledState -eq 3))
{
    Start-Sleep(1)
    $VM = get-wmiobject -namespace root\virtualization -Query "Select * From Msvm_ComputerSystem Where ElementName='$VMName'"
}

## Clear out any previous local export.
if ([IO.Directory]::Exists("$temp_dir\$VMName"))
{
    [IO.Directory]::Delete("$temp_dir\$VMName", $True)
}

write-host "Exporting the VM..."
$status = $VM_Service.ExportVirtualSystem($VM.__PATH, $True, "$temp_dir")
if ($status.ReturnValue -eq 4096)
{
    ## 4096 means the export is running as a job; poll it and show a progress bar.
    $job = [Wmi]$status.Job
    while (!($job.PercentComplete -eq 100) -and ($job.ErrorCode -eq 0))
    {
        Start-Sleep(1)
        $job = [Wmi]$status.Job

        $i = $job.PercentComplete
        write-progress -activity "Export in progress" -status "$i% Complete:" -percentcomplete $i
    }
}

## Re-start the VM as soon as the export completes.
write-host "Restarting $guest..."
$VM.RequestStateChange($VMReturnState)

## Remove any old backup that exists at the destination.
if ([IO.Directory]::Exists("$dest_server\$VMName"))
{
    write-host "Found existing backup at destination, deleting it..."
    [IO.Directory]::Delete("$dest_server\$VMName", $True)
}

write-host "Removing backup VHDs from the export..."
remove-item "$temp_dir\$VMName\Virtual Hard Disks\* - Backup Disk.vhd"

write-host "Copying backup to destination..."
Copy-Item "$temp_dir\$VMName" "$dest_server" -recurse

write-host "Deleting local copy..."
[IO.Directory]::Delete("$temp_dir\$VMName", $True)

write-host "Backup of $VMName complete!"

First off, I replaced the echo commands with write-host.  The original intent of this was to add the -NoNewLine option in the export loop so that, instead of printing the percentages one per line, I could string them together with some “…” between them.

Of course, doing some additional poking around, I found the write-progress cmdlet, which gives a nice progress bar instead, so I used that and reduced the sleep time from 5 seconds to 1 second.

In the original article they do mention moving the VM restart to right after the export, and I agree this is a good idea; there’s no sense keeping the VM down while waiting for the network copy to finish.

I added a few extra status messages just to make sure things were progressing.  The next item I changed was the last if/else statement in the original script.  It was there to handle existing backups: if one existed, it renamed the old folder, did the copy across the network, and then deleted the old folder.  However, at no point did it do any error checking, so it would never exit early, effectively making the rename redundant.  Instead I simply check for the existence of an old backup and, if found, delete it.  The rest of the code was common after that, so I removed the else.

I also renamed the old $dest variable to $temp_dir to make it clear it was only a temporary export location.

Another item I changed was the destination location.  My external USB drive’s directory structure is set up as \HOSTNAME\DATE\FILES, so I can keep multiple backups of the same server.  The original code made that \HOSTNAME\DATE\HOSTNAME\FILES, so I removed the second, redundant hostname entry from the script logic.

The final item I added was the deletion of the “backup” VHD from the export.  On the Windows server VMs I have at least two VHDs attached: one for the OS and one for Windows Server Backup to use.  This second VHD is always called “SERVER - Backup Disk.vhd”, and as I’m pulling a “clean” copy of the OS VHD I don’t really need the Windows Server Backup volume, so I delete it before moving the export to the USB drive.

This script is pretty good, and now my process for backing up the VMs is much simpler:

  • Connect a USB drive to my workstation
  • Mount a TrueCrypt volume on the USB drive and share it from the workstation
  • Connect to the Hyper-V server through RDP
  • Execute d:\VMBackup\backup.ps1

This process is fully automated and I can let it run overnight.  Better still, it only keeps the VMs down for a short while during the export and then brings them back up as soon as possible.

The only issue I have with it is the copy of the export to the USB drive: copy-item provides NO feedback; it’s just a black hole until the copy is complete.  For large files like the VHDs this really isn’t a very good thing.  In the manual method, the copies provided feedback through the standard Windows Explorer copy dialog.

I believe I have a solution to this (sketched below); however, it only handles one file at a time and doesn’t recurse through the export directory, so it will take a bit more work to build the recursion into it.  As I’ve now finished all of the backups for this round, it will probably wait a few more weeks until the next time I run the VM backups.
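
Here’s the rough shape of what I have in mind, as a minimal sketch: copy a single file through .NET streams so write-progress can be updated as each chunk lands.  Copy-FileWithProgress is a hypothetical name, the 4MB buffer is arbitrary and it assumes a non-empty source file; the missing piece is the recursive walk of the export directory.

##
## Hypothetical sketch: copy one file while showing progress.
## The function name and buffer size are placeholders of my own,
## and a zero-length source file would need a guard.
##
function Copy-FileWithProgress($source, $destination)
{
    $src = [IO.File]::OpenRead($source)
    $dst = [IO.File]::Create($destination)
    $buffer = New-Object byte[] (4MB)

    $total = 0
    while (($read = $src.Read($buffer, 0, $buffer.Length)) -gt 0)
    {
        $dst.Write($buffer, 0, $read)
        $total += $read

        $i = [int](($total / $src.Length) * 100)
        write-progress -activity "Copying $source" -status "$i% Complete:" -percentcomplete $i
    }

    $src.Close()
    $dst.Close()
}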

A Week of Upgrades [3/4]: OpenVPN AS Upgrade

[sc:linux-category ]The previous two articles in this series focused on upgrading to Windows Server 2012; this one is a short little romp through upgrading my OpenVPN install to the latest release.

OpenVPN doesn’t update all that often, and since my last update there was only a 0.0.1 version difference, but as I was in upgrade mode anyway, away I went.

The first node in my OpenVPN cluster upgraded without issue; it’s just a simple matter of logging in as the root user and executing two commands:

wget http://swupdate.openvpn.net/as/openvpn-as-1.8.4-Ubuntu8.i386.deb
dpkg -i openvpn-as-1.8.4-Ubuntu8.i386.deb

An easy enough process on the first node, but when I logged on to the second node, the root file system was not mounted and would not mount.  I decided to force an fsck of the root file system on the next boot by executing the following:

touch /forcefsck

A reboot brought the system partition back online, but it was completely full.  Looking around to see what had happened, I found the OpenVPN logs had simply grown too large; cleaning some of them up freed enough space to get things running again.

After that, running the upgrade commands above worked without further issue and all was good.

A Week of Upgrades [2/4]: Windows Server 2012 Domain Controller, DFS and SQL Upgrades

[sc:windows-category ]Yesterday I upgraded my Hyper-V servers, and now it’s time to do the domain controllers, DFS servers and SQL server.

As always, backing up the systems first is a must, and as my DCs are all VMs, I simply shut down the systems and copied the VHDs to a safe location.
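
On a Server 2012 host the same job can be scripted with the Hyper-V PowerShell module; a rough sketch, where the VM names and the D:\VMBackup destination are placeholders for my own:

## Shut down each DC, copy its VHDs to a safe location, then bring it back up.
## VM names and the destination path are placeholders; assumes the Hyper-V
## module that ships with Server 2012.
foreach ($guest in "DC01", "DC02")
{
    Stop-VM -Name $guest

    $dest = "D:\VMBackup\$guest"
    New-Item -ItemType Directory -Path $dest -Force | Out-Null
    Get-VMHardDiskDrive -VMName $guest | ForEach-Object { Copy-Item $_.Path $dest }

    Start-VM -Name $guest
}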

After that, the next step was to run ADPREP; as always, getting the forest up to date before installing the first 2012 DC was simple enough, and no issues were reported when running ADPREP.
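
For the record, the commands look something like this, run from the \support\adprep folder on the Server 2012 installation media (with forestprep run against the schema master):

## Run from the \support\adprep folder on the Server 2012 media.
.\adprep.exe /forestprep
.\adprep.exe /domainprep
.\adprep.exe /domainprep /gpprep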

As with the Hyper-V servers, I had to uninstall my anti-virus software.

As the DCs are pretty clean installs, I decided to do an in-place upgrade to Server 2012, and after running setup a few issues were identified:

  1. Lack of disk space
  2. The virtual graphics driver

The disk space issue was kind of surprising; admittedly I only gave the virtual systems a 32 GB system partition, but it did have over 8 GB of free space, while setup wants 13 GB before it will continue.  I shut down the VM and used the disk wizard in Hyper-V to expand the virtual disk to 48 GB, then, after rebooting the server, used Disk Management to extend the system partition to fill the entire 48 GB.
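
The same expansion can be scripted rather than done through the wizards; a quick sketch, assuming a 2012 host, a guest recent enough to have the Storage cmdlets, and with the VHD path swapped for a placeholder:

## On the host, with the VM shut down: grow the virtual disk to 48 GB.
## The VHD path is a placeholder for my own.
Resize-VHD -Path "D:\VMs\DC01\DC01.vhd" -SizeBytes 48GB

## Inside the guest, afterwards: extend the system partition to fill the disk.
$max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
Resize-Partition -DriveLetter C -Size $max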

To resolve the virtual graphics driver warning, I assumed it would just be a matter of updating to the new virtual machine extensions from Hyper-V 2012.  A quick reboot and a rerun of setup, though, brought me back to needing another reboot.  After that, the warning was still present, but I proceeded anyway.

The upgrade proceeded pretty much as with the Hyper-V servers, once again sitting on the “Getting Ready…” screen for quite a while before eventually moving on.  As I was doing the upgrade late at night, I simply let it run overnight, and in the morning I was greeted with a completed install.

I didn’t run into any issues with the video driver; everything was fine after logging in, so it seems safe to ignore the warning.

The second domain controller proceeded in the same way and completed without issues as well.

The final step was to move the forest and domain up to the 2012 functional level, which completed without issue.
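
For reference, this can be done with the Active Directory PowerShell module as well as through the GUI; a minimal sketch, with corp.example.com standing in for my actual domain:

## Raise the domain and then the forest functional level to 2012.
## "corp.example.com" is a placeholder for my real domain name.
Import-Module ActiveDirectory
Set-ADDomainMode -Identity "corp.example.com" -DomainMode Windows2012Domain
Set-ADForestMode -Identity "corp.example.com" -ForestMode Windows2012Forest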

Next up were the DFS file servers and the MySQL server.  These are pretty straightforward and are all VMs; other than extending the disk space, the upgrades proceeded without issue.  After the upgrades to the DFS servers I did have to reboot my desktop, as connecting to any of the shares on the servers came up with an access error.  After the workstation reboot, all was fine again though.

One item of note is licensing: I installed Windows Server 2012 Datacenter edition on the Hyper-V servers, and it comes with unlimited virtual instance licensing.  Of course, this means you have to install the same edition of Windows on the VMs as on the physical host.  The one issue I can see with this is that you have to use the same activation key as on the physical host, which means moving VMs between host servers requires an update to the activation key on the VMs.

The only minor glitch after the upgrade came up on two of these systems: when installation was complete and I logged on for the first time, a light brown theme was selected instead of the light blue one.  There doesn’t seem to be anything in common between the systems (one was a DFS server, the other the SQL server).

Overall, a very easy upgrade; now on to other things.