The HD7 case hunt (Part 2)

[sc:mobile-category ]Since I purchased my HD7, finding a case that fits the beast has been a real challenge.

My carrier (Bell) has a vertical holster listed, but it’s not nearly big enough to hold the HD7.  Throughout the colder months since I purchased it, I’ve mostly just kept it in my jacket pocket and used the OtterBox case, which is quite nice.

But now that summer is getting close and I’ve actually gone out a few days without a jacket, the HD7 has become a problem to carry.  I haven’t found anywhere in Canada to get a case, so I ordered one from the US, from a company called PDair, which you can see here.

The case is actually quite beautiful: high quality leather, nice distinctive stitching and very well made.

There are only two issues I have with it, one of which I knew about before I ordered it and the other that didn’t present itself until I had unboxed it.

The first issue is the open top design.  There is no flap and clasp to keep the phone in the case if you bend over or remove the case from your belt.  I knew this going in, and honestly I believe it’s more of an unsubstantiated fear than a real problem, but it seems to always be in the back of my mind, which makes it seem much worse than it is.

The second issue is the belt clip:

PDair HD7 Case

It’s only about 5 cm long, which means unless you’re wearing one of those skinny dress belts, it doesn’t clip over the belt but instead rides on the belt itself.  This makes it hard to adjust, and likewise makes it feel like it isn’t securely fastened to your belt.  There were no photos of the clip anywhere on the site or on other vendor sites, so this came as somewhat of a surprise when I received the case.

You might notice that the belt clip is slightly askew.  This is a manufacturing defect, but it doesn’t affect the case’s function in any way, so I don’t consider it a problem.

One additional note: when I received mine, the clip was so tight that I couldn’t get it over the belt at all; I literally had to pry it away from the case and bend it back to loosen it up.  This is of course not a bad thing, as it does show the build quality of the case.

Overall I like the case quite a bit; however, I’ve ordered a second case with a flap and clasp (not from PDair, as they don’t have one for the HD7) with what I hope will be a much larger belt clip.

I’d recommend this case as long as you can live with the two caveats above, which I’m thinking I can’t 😉.

Windows Server Backup and Migrating VMs between Hypervisors

[sc:windows-category ]With the migration from VMware Server 2 to Hyper-V, a slight change had to be made to the configuration of the VMs’ disks; namely, VMware by default uses SCSI disks, while Hyper-V requires IDE disks to boot from.

This isn’t much of a problem during the conversion process, as Windows 2008 has all the drivers it needs to boot from either SCSI or IDE.  Hyper-V supports SCSI disks for data storage, but since I was moving the boot disks to IDE, I moved my data disks to IDE as well.

On most of my VMs this was not an issue, as I had a single boot volume and a second disk for Windows Server Backup to use for storage.  The exception was my Exchange server, which had two data disks (OS and Exchange DB) as well as the backup volume.

When I tried to attach the Exchange backup volume to the second IDE controller, Hyper-V complained it was already in use (which it wasn’t), so I added the VHD to the SCSI controller instead.

After finishing the conversion of the Exchange VM and letting it run for a couple of days, I checked the backups, which were running fine.  I didn’t bother checking the backups on the other VMs until this week.  On the rest of the servers, backups were failing with an “Unknown media type” error.

Windows Server Backup could see the backup volume and the associated backups but couldn’t add new backups to it.

The solution was pretty straightforward: simply going into Windows Server Backup and modifying the existing backup schedule, without actually changing any values, rewrote the backup config with the new IDE-based backup volume.  Backups then proceeded as expected without error.
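
For the command line inclined, the same rewrite should be doable with wbadmin from an elevated prompt.  This is only a sketch; the E: target and the 21:00 schedule are assumptions, so match them to your existing job before running it:

    REM Confirm Windows Server Backup can still see the existing backups
    wbadmin get versions

    REM Re-create the scheduled job so the target volume gets re-registered
    wbadmin enable backup -addtarget:E: -allCritical -schedule:21:00 -quiet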

Building a new VM server Part 3: Moving the VMs and Conclusion

[sc:windows-category ]This is part 3; read here for part 1 and here for part 2.

Updates from part 2

Since part 2, I’ve been playing with the air flow through the case to see if I can improve cooling efficiency.  In the case’s stock configuration, the processors were running at about 30 degrees Celsius and spiking up to about 40 under load.  With the original CPU fans installed, this dropped by about 6 degrees.

Even with the extra sound insulation in the new case, the original fans were far too noisy to leave in place, so I removed them and re-installed the low speed fans.  Overnight, one of the CPU fans fell off (I had only temporarily installed them with some duct tape, as none of the screw holes lined up), and when I checked the CPU temperature it was up to just over 80.  The system was running fine (good to know where the thermal limits of the CPU are 🙂), so I shut down the server and created some mounting brackets for the CPU fans to ensure they didn’t come off again.

One thing I had noticed was that the rear 120mm fan ran very slowly (only about 600rpm), so I replaced it with a 1200rpm fan and let the system run for a few hours.  The CPU temperatures stayed consistently at 24-26 degrees, which was pretty much where they were with the high speed CPU fans installed.  This will be my final configuration for the system’s air cooling.

One other limitation of the motherboard is that it only has two USB ports on the back panel.  Under normal circumstances I would need at least four (1 keyboard, 1 mouse, 1 UPS, 1 printer), and that would be my preference, but with the built-in IP KVM, I’ll probably drop the keyboard and mouse and get by with the two (yes, I know, I could get a hub, or mount a couple of extra USB ports on the backplane from the MB, but that’s not the point 😉).

This plan worked right up until I noticed that the integrated KVM seemed to drop the remote keyboard control if a physical keyboard wasn’t connected.  However, I am still making do with just two USB ports, as the KVM I have set up only uses one USB port for both mouse and keyboard.  Likewise, I’m moving the printer from USB to IP, so that just leaves the UPS needing the second USB port.

Porting the VMs

Moving the VMs turned out to be pretty straightforward.  As described in the previous post, moving Windows 2008 VMs across from VMware Server to Hyper-V takes some time (primarily to convert the disk images from vmdk to vhd), but otherwise works well.

I took the time to also install SP1 on the 2008 servers so that everything had the latest and greatest from MS.  Interestingly, this caused the only issue I’ve had so far other than the DFS issue: my Exchange 2010 server didn’t have enough space on its virtual hard drive to do the SP1 install.

Currently I’ve been creating 32 GB OS volumes on my VMs; however, the Exchange server is the oldest VM I have, and at the time I only created a 24 GB OS volume.  Previously, with VMware, I would have used the command line tools to expand the vmdk file and then GParted’s live CD to expand the partition.

Having moved the VM over to Hyper-V and converted it to a VHD before starting the SP1 install, I instead used the MS tools to expand the drive.  VMware’s tool takes quite a while to expand a disk, even when it’s been defined as a dynamic disk; in contrast, MS’s tool executes instantly.

Having worked with 2008 R2 for a while, I knew that MS had included support for mounting a VHD directly into the OS, so with the VM still shut down, I mounted the expanded volume on the Hyper-V server and used Disk Management to extend the partition.  GParted would have taken quite a bit of time to expand the partition, whereas Disk Management executed instantly as well.
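
For reference, the whole expand-and-extend can also be done from the host with diskpart instead of the GUI.  This is just a sketch: the D:\VMs\exchange.vhd path and the 32 GB target size are assumptions, and the volume number will differ on your system, so check the output of “list volume” first.  The commands below are typed at the DISKPART> prompt from an elevated command prompt:

    rem Grow the VHD file itself (the size is in MB, so 32 GB = 32768)
    select vdisk file="D:\VMs\exchange.vhd"
    expand vdisk maximum=32768

    rem Mount the VHD on the host, then grow the partition into the new space
    attach vdisk
    list volume
    select volume 3
    extend

    rem Unmount the VHD before starting the VM again
    detach vdisk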

Rebooting the VMs provided no surprises, and SP1 installed without further issue.

Performance

With the Hyper-V server now fully loaded down with VMs (Exchange, DFS, SQL, SharePoint and a Domain Controller), the new server has been performing quite well; the CPU temperature has remained pretty consistent at around 26 degrees.  One interesting note is that one CPU is consistently 2 degrees above the other, even under load.

Hard drive performance has also increased; the new MB seems to have a better SATA controller than the old one.  One drive-related item: when I set up the mirrored VM volume, I used software RAID instead of hardware RAID, so that if I want to move to another MB in the future I won’t have to rebuild the mirror set.  However, this also means that any time Windows doesn’t shut down cleanly, the mirror set has to do a complete rebuild.  This shouldn’t be a big issue, but I did have a long power outage the other day, and I guess the UPS failed before the server could completely shut down, which of course caused a resync.

Final Thoughts

The new server is running quite well and I am very happy with the system.  The extra RAM and support for multiple cores have significantly improved VM performance.  Hyper-V, while not perfect, is a significant improvement over VMware Server.  Likewise, there is now lots of room to expand in the future as required.

The question now is whether to convert the old VMware server over to Hyper-V as well, or just leave it until I replace it.  I’m leaning towards migrating to Hyper-V now, but I think I’ll run with the system as it is for the time being.  One reason is that I have two Linux-based VMs left on the old server, and while Windows was easy enough to migrate over, those might be a little more difficult and require rebuilds instead.

In part 2, I mentioned I’d ordered a new case, and I quite like it; the sound proofing actually makes the new server quieter than the old one, and the case looks quite handsome, honestly.  There is just one small issue with it: to try and be “cool”, they’ve installed bright blue LEDs for the power light and a highlight strip of blue in the middle of the front of the case.  Between these two lights, enough light is generated to actually light up a small city at night 😉.  A little bit of black duct tape took care of the issue without doing anything permanent to the case.

All in all, it was fun, frustrating and rewarding all at the same time; hopefully I don’t have to do it again for another few years 🙂.

The Good:

  • Major performance increase over the old system
  • Lots of room to expand RAM and CPU as required
  • Built-in remote management on the motherboard, including IP KVM for true headless configuration

The not so bad/not so good:

  • Hyper-V seems to work fine and has better Windows integration, but it’s a pain to convert the VMs

The Bad:

  • Had to move to a large tower case
  • Can’t install a better video card without sacrificing the IP KVM
  • Had to create a new VM to deal with DFS

Hyper-V and DFS

[sc:windows-category ]In my current series on building a new VM server, I selected Hyper-V as the hypervisor.  In the past, I have used the system running the hypervisor to also host a DFS replicated data share for my home folders and other data.

As one of the last steps in setting up the new server, I enabled DFS replication, and promptly had the new host server blue screen on me.

There’s about 1.2 TB of data to be replicated, and instead of decommissioning the old server and moving the drive over to the new one, I picked up a new hard drive for the new system.  So my first thought was some kind of issue with the new drive.

My first step was to change the SATA port and cable, but alas, exactly the same issue.

Searching on the net, I did find one reference to the issue; however, the only recommendation was to send MS the crash files, which I was not yet ready to do.

Still thinking it was a drive issue, I decided to stress the drive a bit and run a robocopy of the data from the remote server; this completed successfully without issue.
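
The stress test itself was nothing fancy, just a robocopy mirror of the share.  As a sketch, with \\oldserver\data, D:\data and the log path all standing in for your own locations:

    REM Mirror the remote share to the new drive, with short retries and a log file
    robocopy \\oldserver\data D:\data /MIR /R:2 /W:5 /LOG:C:\temp\robocopy.log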

This pretty much put the nail in the coffin of the “drive issue” theory, and so the next most likely culprits were the BIOS and drivers.  There was a new motherboard BIOS available, but installing it didn’t help anything.

After a few hours of hunting around, I found all the network/SATA drivers were up to date as well, so that seemed unlikely.

By this point I was pretty well convinced that it was network related, so I decided to install a D-Link Ethernet card (the onboard ports use an Intel chip), disable the Intel Ethernet ports and re-enable DFS.  Which promptly crashed again.

My last thought was that the virtual network card installed by Hyper-V could be conflicting somehow.  Disabling the virtual network card and re-enabling DFS replication proceeded to work without crashing.
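
If you want to run the same test without digging through the Network Connections GUI, netsh can toggle the adapter.  A sketch only; the connection name “Local Area Connection 2” is a guess, so check the output of the first command for the real one:

    REM List the connections to find the Hyper-V virtual adapter's actual name
    netsh interface show interface

    REM Disable it (re-enable later with admin=enabled)
    netsh interface set interface name="Local Area Connection 2" admin=disabled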

While DFS was replicating, I decided to test the theory one last time and re-enabled the virtual adapter, at which point the server blue screened again.

Having “found” the issue, I decided to build a new VM to host the DFS replica.  Installing a 2008 R2 instance and assigning the 2 TB disk to the VM was straightforward enough, and enabling replication promptly started populating the data without incident.
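
To keep an eye on the initial sync, the DFS-R backlog can be checked from the command line.  A sketch; the replication group, folder and member names below are made-up placeholders for your own:

    REM Show how many files are still queued to replicate from OLDSERVER to NEWVM
    dfsrdiag backlog /rgname:"Home Folders" /rfname:"Data" /smem:OLDSERVER /rmem:NEWVM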

Now my question is: is this an isolated incident, or will any Hyper-V/DFS installation have the same kind of issue?

For the time being, this will have to wait.  Once my old VM host server is decommissioned (should be shortly now), I’ll rebuild it as a Hyper-V server and give it a try, just to see.  If it blue screens as well, I’ll open a support incident with MS to track it down; if it doesn’t have the issue, I’ll just chalk it up to a weird interaction on the new hardware.

UPDATE #1: Well, I guess I spoke too soon; part way through the replication, the VM bluescreened.  Replication has started again, and I’m going to see if I can get some more details the next time it happens.

UPDATE #2: One more bluescreen in the last 18 hours.  I’ve noticed one other item that could be an issue: both of my previous DFS hosts are running R2 with no service pack, while both the Hyper-V server and the new VM I built have SP1.  If the bluescreens continue at this pace, I’ll let replication of the rest of the data complete and then try upgrading the other DFS servers to SP1 and see if the bluescreens subside or not.
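
A quick way to confirm which machines actually have the service pack is to run the following on each DFS member:

    REM Show the OS edition and service pack level
    wmic os get Caption, CSDVersion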

UPDATE #3: No more bluescreens on the VM since the upgrade to SP1; however, after upgrading one of the other DFS servers to SP1 (and leaving the third with no SP), it bluescreened after about 12 hours.  I’ve upgraded all remaining servers to SP1, and everything has been running for about 48 hours without any further bluescreens.  I haven’t tried using the Hyper-V server as a DFS replica, as I removed the DFS replication role after I created the VM, but I’m pretty confident in the setup I have now and have no desire to watch a server blue screen just to prove a point 🙂

 

Building a new VM server Part 2: The Hypervisor

[sc:windows-category ]This is part 2; read here for part 1.

Updates from part 1

In part 1, I mentioned the issues I had with the mid tower case and that I had ordered a full tower case to replace it.  The case arrived this week and the motherboard fit in perfectly; the only issue with the new case is that the power supply cables only JUST reach the connectors.  I picked up a couple of extenders to take care of the issue.

The case I ordered was the NZXT Whisper, a full tower with sound insulation as well.  The case is very nice, with lots of drive bays and easy access to everything.

VMWare or Hyper-V or Something Else?

I’ve used VMware Server on my VM host servers for several years; however, with the additional hardware and the apparent stagnation of the product, I’ve decided to take a look at a few alternatives:

  • VMWare ESXi
  • VirtualBox
  • Microsoft Hyper-V

VMWare ESXi

ESXi is the free bare metal hypervisor from VMware.  Its big selling features for me are:

  • Minimal overhead for the host OS
  • Truly enterprise scale
  • No monthly patching cycle

The big drawback from my view is that my existing servers host my primary file shares, and of course ESXi would not have that functionality.  So in effect, going to ESXi would require me to build two additional VMs, which seems to marginalize the benefits of ESXi for me.

It was close though; perhaps next time.  Some other items I don’t like about ESXi:

  • Yet another user/password store
  • Web based admin as the default
  • The Windows client is a little wonky

VirtualBox

VirtualBox is Sun’s (now Oracle’s) free hypervisor, which runs on both Linux and Windows.  I pulled down the Windows version and was quite impressed.  The Windows-based tools are nice, and it’s fully integrated into Windows.

Support for multiple CPUs and more RAM is much better than in VMware Server, and installation was a breeze.

In fact, it could very well be my new hypervisor, except for one small item.

Oracle.

Their track record with open source can only be described as disturbing.

Hyper-V

Hyper-V, of course, is part of Windows Server, so installation is easy enough.  The integration with Windows is, as expected, complete.

Moving to Hyper-V has one significant issue: converting the VMware guest VMs to Microsoft’s format.  There is a tool to do this, but I’m not sure how well it will work or whether I’ll experience any glitches.

When it comes right down to it, Hyper-V seems like the best solution, so that’s what I’m going with.

Converting the VM’s

Of course, going with a new hypervisor requires converting the VM images to Microsoft’s format.  There is quite a bit of information kicking around on the internet about converting; however, much of it is now outdated, based on Windows 2003 guest systems.

For the first system I moved over, I used VMDK to VHD Converter, which can be found here.  The steps I’ve followed so far are:

  • Shut down the old system
  • Copy across the vmdk files to the new system
  • Execute the converter on the vmdk files
  • Create a new VM in Hyper-V
  • Attach the converted vhd to an IDE controller (this is important: attaching to the SCSI controller will result in a boot failure, regardless of whether the vmdk was attached to SCSI or IDE)
  • Start up the new VM
  • Log on to the VM
  • The Hyper-V tools will install automatically, and you’ll need to do a reboot
  • Remove the VMware tools and reboot again
  • The network card will have changed and defaulted back to a DHCP address; if you had a static address before, go in and update the network card config (see the netsh sketch after this list)
  • At this point, the old network card is still configured in the system, so if you try to re-name the network connection it will fail with an error; to get around this, open a command prompt as administrator and run the following:
    • SET DEVMGR_SHOW_NONPRESENT_DEVICES=1
    • START DEVMGMT.MSC
  • This will load Device Manager; select “Show hidden devices” under the View menu and then expand the network adapters item.  The old VMware adapter should be visible; right click and select uninstall.  You can now rename your network connection.
  • If you’ve moved to a different CPU or video card, you can remove those as well.
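
For the static address step in the list above, the settings can also be reapplied from an elevated command prompt rather than through the GUI.  This is just a sketch; the connection name and all the addresses are placeholders for your own values:

    REM Re-apply the static IP, netmask and default gateway to the new adapter
    netsh interface ipv4 set address name="Local Area Connection" static 192.168.1.10 255.255.255.0 192.168.1.1

    REM Point the adapter back at the local DNS server
    netsh interface ipv4 set dnsservers name="Local Area Connection" static 192.168.1.2 primary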

Once the VM was converted, I didn’t encounter any further issues.

Other VM Tools

While reading around the net on how to convert the VMs between the hypervisors, I found that Microsoft has a product called System Center Virtual Machine Manager (VMM), which automates the conversion of VMs, among a host of other management tasks.  VMM is an enterprise-class management tool, but I decided to give it a try.

After a couple of hiccups during the install of the tool, I found it didn’t support VMware Server installs, just ESX.  There is a manual way to do the conversions with VMM, but that pretty much defeated the entire point of VMM for me.

The other “issue” with VMM is that it’s really (and rightly so) directed at enterprises; it was just far too much for the few VMs I run.  So in the end, I uninstalled it and will convert the rest of the VMs with the above methodology.

Coming up next

The next step will be to do the physical swap between the old and new servers.  That will be the focus of the final part of this series.