This is part 3, read here for part 1 and here for part 2.
Updates from part 2
Since part 2, I’ve been playing with the airflow through the case to see if I can improve cooling efficiency. In the case’s stock configuration the processors were running at about 30 degrees Celsius and spiking up to about 40 under load. With the original CPU fans installed, this dropped by about 6 degrees.
Even with the extra sound insulation in the new case, the original fans were far too noisy to leave in place, so I removed them and re-installed the low-speed fans. Overnight, one of the CPU fans fell off (I had only temporarily attached them with some duct tape, as none of the screw holes lined up), and when I checked the CPU temperature it was up to just over 80. The system was still running fine (good to know where the thermal limits of the CPU are), so I shut down the server and made some mounting brackets for the CPU fans to ensure they didn’t come off again.
One thing I had noticed was that the rear 120mm fan ran very slowly (only about 600rpm), so I replaced it with a 1200rpm fan and let the system run for a few hours. The CPU temperature stayed consistently at 24-26 degrees, which was pretty much where it was with the high-speed CPU fans installed. This will be my final configuration for the system’s air cooling.
One other limitation of the motherboard is that it only has two USB ports on the back panel. Under normal circumstances I would need at least four (one each for keyboard, mouse, UPS, and printer), and that would be my preference, but with the built-in IP KVM I’ll probably drop the keyboard and mouse and get by with the two (yes, I know, I could get a hub or mount a couple of extra USB ports on the backplane from the MB, but that’s not the point).
This plan worked right up until I noticed that the integrated KVM seemed to drop the remote keyboard control if a physical keyboard wasn’t connected. However, I am still making do with just two USB ports, as the KVM I have set up only uses one USB port for both mouse and keyboard. Likewise, I’m moving the printer from USB to IP, so that just leaves the UPS needing the second USB port.
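For the printer move, Windows ships a set of printing admin scripts that can create a standard TCP/IP printer port from the command line. A minimal sketch, assuming a hypothetical printer address of 192.168.1.50 and the usual RAW/9100 settings (none of these are my actual values):

```
rem Add a standard TCP/IP printer port with the built-in prnport.vbs script.
rem The port name, host address, and port number below are examples only.
cscript %WINDIR%\System32\Printing_Admin_Scripts\en-US\prnport.vbs ^
    -a -r IP_192.168.1.50 -h 192.168.1.50 -o raw -n 9100
```

Once the port exists, the printer’s properties can simply be pointed at it instead of the USB port.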
Porting the VMs
Moving the VMs turned out to be pretty straightforward. As described in the previous post, moving Windows 2008 VMs across from VMware Server to Hyper-V takes some time (primarily to convert the disk images from VMDK to VHD), but otherwise works well.
I took the time to also install SP1 on the 2008 servers so that everything had the latest and greatest from MS. Interestingly, this caused the only issue I’ve had so far other than the DFS issue: my Exchange 2010 server didn’t have enough space on its virtual hard drive to do the SP1 install.
I’ve been creating 32 GB OS volumes on my VMs, but the Exchange server is the oldest VM I have, and at the time I only created a 24 GB OS volume. Previously, with VMware, I would have used the command-line tools to expand the vmdk file and then GParted’s live CD to expand the partition.
Having moved the VM over to Hyper-V and converted it to a VHD before starting the SP1 install, I instead used the MS tools to expand the drive. VMware’s tool takes quite a while to expand a disk, even one defined as a dynamic disk; in contrast, MS’s tool completes almost instantly.
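For anyone who prefers the command line to the Hyper-V Manager wizard, 2008 R2’s diskpart can do the same expansion with its vdisk commands. A minimal sketch, assuming a hypothetical VHD at D:\VMs\Exchange\system.vhd and a 32 GB target size:

```
rem  Paths and sizes below are illustrative, not my actual values.
rem  Run diskpart from an elevated prompt with the VM shut down.
diskpart
DISKPART> select vdisk file="D:\VMs\Exchange\system.vhd"
DISKPART> rem  Expand to a new maximum of 32768 MB (32 GB).
DISKPART> expand vdisk maximum=32768
```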
Having worked with 2008 R2 for a while, I knew that MS had included support for mounting a VHD directly into the OS, so with the VM still shut down, I mounted the expanded volume on the Hyper-V server and used Disk Management to extend the partition. GParted would have taken quite a bit of time to expand the partition, whereas Disk Management completed instantly as well.
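The attach-and-extend step can also be scripted with diskpart instead of the Disk Management GUI. A sketch, assuming the same hypothetical VHD path (the volume number has to be read from the list volume output first):

```
diskpart
DISKPART> select vdisk file="D:\VMs\Exchange\system.vhd"
DISKPART> attach vdisk
DISKPART> list volume
DISKPART> rem  Pick the volume number shown for the attached VHD.
DISKPART> select volume 5
DISKPART> extend
DISKPART> detach vdisk
```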
Rebooting the VMs provided no surprises, and SP1 installed without further issue.
Performance
With the Hyper-V server now fully loaded with VMs (Exchange, DFS, SQL, SharePoint, and a Domain Controller), the new server has been performing quite well; the CPU temperature has remained pretty consistent at around 26°C. One interesting note is that one CPU runs consistently 2°C above the other, even under load.
Hard drive performance has also improved; the new MB seems to have a better SATA controller than the old one. One drive-related note: when I set up the mirrored VM volume, I used software RAID instead of hardware RAID so that if I want to move to another MB in the future I won’t have to rebuild the mirror set. However, this also means that any time Windows doesn’t shut down cleanly, the mirror set has to do a complete rebuild. This shouldn’t be a big issue, but I did have a long power outage the other day, and I guess the UPS failed before the server could completely shut down, which of course forced a full resync.
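For reference, a Windows software mirror like this can be built entirely in diskpart. A minimal sketch, assuming the two data drives show up as disk 1 and disk 2 (the disk numbers, volume label, and drive letter are all illustrative):

```
rem  Check the actual disk numbers with "list disk" first; these are examples.
diskpart
DISKPART> select disk 1
DISKPART> convert dynamic
DISKPART> select disk 2
DISKPART> convert dynamic
DISKPART> create volume mirror disk=1,2
DISKPART> format fs=ntfs label="VMStore" quick
DISKPART> assign letter=V
```

Because the mirror lives in Windows’ dynamic disk metadata rather than on a RAID controller, the set can be imported on a different motherboard, which is exactly the portability trade-off described above.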
Final Thoughts
The new server is running quite well, and I am very happy with the system. The extra RAM and support for multiple cores have significantly improved VM performance. Hyper-V, while not perfect, is a significant improvement over VMware Server. Likewise, there is now lots of room to expand in the future as required.
The question now is whether to convert the old VMware server over to Hyper-V as well, or just leave it until I replace it too. I’m leaning towards migrating to Hyper-V now, but I think I’ll run with the system as it is for the time being. One reason is that I have two Linux-based VMs left on the old server, and while Windows was easy enough to migrate over, those might be a little more difficult and require rebuilds instead.
In part 2, I mentioned I’d ordered a new case, and I quite like it; the soundproofing actually makes the new server quieter than the old one, and the case looks quite handsome, quite honestly. There is just one small issue with it: to try to be “cool”, they’ve installed bright blue LEDs for the power light and a blue highlight strip in the middle of the front of the case. Between these two lights, enough light is generated to light up a small city at night. A little bit of black duct tape took care of the issue without doing anything permanent to the case.
All in all, it was fun, frustrating, and rewarding all at the same time; hopefully I don’t have to do it again for another few years.
The Good:
- Major performance increase over the old system
- Lots of room to expand RAM and CPU as required
- Built-in remote management on the motherboard, including an IP KVM for a true headless configuration
The not so bad/not so good:
- Hyper-V seems to work fine and has better Windows integration, but converting the VMs was a pain.
The Bad:
- Had to move to a large tower case
- Can’t install a better video card without sacrificing the IP KVM
- Had to create a new VM to deal with DFS