Step by step Hyper-V failover cluster migration to Windows Server 2012 R2

This guide is a step-by-step process for migrating a Hyper-V failover cluster to Windows Server 2012 R2.  The guide demonstrates migrating from Windows Server 2008 R2, but the same process applies when migrating from Windows Server 2012.

The scope of this migration covers the same basic setup that has been outlined in my earlier Hyper-V cluster setup guide starting here:  Our hardware consists of two DL360p servers, connected to an HP StorageWorks P2000 by means of iSCSI.  Our migration will follow the path of evicting a node from the existing cluster, formatting it, and bringing up a new cluster on Windows Server 2012 R2 to migrate to.

Step 1: Verification
Before beginning anything else, you want to ensure that your current infrastructure is healthy.  At this step you want to make sure that there are no underlying hardware or software issues that are going to cause you problems during the migration.  We will be dismantling our redundancy during the migration, albeit temporarily, so you want to be sure you can run all workloads on a single host without issue.  Also make sure that your backup solutions are in place and working properly.

Step 2:  Evict one node
The goal of this step is to move everything off one of the nodes so we can be in the position to take the node out of the cluster and prepare it for Windows Server 2012 R2.  So, on your source cluster you want to move all your cluster services onto a single node.  In this example, I will utilize live migration to move the virtual machines:
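If you prefer PowerShell, draining a node can be scripted as well.  A minimal sketch, assuming placeholder node names HOST1 (destination) and HOST2 (the node to be evicted):

```powershell
# Live migrate every VM group currently owned by HOST2 over to HOST1
Import-Module FailoverClusters

Get-ClusterGroup |
  Where-Object { $_.OwnerNode -eq "HOST2" } |
  Move-ClusterVirtualMachineRole -Node "HOST1" -MigrationType Live
```

Non-VM groups (such as Available Storage) can be moved with Move-ClusterGroup instead.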

After you complete the migration of everything over to a single node, we can proceed with evicting the other node from the cluster.  I like to shut off the server that will be evicted at this point, before evicting.  This allows me to verify that everything is running OK on the single node before committing any changes.  To evict the node, on the source cluster within Failover Cluster Manager expand Nodes and right-click the node to evict (note the down arrow on the node that is currently down).  Under More Actions, choose the Evict option:
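The eviction itself also has a one-line PowerShell equivalent (HOST2 is a placeholder for the node being evicted):

```powershell
# Evict the powered-off node from the source cluster
Remove-ClusterNode -Name "HOST2" -Force
```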

Step 3: Prepare destination cluster
Now that we have freed up one server, we are going to use it to prepare the destination cluster on Windows Server 2012 R2.  The setup follows my previous blog entry on configuring a Hyper-V cluster, so I am not going to go into much detail on the actual cluster setup here.  Set up the newly formatted server just as if you were configuring a Windows Server 2012 R2 failover cluster from scratch.  If you need guidance with this step, check out my blog entries on configuring this here:

When you get your new Windows Server 2012 R2 failover cluster up and running, there will be no storage or roles, because these are still dedicated to the old cluster.  We are going to use the Copy Cluster Roles feature to migrate them to the new cluster.  To do this, fire up Failover Cluster Manager on your new Windows Server 2012 R2 node, right-click the cluster name, and choose More Actions - Copy Cluster Roles:

This will launch the copy cluster roles wizard:

Specify the name of the old cluster in the first step of the wizard

When you hit Next, the wizard examines all the roles that can be copied over.  For more detail, click View Report.  The report shows more about each role that can be copied, including any that are not eligible to be copied:

In the final step of this wizard you are asked for confirmation.  Once you hit next, the wizard goes through the process of copying the roles over to the new cluster and presents you with a report of the results:

This process has not actually migrated any of the storage to the new cluster yet.  If you look at the roles and storage, they will show as offline on the new cluster.

Step 4: Migration

This is the first step that requires a maintenance window.  During this step, we will bring down all resources on the old cluster and bring them back online on the new cluster.  The length of time you will need varies based on how many VMs are running within your cluster.

So for the first part, we want to shut down each and every role located on the old cluster.  You can do this by logging into the console of each VM and shutting it down, or by right-clicking each VM and choosing the Shut Down option within Failover Cluster Manager.  You don't want to risk any data corruption when you take down the cluster shared volume:

Next, take down the cluster shared volume.  This is done within failover cluster manager, under cluster shared volumes.  Right click the disk and choose the option to take offline:
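The offline sequence on the old cluster can also be scripted.  A sketch, assuming placeholder group/disk names (make sure the guests have shut down cleanly first — if your build rejects the CSV pipeline, take the volume offline in the GUI instead):

```powershell
Import-Module FailoverClusters

# Offline each VM role on the old cluster
Stop-ClusterGroup -Name "VM1"
Stop-ClusterGroup -Name "VM2"

# Take the Cluster Shared Volume offline
Get-ClusterSharedVolume "Cluster Disk 1" | Stop-ClusterResource
```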

Once the disk and all VMs are offline, we can jump over to the new cluster and bring them back online.  Start with the disk first: in Failover Cluster Manager on the new server, under Storage - Disks, right-click your cluster shared volume and choose Bring Online:

Once the disk is online, you can bring your VMs up in Failover Cluster Manager under Roles by right-clicking each and choosing Start:
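The same bring-up can be done from PowerShell on the new cluster (names are the same placeholders as before):

```powershell
Import-Module FailoverClusters

# Bring the Cluster Shared Volume online first
Get-ClusterSharedVolume "Cluster Disk 1" | Start-ClusterResource

# Then start each VM role
Start-ClusterGroup -Name "VM1"
Start-ClusterGroup -Name "VM2"
```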

You will probably want to upgrade the Hyper-V integration services on each VM at this point as well.  

Almost there!

Step 5: Retire old cluster

When you have verified everything is up and running on the new cluster, we can proceed with decommissioning the old cluster to free up the server and add it to the new cluster.  What we want to do now is remove all services (VMs) from the old cluster.  On the old server, fire up Failover Cluster Manager, right-click each VM, and choose Delete:

And finally, once all services have been removed, right click the cluster name and choose the option More Actions - “Destroy cluster”:
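As a hypothetical PowerShell equivalent for the teardown (group names are placeholders; removing a clustered VM role does not delete the VM or its VHDs, only its clustering):

```powershell
Import-Module FailoverClusters

# Remove each VM role from the old cluster
Remove-ClusterGroup -Name "VM1" -RemoveResources -Force
Remove-ClusterGroup -Name "VM2" -RemoveResources -Force

# Destroy the old cluster
Remove-Cluster -Force
```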

Step 6: Finish up

Finally, we are going to take our old cluster node and bring it back into the new cluster.  Again, follow my steps in the Hyper-V failover cluster setup to get your server prepared to the point at which you would create the cluster.  Instead of creating a cluster however, we are going to add the node to the existing cluster.  We do this in failover cluster manager by right clicking our cluster name and choosing the option “Add Node…”

This will launch another wizard that will walk you through adding the node to the cluster.  During the wizard, specify the name of the additional node to be added.
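From PowerShell, the add is a single cmdlet (HOST2 is a placeholder for the freed-up server):

```powershell
# Add the reformatted server to the new cluster
Add-ClusterNode -Name "HOST2"
```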

The next step will run a validation on your cluster.  The default option runs all tests, including storage tests.  When the storage tests are run, the storage is temporarily taken offline to complete the test.  Therefore, before running the test, ensure you have a maintenance window to take your workloads down, and make sure you offline your VMs.  You can choose to skip the tests if you want, but it's recommended you don't.

Hopefully at this point your validation goes well.  If not, review the report and make adjustments where necessary.  If everything goes well, your node will be added to your cluster:

The final step that I do is configure the cluster quorum settings.  Since we are dealing with a 2 node cluster, we will require a disk witness.  To configure your quorum settings, right click your cluster name and choose More Actions - Configure Cluster Quorum Settings:

And finally choose the option to use the default quorum configuration:
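If you would rather set the quorum explicitly from PowerShell, a sketch (the witness disk name is a placeholder — use whichever cluster disk holds your 1GB quorum volume):

```powershell
# Node and Disk Majority with the quorum disk as witness
Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 2"
```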

That should do it!  If you made it this far, congrats!  

Download CryptoLocker Tripwire 1.0 

Seeing many reports of infections from all the variants of CryptoLocker got me thinking about protecting file servers in a different way.  It seems that for every virus definition, software restriction policy, etc. that comes up, there is a counter from the malware authors releasing this tricky virus.  Infection is all but inevitable, so I wanted to figure out a different method to avoid major damage.  The newest variants of CryptoLocker also appear to be purging the shadow copy stores, making it a huge pain for IT admins to restore data in short time-frames.

Enter CryptoLocker Tripwire.  

I wrote this simple app to run on the file server.  There is no installation required; you just launch it and it runs.  After loading your data share folders, the app copies a witness file that you choose to a hidden subfolder in each of the folders you have selected.  The hidden folder is prefixed with ######## so it sorts to the top of the list, and the witness file copied inside it is also named ########.  A file system watcher is started on the witness folder, and once the witness file is modified, several things can happen.  In the initial release, you have the option to: 1. shut down and disable the Server service; 2. shut down and disable the Volume Shadow Copy service; and 3. shut down the server.  I also threw in the option to fire an email alert via SMTP.
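The core idea can be sketched in a few lines of PowerShell — this is not the app itself, just an illustration of the tripwire concept, with a placeholder witness path:

```powershell
# Watch a witness file; if anything touches it, assume encryption is under way.
$watcher = New-Object System.IO.FileSystemWatcher "C:\Shares\Data\########"
$watcher.Filter = "########"
$watcher.NotifyFilter = [System.IO.NotifyFilters]'LastWrite, FileName'
$watcher.EnableRaisingEvents = $true

Register-ObjectEvent -InputObject $watcher -EventName Changed -Action {
    # Stop serving files and protect the shadow copies, then halt the box
    Stop-Service -Name "LanmanServer" -Force        # Server service (file shares)
    Set-Service  -Name "LanmanServer" -StartupType Disabled
    Stop-Service -Name "VSS" -Force                 # Volume Shadow Copy
    Set-Service  -Name "VSS" -StartupType Disabled
    Stop-Computer -Force
}
```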


I’ve tested this thoroughly within a private test network.  Although CryptoLocker managed to get past the initial witness file, it didn’t get far before the server stopped and disabled both services and shut down.  But since the VSS service was stopped I was able to easily restore the 3 files it touched after the witness file via shadow copy restore (after I started the service back up). 

Use this software at your own discretion, and I don’t offer any warranties or guarantees with it.  As always, be sure you have a proper and effective backup solution implemented, as well as adequate network security measures in place.

Download CryptoLocker Tripwire from my SkyDrive shared folder here:

Fire me some feedback, I'd love to hear it.   But please remember, this is version 1.0 and I plan on developing it further if there is enough interest.

Hyper-V Virtual Machines losing network connectivity under Server 2012 R2

Recently I dealt with a pretty major issue where a brand new cluster configuration had virtual machines losing network connectivity sporadically.  It would happen without any warning or errors.  Event Viewer looked clean, and the network adapters were functioning fine.

The problem was that not all virtual machines would lose network connectivity.  One would go down while others on the same external Hyper-V switch functioned just fine.  I found through troubleshooting that live migrating the VM from one host to another would restore its network connectivity, but sometimes at the cost of another VM losing its own.  The customer was also complaining that the performance of accessing network data was poor even when the network on the VMs was operational.

Since I had ruled out everything in the Hyper-V cluster, I quickly started to think that the network switch the network adapters were plugged into was faulty.  It was a new switch, an HP 1810, and I figured I was dealing with an ARP issue on the switch (the 1810 unfortunately doesn't give many options for managing it).  We had similar setups that had run fine for years without this issue, with the only variable being that this cluster was R2.  I called both HP and Microsoft for support, and initially we replaced the switch.  Of course, things ran OK for a few days, but then, at the worst possible time, the VMs started losing connectivity again.

After about 6 calls to Microsoft, and escalation to senior network support, I was given a fix:  Disable VMQ on the physical network adapters on the host boxes.  

To disable VMQ, open Device Manager, right-click each adapter (including any teams that you have configured, as well as each member of the team), choose Properties, and select the Advanced tab.  Finally, scroll down to Virtual Machine Queues and choose Disable from the drop-down box:

Note that when you do this it will disable, then re-enable the adapter on the host.  So make sure that you are aware of the network loss when you make this change.

Also be sure to uncheck “Enable Virtual Machine Queue” under each of the virtual machine settings as well.  Both need to be done, or you could wind up with network performance issues.
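Both sides of the fix can also be applied with PowerShell on each 2012 R2 host — a sketch (run per host; the adapter reset described above still applies):

```powershell
# Host side: disable VMQ on every physical adapter (and team members)
Get-NetAdapter -Physical | Disable-NetAdapterVmq

# VM side: clear the VMQ weight on every virtual NIC
Get-VM | Set-VMNetworkAdapter -VmqWeight 0
```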

I asked the Microsoft tech if there was a KB article on this specific issue, something I could send to our mutual customer, but he stated there was nothing specifically for the lost network connectivity.  He did point me to the following KB article for poor performance under 2012 (non-R2), something that I have not witnessed yet.

Post a comment below with your thoughts, or if you have run into this specific issue.

Step by step configuration of 2 node Hyper-V Cluster in Windows Server 2012 R2 - Part 3

Welcome to part 3 of the step-by-step guide for configuring a Hyper-V cluster in Windows Server 2012 and Windows Server 2012 R2.  I hope you find this guide useful.  I appreciate any comments and feedback below. 

You can find parts 1 and 2 of this guide here:

Part 1:

Part 2:

A few asked me to elaborate more on configuring the cluster.  Sorry I didn’t go into too much detail during Part 2.  I’ll explain further here.

When you open up Failover Cluster Manager you have the option in the action pane to create a cluster.  Click on this to fire up the wizard:


The initial configuration screen can be skipped, and the second screen will prompt you to input the server names of the cluster nodes:

When you add the servers it will verify the failover cluster service is running on the node.  If everything is good, the wizard will allow you to add the server.  Once the servers are added, proceed to the next step.

The next step is very important.  Not only is this step required for Microsoft to support you if you run into any issues, it also validates that everything you have done thus far is correct and set up properly for the cluster to operate.  I'm not quite sure why they give you the option to skip the tests, but I would highly recommend against it.  The warning is pretty straightforward as well:

The next portion of the cluster configuration that comes up is the validation wizard.  Like I mentioned above, do not skip this portion.   Run all tests as recommended by the wizard:

The tests will take a few minutes to run, so go grab a coffee while waiting.  Once completed, you shouldn't have any errors.  However, as I mentioned in Part 2, there is a known issue when using the P2000 with the "Validate Storage Spaces Persistent Reservation" test, so you will get a warning relating to this, but you shouldn't have any other warnings if things are set up correctly.

View the report and save it somewhere as a reference that you ran it in case Microsoft support wants to see it. 

When you click Finish you will be asked to enter a name for the cluster, as well as the IP address for the cluster.  Enter these parameters and click Next:

Then finish up the wizard and form the cluster. 
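Validation and cluster creation can equally be done from PowerShell — a sketch with placeholder node names, cluster name, and IP address:

```powershell
Import-Module FailoverClusters

# Run the full validation suite (same as the wizard's default)
Test-Cluster -Node "HOST1","HOST2"

# Form the cluster (cluster name and static address are examples)
New-Cluster -Name "CLUSTER1" -Node "HOST1","HOST2" -StaticAddress "192.168.1.50"
```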

Now, there are several things we must do after the cluster is up and running to completely configure it.  I’ll go over each aspect now.

Cluster Shared Volumes:

This should be a given.  I won’t go into much detail here, sparing you the time.  If you need to read up on what a cluster shared volume is please read up on it here:

To enable the cluster shared volume navigate to storage, then disks.  Then select your storage disk, right clicking it and choosing the option “Add to Cluster Shared Volumes”

I like to rename the disks here as well, but this is not a necessary step.

Now that we have enabled Cluster Shared Volumes we should change the default path in Hyper-V manager on both nodes to reflect this.  The path should be C:\ClusterStorage\Volume1 on both nodes.  I like to keep the remaining path as well for simplicity:

Don’t forget to do this on both nodes.
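A PowerShell sketch of both steps (disk name and paths are examples — run the Set-VMHost line against each node):

```powershell
# Add the storage disk to Cluster Shared Volumes
Add-ClusterSharedVolume -Name "Cluster Disk 1"

# Point Hyper-V's default paths at the CSV, on each node
Set-VMHost -ComputerName "HOST1","HOST2" `
           -VirtualHardDiskPath "C:\ClusterStorage\Volume1\Virtual Hard Disks" `
           -VirtualMachinePath  "C:\ClusterStorage\Volume1"
```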

Live Migration:

I dedicate a NIC to live migration.  I have always done this on the recommendation that if live migration traffic saturates the network link used for managing the server, we could cause a failover situation where heartbeat is lost.  To dedicate a network adapter to live migration, right-click the Networks option in Failover Cluster Manager and choose Live Migration Settings.  I rename my networks in the list first so that they are more easily understood than "Cluster Network X".
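The renaming part is easy to script; the live migration network selection itself I still do in the GUI.  A sketch (the "Cluster Network N" names are whatever your cluster auto-assigned):

```powershell
Import-Module FailoverClusters

# Give the auto-named cluster networks meaningful names
(Get-ClusterNetwork "Cluster Network 1").Name = "Management"
(Get-ClusterNetwork "Cluster Network 2").Name = "Live Migration"
```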

Cluster Aware Updating:

Cluster-Aware Updating is a fantastic feature introduced in 2012 that allows for automatic updating of your cluster nodes without taking down the workloads they are servicing.  With Hyper-V, the VM roles are live migrated to another node; once all roles are off the node, updating is completed and the node is rebooted.  Then the same process happens on the other node.  There is a little bit of work to set this up, and you should have a WSUS server on your network, but the setup is worth the effort.

To enable Cluster-Aware Updating choose the option on the initial failover cluster manager page

This will launch the management window where you can configure the options for the cluster.  Click on the “Configure cluster self-updating options” in the cluster actions pane.  This will launch the wizard to let you configure this option.

Before you walk through this wizard there is one necessary step you should complete first.  I like to place my Hyper-V nodes and the cluster computer object in their own OU within Active Directory.  I then typically grant the cluster computer object full control over that OU.  I find that if you don't complete this step, you will sometimes get errors in Failover Cluster Manager, as well as issues with Cluster-Aware Updating.

The Cluster-Aware Updating wizard is pretty straightforward.  The only thing you need to determine is when you want it to run.  There is no need to check "I have a pre-staged computer object for the CAU clustered role," as this will be created during the setup.  I don't typically change any options from the defaults here; I haven't found any reason to do so yet.  I'll also do a first run to make sure that this is working correctly.
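The same self-updating role can be added from PowerShell — a sketch with example scheduling (cluster name and schedule are placeholders):

```powershell
# Enable CAU self-updating: second Sunday of each month, in this example
Add-CauClusterRole -ClusterName "CLUSTER1" -DaysOfWeek Sunday -WeeksOfMonth 2 `
                   -EnableFirewallRules -Force

# Kick off a first updating run to confirm it works
Invoke-CauRun -ClusterName "CLUSTER1" -Force
```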


The following are some tweaks and best practices I also do to ensure the best performance and reliability on the cluster configuration:

  1. Disable all networking protocols on the iSCSI NICs used, with the exception of Internet Protocol Version 4/6.  This reduces the amount of chatter that occurs on the NICs.  We want to dedicate these network adapters strictly to iSCSI traffic, so there is no need for anything outside of the IP protocols. 
  2. Change the binding of the NICs, putting the management NIC of the node at the top of the list. 
  3. Disable RDP Printer mapping on the hosts to remove any chance of a printer driver causing issues with stability.  You can do this via local policy, group policy, or registry.  Google how to do this. 
  4. Configure exclusions in your anti-virus software based on the following article:
  5. Review the following article on performance tuning for Hyper-V servers:

I hope you have been finding this guide useful.   Please leave any comments below, and thank you for visiting! 


Veeam ONE Monitor Free Edition for Hyper-V

I work in the SMB space, and many of our clients do not find value in deploying a monitoring suite such as Microsoft System Center.  Don't get me wrong, this software suite is very valuable.  The thing is, it is valuable for IT.  Convincing SMB owners to purchase this monitoring suite for their smaller networks is challenging.  Honestly, I don't blame small business owners for not wanting to fork out thousands of dollars for monitoring/configuration tools.  Microsoft seems to have forgotten the SMB market when it comes to management products, having long discontinued their "Essentials" suite.  So I have been on the hunt for something to monitor our Hyper-V deployments that won't break the bank for our customers.

Enter Veeam ONE Free Edition.  This software is a virtualization management platform delivered by a relatively new software player in the virtualization field.  Veeam has been winning all kinds of awards for their software solutions, and they are well known for their backup product. 

I installed Veeam ONE Free Edition only a day ago; it was very easy and straightforward to install and become familiar with the console.  This is a great product for smaller shops looking for better management than Hyper-V Manager/Failover Cluster Manager can provide.  I'll cover the installation and configuration steps to get you started below.


Head on over to the download page and sign up to download the software for free.  You are going to want to download both the 563MB installation ISO and the KB1841 7.0 R2 update.   I noticed on my original install that the management software did not want to connect to my failover cluster, but after installing the 7.0 R2 update things were working well.

I'm installing this software on a Windows Server 2008 R2 virtual machine.  This server already runs some management software, monitoring AV and WSUS and running HP Systems Insight Manager.  After mounting the ISO, I began the installation by running setup.exe.

The installation screen is very nice; we want to install the server, so go ahead and click that option.

I absolutely hate it when software requires prerequisites and the installation halts with an error.  The nice thing about Veeam is that even if you didn't read the documentation to find out what was required, the installer prompts you to install any prerequisites before it installs the main product.  The Visual C++ redistributable required a reboot, but upon logging in after the reboot the setup continued.

The ISO you download and the product you install is the entire suite, which is nice if you decide to purchase the full version later.  We are installing the Free version, so choose this option here:

Again, a system configuration check is completed before installation.  If you are missing any components, it will install them for you at that time.

The software uses a SQL Express instance.  You can install this manually, use an existing instance, or do what I did and install a new SQL instance during the install.

You point the installer at your standalone host or cluster during the installation.  Enter the details here:

When the installation is complete, you are required to log off before using the software. 

Don’t forget to install the R2 update. 

The last step of the installation is tweaking the amount of RAM the SQL instance uses.  I don't want to run out of memory on my management server, so I tweaked this down to a maximum of 4GB of RAM in SQL Management Studio.


The initial configuration wizard pops up immediately when launching Veeam ONE Monitor.  This is a pretty straightforward setup that asks you to enter notification details for alarm notifications and email reports.

After the initial configuration page is completed, there is really not much more you need to do.  The product is agentless, so there is no further configuration required on the hosts/nodes.  The interface is very easy to browse around and get used to.  I have only been using it for one day, and I can already get to wherever I need to look.

I love being able to monitor the infrastructure in real time; being able to watch things like IOPS on the cluster shared volume is very useful for me.

Please note: the gaps in the reporting are due to me rebooting the monitoring server.

Keep in mind this software is completely free to download and use. Here is a screenshot of the network transfer rate report in real time 

You can even use the software to connect to the console of the virtual machine. 

I am going to be using this software more and more as the days go on, but initially it was very easy to set up and configure without much tweaking.   This is a perfect fit for places where I need something to monitor and manage a Hyper-V infrastructure beyond what Hyper-V Manager and Failover Cluster Manager provide.  I would definitely recommend that you try this software out to see if it fits.  I think you will be pleasantly surprised.

Stay tuned, as I use this software more and become more familiar with it I will report my findings here.  I may also look to install a trial of the paid version to compare the differences.  For anyone that has used this software (either Free or Paid) please comment with your feedback below.  

Two unknown devices in Windows Server 2008 R2 under Hyper-V 2012 R2

For whatever reason, the Hyper-V integration components for Windows Server 2012 R2 do not install the device drivers for two devices on guests running Windows Server 2008 R2.  They are listed as unknown devices in Device Manager:


More detailed analysis shows the device information as follows:

Device 1 Hardware Ids:

Device 2 Hardware Ids:

To get these devices working, I extracted the cabinet file to a location (in this example, the desktop).  The cabinet file is located on the Integration Services setup ISO under support\amd64.  From there, I manually launched the Update Driver wizard from Device Manager and pointed it to the files extracted from the cab.
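The extraction can be done with the built-in expand.exe — a sketch; the drive letter, cab file name, and destination folder are placeholders for your own mount point and extract location:

```powershell
# Extract the Integration Services cab (paths/names are examples)
New-Item -ItemType Directory -Path "C:\ICDrivers"
expand.exe -F:* "D:\support\amd64\Windows6.x-HyperVIntegrationServices-x64.cab" "C:\ICDrivers"

# Optionally stage the drivers instead of using the Update Driver wizard
pnputil.exe -i -a "C:\ICDrivers\*.inf"
```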

The first device prompts you with a driver publisher warning; I'm not quite sure why this is the case, since Microsoft is the publisher of these drivers:

Installing the driver software has not caused me any issues. 

The first device is the Microsoft Hyper-V Remote Desktop Control Channel.  The second device, which does not present the same publisher verification warning, is the Microsoft Hyper-V Activation Component.  I am not quite sure what these two devices actually do, but since I hate looking at unknown devices within Device Manager, I needed to figure this out.  If someone knows the benefit of these two devices, please leave a comment below.

Step by step configuration of 2 node Hyper-V Cluster in Windows Server 2012 R2 - Part 2

Configuration of a 2 node Hyper-V Cluster in Windows Server 2012 – Part 2.  Part one is here:

I realized that in my prior post for configuration of a 2 node Hyper-V cluster that I did not include the steps necessary for configuring the HP Storage Works P2000.  So here they are:

There are two controllers on this unit for redundancy; if one controller fails, the SAN will remain operational on the redundant controller.  My specific unit has 4 iSCSI ports for host connectivity, connected directly to the nodes.  I am utilizing MPIO here, so I have two links from each server (on separate network adapters) to the SAN, as follows:



The cables I use to connect the links are standard CAT6 Ethernet cables. 

You also want to plug both management ports into the network.  Out of the box, both management ports should obtain an address via DHCP.   There is no need to use a CAT6 cable for the management ports, so go ahead and use a standard CAT5e cable instead.  You can also configure the device from the command line via the CLI by interfacing with the USB connection located on each of the management controllers.  I have never had to use this for anything other than when the network port is not responding.  This interface is a USB mini connection located just to the left of the Ethernet management port, and a cable is included with the unit.


Once plugged into your Windows PC, the device comes up as a USB-to-serial adapter and is given a COM port assignment.  You will have to install the drivers to get the device recognized; they are not included with Windows.

I won't be covering the CLI; all configuration will be conducted via the web-based graphical console.

The web-based console is accessed via your favourite Internet browser.  I typically use Google Chrome, as I have run into issues logging into the console with later versions of Internet Explorer.  The default username is manage, password !manage.

Once logged in, launch the initial configuration wizard by clicking Configuration – Configuration Wizard at the top:


This will launch the basic settings configuration wizard.  This wizard should hopefully be self-explanatory, so I won't go into many details here.

For this example I will be creating a single VDisk encompassing the entire drive space available.  To do this, click Provisioning – Create Vdisk:


Use your best judgement on what RAID level you want here.  For my example I am going to be building a RAID 5 on 5x450GB drives:


Now I am going to create two separate volumes: one for the CSV file storage, and the other for quorum.  The Quorum volume will be 1GB in size for the disk witness required since we have 2 nodes, and the CSV volume will encompass the remaining space.  To create a volume, click on the vdisk created above, and then click Provisioning – Create Volume.  I don't like to map the volumes initially, preferring to explicitly map them to the nodes after connecting them to the SAN:


In part 1 we added the roles, configured the NICs for both Hyper-V VM access and SAN connections, and prepped the servers.  Now we need to connect the nodes to the SAN by means of the iSCSI initiator.

Our targets on the P2000 are the addresses assigned to ports 1 and 2 on each controller.  As you recall from part 1, the servers are directly connected without a switch in the middle.

To launch the iSCSI initiator just type “iSCSI” in the start screen:


I typically pin this to the start screen. 

When you launch the iSCSI initiator for the first time, you will be presented with an option to start the service and have it start automatically.  Choose Yes:


I don't typically like using the Quick Connect option on the Targets screen, preferring to configure each connection separately.  Click on the Discovery tab in the iSCSI Initiator Properties screen, then Discover Portal:


Next, we want to input the IP address of the SAN NIC that we are connecting to, then click on the advanced button. 


Select the Initiator IP that will be connecting to the target:


Then do this again for the second connection to the SAN.  When finished you should have two entries:


Now, back on the Targets tab, your target should be listed as Inactive.  Click the Connect button, then in the window that opens check the "Enable multi-path" option:


Now it should show connected:


Complete the same tasks on the other node as well.
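For the Server 2012 nodes, the same discovery and connection can be scripted with the iSCSI cmdlets — a sketch; the target and initiator addresses below are placeholders for your own SAN and NIC addressing:

```powershell
# One portal entry per SAN-facing NIC (addresses are examples)
New-IscsiTargetPortal -TargetPortalAddress "10.10.10.1" -InitiatorPortalAddress "10.10.10.11"
New-IscsiTargetPortal -TargetPortalAddress "10.10.11.1" -InitiatorPortalAddress "10.10.11.11"

# Connect the discovered target with multipath, persisting across reboots
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```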

Now, before we can attach a volume from the SAN, we have to map the LUN explicitly to each of the nodes.  So we open the web management utility for the P2000 again.  Once in, if we expand Hosts in the left pane we should now see our two nodes listed (I have omitted server names in this screenshot):


We need to map the two volumes created on the SAN to each of the nodes.  Right click on the volume, selecting Provisioning – Explicit Mappings


Then choose the node, click the Map check box, give the LUN a unique number, check the ports assigned to the LUN on the SAN and apply the changes:


Assign the same LUN number to the other node and complete the same explicit mapping.  Then repeat the procedure for the other volume.  I used LUN number 0 for the Quorum volume, and LUN number 1 for the CSV volume.

Jump back to the nodes, back into the iSCSI initiator, and click the Volumes and Devices tab.  Press the Auto Configure button and our volumes should show up here:


Complete the same procedure on the second node as well.  If you are having difficulty with the volumes showing up, sometimes a disconnect and reconnect is required (don't forget to re-check the "Enable Multi-Path" option).

Now we want to enable multipath for iSCSI.  Fire up the MPIO utility from the start screen:


Click on the Discover Multi-Paths tab, check the box "Add support for iSCSI devices," and finally click the Add button:


The server will prompt for a reboot.  So go ahead and let it reboot.  Don’t forget to complete the same tasks on the second node.
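On Server 2012 the same iSCSI claim can be made with the MPIO cmdlets — a sketch (this is the documented hardware ID for the Microsoft iSCSI bus type; a reboot is still required afterwards):

```powershell
# Claim iSCSI-attached devices for the Microsoft DSM
Enable-MSDSMSupportedHW -VendorId "MSFT2005" -ProductId "iSCSIBusType_0x9"
```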

After the reboot we are going to fire up Disk Management and configure the two SAN volumes on the node, making sure each node can see and connect to them.  When initializing your CSV volume I would suggest making it a GPT disk rather than an MBR one, since you are likely to exceed the 2TB limit imposed by MBR.

I format both volumes with NTFS, and give them a drive letter for now:


After configuring the volumes on the first node, I typically take the disks offline, then bring them online on the second node to be sure everything is connected and working correctly.  Don’t worry about the drive letters assigned to the volumes; they don’t matter.
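The Disk Management steps above can be sketched in PowerShell as well.  The disk numbers (1 and 2) and volume labels here are assumptions — check `Get-Disk` on your own nodes before running anything like this:

```powershell
# Bring any offline SAN disks online first
Get-Disk | Where-Object IsOffline | Set-Disk -IsOffline $false

# Initialize both SAN disks as GPT (avoids the 2TB MBR limit)
Initialize-Disk -Number 1 -PartitionStyle GPT   # quorum volume
Initialize-Disk -Number 2 -PartitionStyle GPT   # CSV volume

# Create a single partition on each and format it NTFS with a temporary drive letter
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum"
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV1"

# Offline the disks here before testing them from the second node
Set-Disk -Number 1 -IsOffline $true
Set-Disk -Number 2 -IsOffline $true
```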

Getting there slowly!

Next, before we create the cluster, I always like to assign the Hyper-V external NICs in the Hyper-V configuration.  Fire up Hyper-V Manager and select “Virtual Switch Manager” in the action pane.  We are going to create the external virtual switches using the adapters we assigned for the Hyper-V VMs.  I always dedicate the network adapters to the virtual switch, un-checking the option “Allow management operating system to share this network adapter”.
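The same switch can be created in one line of PowerShell.  The switch and adapter names below are placeholders for whatever you assigned to the VM traffic:

```powershell
# Create a dedicated external switch; -AllowManagementOS $false is the
# equivalent of un-checking "Allow management operating system to share
# this network adapter"
New-VMSwitch -Name "External1" -NetAdapterName "VM-NIC1" -AllowManagementOS $false
```

Repeat for each adapter, and on both nodes — the switch names must match exactly between nodes for live migration to work cleanly.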

At this point we have completed all the prerequisite steps required to fire up the cluster.  Now we will form the cluster. 

Fire up Failover Cluster Manager from the start screen:


Once opened, select the option in the action pane to create a cluster.  This fires up the wizard to form our cluster.  The wizard should be self-explanatory, so walk through the steps required.  Make sure you run the cluster validation tests, selecting the default option to run all tests.  This is the best time to run them, since they will take the cluster disks offline; you don’t want to discover issues once the cluster is in production and have to bring it down just to run validation.  Any issues we run into here can be addressed before the system goes live.

The P2000 on Windows Server 2012 will generate a warning about validating Storage Spaces persistent reservation.  This warning can be safely ignored, as noted here

Hopefully when you run the validation tests you will get all Success results (other than the warning noted above).  If not, trace back through the steps and make sure you are not missing anything.  Once you get a successful validation, save the report in case you need to reference it for future support.

Finish walking through the wizard to create your cluster, assigning a cluster name and static IP address when the wizard requests them.
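For reference, the validation and creation steps can also be scripted.  The node names, cluster name, and IP address below are placeholders for your environment:

```powershell
# Run the full default validation test suite (takes the cluster disks offline)
Test-Cluster -Node "HV-NODE1","HV-NODE2"

# Form the cluster with a name and static management IP
New-Cluster -Name "HV-CLUSTER1" -Node "HV-NODE1","HV-NODE2" `
    -StaticAddress "10.0.0.50"
```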

That should do it.  If you got this far, you made it.  Congratulations!

 Continue with Part 3 here:


Hyper-V Server 2012 is too good to keep ignoring

I’ve worked in Windows environments for over ten years.  The debate between those who are pro-Microsoft and anti-Microsoft has always been there.  Or should I say Micro$oft?  One of the biggest complaints from the anti-Microsoft corner is the outrageous pricing strategy Microsoft uses to sell its software; to some, the fact that it sells software in the first place is an atrocity in its own right.  Other complaints concern the monopolies Microsoft has held at times with several of its products.  The amount of hatred directed at one company is astounding, and the negativity is hard to break.  So hard that some will go against their own principles just to stay anti-Microsoft.

Let me give you an example:

I have a friend in I.T. who is very anti-Microsoft.  You see, he despises anything they put out, declaring it garbage before even knowing what the product does.  All of the devices he manages run some form of Linux, and he constantly boasts how cheap they are to run compared with the offerings from Microsoft.  And yes, he owns several Chromebooks.  Now, when he compares hypervisor platforms he is so pro-VMware it is overwhelming.  VMware is a great product; I will never argue that.  However, to get started with it you are going to have to shell out some serious money.  Yes, money for software (what a concept!).  I try to point out that VMware was a monopoly in the virtualization field for many years, but he immediately dismisses it.  Finally I tell him that Hyper-V is free, and the response I get is “it’s not even worth that”.

So what gives?  Hyper-V Server, the non-GUI (yes, it’s even command line for you GUI haters!), FREE product from Microsoft, is an enterprise-class hypervisor, yet you turn your nose up at it.  And don’t tell me KVM is a better solution, because it’s not quite there yet.  It may be some day, but it certainly is not yet in the same class as VMware’s offerings.  The download is free, here’s the link:  Fire it up and give it a good test.  Are you just scared you may actually like a product from Microsoft?

pfSense Update

Posted an exported VM of pfSense 2.1 on SkyDrive, please download here:

What went wrong with Windows 8 and why it is Microsoft’s best product

I’ve used Windows 8 for well over a year now, since the first customer preview. I’ve been involved with beta testing Microsoft’s operating systems in one way or another since Windows Millennium, and have supported the operating system for well over 10 years.  I have to say that Windows 8 is both Microsoft’s best and worst product. You may ask how it can possibly be both. Let me explain.

Windows Phone is a marvellous mobile platform. Its main flaw is that Microsoft was too late in the game to capture enough market share to captivate the major app developers.  Therefore, the big apps that ran on iOS and Android were nowhere to be found in the Windows Phone marketplace. In the early days of the platform the marketplace was so dry it was hard to find a good app for free, never mind paying for one.  This became a snowball effect of sorts, since none of the most popular developers wanted to venture into these barren lands. Everyone who reviewed the platform loved the design, the interface, and the operating system in general, but since it was missing the apps they used every day they had no choice but to give the phone a thumbs down.

Unfortunately there was nothing Microsoft could do about this.  They put significant effort into marketing the platform, even partnering with a faltering Nokia to push it, paying developers premiums, and offering app-porting assistance, but nothing they did seemed to matter.  Both Apple and Google were well out of the starting gate and running equally aggressive phone development and marketing campaigns.

Microsoft was also being hit hard in its bread and butter by the new tablet market. Again, both Apple and Google jumped out of the gate, leaving everyone wondering where Microsoft was.  Microsoft is fortunate to have such a loyal share of the “primary computing device” market that it had some breathing room, but it learned from the phone release that it couldn’t afford to postpone a response for long. So what better path to a tablet operating system than the Windows Phone, so-called “Metro”, interface?  This was a great idea, but whoever decided to take the almost two-decade-old existing Windows platform and merge it with the phone platform is the person who also fired the fatal bullet that killed this great product.

Microsoft takes pride in Windows, and so it should. It has successfully owned the desktop market for many years, beating Apple even after very successful OS X marketing campaigns, Google releasing Chrome OS, and the countless Linux flavors released over the years. If you survey a group of everyday people, more than 90% of them will be using Windows on their desktop computer both at home and at work.  What Microsoft has been doing with Windows over the years is exactly what people are looking for in a desktop operating system.

However, desktops and mobile devices such as tablets are two very different things. When you want to get down to business you are going to use a desktop.  You want the power of having what you need, and are used to, right at your hands. I am writing this on an HP Elitetab and keep wishing I had my HP ProBook in my lap instead. Tablets, by contrast, are designed as consumption devices. If I just want to horse around, watch YouTube, read some articles on reddit, or check out the news while lounging around, I love the feel and use of a tablet. The HP Elitetab feels just as nice as an iPad, and the Windows 8 Metro portion of the operating system works great.

This brings me to the title of this post.  I think Windows 8 is a great product because the “Metro” interface on tablets, for everyday consumption of the Internet, is hands down the best experience out there. I’ve tried all three. Everything just works, there are no limitations, the apps are there, and the interface is very intuitive.  None of the other mobile operating systems compare.  It feels very similar to the Windows Phone platform.  It’s different from the other two (iOS and Android), and a great choice for someone looking for something different. That said, I think this same interface on the desktop is the worst product Microsoft has ever made, for almost exactly the opposite reasons. When I am using my ProBook, I can’t get to desktop mode fast enough. I hardly open any Metro app, and when I do I get so frustrated navigating the application that I end up abandoning it.

Microsoft, you need to separate this product into two separate releases.  This is not one OS for all devices; it doesn’t work that way.  Touch-based consumption devices serve a totally different user experience than the one that same person wants when getting down to business.  They always say don’t mix business with pleasure, so why are you doing it with Windows?  I remember a Microsoft executive telling a story about seeing a man using an iPad on the train.  Eventually the man put it away and pulled out his laptop.  The executive asked the man why he carried two devices, to which he responded that the laptop was for when he wanted to do real work.  The point of the story was that Windows 8 was supposed to be the “one for all” operating system.  Here’s the problem, and the executive mentioned it himself:  the iPad was for using, and the laptop was for doing.

I fear that if Microsoft doesn’t split Windows, it will continue to degrade the design to a point at which users are fed up with the platform.  Microsoft still has enough of a loyal consumer base to save Windows before it’s too late.  Admit you goofed, “take it like a man”, and get back to what you do best. Split these two wonderful operating systems up and release them for the platforms they are designed for. Release Metro as a mobile platform, even naming it differently.  What’s wrong with “Microsoft Metro”?