Hyper-V Virtual Machines losing network connectivity under Server 2012 R2


I recently dealt with a pretty major issue where a brand new cluster configuration had virtual machines sporadically losing network connectivity.  It would happen without any warning or errors.  Event Viewer looked clean, and the network adapters were functioning fine.  

The problem was that not all virtual machines would lose network connectivity.  One would go down while others on the same external Hyper-V switch functioned just fine.  Through troubleshooting I found that live migrating the affected VM from one host to another would restore its network connectivity, but sometimes at the cost of connectivity on other VMs.  The customer was also complaining that performance when accessing network data was poor even when the network on the VMs was operational.  

Since I had ruled out everything in the Hyper-V cluster, I quickly started to suspect that the network switch the adapters were plugged into was faulty.  It was a new switch, an HP 1810, and I figured I was dealing with an ARP issue on it (the 1810 unfortunately doesn't give you many options for managing it).  We had similar setups that had run fine for years without this issue, the only variable being that this cluster was on 2012 R2.  I called both HP and Microsoft for support, and initially we replaced the switch.  Of course, things ran OK for a few days, but then, at the worst time possible, the VMs started losing connectivity again.  

After about 6 calls to Microsoft, and escalation to senior network support, I was given a fix:  Disable VMQ on the physical network adapters on the host boxes.  

To disable VMQ, open Device Manager, right-click each adapter (including any teams you have configured, as well as each member of the team), choose Properties, and select the Advanced tab.  Finally, scroll down to Virtual Machine Queues and choose Disable from the drop-down box:

Note that when you do this the adapter on the host will be disabled and then re-enabled, so be aware of the brief network loss when you make this change.

Also be sure to uncheck "Enable virtual machine queue" under the network adapter settings of each virtual machine.  Both need to be done, or you could wind up with network performance issues.
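
If you prefer PowerShell over clicking through Device Manager, here is a minimal sketch of the same change using the in-box NetAdapter and Hyper-V cmdlets on Server 2012 R2.  The VM name is a placeholder, so adjust it (and verify the adapter list with Get-NetAdapterVmq) before running anything:

# Show which adapters currently have VMQ enabled
Get-NetAdapterVmq

# Disable VMQ on all physical adapters on the host
# (each adapter briefly resets, the same as the Device Manager change)
Disable-NetAdapterVmq -Name "*"

# Disable VMQ on a VM's virtual network adapters ("MyVM" is a placeholder; a weight of 0 turns VMQ off)
Set-VMNetworkAdapter -VMName "MyVM" -VmqWeight 0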

I asked the Microsoft tech if there was a KB article on this specific issue, something I could send to our mutual customer, but he stated there was nothing specifically for the lost network connectivity.  He did point me to the following KB article for poor performance under 2012 (non-R2), something that I have not witnessed yet.

http://support.microsoft.com/kb/2902166

Post a comment below with your thoughts, or if you have run into this specific issue.

Step by step configuration of 2 node Hyper-V Cluster in Windows Server 2012 R2 - Part 3


Welcome to part 3 of the step-by-step guide for configuring a Hyper-V cluster in Windows Server 2012 and Windows Server 2012 R2.  I hope you find this guide useful.  I appreciate any comments and feedback below. 

You can find parts 1 and 2 of this guide here:

Part 1: http://bit.ly/1aRxST2

Part 2: http://bit.ly/1auXxGD

A few of you asked me to elaborate more on configuring the cluster.  Sorry I didn't go into much detail during Part 2; I'll explain further here.

When you open up Failover Cluster Manager you have the option in the action pane to create a cluster.  Click on this to fire up the wizard:


The initial configuration screen can be skipped, and the second screen will prompt you to input the server names of the cluster nodes:

When you add the servers it will verify the failover cluster service is running on the node.  If everything is good, the wizard will allow you to add the server.  Once the servers are added, proceed to the next step.

The next step is very important.  Not only is this step required for Microsoft to ever support you if you run into any issues, but it also validates that everything you have done thus far is correct and set up properly for the cluster to operate.  I'm not quite sure why they give you the option to skip the tests, but I would highly recommend against it.  The warning is pretty straightforward as well:

The next portion of the cluster configuration that comes up is the validation wizard.  As I mentioned above, do not skip this portion.  Run all tests as recommended by the wizard:

The tests will take a few minutes to run, so go grab a coffee while waiting.  Once completed, you shouldn't have any errors.  However, as I mentioned in Part 2, there is a known issue when using the P2000 with the "Validate Storage Spaces Persistent Reservation" test, so you will get a warning relating to this, but you shouldn't have any other warnings if things are set up correctly. 

View the report and save it somewhere as a reference that you ran it in case Microsoft support wants to see it. 
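
The validation can also be kicked off from PowerShell, which makes it easy to keep the report.  A minimal sketch, assuming the FailoverClusters module is installed and using HV01/HV02 as placeholder node names:

# Run the full validation suite against both nodes
Import-Module FailoverClusters
Test-Cluster -Node "HV01", "HV02"
# The .mht validation report lands in the temp folder of the machine running
# the cmdlet; copy it somewhere safe in case Microsoft support asks for it.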

When you click Finish you will be asked to enter a name for the cluster, as well as the IP address for the cluster.  Enter these parameters and click Next:

Then finish up the wizard and form the cluster. 
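
The PowerShell equivalent of those last two screens, sketched with a placeholder cluster name and IP address:

# Form the cluster with a static management IP
New-Cluster -Name "HVCLUSTER" -Node "HV01", "HV02" -StaticAddress 192.168.1.50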

Now, there are several things we must do after the cluster is up and running to completely configure it.  I’ll go over each aspect now.

Cluster Shared Volumes:

This should be a given.  I won't go into much detail here, sparing you the time.  If you need to learn what a Cluster Shared Volume is, please read up on it here:

http://blogs.msdn.com/b/clustering/archive/2013/12/02/10473247.aspx

To enable the Cluster Shared Volume, navigate to Storage, then Disks.  Select your storage disk, right-click it, and choose the option "Add to Cluster Shared Volumes".

I like to rename the disks here as well, but this is not a necessary step.

Now that we have enabled Cluster Shared Volumes we should change the default paths in Hyper-V Manager on both nodes to reflect this.  The path should start with C:\ClusterStorage\Volume1 on both nodes.  I like to keep the rest of the default folder structure under it as well for simplicity:

Don’t forget to do this on both nodes.
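
Both steps can also be done from PowerShell.  A rough sketch; the disk resource name and paths are assumptions, so check Get-ClusterResource and adjust for your environment:

# Add the clustered disk to Cluster Shared Volumes ("Cluster Disk 2" is a placeholder name)
Add-ClusterSharedVolume -Name "Cluster Disk 2"

# Point the Hyper-V default stores at the CSV path (run this on each node)
Set-VMHost -VirtualMachinePath "C:\ClusterStorage\Volume1" -VirtualHardDiskPath "C:\ClusterStorage\Volume1\Virtual Hard Disks"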

Live Migration:

I dedicate a NIC for live migration.  I have always done this on the recommendation that if live migration traffic saturates the link used for managing the server, we could cause a failover situation where the heartbeat is lost.  To dedicate a network adapter to live migration, right-click the Networks option in Failover Cluster Manager and choose Live Migration Settings.  I rename my networks in the list first so that they are easier to understand than "Cluster Network X".
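
Here is a hedged PowerShell sketch of the same idea.  The network names are assumptions, and as far as I know the MigrationExcludeNetworks parameter is the setting the Live Migration Settings dialog writes to:

# Rename the cluster networks so they are easier to recognize
(Get-ClusterNetwork -Name "Cluster Network 1").Name = "Management"
(Get-ClusterNetwork -Name "Cluster Network 2").Name = "Live Migration"

# Restrict live migration to the dedicated network by excluding all of the others
Get-ClusterResourceType -Name "Virtual Machine" | Set-ClusterParameter -Name MigrationExcludeNetworks -Value ([string]::Join(";", (Get-ClusterNetwork | Where-Object { $_.Name -ne "Live Migration" }).ID))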

Cluster Aware Updating:

Cluster-Aware Updating is a fantastic feature introduced in 2012 that allows automatic updating of your cluster nodes without taking down the workloads they are servicing.  With Hyper-V, the VM roles are live migrated to another node; once all roles are off the node, updates are installed and the node is rebooted.  Then the same process happens on the other node.  There is a little bit of work to set this up, and you should have a WSUS server on your network, but the setup is worth the effort. 

To enable Cluster-Aware Updating choose the option on the initial failover cluster manager page

This will launch the management window where you can configure the options for the cluster.  Click on the “Configure cluster self-updating options” in the cluster actions pane.  This will launch the wizard to let you configure this option.

Before you walk through this wizard there is one necessary step you should complete first.  I like to place my Hyper-V nodes and the cluster computer object in their own OU within Active Directory.  I then typically grant the cluster computer object full control over that OU.  I find that if you don't complete this step you will sometimes get errors in Failover Cluster Manager, as well as issues with Cluster-Aware Updating.

The Cluster-Aware Updating wizard is pretty straightforward.  The only thing you need to determine is when you want it to run.  There is no need to check the "I have a pre-staged computer object for the CAU clustered role" option, as this object will be created during setup.  I don't typically change any options from the defaults here; I haven't found any reason to do so yet.  I also do a first run to make sure everything is working correctly. 
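
For reference, the self-updating role can also be added from PowerShell with the CAU cmdlets (installed with the Failover Clustering tools).  A sketch only; the cluster name and the monthly schedule are assumptions:

# Add the CAU self-updating clustered role; this example runs on the third Sunday of each month
Add-CauClusterRole -ClusterName "HVCLUSTER" -DaysOfWeek Sunday -WeeksOfMonth 3 -EnableFirewallRules -Force

# Optionally trigger a first updating run to confirm everything works
Invoke-CauRun -ClusterName "HVCLUSTER" -Force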

Tweaking:

The following are some tweaks and best practices I also do to ensure the best performance and reliability on the cluster configuration:

  1. Disable all networking protocols on the iSCSI NICs, with the exception of Internet Protocol Version 4/6.  This reduces the amount of chatter on those NICs.  We want to dedicate these network adapters strictly to iSCSI traffic, so there is no need for anything outside of the IP protocols (see the sketch after this list). 
  2. Change the binding order of the NICs, putting the management NIC of the node at the top of the list. 
  3. Disable RDP printer mapping on the hosts to remove any chance of a printer driver causing stability issues.  You can do this via local policy, group policy, or the registry (the registry value is included in the sketch after this list). 
  4. Configure exclusions in your anti-virus software based on the following article:
    http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx
  5. Review the following article on performance tuning for Hyper-V servers:
    http://msdn.microsoft.com/en-us/library/windows/hardware/dn567657.aspx
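
Here is the PowerShell sketch promised in items 1 and 3.  The adapter names are placeholders, and the component IDs are the standard bindings for the Microsoft client, server, QoS, and link-layer discovery protocols:

# Item 1: leave only IPv4/IPv6 bound on the dedicated iSCSI adapters
$iscsiNics = "iSCSI 1", "iSCSI 2"        # placeholder adapter names
$unneeded = @(
    "ms_msclient"   # Client for Microsoft Networks
    "ms_server"     # File and Printer Sharing
    "ms_pacer"      # QoS Packet Scheduler
    "ms_lltdio"     # Link-Layer Topology Discovery Mapper I/O
    "ms_rspndr"     # Link-Layer Topology Discovery Responder
)
foreach ($nic in $iscsiNics) {
    foreach ($id in $unneeded) {
        Disable-NetAdapterBinding -Name $nic -ComponentID $id
    }
}

# Item 3: disable RDP printer redirection via the policy registry value
New-Item -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" -Name fDisableCpm -Value 1 -Type DWord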

I hope you have been finding this guide useful.   Please leave any comments below, and thank you for visiting! 

 

Veeam ONE Monitor Free Edition for Hyper-V


I work in the SMB space, and many of our clients do not see the value in deploying a monitoring suite such as Microsoft System Center.  Don't get me wrong, that software suite is very valuable.  The thing is, it is valuable for IT.  Convincing SMB owners to purchase a monitoring suite for their smaller networks is challenging.  Honestly, I don't blame small business owners for not wanting to fork out thousands of dollars for monitoring/configuration tools.  Microsoft seems to have forgotten the SMB market when it comes to management products, having long since discontinued its "Essentials" suite.  So I have been on the hunt for something to monitor our Hyper-V deployments that won't break the bank for our customers. 

Enter Veeam ONE Free Edition.  This software is a virtualization management platform delivered by a relatively new software player in the virtualization field.  Veeam has been winning all kinds of awards for their software solutions, and they are well known for their backup product. 

I installed Veeam ONE Free Edition only a day ago; it was very easy and straightforward to install, and it didn't take long to become familiar with the console.  This is a great product for smaller shops looking for better management than Hyper-V Manager/Failover Cluster Manager can provide.  I'll cover the installation and configuration steps to get you started below.

Installation:

Head on over to the download page and sign up to download the software for free.  You are going to want to download both the 563MB installation ISO and the KB1841 7.0 R2 update.   I noticed on my original install that the management software did not want to connect to my failover cluster, but after installing the 7.0 R2 update things were working well. 

I'm installing this software on a Windows Server 2008 R2 virtual machine.  This server already runs some management software, monitoring AV, WSUS, and HP Systems Insight Manager.  After mounting the ISO, I began the installation by running setup.exe.

The installation screen is very nice; we want to install the server, so go ahead and click on that option. 

I absolutely hate it when I am trying to install software that requires prerequisites and the installation halts with an error.  The nice thing about Veeam is that even if you didn't read the documentation to find out what is required, the installer prompts you to install any missing prerequisites before it installs the main product.  The Visual C++ redistributable required a reboot, but upon logging in after the reboot the setup continued. 

The ISO you download, and the product you install, is the entire suite.  This is nice if you decide to purchase the full version later.  We are installing the Free edition, so choose that option here.

Again, a system configuration check is completed before installation.  If you are missing any components it will install them for you at that time. 

The software uses a SQL Server Express instance.  You can install this manually, use an existing instance, or do what I did and have a new SQL instance installed during setup. 

Pointing to the standalone host, or cluster is done during the installation.  Enter the details here.

When the installation is complete, you are required to log off before using the software. 

Don’t forget to install the R2 update. 

The last step in the installation is tweaking the amount of RAM that the SQL instance uses.  I don't want to run out of memory on my management server, so I tweaked this down to a maximum of 4GB of RAM in SQL Server Management Studio.
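
If you don't have Management Studio handy, the same cap can be set from a prompt with sqlcmd.  A sketch only; the instance name VEEAMSQL2012 is an assumption, so substitute whatever instance the Veeam installer actually created:

# Cap the SQL instance at 4GB of RAM ('max server memory' is an advanced option, so expose it first)
sqlcmd -S ".\VEEAMSQL2012" -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"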

Configuration:

The initial configuration wizard pops up immediately when launching Veeam ONE Monitor.  This is a pretty straightforward setup that asks you to enter notification details for alarms and email reports. 

After the initial configuration page is completed, there is really not much more you need to do.  The product is agentless, so there is no further configuration required on the hosts/nodes.  The interface is very easy to browse around and get used to.  I have only been using it for one day and I can already find my way to whatever I need to look at. 

I love being able to monitor the infrastructure in real time; seeing things like IOPS on the Cluster Shared Volume is very useful for me. 

Please note: the gaps in the reporting are due to me rebooting the monitoring server.

Keep in mind this software is completely free to download and use. Here is a screenshot of the network transfer rate report in real time 

You can even use the software to connect to the console of the virtual machine. 

I am going to be using this software more and more as the days go on, but so far it has been very easy to set up and configure without much tweaking.   This is a perfect fit for places where I need to monitor and manage a Hyper-V infrastructure beyond what Hyper-V Manager and Failover Cluster Manager provide.  I would definitely recommend that you try it out to see if it fits.  I think you will be pleasantly surprised. 

Stay tuned; as I use this software more and become more familiar with it, I will report my findings here.  I may also install a trial of the paid version to compare the differences.  If you have used this software (either free or paid), please comment with your feedback below.  

Two unknown devices in Windows Server 2008 R2 under Hyper-V 2012 R2


For whatever reason the Hyper-V integration components for Windows Server 2012 R2 do not install the device drivers for two devices on servers running Windows Server 2008 R2.  They are listed as unknown devices in device manager:


More detailed analysis shows the device information as follows:

Device 1 Hardware Ids:
VMBUS\{f8e65716-3cb3-4a06-9a60-1889c5cccab5}
VMBUS\{99221fa0-24ad-11e2-be98-001aa01bbf6e}

Device 2 Hardware Ids:
VMBUS\{3375baf4-9e15-4b30-b765-67acb10d607b}
VMBUS\{4487b255-b88c-403f-bb51-d1f69cf17f87}

What I did to get these devices working was extract the Windows6.2-HyperVIntegrationServices-x64.cab file to a location (in this example, the desktop).  The cabinet file is located on the Integration Services setup ISO under support/amd64.  From there, I manually launched the Update Driver wizard from Device Manager and pointed it to the files extracted from the cab. 
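
The extraction and driver installation can also be scripted from an elevated prompt inside the guest.  A sketch that assumes the Integration Services ISO is mounted as D: and extracts to a folder on the desktop:

# Create a folder on the desktop and extract the integration services cabinet into it
$dest = "$env:USERPROFILE\Desktop\ICDrivers"
New-Item -ItemType Directory -Path $dest -Force | Out-Null
expand -F:* "D:\support\amd64\Windows6.2-HyperVIntegrationServices-x64.cab" $dest

# Add and install every driver package found in the extracted files
pnputil -i -a "$dest\*.inf"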

The first device prompts you with a driver publisher warning; I'm not quite sure why this is the case since Microsoft is the publisher of these drivers:

Installing the driver software has not caused me any issues. 

The first device is the Microsoft Hyper-V Remote Desktop Control Channel.  The second device, which does not present the same publisher verification warning, is the Microsoft Hyper-V Activation Component.  I am not quite sure what these two devices actually do, but since I hate looking at unknown devices in Device Manager I needed to figure this out.  If you know the benefit of these two devices, please leave a comment below.

Step by step configuration of 2 node Hyper-V Cluster in Windows Server 2012 R2 - Part 2


Configuration of a 2 node Hyper-V Cluster in Windows Server 2012 – Part 2.  Part one is here: http://alexappleton.net/post/44748523400/configuration-of-2-node-hyper-v-cluster-in-windows

I realized that in my prior post on configuring a 2-node Hyper-V cluster I did not include the steps necessary for configuring the HP StorageWorks P2000.  So here they are:

There are two controllers on this unit, for redundancy.  If one controller fails, the SAN will remain operational on the redundant controller.  My specific unit has 4 iSCSI ports for host connectivity, connected directly to the nodes.  I am utilizing MPIO here, so I have two links from each server (on separate network adapters) to the SAN, as follows:

image

image

The cables I use to connect the links are standard CAT6 Ethernet cables. 

You also want to plug both management ports into the network.  Out of the box, both management ports should obtain an address via DHCP.   There is no need to use a CAT6 cable for the management ports, so go ahead and use a standard CAT5e cable instead.  You can also configure the device from the command line via the CLI, by interfacing with the USB connection located on each of the management controllers.  I have never had to use this for anything other than when the network port is not responding.  This interface is a USB mini connection located just to the left of the Ethernet management port, and a cable is included with the unit. 

image

Once plugged into your Windows PC, the device comes up as a USB-to-serial adapter and is given a COM port assignment.  You will have to install the drivers for the device to be recognized; they are not included in-box with Windows. 

I won't be covering the CLI; all configuration will be conducted via the web-based graphical console.

The web-based console is accessed via your favourite Internet browser.  I typically use Google Chrome, as I have run into issues logging into the console with later versions of Internet Explorer.  The default username is manage, password !manage.

Once logged in, launch the initial configuration wizard by clicking Configuration – Configuration Wizard at the top:

image

This will launch the basic settings configuration wizard.  This wizard should hopefully be self-explanatory, so I won't go into many details here. 

For this example I will be creating a single VDisk encompassing the entire drive space available.  To do this, click Provisioning – Create Vdisk:

image

Use your best judgement on what RAID level you want here.  For my example I am going to be building a RAID 5 array on 5x450GB drives:

image

Now I am going to create two separate volumes: one for the CSV file storage, and the other for the quorum.  The quorum volume will be 1GB in size for the disk witness required since we have 2 nodes, and the CSV volume will encompass the remaining space.  To create a volume, click on the VDisk created above, and then click Provisioning – Create Volume.  I don't like to map the volumes initially; rather, I explicitly map them to the nodes after connecting the nodes to the SAN:

image

In Part 1 we added the roles, configured the NICs used for both Hyper-V VM access and SAN connections, and prepped the servers.  Now we need to connect the nodes to the SAN by means of the iSCSI initiator. 

Our targets on the P2000 are 172.16.1.10, 172.16.2.10, 172.16.3.10, and 172.16.4.10 for ports 1 and 2 on each controller.  As you recall from step one, the servers are directly connected without a switch in the middle. 

To launch the iSCSI initiator just type “iSCSI” in the start screen:

image

I typically pin this to the start screen. 

When you launch the iSCSI initiator for the first time you will be presented with an option to start the service and have it start automatically.  Choose Yes:

image

I don't typically like using the Quick Connect option on the Targets screen; I'd rather configure each connection separately.  Click on the Discovery tab in the iSCSI Initiator Properties screen, then Discover Portal:

image

Next, we want to input the IP address of the SAN NIC that we are connecting to, then click on the advanced button. 

image

Select the Initiator IP that will be connecting to the target:

image

Then do this again for the second connection to the SAN.  When finished you should have two entries:

image

Now, back on the Targets tab, your target should be listed as Inactive.  Click the Connect button, then in the window that opens check the "Enable multi-path" option:

image

Now it should show connected:

image

Complete the same tasks on the other node as well.
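
The same discovery and connection can be scripted with the iSCSI cmdlets in Server 2012.  A rough sketch for one of the paths, using the addresses from the example above; repeat it for each portal/initiator pair and on both nodes:

# Make sure the iSCSI initiator service is running and starts automatically
Set-Service msiscsi -StartupType Automatic
Start-Service msiscsi

# Point the initiator at the first SAN port, sourcing from the matching local NIC
New-IscsiTargetPortal -TargetPortalAddress 172.16.1.10 -InitiatorPortalAddress 172.16.1.1

# Connect the discovered target persistently with multipath enabled
$target = Get-IscsiTarget
Connect-IscsiTarget -NodeAddress $target.NodeAddress -IsMultipathEnabled $true -IsPersistent $true -TargetPortalAddress 172.16.1.10 -InitiatorPortalAddress 172.16.1.1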

Now, before we can attach a volume from the SAN, we have to map the LUNs explicitly to each of the nodes.  So we need to open the web management utility for the P2000 again.  Once in, if we expand Hosts in the left pane we should now see our two nodes listed (I have omitted the server names in this screenshot):

image

We need to map the two volumes created on the SAN to each of the nodes.  Right-click on the volume and select Provisioning – Explicit Mappings:

image

Then choose the node, click the Map check box, give the LUN a unique number, check the ports assigned to the LUN on the SAN and apply the changes:

image

Assign the same LUN number and complete the same explicit mapping for the other node.  Then complete the same procedure for the other volume.  I used LUN 0 for the quorum volume and LUN 1 for the CSV volume. 

Jump back to the nodes, back into the iSCSI initiator and click on the Volumes and Devices tab, press the Auto Configure button and our volumes should show up here:

 image

Complete the same procedure on the second node as well.  If you are having difficulty getting the volumes to show up, sometimes a disconnect and reconnect is required (don't forget to check the "Enable multi-path" option).

Now we want to enable multipath for iSCSI.  Fire up the MPIO utility from the start screen:

image

Click on the Discover Multi-Paths tab, then check the "Add support for iSCSI devices" box, and finally click the Add button:

image

The server will prompt for a reboot.  So go ahead and let it reboot.  Don’t forget to complete the same tasks on the second node.
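
The same MPIO claim can be made from PowerShell (the Multipath I/O feature itself was added back in Part 1).  A quick sketch; the reboot is still required:

# Claim all iSCSI-attached devices for MPIO, then reboot to apply
Enable-MSDSMAutomaticClaim -BusType iSCSI
Restart-Computer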

After the reboot we are going to want to fire up disk management and configure the two SAN volumes on the node, making sure each node can see and connect to them.  When initializing your CSV volume I would suggest making this a GPT disk rather than an MBR one, since you are likely to go above the 2TB limit imposed with MBR. 

I format both volumes with NTFS, and give them a drive letter for now:

image

After configuring the volumes on the first node, I typically take the disks offline, then bring them online on the second node to be sure everything is connected and working correctly.  Don't worry about the drive letters assigned to the volumes; they don't matter.
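
For reference, here is roughly what the disk preparation looks like with the Server 2012 storage cmdlets.  The disk numbers and labels are assumptions, so verify them with Get-Disk before running anything:

# Identify the SAN disks that are still raw
Get-Disk | Where-Object PartitionStyle -Eq 'RAW'

# Initialize the CSV disk as GPT, create one partition, and format it (disk number 2 assumed)
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "CSV" -Confirm:$false

# Do the same for the 1GB quorum disk (disk number 3 assumed)
Initialize-Disk -Number 3 -PartitionStyle GPT
New-Partition -DiskNumber 3 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS -NewFileSystemLabel "Quorum" -Confirm:$false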

Getting there slowly!

Next, before we create the cluster, I always like to assign the Hyper-V external NICs in the Hyper-V configuration.  Fire up Hyper-V Manager and select "Virtual Switch Manager" in the action pane.  We are going to create the external virtual switches using the adapters we assigned for the Hyper-V VMs.  I always dedicate the network adapters to the virtual switch, unchecking the option "Allow management operating system to share this network adapter". 
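
The one-line PowerShell equivalent, sketched with placeholder names for the switch and the underlying adapter or team:

# Create a dedicated external switch; the management OS does not share this adapter
New-VMSwitch -Name "Hyper-V External 1" -NetAdapterName "Hyper-V External 1 Team" -AllowManagementOS $false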

At this point we have completed all the prerequisite steps required to fire up the cluster.  Now we will form the cluster. 

Fire up Failover Cluster Manager from the start screen:

image

Once opened, select the option in the action pane to create a cluster.  This will fire up the wizard to form our cluster.  The wizard should be self-explanatory, so walk through the steps required.  Make sure you run the cluster validation tests, selecting the default option to run all tests.  This is the best time to run them, since validation takes the cluster disks offline.  You don't want to discover issues once the cluster is in production and have to bring it down just to run the validation tests.  Any issues we run into here can be addressed now, before the system goes live. 

The P2000 on Windows Server 2012 will create a warning about validating storage spaces persistent reservation.  This warning can be safely ignored as noted here

Hopefully when you run the validation tests you will get all Success results (other than the note above).  If not, trace back through the steps and make sure you are not missing anything.  Once you get a successful validation, save the report and store it in case you need to reference it for future support. 

Finish walking through the wizard to create your cluster.  Assign a cluster name and static IP address to your cluster as requested from the wizard. 

That should do it.   If you got this far you made it.  Congratulations! 

 Continue with Part 3 here: http://bit.ly/1aVnDAj

 

Hyper-V Server 2012 is too good to keep ignoring


I've worked with Windows for over ten years.  The constant debate has always been there between those who are pro-Microsoft and those who are anti-Microsoft.  Or should I say Micro$oft?  One of the biggest complaints from the anti-Microsoft corner is the outrageous pricing strategy Microsoft uses to sell software; that they sell software in the first place is an atrocity on its own.  Other complaints have been about the monopolies Microsoft has held at times with various products it releases.  The amount of hatred some individuals have towards one company is astounding, and that negativity is hard to break.  So hard that they go against their own morals just to be anti-Microsoft.

Let me give you an example:

I have a friend in IT who is very anti-Microsoft.  You see, he despises anything they put out, declaring it garbage before even knowing what the product does.  All of the devices he manages are running some form of Linux, and he constantly boasts how cheap it is to run compared with the offerings from Microsoft.  And yes, he does own several Chromebooks.  Now, when he compares hypervisor platforms he is so pro-VMware it is overwhelming.  VMware is a great product; I will never argue that fact.  However, to get started with it you are going to have to shell out some serious money.  Yes, money for software (what a concept!).  I try to tell him that VMware was a monopoly in the virtualization field for many years, but he immediately dismisses it.  Finally I tell him that Hyper-V is free!  And the response I get is "it's not even worth that". 

So what gives?  Hyper-V Server, the non-GUI (yes, it's even command line for you GUI haters!), FREE product from Microsoft, is an enterprise-class hypervisor, yet you turn your nose up at it.  And don't tell me KVM is a better solution, because it's not quite there yet.  It may be some day, but it certainly is not in the same class as VMware's offerings.  The download is free, here's the link: http://www.microsoft.com/en-us/server-cloud/hyper-v-server/default.aspx  Fire it up and give it a good test.  Are you just scared you may actually like a product from Microsoft?

pfSense Update


I posted an exported VM of pfSense 2.1 on SkyDrive; please download it here:

http://sdrv.ms/14O16Bu

What went wrong with Windows 8 and why it is Microsoft’s best product


I've used Windows 8 for well over a year now, since the first customer preview.  I've been involved with beta testing Microsoft's operating systems in one way or another since Windows Millennium, and have supported the operating system for well over 10 years.  I have to say that Windows 8 is both Microsoft's best and worst product.  You may ask how I could possibly say that; it can't be both.  Let me explain.

Windows Phone is a marvellous mobile platform.  Its main flaw is that Microsoft was too late to the game to capture enough market share to attract the major app developers.  Therefore, the big apps that ran on iOS and Android were nowhere to be found in the Windows Phone marketplace.  In the early days of the platform the marketplace was so dry it was hard to find a good app for free, never mind paying for one.  This became a snowball effect of sorts, since none of the most popular developers wanted to journey into these barren lands.  Everyone who reviewed the platform loved the design, the interface, and the operating system in general, but since it was missing the apps they used every day they had no choice but to give the phone a thumbs down.

Unfortunately there was nothing Microsoft could do about this.  They put significant effort into marketing the platform, even partnering with a faltering Nokia to push it, paying developers premiums, and offering app porting assistance, but nothing they did seemed to matter.  Both Apple and Google were well out of the starting gate, running equally aggressive phone development and marketing campaigns. 

Microsoft was also being hit hard in its bread and butter by the new tablet market.  Again, both Apple and Google jumped out of the gate, leaving everyone wondering where Microsoft was.  Microsoft is fortunate enough to have such a loyal share of the "primary computing device" market that it had some breathing room, but it learned from the phone release that it couldn't afford to postpone this for long.  So what better path to a tablet operating system than using the Windows Phone, so-called Metro, interface?  This was a great idea from Microsoft, but whoever decided to take the almost two-decade-old Windows platform and try to merge it with the phone platform is also the person who fired the fatal bullet that killed this great product.

Microsoft has pride in Windows, and so it should.  They have successfully owned the desktop market for many years, defeating Apple even after very successful marketing campaigns for OS X, Google's release of Chrome OS, and the countless Linux flavors released over the years.  If you survey a group of everyday people, more than 90% of them will be using Windows on their desktop computer both at home and at work.  What Microsoft has been doing with Windows over the years is exactly what people are looking for in a desktop operating system. 

However, desktops and mobile devices such as tablets are two very different things.  When you want to get down to business you are going to use a desktop.  You want the power of having what you need, and are used to, right at your fingertips.  I am writing this on an HP Elitetab and keep wishing I had my HP ProBook on my lap instead.  Tablets, on the other hand, are designed as consumption devices.  If I just want to horse around, watch YouTube, read some articles on reddit, or check out the news while lounging around, I love the feel and use of a tablet.  The HP Elitetab feels just as nice as an iPad, and the Windows 8 Metro portion of the operating system works great. 

This brings me to the title of this post.  The reason I think Windows 8 is a great product is that the "Metro" interface on tablets, for everyday consumption of the Internet, is hands down the best experience out there.  I've tried all three.  Everything just works, there are no limitations, the apps are there, and the interface is very intuitive.  None of the other mobile operating systems compare.  It feels very similar to the Windows Phone platform.  It's different from the other two (iOS and Chrome), and a great choice for someone looking for something different.  That said, I think this same interface on the desktop is the worst product Microsoft has ever made, for almost exactly the opposite reasons.  When I am using my ProBook, I can't get to desktop mode fast enough.  I hardly open any Metro apps, and when I do I am so frustrated with navigating the application that I end up abandoning it. 

Microsoft, you need to split this product into two separate releases.  This is not one OS for all devices; it doesn't work that way.  Touch, consumption-based devices serve a totally different user experience than when that same person wants to get down to business.  They always say don't mix business with pleasure, so why are you doing this with Windows?  I remember a Microsoft executive telling a story about seeing a man using an iPad on the train.  Eventually that person put it away and pulled out his laptop.  The executive asked the man why he carried two devices, to which he responded that the laptop was for when he wanted to do real work.  The point of the story was that Windows 8 was supposed to be the "one for all" operating system.  Here's the problem, and the executive mentioned it himself:  the iPad was for using, and the laptop was for doing. 

I fear that if Microsoft doesn't split Windows, they will continue to degrade its design to a point where users are fed up with the platform.  Microsoft still has a loyal enough consumer base to save Windows before it's too late.  Admit you goofed, "take it like a man", and get back to what you do best.  Split these two wonderful operating systems up and release them for the platforms they are designed for.  Release Metro as a mobile platform, and even name it something different.  What's wrong with Microsoft Metro?

pfSense on Hyper-V. Follow up


I've been running pfSense on Hyper-V for a few months.  I haven't really encountered any issues with it; however, I have read a few forum threads that state otherwise.  My setup is basic routing and NAT. 

I've exported the VM as a complete package, zipped it up, and am sharing it on SkyDrive, ready for you to import into Hyper-V: http://sdrv.ms/15jeBZ6.  Once unzipped, the package extracts to 5GB in size. 

When you import the VM, the Hyper-V console will ask you to match your adapters to your virtual switch configuration.  The first adapter is hn0 in pfSense, which is the "WAN" interface; it is set to pick up an IP address via DHCP.  The second adapter is hn1 in pfSense, which is the "LAN" interface; it is set to a default static IP address of 192.168.1.1.  You can change this via the menu-driven command line option, or via the webGUI.  The default username and password are set on this VM: admin/pfsense.

I'd be interested to hear any feedback; fire me an email with any comments:  alex@northernjeep.com.  I don't consider myself a pfSense expert, but I set this VM up months ago on a few different test beds and basically forgot about it; it just runs without concern.  So far my experiences with pfSense on Hyper-V have been nothing but positive.  

Step by Step Configuration of 2 node Hyper-V Cluster in Windows Server 2012 R2 – Part 1


Although the features presented in Hyper-V replica give you a great setup, there are many reasons to still want a failover cluster.  This won’t be a comparison between the benefits of Hyper-V replica vs failover clustering.  This will be a guide on configuring a Hyper-V cluster in Windows Server 2012.  Part one will cover the initial configuration and setup of the servers and storage appliance. 

The scope:
2-node Hyper-V failover cluster with iSCSI shared storage for small scalable highly available network. 

Equipment:
2x HP ProLiant DL360p Gen8 servers, each with:
- 64GB RAM
- 8x 1Gb Ethernet NICs (4-port 331FLR adapter, 4-port 331T adapter)
- 2x 146GB 15K SAS drives

1x HP StorageWorks P2000 MSA:
- 1.7TB raw storage

Background:

When sizing your environment you need to take into consideration how many VMs you are going to need.  This specific environment only required 4 virtual machines to start with, so it didn't make sense to go with Datacenter.  Windows Server 2012 differs from previous releases in that there is no feature difference between editions.  In versions prior to 2012, if you needed failover clustering you had to go with Enterprise-level licensing or above; Standard didn't give you the option to add the failover clustering feature (even though you could go with the free Hyper-V Server, which did support failover clustering).  This has changed in 2012: you no longer have to buy specific editions to get roles or features, as all editions include the same feature set.  However, when purchasing your server license you need to cost out your VM requirements.  Server 2012 Standard includes two virtual use licenses, Datacenter includes unlimited, and the free Hyper-V Server doesn't include any.  Virtual use licenses only apply so long as the host server is not running any role other than Hyper-V.   Because there is no difference in feature set, you can start off with Standard and look at moving to Datacenter if you happen to scale out in the future.  Although I see little purpose in changing editions, you can convert a Standard edition installation to Datacenter by entering the following command at the command prompt:

dism /online /set-edition:ServerDatacenter /productkey:48HP8-DN98B-MYWDG-T2DCC-8W83P /AcceptEULA

I have found issues when trying to use a volume license key during the above dism command.  The key above is a well-documented key, which always works for me.  After the upgrade is completed I enter my MAK or KMS key to activate the server since the key above will only give you a trial. 

The next thing you need to determine is whether you want to go with GUI or non-GUI (Core).  Thankfully, Microsoft has given us the option to switch between the two with a PowerShell command, so you don't need to stress over the choice:

To go "core": Get-WindowsFeature *gui* | Uninstall-WindowsFeature -Restart
To go "GUI":  Get-WindowsFeature Server-Gui-Mgmt-Infra, Server-Gui-Shell | Install-WindowsFeature -Restart

Get Started:

Install your Windows Operating system on each of the nodes, but don’t add any features or roles just yet.  We will do that at a later stage.

Each server has a total of 8 NIC’s and they will be used for the following:

1 – Dedicated for management of the nodes, and heartbeat
1 – Dedicated for Hyper-V live migration
2 – To connect to the shared storage appliance directly
4 - For virtual machine network connections

We are going to use Multipath I/O to connect to the shared storage appliance.  From the NICs dedicated to the VMs we will create a team for redundancy.  Always keep redundancy in mind: we have two 4-port adapters, so we will use one NIC from each for SAN connectivity, and when creating a team we will use one NIC from each of the adapters as well. 

The P2000 MSA has two controller cards, with 4 1Gb Ethernet ports on each controller.  We will connect the Controller as follows:

image

Two iSCSI host ports will connect to the dedicated NICs on each of the Hyper-V hosts.  Use CAT6 cables for this since they are certified for 1Gbps network traffic.  Try to keep redundancy in mind here: connect one port from one controller card to a single NIC port on the 331FLR, and a port from the second controller card to a single NIC port on the 331T:

image

On our Hyper-V nodes we are going to have to configure the connecting Ethernet adapters with the subnet that correlates to the SAN.  I tend to use 172.16.1.1, 172.16.2.1, 172.16.3.1 and 172.16.4.1 to connect.  When configuring your server adapters, be sure to uncheck the option to register the adapter in DNS so you don't end up populating your DNS database with errant entries for your host servers.  See for example:

 image

From each server ping the host interfaces to ensure connectivity. 
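
A PowerShell sketch of that adapter configuration for one of the SAN-facing NICs; the interface alias is an assumption, so check Get-NetAdapter for the real name and repeat for the other SAN adapters:

# Static IP on the first SAN-facing adapter; no gateway is needed for a direct link
New-NetIPAddress -InterfaceAlias "SAN 1" -IPAddress 172.16.1.1 -PrefixLength 24

# Keep this adapter out of DNS so it doesn't register errant host records
Set-DnsClient -InterfaceAlias "SAN 1" -RegisterThisConnectionsAddress $false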

HP used to ship a network configuration utility with their Windows servers.  It is not yet supported on Windows Server 2012; however, the NICs I am using are all Broadcom.  A quick look on Broadcom's website led me to their Windows management application, BACS.  This utility allows you to fine-tune the network adapter settings; what we need it for is to hard set the MTU on the adapters connecting to the SAN to 9000.  There is a netsh command that will do this as well, but I found it to be unreliable when testing and it rarely stuck. 

Download and install the Broadcom Management Applications Installer on each of your Hyper-V nodes.  Once installed, there should be a management application called Broadcom Advanced Control Suite.  This is where we want to set the jumbo frame MTU to 9000.  This management application does run on the non-GUI version of Windows Server, and you can also connect to remote hosts using the utility.  You need to make sure you have the right adapter here, and if you are dealing with 8 NICs like I am this can get confusing, so take your time.  Luckily, you can see the configuration of the NIC in the application's window:

 image

Verify connectivity to the SAN after you set the MTU.  Send a large packet size when pinging the associated IP addresses of the SAN ports using a ping command such as:

ping 172.16.1.10 -f -l 6000 

If you don’t get a successful reply here then revisit your settings until you get it right.
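
If you would rather avoid the vendor utility, the in-box NetAdapter cmdlets can usually set the same advanced property.  A sketch only; the adapter name is an assumption, and the exact jumbo packet value (9000 vs 9014) depends on the driver, so list the property first:

# See how the driver exposes the jumbo packet setting and which values it accepts
Get-NetAdapterAdvancedProperty -Name "SAN 1" -RegistryKeyword "*JumboPacket"

# Set the jumbo packet size on the SAN-facing adapter (9014 on many Broadcom drivers)
Set-NetAdapterAdvancedProperty -Name "SAN 1" -RegistryKeyword "*JumboPacket" -RegistryValue 9014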

Network Teaming

You could create a network team in the Broadcom utility as well; however, in testing I ran into issues with it.  The team created fine but didn't initialize on one server, and removing the errant team proved to be a major hassle.  Windows Server 2012 includes a built-in NIC teaming function, so I prefer to configure the team directly on the server using the Windows configuration.  Again, since I am dealing with two different network cards, I typically create a team using one NIC port from each card on the server. 

The new NIC teaming management interface can be invoked through Server Manager, or by running lbfoadmin.exe from a command prompt or the Run box.  To create a new team, highlight the NICs involved by holding Ctrl down while clicking each one.  Once highlighted, right-click the group and choose the option "Add to New Team".

 image

This will bring up the new team dialog.  Enter a name that will be used for the team.  Try to stay consistent across your nodes, so remember the name you use.  I typically go with "Hyper-V External #".  

image

We have three additional options under “Additional properties”

Teaming mode is typically set to switch independent.  Using this mode you don't have to worry about configuring your network switches.  As the name implies, the NICs can be plugged into different switches; as long as they have a link light they will work in the team.  Static teaming requires you to configure the network switch as well.  Finally, LACP is based on link aggregation, which requires a switch that supports this feature.  The benefit of LACP is that you can dynamically reconfigure the team by adding or removing individual NICs without losing network communication on the team. 

Load balancing mode should be set to Hyper-V switch port.  Virtual machines in Hyper-V have their own unique MAC addresses that are different from the physical adapter's.  When load balancing mode is set to Hyper-V switch port, traffic to the VMs is balanced across the teamed NICs. 

Standby adapter is used when you want to assign a standby adapter to the team.  Selecting the option here will give you a list of all adapters in the team, and you can assign one of the team members as a standby adapter.  The standby adapter is like a hot spare; it is not used by the team unless another member of the team fails.  It's important to note here that standby adapters are only permitted when teaming mode is set to switch independent. 

There is a lot to be learned regarding NIC teaming in Server 2012, and it is a very exciting feature.  You can also configure teams inside of virtual machines as well.  To read more, download the teaming documentation provided by Microsoft here: http://www.microsoft.com/en-us/download/details.aspx?id=30160
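
The same team can be created in one line of PowerShell, matching the switch independent / Hyper-V port settings described above.  The member names are placeholders:

# Create the VM-facing team from one port on each physical card
New-NetLbfoTeam -Name "Hyper-V External 1" -TeamMembers "331FLR Port 1", "331T Port 1" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm HyperVPort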

Once we have the network team in place it is time to install the necessary roles and features on your nodes.  Another fantastic new feature in Server 2012 is the ability to manage multiple servers by means of server groups.  I won't go into detail here, but if you are using Server 2012 you should investigate using server groups when managing multiple servers with similar roles.  In my case, I always create a server group called "Hyper-V Nodes", assigning the individual servers from the server pool to the group.

Adding the roles and features:

Invoke the Add Roles and Features wizard by opening Server Manager, choosing the Manage option in the top right, then "Add Roles and Features".

image

We want to add the Hyper-V role, plus the Failover Clustering and Multipath I/O features, to each of the nodes.  You will be prompted to select the network adapter to be used for Hyper-V; you don't have to worry about setting this option at the moment, as I prefer to do it after installing the role.  You will also be prompted to configure live migration; since we are using a cluster here this is not required, as the live migration option in this wizard is for shared-nothing (non-SAN) setups.  Finally, you will be prompted to configure your default stores for virtual machine configuration files and VHD files.  Since we will be attaching SAN storage, we don't need to be concerned with this step at the moment.  Click Next to get through the wizard and Finish to install the roles and features.  Installation requires a reboot to complete, and will actually take two reboots before the Hyper-V role is fully installed.
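
A PowerShell sketch for installing everything in one pass on each node:

# Install the Hyper-V role plus the clustering and MPIO features, then reboot
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart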

This covers part one of the installation.  At this point we should have everything plugged in, initial configuration of the SAN completed, and initial configuration of the Hyper-V nodes complete as well.  In part two we will be configuring the iSCSI initiator, and bringing up the failover cluster. 

Part two here: http://alexappleton.net/post/69111063826/configuration-of-2-node-hyper-v-cluster-in-windows