October 04
Building a Hyper-V Cluster on two W2K8 Nodes

In this first part we'll focus on getting the infrastructure in place. We'll talk about the systems making up the nodes and the shared storage we need to build. Once that is in place, we'll focus in the next article on how to get Hyper-V clustered on this infrastructure. Note that we are not building a cluster inside Hyper-V, as many people have written about; we are building a physical failover cluster with two nodes, and on that we run Hyper-V.

My goals

The moment I read that Hyper-V was cluster-aware, I knew I would not rest until I had tried building a failover cluster running Hyper-V. It's one of those exciting technologies you just have to test out for yourself, not being satisfied with simply reading about it (yes, that is a hint ;-). In fact, I had the sneaky feeling that Robert Larson (the MS blog in the link above) may have cut a few corners in his ten-step plan to expedite things.

First of all, let's take a moment to realize the importance of Hyper-V being cluster-aware. In the past you would have several virtual machines running on some big iron, but once that big iron failed, being the single point of failure, all your machines would go down hard. As tempting as virtual machines are, this is completely unacceptable to most people and businesses. Failover clustering adds that nice and secure feeling of having a backup system, removing the single point of failure.

It must be nice for guys like Robert Larson, who works for Microsoft Consulting Services, to have all this hardware readily available to build labs with. Unfortunately I don't have such spare hardware, and no big budgets to spend. Still, this project needs some dedicated hardware and software, so I set my budget not to go over 1500 Euro and decided to use whatever spare or free equipment or software I could get, without compromising too much on performance.

Figure 1. The bill (and configuration details) of one of the nodes

I believe that technologies such as clustering and virtual machines are ready to reach out from the enterprise level into the mainstream. You could argue that I'm following Google's strategy of combining cheaper hardware with intelligent software: quantity in hardware for redundancy, and quality in software for availability.

To prove that, I want to build something I can actually use, not just a simple proof of concept, while still attempting to remain within budget. With that in mind I decided to buy some hardware to build a few identical systems to create my cluster with (rather than buying pre-built, more expensive, systems). A bit risky, as at the time of writing it's still a bit of a guess whether the motherboard you are buying has a BIOS supporting VT. More on how to figure that out can be found here. For that reason alone you may want to consider buying more expensive machines from the MS recommended hardware list (see previous link) if you are building this for a business. All in all I have not spent much more than 1400 Euros on two nice dual core 2.66 GHz systems with 6 GB of memory each, leaving me with 100 Euro to buy a Gigabit switch to build my SAN backbone with and still remain within budget.

Figure 2. The two nodes; MSI Neo2 FIR main boards, Core2 Duo E6750 CPU's

This should be sufficient to build my lab with; in fact a small business would do pretty well running a similar setup. (It's no enterprise hardware, but it's carefully selected and much better than what I see at some small businesses!) I would like to be able to run 3 to 4 servers on this Hyper-V cluster. Of course the really interesting part will be finding out the stability and performance of these systems. Once I'm satisfied with the design I'll make some baseline measurements using various tools such as Performance Monitor and see how things evolve… this should be fun!

When money is a factor, running several virtual machines on one cluster sounds like a great way to save some cash and still have redundancy. A few things have changed with Windows 2008 clustering though: where in the past it was possible to simply share a SCSI hard drive, that option is no longer available. For more details on the requirements for a Hyper-V cluster you might want to check out Microsoft's step-by-step guide. The article you are reading now, however, is written in a much more entertaining style; I even have pictures and everything! So be sure to return once you've read that, to get a more down-to-earth perspective on building a Hyper-V cluster. The shared storage options left include dedicated hardware solutions such as a Fibre Channel based SAN, or something a little closer to our budgetary needs: iSCSI.

More on iSCSI.

iSCSI is often called an emerging technology, and while I'm writing this I wonder if you can still call it 'emerging'; it has been around for years now. Strangely enough this great technology has not gained as much traction as it should have, but rest assured: with Hyper-V and Windows 2008 failover clustering now advocating it, and with iSCSI client (initiator) software natively available in Windows Server 2003, Windows Server 2008 and Vista, and as a separate download for Windows XP, I'm sure this technology is going to break through widely in the next few years. It used to be a nice technology available if you needed some LAN-based storage, but now we are actually seeing some very compelling technology that gains greatly by the use of iSCSI. As mentioned above, it competes with the much more expensive Fibre Channel, which is used at the higher end of the market. The advantage of iSCSI is that it uses commonly available technology (Ethernet, preferably Gigabit), which means most IT people already have a sound knowledge of the basics needed to maintain the SAN. Not all IT people have similar experience with Fibre Channel switched fabric, HBAs (Host Bus Adapters) and the enterprise solutions associated with that technology. As mentioned, the initiator software is natively available in the latest Microsoft operating systems, making entry-level SAN even more accessible. These are the factors that will drive iSCSI to become a mainstream technology. When building enterprise solutions, however, Fibre Channel in combination with dedicated storage hardware is still king, but at a price smaller businesses will not be willing to pay. For them, iSCSI is the alternative.

 So what is this iSCSI? Simply put, it's a fake hard drive that you can share from a server (target) on the network somewhere, and then link to from a client (initiator). On the client the initiator software makes the 'network disk' look as if it's a real hardware disk in your system. Instead of sharing a network drive on file or network level as you normally do with a file-server, you are now sharing a disk at block level, and this opens up a whole new realm of possibilities!

Figure 3. Would you believe these last two disks aren't real? The node sure thinks they are!

It's even possible for thin clients, for example, to boot from iSCSI drives, provided the right software is in place. Ideally one would have a NAS (Network Attached Storage) system with some RAID configuration for redundancy and/or performance. Even better: put the devices or servers sharing the iSCSI resources on their own fast (Gigabit) network and you'll have your very own SAN (Storage Area Network)… I could probably throw some more acronyms at you, but I'll leave it at this for now. I decided to build a small SAN on Gigabit Ethernet to allow some performance for my iSCSI drives. Gigabit Ethernet switches are cheap now, so it should not be a big drain on the budget. If you decide to go this route, make sure to read up on jumbo frames, especially if you have decided to build something more permanent than a lab situation. For now, we need to build a SAN solution!
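As a rough sketch of what that jumbo frame tuning looks like on the Windows 2008 nodes (the interface name, MTU value and IP address below are just placeholders, and your NICs, drivers and switch all have to support jumbo frames for this to do anything useful):

    rem show the current MTU per interface
    netsh interface ipv4 show subinterfaces
    rem raise the IP MTU on the dedicated storage NIC
    netsh interface ipv4 set subinterface "Storage LAN" mtu=9000 store=persistent
    rem test with a large, non-fragmented ping towards the iSCSI target
    ping -f -l 8000 192.168.10.10

Remember that the jumbo frame setting in the NIC driver's advanced properties (and on the switch) needs to be enabled as well; the netsh change alone only raises the IP-level MTU.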

How about Open Source?

Since we know we need iSCSI to remain on budget, we simply have to figure out what NAS to get! There are quite a few options: a NAS can come as a hardware appliance, or simply as software installed on a (usually Windows) server. Typically a NAS shares its disks using several protocols, such as SMB, NFS etc. Since we only need iSCSI we might as well only look into that. For my solution I went the server-with-iSCSI-software route, since I happen to have some spare (older) servers anyway. You'll find there are quite a few iSCSI solutions out there, open as well as closed source.

I'll throw in the spoiler right here: if you were thinking of using open source solutions like OpenFiler (current version 2.3) or FreeNAS (current version 0.69b3), think again. Starting with Windows 2008, the cluster software requires your iSCSI target software to support something called "persistent reservation commands". These are based on the SCSI Primary Commands-3 specs (SPC-3), and most open source software I know of (with the possible exception of OpenSolaris) is based on SCSI-2 and does not support these reservation commands. OpenFiler for example is based on IET (the iSCSI Enterprise Target). The FreeNAS target is based on the NetBSD iSCSI target by Alistair Crooks, ported to FreeBSD, and also does not support these commands. So much for Linux and BSD…

[update May 2009]

Good news! There are a few new options available now that support persistent reservations. First of all, as you may have read in the comments on this blog, the latest nightly builds of FreeNAS now seem to work fine. Additionally, if you happen to be an MSDN or TechNet subscriber (as I am ;-), you can get the brand-new Windows Storage Server 2008, with the 3.2 iSCSI Target, which of course works fine too...

[/update]

[update Oct 2009]

FreeNAS 0.7.0 RC2 (as of writing just out) now uses istgt, and according to this page: http://www.peach.ne.jp/archives/istgt/ it supports SPC-3...!

[/update]

If you are simply looking for a nice NAS for home use, and would like to use iSCSI with only a few systems connecting to it, look no further: OpenFiler as well as FreeNAS work admirably to help you accomplish this goal. For clustering purposes, however, we need to look further…

Windows iSCSI Targets

Good news on the Microsoft-based front: there are two options (unfortunately not free ones) that do support the persistent reservation commands and that also support concurrent connections to the shared disk (keep in mind that for a cluster several nodes need to have access to the disk!).

Windows-based iSCSI targets that support SCSI-3 persistent reservations and concurrent connections are "StarWind" from Rocket Division Software and Microsoft's own iSCSI Software Target 3.1.

Notice that although there is a free evaluation version (the personal version) offered by Rocket Division Software, this won't work when building a cluster, as it only allows one (single) connection. A single-node cluster seems a bit of a waste of time… The 30-day evaluation version does support concurrent connections, however, and would be suitable. For our purposes the 'Server' version of StarWind would be sufficient, since it supports two concurrent connections, and we have only 2 nodes anyway…

I opted to use the Microsoft 90 day evaluation version of the iSCSI Target, as it gives me more time to play around with my cluster. Both solutions support SCSI-3 persistent reservation, and both support multiple concurrent connections. Both implementations of the target software are very easy to install and configure, using a very intuitive interface. Jose Barreto has done an excellent job of describing how to configure the Microsoft software.

The Microsoft iSCSI Software Target is hard to find, as it is an OEM product sold (as an option) along with Windows Storage Server, a Windows 2003 server edition typically sold in combination with specialized hardware by vendors such as HP, Dell etc. You can't even find it on TechNet.

[update Oct 2010]

Storage Server 2008 as well as Storage Server 2008 R2 are now available from TechNet. In addition, the R2 version has iSCSI Target 3.3, which allows installation on a regular 2008 R2 server (previous versions required the Storage Server editions).

[/update] 

The 90-day Microsoft evaluation version can, however, be downloaded from: http://files.download-ss.com/storageserver.iso

This is the software intended for Windows Storage Server 2003, but it seems to work quite well on Windows 2003 Server SP2.

Figure 4. The Microsoft iSCSI Target Management interface

 One thing that was not too intuitive was the way access is granted to the initiators, which was much more straightforward in StarWind.

[update April 4th 2011]

Microsoft has decided to make their iSCSI Target 3.3 (which had been available from TechNet for a while) publicly available! http://blogs.technet.com/b/josebda/archive/2011/04/04/microsoft-iscsi-software-target-3-3-for-windows-server-2008-r2-available-for-public-download.aspx

This is great news, as now anyone can legally run this software; there is no longer a vendor lock-in with the Storage Server product. Competitors such as StarWind still have an edge, since they offer features (in the upcoming 5.7 version) such as high availability for storage (yes, supporting Cluster Shared Volumes, quite impressive!), block-level deduplication, load-balanced MPIO, etc.

So there's no real competition there, as the MS 3.3 Target is only a basic target. It's great for people (like me, and possibly you) running smaller environments though!

[/update]

[update October 4th 2009]

Rocket Division Software graciously sponsored this article by providing a licensed copy of their Server version, which has now taken the place of the earlier mentioned 90-day evaluation version of the Microsoft target. The first thing I noticed was that logging on and off to the target is much faster. I have not done any speed tests between the two yet, but if I find some time I might just do that. The StarWind interface is very intuitive, although I did find a few options presented in the GUI that made me wonder if Rocket Division Software has read the MS User Interface Style Guide; maybe intuitive, but definitely not standard. The iSCSI engine, however, has been running without a hitch for the last few days, and right now that's my only concern.

Figure 5. More options are possible with regard to the type of virtual storage used, and snapshots of such storage, in the Professional and Enterprise versions.

 

We have now basically built the following setup:

Figure 6. The NAS layout

As you can see, the storage network is physically separated from the LAN. The heartbeat is nothing more than a simple crossover cable between the two node servers.

Now that we have the iSCSI Target up and running, we need to connect the iSCSI client (the 'initiator'). So we go take a look at the control panel of the nodes.

Figure 7. The iSCSI initiator in the control panel!

Figure 8. Initiator as found in the control panel of the Windows 2008 node.  

The first thing we do is Discovery: simply provide the IP address of the iSCSI target you have set up.
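For reference, the same discovery step can also be scripted with the iscsicli tool that ships with the Windows 2008 initiator; the portal address below is just a placeholder for the IP of your target on the storage network:

    rem register the target portal with the initiator (the same step as the GUI Discovery tab)
    iscsicli QAddTargetPortal 192.168.10.10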

If the discovery went just fine on your initiator but there are no targets, make sure to check the iSCSI Initiators tab in the properties of your targets (on the 'server' or SAN); to test, simply add the FQDN (Fully Qualified Domain Name) of one of the initiator machines, for example one of my nodes: node1.servercare.nl. You can also add the iSCSI Qualified Name (IQN), which is quite similar to a DNS name with some extra numbers added. In fact, if you want to go overboard, you can run the iSNS server that can be downloaded from Microsoft. It's not required for our purposes though.

Figure 9. The MS iSNS server works somewhat like a DNS for iSCSI targets and initiators

Now that the discovery has been done, the Targets tab should show you your targets; you can simply log on to get the 'drives' you need.
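If you prefer to script this step as well, something along these lines should list the discovered targets and log on to one of them (the IQN shown is a made-up example; copy the exact name from the ListTargets output):

    rem list the targets found via the portal added earlier
    iscsicli ListTargets
    rem log on to a specific target by its IQN
    iscsicli QLoginTarget iqn.1991-05.com.microsoft:san1-disk1-target

Tick 'Automatically restore this connection' in the GUI (or set up a persistent login) if you want the drives to reappear after a reboot.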

Figure 10. As you can see the targets are there now!  

You'll find that when you now go to the Disk Manager on the nodes, there are new disks to be found. Simply bring them online as basic disks and format them using NTFS. Keep in mind that for this lab environment we have chosen to implement the iSCSI software on a Windows 2003 server that may have other functionality. If you decide to build a more permanent solution, I urge you to seriously consider a NAS of a more dedicated nature, with all the enterprise redundancy (dual power supplies, RAID levels, UPS etc.) in place. Believe me: once you have a number of virtual machines running from this NAS you do not want this storage to go down.. ever… Having to reboot it because of some other software running on it would be very annoying, to say the least.
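For those who prefer the command line over the Disk Management GUI, a diskpart script along these lines should partition and format a new iSCSI disk, assuming the disk has already been initialized and shows up as, say, disk 2 (the disk number, drive letter and label are just examples for this lab):

    rem contents of a script file, e.g. newdisk.txt
    rem select the new iSCSI disk (check the number with 'list disk' first!)
    select disk 2
    create partition primary
    assign letter=Q

    rem then, from an elevated command prompt:
    diskpart /s newdisk.txt
    format Q: /fs:ntfs /q /v:ClusterDisk1

Do this on one node only; the other node will see the same formatted disk through its own iSCSI connection.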

Validating the cluster environment

Now that we have added two new disks to the two Windows 2008 servers making up the nodes of what will become our cluster, the next step is to get the Failover Clustering feature installed (note that it is a feature, not a role!). Simply do this using the Server Manager.

Figure 11. Add Features in the Server Manager 
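If you'd rather not click through the wizard, the same feature can be added from an elevated command prompt on Windows Server 2008 with ServerManagerCmd (run it on both nodes; the feature ID should be Failover-Clustering, but a quick ServerManagerCmd -query will confirm the exact name):

    rem install the Failover Clustering feature
    ServerManagerCmd -install Failover-Clustering
    rem verify the feature shows up as installed
    ServerManagerCmd -query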

Once this feature is installed, you'll find the Failover Cluster Management MMC amongst the administrative tools.

Figure 12. The Failover Cluster Manager as found in the administrative tools.

 One of the options you can choose is Validate a Configuration… This will test both nodes.

Figure 13. This is what happens if you use FreeNAS or OpenFiler…

After the test you get a very lengthy report with all the conclusions. Below you can see the part showing that, at the second attempt, we used the Microsoft iSCSI Target software.

Figure 14. The Cluster Validation report.

Once you have a report looking like the above, you have been successful at creating a Hyper-V cluster infrastructure, and we are now ready to build the cluster and get Hyper-V installed! More on how to do that in the next article:

Building a Hyper-V Cluster on two W2K8 Nodes PART 2:

 Click Here to read part 2 ….

[Nov 2009] update: although the building process is identical to what is described here, building a Hyper-V cluster under Windows 2008 R2 brings the options of Cluster Shared Volumes and Live Migration; more on that here: http://www.servercare.nl/Lists/Posts/Post.aspx?ID=89

Comments

Missing Pieces, questions

Nice article, but there are some missing hardware pieces in your example setup:
 - each machine needs two additional NICs (10/100 as I read your write-up)
 - your iSCSI server isn't specified at all
 - you mention 6 Gigs of RAM per system, yet list 2 per system in the build list

Also, your configuration raises a question - why four low-capacity SATA drives (4x 160 Gig) on each machine? For a cluster test system, two larger mirrored drives would likely have cost less, used less power, and provided more storage per Euro...

Also, have you looked into MS's newest offering, Hyper-V Server 2008 - a free (as in beer) OS for hosting Hyper-V machines? Would it cluster as well as the full OS your example uses?

Finally, the cluster you describe is great for testing/development/training use, but for any kind of real use the cost escalates fast - by my count you would need two Windows Server 2008 Enterprise licenses (for each machine and up to four virtual machines), a copy of Windows Server 2003 to host the iSCSI fileshare, and the unknown cost of the iSCSI target software... The price of your hardware is trivial compared to the license costs...

But again, I understand and agree, for testing/development/training use this is great - the Windows Server 2008 Enterprise OS can be installed from a downloadable ISO from Microsoft for evaluation purposes; Windows Server 2003 is a bit trickier. Could the MS iSCSI target software be run under Server 2008?
 on 11/5/2008 14:52

Missing pieces... you're quite right!

WOW, you have been paying attention! ;-)

In fact you are quite right. After I had received the hardware I initially ordered, I decided to beef up the servers a bit more, with future projects in mind. Please note though that this didn't add much to the total bill, as prices have dropped quite a bit since the initial purchase. Besides memory I added two cheap Realtek 1 Gbit NICs (identical ones, which is important for clustering).

The drives are in a RAID10 config, simply because I wanted to play with that; not too much thought went into it. I guess it could be cheaper. Funny thing is that I only use half of the space available. The other node has to live with a RAID5 config.

The free beer OS won't cluster....

The iSCSI server is one of my other lab W2K3 servers, I should have mentioned that.

The cluster should be usable (in fact it is) for me, but the hardware of course is, as you point out, not worth installing the OS on. This makes sense as a lab environment, or for testing purposes. There are, by the way, some Microsoft customers that do get their licenses for silly prices; for them this may actually be a worthwhile setup. In real (enterprise) life one would scale up the hardware, but from an installation/software point of view that would not change the concept of what is described here.

By the way, I have a TechNet subscription that I used for the software. It's not too expensive, and well worth the money if you, like me, love to tinker with this stuff...! ;-)

I haven't tested the MS iSCSI target on W2K8, but I don't see any reason why it would not work.

Hope this answers your questions!

Paul
 on 11/5/2008 21:37

hello,very good!

ok,is good,good!
 on 11/11/2008 4:15

Re: Building a Hyper-V Cluster on two W2K8 Nodes

I'm glad you liked it!

Paul
 on 11/12/2008 0:21

Quick Question...

Hello and thank you for posting these articles, they are very informative and helpful!

Your hardware diagram shows the two servers, a 100 Mbit switch, a 1 Gbit switch and a "heartbeat" connection. What exactly does that consist of? I imagine it would be another 1 Gbit switch (and NIC cards) to allow the Hyper-V/quorum traffic to communicate, yes? If so, then a functional system would require each main server to have 3 NICs and the SAN support computer to have one, yes? Many thanks, Mac.
 on 12/21/2008 21:17

Question on heartbeat

The heartbeat link is a crossover cable, nothing more! Since you don't need much speed for it, a simple 100 Mbit connection is sufficient!

Ideally a system would have three NICs (as advised by MS); however, in a lab situation you'll find two work fine. Never use it like that in a production environment though!

The SAN, as you state, needs one (teaming won't work (yet), so two does not make sense right now).

I hope this answers your question.

regards,

Paul
 on 12/23/2008 22:56

core help

Have you seen any blogs or step by steps on this topic, but with Core instead of the Full version of 2008?
 on 2/17/2009 19:57

re: core help

 on 2/17/2009 20:39

Been running clusters since 2000 myself.

I was actually brought to your article while looking for almost the same solution, however in a real-world enterprise application.

Current plans consist of 2 Dell servers (unknown models at this point). I already run an EqualLogic SAN and LOVE IT!! So I will opt for another for this.

My plan is to utilize Hyper-V from Microsoft rather than VMware's offering, which is VMware View.

I plan to use Wyse V10L thin clients and boot wirelessly to the Hyper-V image. Total nodes will be 50.

Why not just do the same simple setup I've done for years, you ask? I was actually prompted into believing I could cross-cluster (term used) the system and do a dual-failover type scenario. Meaning.....

I will actually be running 25 nodes on one machine and 25 on the other. If one fails, the other will pick up the leftover nodes.

Almost sounded ridiculous to me, however I'm supposed to join a video conference tomorrow and a rep is going to have me VPN into his test lab to show that he is running it in this exact scenario.

I'm sure there are more pieces to the puzzle, but we'll soon see. Clustering and virtualization have grown leaps and bounds in the 10 years I've been doing it, so I'm sure someone has some tricks up their sleeve we haven't seen or heard of yet.

Stay tuned. I'll post back with more.
 on 4/22/2009 2:59

Re: Building a Hyper-V Cluster on two W2K8 Nodes

Sounds interesting! Love to see the details!

Running VMware at one of my customers also, and quite frankly, it's doing a great job. It will be interesting to see how both solutions compete going forward. So far I've put my money on Hyper-V...

Truly interested in your experiences:
If you find the time to get your findings into a Word doc with some pics, I'd be more than happy to put it up as an article, in your name of course.

cheers,

Paul
 on 4/22/2009 8:01