November 06
Hyper-V Cluster Shared Volumes

If you have built your Hyper-V cluster on Windows 2008, you might want to consider rebuilding it on Windows 2008 R2. Two reasons really jump out to justify that decision:

First of all there's the live migration everybody has been talking about. But secondly, much less visible in the mainstream media yet in my opinion at least as important: Cluster Shared Volumes.

In fact I would go as far as to say that live migration is nowhere without CSV (yes, it's confusing, isn't it? We used to refer to Comma-Separated Values with that acronym!).

Why Cluster Shared Volumes

What's so special about CSV? Simply put: it allows multiple nodes to access the same volume at the same time, much like the specialized file systems from vendors such as Sanbolic (MelioFS) or NetApp allow you to do.

You might have read the article I wrote some time ago about the issues I had when building my previous cluster on Windows 2008. If you didn't, here's a quick recap: I bumped into the need to have several LUNs for the different machines I wanted to distribute between my nodes. Each iSCSI LUN could only be accessed by one node at a time, so I was basically forced to create many LUNs in order to fail over one particular machine without taking the other machines with me, since they were also using storage on that LUN.

Live migration and CSVs

I referred to live migration as one of the reasons to implement CSV. The reason is the time a failover from one node to the other takes. If this is done with volumes that are not CSVs, the nodes respectively need to first dismount and then mount the volume. Once shared volumes are used, the nodes can fail over immediately, since they have simultaneous access to them.

If you were to run a 'ping -t' against the cluster, it would be unresponsive for 3 or 4 pings (which, as I'm sure you'll agree, can feel like 'forever'!) versus 1 ping when using CSVs… Big difference!

TechNet lists a number of other advantages, but to me the above are the most important.

Creating CSVs

First we enable the shared volumes from the main page of the Failover Cluster Manager:
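For those who prefer the command line, the same switch can be flipped with the FailoverClusters PowerShell module that ships with Windows 2008 R2. This is a sketch; run it on one of the cluster nodes:

```powershell
# Enabling CSV is a single cluster-wide property; the GUI checkbox does the same thing.
Import-Module FailoverClusters

(Get-Cluster).EnableSharedVolumes = "Enabled"

# Verify the setting took:
Get-Cluster | Format-List Name, EnableSharedVolumes
```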

You'll notice that the Failover Cluster Manager now has a node called 'Cluster Shared Volumes'.

If you right-click on it, you can 'add storage'. That is, if you actually have storage available. In my case I simply made an iSCSI disk available that I created on my Windows 2008 Storage Server.

To do this I used the iSCSI Target software on the Storage Server and created a new disk. As an interesting side note: Windows 2008 Storage Server creates VHDs as the storage units it ties to a target. So essentially I will be creating VHDs within VHDs when I make VMs!

For people who would like to learn more about iSCSI: take a quick look at my article about building the Hyper-V cluster.

Once we've made the new disk available to the cluster by adding it with the iSCSI client in the Control Panel on the nodes, we'll be able to put it online, initialize it, make it a simple volume, and quick-format it (this only needs to be done on one node, by the way).
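The online/initialize/format sequence can also be scripted with diskpart, here fed from PowerShell. The disk number (2) and the label are assumptions from my lab; always confirm the number with 'list disk' first:

```powershell
# Run on ONE node only. CSV requires NTFS, hence fs=ntfs.
# Use "convert gpt" instead of "convert mbr" for disks larger than 2 TB.
@"
select disk 2
online disk
attributes disk clear readonly
convert mbr
create partition primary
format fs=ntfs quick label=CSVDATA
"@ | diskpart
```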

Now we're ready to take it up in the cluster. We do this by simply adding it as a regular disk:

Once we click "Add a disk" we'll be able to select the disk we want.

Notice that the disk we just added still shows up as a local disk (F:) in the storage overview.

Another thing you can see here is that the Cluster Shared Volume also appears to be local! In fact, it's on the system drive, in our case the C: drive. Don't worry, that's supposed to happen… More on this later.

We now have our disk available to the cluster... remember that both nodes need to be able to 'see' the iSCSI disk. I've made it easy on myself by simply adding the virtual drives in the Storage Manager to a target that I had already added to my nodes (in my case it was called T2). If you decided to create a new target, make sure that the target is accessible and initiated from both nodes.
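To check the target from the command line on each node, the built-in iscsicli tool can help. The portal address and the IQN below are examples from my lab, not values you can copy verbatim:

```powershell
# Run on EACH node so both can 'see' the disk.
iscsicli QAddTargetPortal 192.168.1.50       # point at the Storage Server
iscsicli ListTargets                         # the target (T2 in my case) should be listed
iscsicli QLoginTarget iqn.1991-05.com.microsoft:storage-t2-target
```

To make the connection survive a reboot, also tick "Add this connection to the list of Favorite Targets" when logging in through the GUI initiator.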

The next step is to simply add the storage we just made available to the cluster's shared storage:

Select "Add storage" and select the Cluster Disk we want to have shared.

Now we see the new shared disk available to the cluster:

I've taken the liberty of renaming the shared disk to "Cluster Disk 2 data disk".
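The same "Add storage" step can be done from PowerShell. The disk name is whatever Failover Cluster Manager shows; mine is renamed, so adjust it to match yours:

```powershell
Import-Module FailoverClusters

# Promote the regular cluster disk to a Cluster Shared Volume:
Add-ClusterSharedVolume -Name "Cluster Disk 2 data disk"

# Confirm it now appears under Cluster Shared Volumes:
Get-ClusterSharedVolume
```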

Once this is done, you'll also notice that the disk that was formerly seen as a local F: drive now seems to link to the earlier-mentioned system (C:) drive. This works like a mount point, where a disk is made available to the system through a folder.

If we look here, we'll see that the available volumes (we've got two in our example) are listed in that folder:
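In other words, every node addresses the shared storage through the same local-looking path. A quick sketch of what to expect (the VM name is a made-up example):

```powershell
# The CSV namespace is identical on all nodes:
Get-ChildItem C:\ClusterStorage              # shows Volume1, Volume2, ...

# A VM stored on the first shared volume would live at a path like:
# C:\ClusterStorage\Volume1\ExampleVM\ExampleVM.vhd
```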

As mentioned, both nodes will be able to write there, but never use this access for anything besides the Hyper-V cluster. Manually making changes here will lead to problems. The TechNet article specifically states:

  • No files should be created or copied to a Cluster Shared Volume by an administrator, user, or application unless the files will be used by the Hyper-V role or other technologies specified by Microsoft. Failure to adhere to this instruction could result in data corruption or data loss on shared volumes. This instruction also applies to files that are created or copied to the \ClusterStorage folder, or subfolders of it, on the nodes

Now that you have shared volumes available, you'll be able to create Hyper-V VMs on them.

One pointer I'd like to leave you with: make sure to change the default Hyper-V settings to point to this new location. You'll find that creating new machines, even though you point to the correct location during creation, might result in errors. This is fixed by simply making the ClusterStorage location the default.


Host Limits

Are there any limits to how many Hyper-V hosts can participate within a CSV?  In this configuration would live migration move the Virtual Machines folder only and leave the .VHD files in place?
 on 3/18/2010 15:35

Re: Host Limits

Regarding the limit: my guess would be that the hardware, OS or NTFS specifies the limit.

If I understand your second question correctly:
Yes, you still connect to the same .VHD files, so it's important to have these highly available as well (SAN)



P.s. VMware in version 4 actually also makes it possible to migrate the storage, although not at the same time as VMotion. I trust (hope) Hyper-V will add this feature in the future...
 on 3/24/2010 10:33

Yes there is a limit - but not with CSV

The only limitation is the limit of Windows 2008 clustering, which is 16 nodes in a cluster. More details here:
 on 3/25/2010 0:49

Multiple Disks attached to VM


Would a Virtual Machine that has 2 additional SCSI controllers attached to different disks on different LUNs work?

I have Exchange server setup like this:

c:\ OS.vhd       LUN 0
d:\ Logs.vhd     LUN 1
e:\ Database.vhd LUN 2

This way I am spreading I/O as recommended. However, I would like to make the Exchange server able to live migrate.

Q1. Does just the OS LUN need to be a CSV, or do all of them need to be?

Q2. Each node (I have 4) will need C:, D:, and E:, right? But can I just fail over the OS LUN and leave the others?

 on 4/27/2010 23:57

re: Multiple Disks attached to VM

Q1 No, actually none of them -need- to be a CSV; however, you'll find that live migration is a whole lot faster when they -are- CSVs, because CSVs avoid the slow dismount & mount cycle of the volume.

Q2 Yes, you'll need C:, D:, and E:

By the way, keep in mind that Exchange 2010 (assuming that's where you're going with this) has specific requirements for storage when virtualising: the disks need to be fixed, and you're basically throwing Exchange's high-availability functionality such as DAG out the door, as the combination is not supported. If you're building this for a lab: fine. If you're considering this for a business solution: reconsider, or at least make sure you're 100% aware of the restrictions...!

For live migration it's all CSV or nothing; having only one disk on CSV and not the rest will negatively impact the live migration process. You'll be losing valuable seconds where the system is not available.

I hope this answers your question,

 on 4/29/2010 9:21

Order of things

It seems that it would be best to set up your CSV and make the shared disk available to the cluster BEFORE proceeding with your Hyper-V role activation.

Is this correct? Or does it not matter which you do first: CSVs or Hyper-V?


 on 7/9/2010 22:06

re: order of things

Hi Ron,

Sorry for the late response. Although from a workflow point of view it makes sense to create the CSVs first, it doesn't really matter what comes first; both approaches work fine.

 on 7/20/2010 13:35

Migrating to CSV


Earlier we created a cluster of 2 nodes, but without CSV; instead we used 2 separate LUNs. Now we've found out about the CSV possibilities, so we created one CSV and added a 3rd node to the cluster. Is there a way to easily transfer (running) virtual machines to the CSV on the 3rd node?
Or is the only way to stop the machines, copy the VHDs to the CSV, migrate the machines to the 3rd node and reconfigure them to use the VHDs from the CSV? That would mean a lot of offline time for the machines.

Anyway thanks for very useful topic!

 on 8/11/2010 0:52

Migrating to CSV

Hi Tadas,

Unfortunately the only way I know of is, as you say: stop the machines, migrate manually and reconfigure.

Looks like you'll have to schedule some downtime!

 on 8/12/2010 7:56

iSCSI failover causes VMs to crash

We have a couple of implementations of Hyper-V two-node clusters connected to two iSCSI SANs in active/passive failover.
If we fail over the SANs, the cluster hosts lose connectivity for less than a second, but all VMs on the cluster crash and reboot.
Pretty sure this should not occur. Any idea what may be the issue? I've read about persistent reservations with iSCSI and disk IDs stored in there, but so far I can't make out what to do.
The cluster hosts show the volumes locked for writing when this occurs. It sounds like both SANs present their storage to Hyper-V and it tags each SAN's storage as different disks... not the same storage.
Any info would be much appreciated.

 on 2/10/2011 18:48
