
VxVM/VCS on Windows

sunillp
Level 3

Hi,

I want to know whether VxVM (CVM) / VCS on Windows supports only HA (failover) or active-active as well. By active-active I mean all nodes in the Windows cluster accessing shared LUNs concurrently/simultaneously (with access to the same region on a disk serialized using something like a DLM).

Thanks,
Sunil

Marianne
Level 6
Partner    VIP    Accredited Certified

Nope - only failover service groups. Exclusive LUN access per cluster node.
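
To illustrate what "failover-only" (active-passive) means in practice, here is a minimal conceptual sketch in Python. It is not SFW/VCS code and the group/node names are made up; it just models the rule that a service group, and therefore its disk group and LUNs, is online on exactly one node at a time and only moves to another node on failover.

```python
# Conceptual model of a failover (active-passive) service group.
# Names and structure are illustrative only, not the VCS implementation.

class FailoverServiceGroup:
    def __init__(self, name, system_list):
        self.name = name
        self.system_list = system_list   # nodes allowed to run the group, in priority order
        self.online_on = None            # online on at most one node at any time

    def online(self, node):
        if self.online_on is not None:
            raise RuntimeError(f"{self.name} already online on {self.online_on}")
        if node not in self.system_list:
            raise ValueError(f"{node} is not in the SystemList of {self.name}")
        self.online_on = node            # only this node imports the disk group / does I/O

    def failover(self):
        failed = self.online_on
        self.online_on = None
        # bring the group up on the next available node in priority order
        for node in self.system_list:
            if node != failed:
                self.online(node)
                return node
        raise RuntimeError("no node available for failover")


# Two groups "cross-wired" over the same two nodes: both nodes are busy,
# but each group's LUNs are still accessed by exactly one node at a time.
sg1 = FailoverServiceGroup("SG1", ["nodeA", "nodeB"])
sg2 = FailoverServiceGroup("SG2", ["nodeB", "nodeA"])
sg1.online("nodeA")
sg2.online("nodeB")
print(sg1.online_on, sg2.online_on)   # nodeA nodeB
print(sg1.failover())                 # nodeB (nodeA failed; SG1 moves over)
```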

sunillp
Level 3

By "Exclusive lun access per cluster node" do u mean that only one cluster node at a time can do IO on a lun?

Is it that only one cluster node owns a particular LUN for its lifetime and only that node can do I/O on it, OR can any cluster node do I/O on a LUN by coordinating with the others (taking the appropriate locks/ownership) or by redirecting the I/O to the LUN's owner?

Basically I want to know whether multiple cluster nodes can do I/O on a given LUN, perhaps exclusively. If yes, what mechanism do they use for exclusive access?

Marianne
Level 6
Partner    VIP    Accredited Certified

It uses SCSI Reservation.

This TechNote describes it in detail:

How does Storage Foundation for Windows (SFW) maintain SCSI reservations in clusters
http://seer.entsupport.symantec.com/docs/334047.htm
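
For readers who cannot reach the TechNote, here is a rough Python sketch of the general idea behind reservation-based arbitration on a shared LUN. It is purely illustrative - the class and method names are invented, and SFW's actual behaviour (which reservation type it uses, how it re-asserts and breaks reservations, etc.) is what the TechNote above describes - but it shows how a reservation gives one node exclusive access and how that access can move on failover.

```python
# Illustrative model of reservation-based arbitration on a shared LUN.
# Names are invented; see the TechNote above for how SFW actually does this.

class SharedLun:
    def __init__(self):
        self.reserved_by = None          # at most one reservation holder at a time

    def reserve(self, node):
        if self.reserved_by not in (None, node):
            raise PermissionError(f"reservation conflict: held by {self.reserved_by}")
        self.reserved_by = node

    def release(self, node):
        if self.reserved_by == node:
            self.reserved_by = None

    def write(self, node, data):
        # the storage rejects I/O from any initiator other than the reservation holder
        if self.reserved_by != node:
            raise PermissionError(f"I/O from {node} rejected (reservation conflict)")
        return f"{node} wrote {data!r}"


lun = SharedLun()
lun.reserve("nodeA")
print(lun.write("nodeA", "block 42"))     # succeeds
try:
    lun.write("nodeB", "block 42")        # the other node's I/O is fenced off
except PermissionError as err:
    print(err)

# On failover, the surviving node takes over the reservation and only then does I/O.
lun.release("nodeA")                      # or the stale reservation is broken/preempted
lun.reserve("nodeB")
print(lun.write("nodeB", "block 42"))
```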


sunillp
Level 3

One more question: Oracle RAC is supported on Windows (as well as on Linux/Solaris) and is a perfect example of an active-active cluster/application. Many sites say that on Linux/Solaris we can use Oracle RAC with VCS, i.e. create shared DGs/volumes and use them for storing the database.

Is this also possible on Windows, i.e. can we create shared DGs/volumes on Windows and let Oracle RAC create its database on them, shared by all cluster nodes?

Marianne
Level 6
Partner    VIP    Accredited Certified
RAC seems to be unsupported with SFW/HA.
Latest Software Compatibility List:
ftp://exftpp.symantec.com/pub/support/products/Storage_Foundation_for_Windows/337682.pdf


sunillp
Level 3

Does Veritas support CVM on Windows? If yes, can multiple nodes in that case access shared storage (clustered DGs/volumes) assuming there is no clustering software like VCS or MSCS? I guess it's VCS that ensures "exclusive LUN access" - or is it CVM (the clustered volume manager)?

Does it make sense to have CVM without VCS/MSCS? Does CVM make it mandatory to have VCS/MSCS on top of it for proper functionality?

I read somewhere that in a Windows cluster there is no real active-active configuration. Instead there are two active-passive configurations across the two hosts: the first is active on the first node and passive on the second, while the second is active on the second node and passive on the first. Is this the case with both VxVM/VCS and VxVM/MSCS?

Please answer each question separately, though they look very similar.

Marianne
Level 6
Partner    VIP    Accredited Certified
There is no CVM for Windows. The reason for this is explained in the first TechNote that I've posted:

Windows file systems do not currently handle shared access, so having data imported on multiple nodes could lead to overlapping access to data blocks, which may result in loss of data consistency or larger corruption.

CVM is only available on UNIX/Linux platforms.

Correct - this is the case with both VxVM/VCS and VxVM/MSCS.
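
To make the quoted reason concrete, here is a small hedged Python sketch (not related to any Veritas component) of why two nodes writing the same volume without a cluster file system is dangerous: two writers update the same on-disk structure from stale copies and one update is lost. The second variant serializes the read-modify-write through a lock, which is roughly the role a DLM/CFS plays on the platforms where CVM/CFS exist.

```python
# Illustrative only: a read-modify-write race on shared on-disk metadata,
# then the same updates serialized through a lock (the job a DLM would do).
import threading

disk_block = {"free_count": 100}          # pretend this is file-system metadata on the LUN

def allocate_without_lock(snapshot):
    # each node read the block earlier (stale snapshot) and now writes back its own result
    disk_block["free_count"] = snapshot - 1

# Node A and node B both read free_count = 100, then both write back 99:
snap_a = disk_block["free_count"]
snap_b = disk_block["free_count"]
allocate_without_lock(snap_a)
allocate_without_lock(snap_b)
print(disk_block["free_count"])           # 99, but two blocks were allocated -> metadata is wrong

# With a cluster-wide lock, each node does its read-modify-write inside the critical section:
disk_block["free_count"] = 100
dlm_lock = threading.Lock()               # stand-in for a distributed lock manager

def allocate_with_lock():
    with dlm_lock:
        disk_block["free_count"] -= 1     # read, modify and write happen under the lock

allocate_with_lock()                      # "node A"
allocate_with_lock()                      # "node B"
print(disk_block["free_count"])           # 98, as expected
```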

Wally_Heim
Level 6
Employee
Hi Sunillp,

Marianne is right on all counts. SFW/SFW-HA does not support a CFS (clustered file system - one that allows multiple Windows servers to access the same volume).

Oracle does make a version of RAC that runs on Windows, but as Marianne pointed out, SFW/SFW-HA does not support it.

There are no plans that I'm aware of for supporting a CFS with SFW/SFW-HA.

Thanks,
Wally