Forum Discussion

Gornak
14 years ago

Using SFHA on Windows, under VMware

We are considering options for clustering Windows (2k3R2, 2k8R2, x64) nodes that are actually VMware VMs (ESX 4.1).  Is there documentation that describes setting this up, and what limitations might exist?  For example, does using a cluster effectively kill the ability to vMotion?  Do you have to use RDMs for storage?

Sorry for being vague, I don't know enough (yet) to ask targeted questions.

Thanks in advance!

  • Hi,

     

    I don't believe you have to use RDMs; if you can configure a VMDK that is available to both hosts, that would work fine.

     

    As for vMotion, if you think about it, after you cluster there is no need to vMotion the server (with the application online). If you need to reboot or perform maintenance on a host containing the active cluster node, simply fail over to the other node (which should be on a different host). I know this is technically downtime, but as you'll know, even with vMotion the connection will drop for a few seconds. It depends on how many seconds you can accept.

     

    When you're running VCS inside a guest, the only storage configurations capable of supporting vMotion are iSCSI in the guest, or NFS inside the guest using the Mount agent. RDMs/VMDKs are not supported for vMotion from a VCS perspective.

     

    You can also have a look at ApplicationHA; it's basically a VCS agent running in the guest, which is monitored from vCenter (similar to what VCS for ESX used to provide before VMware locked down the console).

     

    I've attached the doc about VCS and vSphere :)
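To make the NFS-in-guest option above more concrete, here is a hedged sketch of what the service group might look like in a UNIX-style main.cf. All names (app_sg, node1/node2, the NFS server and paths) are illustrative, not from this thread, and on the Windows nodes being discussed the VCS Configuration Wizard would generate the equivalent configuration (Windows uses different agents such as MountV):

```
// Illustrative main.cf fragment: an NFS-backed failover service group
// for VCS running inside the guest. Hypothetical names throughout.
group app_sg (
    SystemList = { node1 = 0, node2 = 1 }
    AutoStartList = { node1 }
    )

    // NFS mounted inside the guest via the Mount agent,
    // which is the vMotion-compatible option mentioned above
    Mount app_mount (
        MountPoint = "/app/data"
        BlockDevice = "nfsserver:/export/appdata"
        FSType = nfs
        )

    Process app_proc (
        PathName = "/app/bin/appd"
        )

    // Application starts only after its storage is online
    app_proc requires app_mount
```

With a group like this, the planned failover described above (instead of a vMotion) would be a single command, e.g. `hagrp -switch app_sg -to node2`.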

4 Replies

  • I agree with Riaan. I would, however, recommend VCS One for complete virtualisation protection, or ApplicationHA for just application availability within a VM without the need to cluster the virtual machines themselves. 

    If you still want to cluster the VMs, then iSCSI is the way to go. I would also recommend that you pay particular attention to the network configuration required for the VCS heartbeats, as these networks will need to span the ESX hosts.  

    VCS for ESX is end of life. 

    Regards

    Lee
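To illustrate the heartbeat point above, here is a hypothetical llttab fragment (UNIX-style syntax shown purely for illustration; on Windows the VCS Configuration Wizard generates the equivalent settings). Node and interface names are assumptions:

```
# Illustrative LLT heartbeat configuration for one cluster node.
# Each "link" line is a dedicated heartbeat NIC; in a VMware setup
# each of those vNICs must sit on a port group / VLAN that spans
# the ESX hosts carrying the cluster nodes, or heartbeats will be
# lost when the nodes land on different hosts.
set-node node1
set-cluster 100
link hb1 eth1 - ether - -
link hb2 eth2 - ether - -
```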


  • There is a product called "Cluster Server for VMware ESX", which runs VCS in ESX (essentially Linux) and allows you to vMotion; see http://www.symantec.com/business/products/newfeatures.jsp?pcid=pcat_business_cont&pvid=2221_1 (vMotion is mentioned in the datasheet).  If you run VCS in the VM using SFW HA, then I don't think vMotion is supported.

    Another alternative may be "Cluster Server One", which used to have a lot of VM functionality, but I'm not sure whether the latest version still allows this. Best to ask the question on the "Cluster Server One" forum.

    Mike