
Storage Foundation Cluster File System (CFS) provides several advantages over a traditional file system when running in a VMware guest OS. The primary benefit is that multiple virtual servers (up to 64) using CFS can simultaneously access data residing on the same physical LUN or virtual disk. A secondary advantage is the ability to provide faster application failover within virtual servers than a VM restart allows. Finally, by using the VMware hypervisor to manage the virtual servers and their resources, applications gain all the advantages of running in a virtualized environment.

Utilizing CFS, an application running in a virtual machine (VM) can be recovered in a few seconds. The CFS High Availability Module detects outages instantly, and CFS provides immediate access to the application data in another VM. Since the application is already running in another VM with access to the same data, no additional time is required to map the storage to a new VM, and the application can immediately begin accessing the data needed to recover the service. Because several VMs can have simultaneous access to the data, other activities such as reporting, testing, and backup can be implemented without incurring any CPU overhead on the primary application VM.

Business Intelligence applications are another area where a robust cluster file system can provide scale-out capabilities. Sharing the same data across all the virtual servers minimizes storage consumption by avoiding duplicate copies of data. By making the data immediately accessible for processing across all the servers, data processing and transformation cycles can be reduced.

Finally, CFS can provide a framework for an adaptable grid infrastructure based on virtual machines. With the CFS cluster capability to add/remove nodes dynamically without bringing the cluster down, administrators can tailor the cluster size to the changing dynamics of the workload.  The cost of managing such a grid can be reduced using CFS storage optimization features such as compression, de-duplication, snapshots, and thin provisioning.
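
To illustrate the add/remove flexibility described above, the sketch below shows how a shared mount point might be registered and mounted cluster-wide using the SFCFS administrative commands. The disk group, volume, and mount point names are hypothetical examples, not taken from this article; consult the Storage Foundation Cluster File System administrator's guide for the exact syntax on your version.

```
# Verify the cluster configuration and node membership
cfscluster status

# Register a shared mount on all cluster nodes
# ("datadg" and "datavol" are example names)
cfsmntadm add datadg datavol /shared_data all=cluster

# Mount the shared file system across the cluster
cfsmount /shared_data
```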

Cluster File System connection to storage can be implemented in two ways on VMware, depending on the application requirements:

  • Raw Device Mapping - Physical (RDM-P): best performance, SCSI-3 PGR fencing
  • VMFS virtual disk with the multi-writer flag enabled: vMotion and other VMware HA features

Support for both options is documented in Using Veritas Cluster File System (CFS) in VMware virtualized environments.

Each approach has its own pros and cons. RDM-P provides a direct connection between the virtual machine file system and the underlying physical LUN. Because of this, applications will likely achieve higher performance than with a VMFS connection. Additionally, disk-based SCSI-3 data fencing can be implemented for data protection, and it is possible to create a cluster of both physical and virtual machines. The downside of using RDM-P is that it does not allow the hypervisor to perform VMware management activities such as vSphere vMotion.
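
As a rough illustration of the fencing option mentioned above, disk-based SCSI-3 fencing is enabled through the fencing mode file on each cluster node. The values shown here are the commonly documented defaults, not configuration taken from this article; refer to the product installation guide for the full fencing setup procedure.

```
# /etc/vxfenmode - enable disk-based SCSI-3 fencing
vxfen_mode=scsi3

# Use DMP device paths for the coordinator disks
scsi3_disk_policy=dmp
```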

A VMFS virtual disk (VMDK) architecture can provide a more flexible model for managing storage: cluster nodes can be dynamically moved across ESX servers, allowing server maintenance while keeping all cluster nodes attached to the shared data. Normally, in order to prevent data corruption, VMFS prevents access to a VMDK by more than one VM at a time. To allow CFS to provide simultaneous access to the virtual disk(s) by all the nodes in the cluster, the VMFS multi-writer flag must be enabled on the VMDK. The HOWTO8299 document mentioned above provides detailed instructions on this. It should be noted that applications can expect slightly lower performance when using VMFS vs. RDM-P, due to VMFS overhead. Additionally, VMware virtual disks do not emulate the SCSI-3 PGR data fencing command set at this time, so extra precautions should be taken to prevent inadvertent mapping of cluster VMDKs to non-cluster virtual machines.
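
For reference, the multi-writer flag is typically set by adding one line per shared disk to each VM's .vmx configuration (or via the VM's advanced configuration parameters). The SCSI controller and target numbers below are illustrative only; follow the HOWTO8299 document and VMware's own documentation for the exact procedure on your ESX version.

```
# Allow simultaneous access to the VMDK attached at SCSI controller 1, target 0
scsi1:0.sharing = "multi-writer"
```

Note that VMware generally requires shared disks of this kind to be created as eagerly zeroed thick VMDKs.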

A detailed explanation on how to install and configure CFS with VMDK files can be found in the Storage Foundation Cluster File System HA on VMware VMDK Deployment Guide, which is attached to this article. This deployment guide presents a very specific example using the following infrastructure:

  • RedHat Enterprise Linux 6.2
  • Storage Foundation Cluster File System HA 6.0.1
  • ESXi 5.1

A four-node cluster will be created using two different ESX servers as shown in this diagram:


This guide is not a replacement for any other guide (VMware or otherwise), nor does it contain an explicit supportability matrix. It is intended to document a particular implementation that may help users in their first implementation of CFS in a VMware environment using VMFS virtual disks as the backend storage. Please refer to product release notes, admin, and install guides for further information.


Carlos Carrero


Great article Carlos. Many thanks indeed for your excellent work.


Thanks for the article Carlos. This is quite informative. 


We have a customer who wants to implement a Windows SFHA cluster with a shared VMDK disk. The cluster nodes/VMs will be hosted on separate ESX hosts. A few questions:

a) Is this configuration supported in Windows?

b) What happens in case one VMware ESX host fails? Will the VM in the second host continue having access to the shared vmdk?

c) This question is not related to Symantec but is more specific to VMware ESX. How should the datastore in VMware be configured? My understanding is that a SAN LUN datastore can only be seen by one VMware host at a time.

Hi Denis,

Thank you for your note. Cluster File System is not supported on Windows, which means we cannot provide parallel access to the same file from different nodes. However, SFHA is supported on VMware. In that case you will have an Active/Passive configuration, where only one of the nodes will have access to the files. The storage will be mapped to all the other servers, and here you have two alternatives: VMDK files or Raw Device Mapping. In either case, I recommend using the new VMwareDisks resource, which takes care of attaching and detaching the disk from the VM, so you can still use vMotion even when using SFHA.
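
As a sketch of what this suggestion might look like, a VCS main.cf fragment using the VMwareDisks agent could resemble the following. The ESX host name, credentials, datastore path, and SCSI target are all placeholders, and the attribute details should be verified against the bundled agents reference guide for your release.

```
VMwareDisks vmdk_disk (
    ESXDetails = { "esx1.example.com" = "root=Password" }
    DiskPaths = { "[datastore1] node1/node1_1.vmdk" = "0:1" }
)
```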

A SAN LUN can be mapped to several ESX hosts at the same time. You can map that LUN directly to the VMs (Raw Device Mapping) or you can create a VMFS datastore on top and expose it to the ESX servers.

For more information, I recommend taking a look at this demo from my colleague Lorenzo:

Best regards,



Very descriptive article, I enjoyed reading it; there is so much to learn. Nice share, thank you.

Last updated: 02-11-2013 02:21 PM