Dynamic mirrored quorum in campus cluster
Scenario:
Customer wants to create a 2-node Microsoft Cluster in a campus cluster setup:
Array1 at building A and Array2 at building B, with all volumes mirrored across the arrays.
My problem is with this section in the Admin Guide:
"It is strongly recommended that a dynamic mirrored quorum contain three disks because a cluster disk resource cannot be brought online unless a majority of disks are available. With the quorum volume in a two-disk group, loss of one disk will prevent the quorum volume from coming online and make the cluster unavailable. "
Any advice about how to overcome this problem in a campus cluster? The quorum dg will consist of 2 disks (one per array).
Does anyone know if there is an attribute to force-import the dg in Microsoft Cluster should there be a site/array failure at one of the buildings?
SFW 5.1 SP1 on W2008 R2.
Hi Marianne,
Microsoft does not support dynamic disks in their clusters. To use dynamic disks in a cluster you need SFW, and then Symantec will support the dynamic disks. When using dynamic disks in a cluster, Symantec should be the first point of contact, so that the dynamic disks and related cluster resources can be checked to determine whether the problem is related to the use of dynamic disks.
As for your issue with the mirrored quorum (witness disk in Windows 2008): if you have an even number of disks spread evenly between 2 arrays and you lose access to 1 array, then you will not be able to online the VMDg resource because you do not have a majority of the disks in the disk group.
This only affects online/import of the VMDg resource/clustered dynamic disks. If the disk group is already imported, you can lose up to 50% of the disks (everything from a single array) and the disk group will remain imported. However, the disk group will fail and be deported when more than 50% of the disks are lost. This is done so that the disk groups remain accessible to the node that had them imported when the SAN failure caused the loss of access to the remote array, while the remote node cannot bring them online because it does not have a majority of the disks.
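To make the majority arithmetic concrete, here is a rough Python sketch of the rules described above (the function names and exact thresholds are just my own illustration, not SFW internals):

def can_import(total_disks, visible_disks):
    # Import/online of the VMDg resource needs a strict majority of disks.
    return visible_disks > total_disks / 2

def stays_imported(total_disks, lost_disks):
    # An already-imported disk group survives losing up to 50% of its disks;
    # it fails and is deported once more than 50% are lost.
    return lost_disks <= total_disks / 2

# Two-disk quorum group (one disk per array), one array lost:
print(can_import(2, 1))      # False -> cannot be brought online
print(stays_imported(2, 1))  # True  -> stays imported if it was already online

# Three-disk quorum group, one disk lost:
print(can_import(3, 2))      # True  -> 2 of 3 is a majority, so it can come online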
The use of a third array is a recommendation to add another layer of redundancy to the mix. If you lose access between your two sites, the site that still sees the third array will always have a majority of the disks and can online the disk groups as needed.
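As a rough sketch of why the third array helps (again, just an illustration of the majority count, assuming one quorum disk per array; the names are made up):

# With one quorum disk per array across three arrays, the site that still
# sees the third array after a site-link failure keeps a majority.
quorum_disks = {
    "Array1_siteA": True,       # local array, still visible
    "Array2_siteB": False,      # remote array, lost with the site link
    "Array3_thirdSite": True,   # third array, still visible
}
visible = sum(quorum_disks.values())
total = len(quorum_disks)
print(visible > total / 2)      # True -> 2 of 3 disks visible, so the disk group can be imported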
It will still work if you only have 2 arrays, but you will not be able to online the service groups or quorum if they are accidentally taken offline while the SAN link between the two sites is down. We do have a lot of customers running with this type of configuration. It is just a matter of what the customer's concerns are and how much infrastructure they have, or are willing to get, to cover situations like this. If they are fine with 2 arrays and a manual process to online the disk groups should they be deported while the link between arrays is down, then that is OK. If they want a more automated approach and have, or can afford, a third site, then that is also OK.
Thanks,
Wally