05-15-2014 04:06 AM
With a conventional VxVM mirrored campus cluster, if an array is down, the DiskGroup agent does a force import so that the diskgroup is imported with only half the disks. How does this work with an FSS campus cluster?
So suppose you have Node A and B, both with only local storage and you create a shared FSS diskgroup with a mirrored volume across the disks.
When the cluster starts, if node B is not available, what happens?
If the answer is 1 or 2, do you need to use a specific diskgroup detach policy or VCS resource attributes for it to work?
Thanks
Mike
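For reference, a minimal sketch of how such a two-node FSS setup is typically created (the disk, diskgroup, and volume names here are hypothetical examples, and exact syntax may vary by Storage Foundation release):

```shell
# On each node, export the local (DAS) disks so they are visible
# cluster-wide for FSS (disk access names are hypothetical)
vxdisk export disk_nodeA_1        # run on node A
vxdisk export disk_nodeB_1        # run on node B

# On the CVM master, create a shared FSS diskgroup from the exported disks
vxdg -s -o fss=on init fssdg disk_nodeA_1 disk_nodeB_1

# Create a volume mirrored across the two nodes' storage
vxassist -g fssdg make vol01 10g layout=mirror nmirror=2
```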
05-15-2014 06:35 AM
When the cluster is starting fresh after all nodes were down, all the disks for a dg need to be present to import the dg. This is because we don't know which disks have the latest configuration copy or data. So the behaviour that will be seen is [3].
The user can import the dg manually using the force import option. In this case, the disks that are not available will be detached, and any plexes on those disks will also be detached from the mirrored volume.
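As a sketch, a manual force import of the shared dg from the CVM master might look like this (the dg name is a hypothetical example; -f forces the import despite the missing disks, -s imports it as shared):

```shell
# On the CVM master: force-import the shared diskgroup even though
# the disks contributed by the down node are unavailable
vxdg -s -f import fssdg

# Plexes on the missing disks will show as DISABLED/DETACHED
vxprint -g fssdg -ht
```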
05-15-2014 07:50 AM
Thanks.
I was at Vision last week and Carlos Carrero (Symantec Product Manager for CFS) said he had set up multi-node clusters with 3-way mirrors (and I think there were more than 3 nodes, some possibly with no storage). So in this case, if you lose a node with storage and then ANY other node reboots, will it be able to join the membership? The joining node will not be able to see all the disks, as one of the nodes exporting disks is down. Or do all the disks only need to be present when the CVM cluster first starts, so that afterwards nodes can leave and join without all disks being available? I.e. is it like GAB seeding, where all nodes only have to be there on initial startup and nodes can join GAB later on, even if all nodes are not available?
By the way, are you a Symantec employee? Your knowledge sounds as if you are, but you do not have "Symantec Employee" against your name.
Mike
05-15-2014 11:02 PM
The requirement that all the nodes contributing storage be up "for auto-importing the dg" applies only during cluster startup (all nodes down and each node coming up). Once the dg has been imported, let's say with DAS storage contributed by 3 nodes, if any node leaves the cluster, the disks from that node will be detached from the dg configuration (a "FAILED WAS" entry in vxdisk list) and the underlying objects, such as plexes on that storage, will be disabled/detached. At this stage, any other node (one that is not contributing storage) can leave and join the cluster. Detached disks that are part of shared dgs are not considered a requirement for a node to join.
During cluster startup, auto-import of the dg succeeds when all the nodes that contribute storage to the dg have come up. If the nodes are coming up one by one, you will see that the dg import happens when the last node that contributes disks to the dg comes up.
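To illustrate (dg name hypothetical), the detached state described above can be observed with standard VxVM query commands:

```shell
# Disks contributed by the departed node show a "failed was:" status
vxdisk -o alldgs list

# Import state of the shared diskgroup as seen from this node
vxdg list fssdg

# Plexes backed by the detached disks appear DISABLED/DETACHED
vxprint -g fssdg -ht
```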
05-16-2014 12:50 AM
Thanks - this sounds fine. So if a cluster node does not boot on cluster start-up, you already need to manually seed GAB, and you just need to add a manual force import of the diskgroups to that procedure.
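As a sketch of that recovery procedure (assuming the standard GAB and VxVM commands; the dg name is a hypothetical example, and some releases document the seed override as part of the gabconfig startup options, so check your platform's manual page):

```shell
# Manually seed GAB so the cluster can form without all nodes present
gabconfig -x

# Then, on the CVM master, force-import the shared diskgroup
vxdg -s -f import fssdg
```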
Mike