
How does FSS work in a campus cluster

mikebounds
Level 6
Partner Accredited

With a conventional VxVM mirrored campus cluster, if an array is down, the DiskGroup agent does a force import so that the diskgroup is imported with only half the disks. How does this work with an FSS campus cluster?

So suppose you have nodes A and B, both with only local storage, and you create a shared FSS diskgroup with a mirrored volume across the disks from both nodes.
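For concreteness, I mean something along these lines (disk and dg names are made up, and the exact FSS creation options may differ by SF version):

  # on each node, export its local DAS disk so CVM can share it for FSS
  vxdisk export nodeA_disk1      # run on node A
  vxdisk export nodeB_disk1      # run on node B

  # on the CVM master, create a shared (FSS) diskgroup from the two exported disks
  vxdg -s init fssdg nodeA_disk1 nodeB_disk1

  # create a volume mirrored across the two disks (one plex backed by each node)
  vxassist -g fssdg make fssvol 10g nmirror=2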

When the cluster starts, if node B is not available, which of the following happens:

  1. The CVMCluster resource force imports the diskgroup with only half the disks
     
  2. The CVMVolDg resource force imports the diskgroup with only half the disks
     
  3. Neither of the above happens and the diskgroup does not import, which means an FSS campus cluster can only cope with losing a node while the other is up, and once a node is lost you cannot reboot the surviving node until the lost node comes back up.

If the answer is 1 or 2, do you need to use a specific diskgroup detach policy or VCS resource attributes for it to work?

Thanks

Mike


4 REPLIES

Sudhakar_Kasina
Level 2
Employee

When the cluster is starting freshly after all nodes have been down, all the disks for a dg need to be present to import the dg. This is because we don't know which disks have the latest configuration copy or data. So the behaviour that will be seen is [3].

The user can import the dg manually by using the force import option. In this case, the disks that are not available will be detached and any plexes on those disks will also be detached from the mirrored volume.
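For example, on the CVM master something along these lines (using the dg name from the example above; check the exact options for your version):

  # force-import the shared dg even though the disks from the missing node are absent
  vxdg -s -f import fssdg

  # the plexes on the missing disks will show as detached/disabled
  vxprint -htg fssdg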

 

 

mikebounds
Level 6
Partner Accredited

Thanks.

I was at Vision last week and Carlos Carrero (Symantec Product Manager for CFS) said he had set up multi-node clusters with 3-way mirrors (and I think there were more than 3 nodes, some possibly with no storage). In this case, if you lose a node with storage and then ANY other node reboots, will it be able to rejoin the membership, given that the joining node will not be able to see all the disks because one of the nodes exporting disks is down? Or do all the disks only need to be present when the CVM cluster first starts, so that after this, nodes can leave and join without all disks being available? I.e. is it like GAB seeding, where all nodes only have to be there on initial startup, and nodes can join GAB later on even if all nodes are not available.

By the way, are you a Symantec employee? Your knowledge sounds as if you are, but you do not have "Symantec Employee" against your name.

Mike

Sudhakar_Kasina
Level 2
Employee

The requirement for all the nodes contributing storage to be up "for auto-importing the dg" applies only during cluster startup (all nodes down and each node coming up). Once the dg has been imported, let's say with DAS storage contributed by 3 nodes, if any node leaves the cluster, the disks from that node will get detached from the dg configuration ("FAILED WAS" entry in vxdisk list) and underlying objects like plexes on that storage will be disabled/detached. At this stage, any other node (one that is not contributing storage) can leave and rejoin the cluster. The detached disks that are part of shared dgs are not considered a requirement for the node to join.
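You can see that state with the usual commands, for example (dg name as in the example above):

  # disks contributed by the departed node show as failed in the dg configuration
  vxdisk list

  # plexes on the detached disks show as DISABLED; the volume stays online on the remaining plexes
  vxprint -htg fssdg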

 

During cluster startup, auto-import of the dg succeeds only when all the nodes that contribute storage to the dg have come up. If the nodes are coming up one by one, you will see that the dg import happens when the last node that contributes disks to the dg comes up.
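While the nodes are joining, you can check this from any node, e.g.:

  # check CVM membership and whether this node is the master or a slave
  vxdctl -c mode

  # the dg is listed here (as enabled,shared) only once the auto-import has happened
  vxdg list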

mikebounds
Level 6
Partner Accredited

Thanks - this sounds fine. If a cluster node does not boot on cluster start-up, you already need to manually seed GAB, so you just need to add a manual force import of the diskgroups to that procedure.
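I.e. something along these lines on the surviving node (illustrative, using the dg name from earlier):

  # manually seed GAB so the cluster can form without the missing node
  gabconfig -x

  # then, once CVM is up and this node is the master, force-import the FSS dg
  # with only the disks that are visible
  vxdg -s -f import fssdg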

Mike