Inter-cluster communication
I am trying to automate the scripts we run across multiple clusters during a Disaster Recovery scenario. We have an HTC (Hitachi TrueCopy) resource on the database cluster that makes the local storage a P-VOL before importing the database disk group. This covers database storage on the local cluster as well as application storage located on another CFS cluster. We currently run one script to fail the database over to the DR site. Once the storage is failed over, we run a second script on the CFS cluster that unfreezes (persistent) the service groups we need to run, imports the shared disk group with 'vxdg -Cs import', runs fsck on the shared file systems being imported, and then starts the shared mount points that have CVMVolDg / CFSMount resources.
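The CFS-side steps above could be wrapped up in something like the following sketch. The group, disk group, and volume names (app_sg, appdg, appvol) are placeholders I have invented, the fsck invocation assumes a Solaris-style VxFS check, and DRY_RUN defaults to printing the commands rather than executing them:

```shell
#!/bin/sh
# Sketch of the second (CFS-cluster) script described above.
# app_sg, appdg, and appvol are invented placeholder names.
GROUP="app_sg"   # service group with the CVMVolDg/CFSMount resources
DG="appdg"       # shared disk group
VOL="appvol"     # volume carrying the shared file system

: "${DRY_RUN:=1}"   # default to printing the commands for illustration

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "$@"
    else
        "$@"
    fi
}

run hagrp -unfreeze "$GROUP" -persistent       # undo the persistent freeze
run vxdg -Cs import "$DG"                      # clear host locks, shared import
run fsck -y -F vxfs "/dev/vx/rdsk/$DG/$VOL"    # check the FS before mounting
run hagrp -online "$GROUP" -sys "$(uname -n)"  # bring the shared mounts online
```

With DRY_RUN=0 the same script would execute the commands for real; the ordering matters, since the group must be unfrozen before it can be onlined and the disk group imported before fsck can see the volume.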
I am looking for a way to tie these two scripts together, but we have security restrictions in our environment such that there is no root-to-root communication between the clusters. TCP port 14141 is enabled between the clusters.
Does anyone have any suggestions for kicking off the script on the second CFS cluster? I have been able to fire a trigger from the database cluster to the application cluster, and was thinking about invoking a preonline trigger to import the shared disk group and run fsck. I was also considering a postoffline trigger to deport the shared disk group, but the postoffline trigger only accepts two arguments, <system> and <group>. One issue with triggers is that I need to make sure they run on only one node. I would do this using:
export VCS_HOST=cluster-vip
halogin admin
hatrigger -preonline 0 <CVM master> <group> IMPORT
The preonline script would then check the 4th argument, see that it is IMPORT, and run the 'vxdg -Cs import' and 'fsck' commands.
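Following that idea, the trigger body might look like this sketch. The disk group and volume names are invented placeholders, and DRY_RUN=1 (the default here) just prints the commands so the control flow can be seen without a cluster:

```shell
#!/bin/sh
# Hypothetical preonline trigger body for the IMPORT scheme above.
# appdg/appvol are invented names; commands run for real only with DRY_RUN=0.
: "${DRY_RUN:=1}"

preonline() {
    # Arguments as delivered to the trigger: $1=system, $2=group,
    # $3=whyonlining, $4=custom flag appended on the hatrigger line.
    action="$4"
    [ "$action" = "IMPORT" ] || return 0   # normal onlining: nothing extra
    if [ "$DRY_RUN" = "1" ]; then
        echo "vxdg -Cs import appdg"
        echo "fsck -y -F vxfs /dev/vx/rdsk/appdg/appvol"
    else
        vxdg -Cs import appdg                      # shared import, clear locks
        fsck -y -F vxfs /dev/vx/rdsk/appdg/appvol  # replay/repair before mount
    fi
}

preonline "$@"
```

Any other value in the 4th argument falls through harmlessly, so the same trigger script can still be invoked for ordinary onlining.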
I guess another approach would be to set UserIntGlobal to 1 for the service group, which the preonline and postoffline triggers could check before importing or deporting the shared disk group. But then I am still left making sure that only one system (the CVM master) runs the import command, and that only one system runs the deport command after all of the other CFS service groups are offline. In these clusters, the CFS mount points will not necessarily all be mounted on the CVM master, and the CVM master won't always be the last node to offline the shared mount point.
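One way to keep the import and deport on a single node is to gate both on the CVM master and on the group's state across the cluster. A rough sketch, assuming `vxdctl -c mode` reports MASTER only on the master node and `hagrp -state` lists per-system states (group and disk group names are again placeholders):

```shell
#!/bin/sh
# Sketch: gate the shared import/deport so exactly one node performs it.
# appdg and app_sg are invented names.

is_cvm_master() {
    # `vxdctl -c mode` includes the word MASTER only on the CVM master.
    vxdctl -c mode 2>/dev/null | grep -qw MASTER
}

all_offline() {
    # True once no system still reports the group ONLINE
    # (-w so |OFFLINE| does not match ONLINE).
    ! hagrp -state "$1" 2>/dev/null | grep -qw ONLINE
}

# Import side: only the CVM master performs the shared import.
if is_cvm_master; then
    vxdg -Cs import appdg
fi

# Deport side: deport from the master only after the last CFS mount
# group has gone offline everywhere, regardless of which node was last.
if is_cvm_master && all_offline app_sg; then
    vxdg deport appdg
fi
```

Since the import/deport always happens on the master rather than on whichever node ran the trigger last, this sidesteps the ordering problem described above, at the cost of the trigger having to poll or re-check group state.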
Does anyone have any suggestions?