Sean,
If Joe's understanding is correct, then you would be better off having DB and App use different HORCM pairs as Joe says, and if this is not possible then you may be able to use the RemoteGroup agent, also as Joe says. However, you still need to address import and deport. The way this normally works is that when you online an HTC resource in a CVM cluster, the HTC agent calls the import action to import the disk group.
This should work if you put the HTC agent in the App cluster - if you do this, then the HTC agent should be able to tell that horcm has already done the takeover and the CVM disk group is still imported - see the extract below from the HTC agent online script:
if ($ret == 0) {
    VCSAG_LOG_MSG("N", "devices in group $groupname are all read/write enabled; no action is required", 18, "$groupname");
    $res->create_lockfile();
    $res->cvm_import();
    exit(0);
}
Here you would want to use the RemoteGroup agent as Joe explains, to make sure that the DB cluster has done the HORCM takeover first - you don't want to be in a situation where App runs the HORCM takeover at the DR site while DB is still running on the primary site.
But it sounds like you want to trigger App failover from the DB cluster. You could use the RemoteGroup agent for this too, as RemoteGroup can be used to online and offline remote groups as well as monitor them - however, I assume DB is a failover SG and App is a parallel SG, so you would need a separate RemoteGroup resource in the DB cluster for each system in the App cluster.
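A rough sketch of what one such RemoteGroup resource might look like (resource names, IPs, SG names, system names and the "vcsoper" account are all placeholders - check the attribute list against your VCS version before using):

```shell
# Hypothetical RemoteGroup resource in the DB cluster, controlling the
# App SG on one specific App-cluster system. You would repeat this with
# a different VCSSysName/IpAddress for each system in the App cluster.
hares -add app_rg_node1 RemoteGroup db_sg
hares -modify app_rg_node1 IpAddress 10.0.0.11      # an App cluster node
hares -modify app_rg_node1 GroupName app_sg         # remote SG to control
hares -modify app_rg_node1 VCSSysName app_node1     # remote system name
hares -modify app_rg_node1 ControlMode OnOff        # online/offline, not just monitor
hares -modify app_rg_node1 Username vcsoper         # operator account on App cluster
hares -modify app_rg_node1 Password <encrypted>     # encrypt with vcsencrypt -agent
hares -modify app_rg_node1 Enabled 1
```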
If you need to go down the scripting route, here are a few pointers:
- You can run VCS commands across clusters using halogin as you know, but I would not use the "admin" account. I would use an operator account that is narrowed down to just the SGs you need. Then run halogin manually as a one-off task, as root on each DB cluster node, against the App cluster using this VCS user - this creates a .vcspwd file in root's home directory on the DB cluster. When you subsequently run VCS ha commands (hagrp, hares etc.) with VCS_HOST set, the credentials in .vcspwd will be used. Not using the admin account is safer, as you are not giving the VCS user more privileges than necessary, and it also means that if the admin password is changed, it won't break your scripts.
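For example, the setup and subsequent use might look like this (the account "vcsoper" and host "appclus-node1" are placeholders):

```shell
# One-off setup on each DB cluster node, run as root.
# "vcsoper" is a hypothetical operator account limited to the App SGs;
# this caches credentials in /root/.vcspwd.
halogin vcsoper appclus-node1

# Later, in scripts: point VCS commands at the App cluster.
# The cached .vcspwd credentials are used automatically.
export VCS_HOST=appclus-node1
hagrp -state app_sg
```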
- You can run VCS actions across clusters to run any script you like - I have even done this to run a Windows command from a UNIX cluster against a Windows VCS cluster. To do this, on your remote cluster place the script(s) you want to run in /opt/VRTSvcs/bin/FileOnOff/actions (you will need to create the actions directory), create a FileOnOff resource, and add the script names to the FileOnOff SupportedActions attribute. So for example you could create a script which onlines a service group on all systems:
Create a FileOnOff resource in the App SG (containing CVMVolDg and CFSMount resources and maybe HTC)
Create a script online_all in /opt/VRTSvcs/bin/FileOnOff/actions which uses the resource name that VCS passes to the action script to work out the SG name ("hares -value resname Group") and then onlines the group on all systems
Modify the FileOnOff type - "hatype -modify FileOnOff SupportedActions online_all"
Now from the DB cluster you can run "hares -action FileOnOff_res_name online_all" (after setting the VCS_HOST variable)
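A minimal sketch of what the online_all action script could look like (the exact arguments VCS passes to action scripts vary by version - verify on your release before relying on this):

```shell
#!/bin/sh
# Hypothetical action script: /opt/VRTSvcs/bin/FileOnOff/actions/online_all
# Assumes VCS passes the resource name as the first argument.
resname=$1

# Work out which service group this resource belongs to.
group=$(hares -value "$resname" Group)

# Bring the group online; with -any, a parallel SG comes online on
# all systems (a failover SG would come online on one suitable system).
hagrp -online "$group" -any
```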
- You could write your own scripts which use the cvm_import function from the HTC agent without doing a HORCM takeover, and I assume the cvm_import function deals with running the import on the CVM master.
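If you do script it yourself, the shared import itself is a standard VxVM operation; a rough sketch of what cvm_import presumably wraps ("mydg" is a placeholder disk group name, and the vxdctl output format varies by version):

```shell
# A shared (CVM) disk group import must run on the CVM master node.
vxdctl -c mode          # shows whether this node is the CVM MASTER or SLAVE

# Then, on the master node:
vxdg -s import mydg     # -s = import as a shared disk group
```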
Mike