11-14-2012 06:58 PM
Hi Guys,
I am configuring an 8-node GCO (4 nodes per site) for a CFS configuration on SFCFS 6.0.1. My replication is over Hitachi TrueCopy. I have done the base configuration up to the HTC agent and tested GCO failover, including the HORCM switch, and it is working OK. Now I need to configure CFS in the global SG.
1. Once I configure the CFS mounts on the active site (4 nodes), does the configuration for all CFS mount points need to be manually populated into the cluster configuration on the remote cluster? Or is there some way to auto-populate the configuration to the remote cluster within GCO, both for the initial setup and for manageability in day-to-day operations for the support teams?
2. As per the instructions in the CFS admin guide, I am fine creating CFS on the local site, but the guide is not clear about updating the global cluster configuration, especially when using OEM storage replication technologies.
High-level steps for CFS:
i) Create a shared DG from the master node on the CFS cluster nodes
ii) Add that DG to the cluster configuration using "cfsdgadm"
iii) Create volumes & file systems
iv) Add the mounts to the cluster configuration using "cfsmntadm"
v) Verify the configuration & mount the CFS mounts using "cfsmount"
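The five steps above can be sketched as a command sequence. All object names (cfsdg01, disk01, vol01, /mnt/cfs01) are hypothetical, and the "run" wrapper only prints each command, so this is a readable dry-run sketch rather than something to execute as-is on a cluster (e.g. mkfs uses -F vxfs as on Solaris/HP-UX; Linux uses -t vxfs):

```shell
#!/bin/sh
# Dry-run sketch of the high-level CFS setup steps.
# "run" only echoes the command, so nothing is actually executed.
run() { echo "+ $*"; }

# i) create a shared disk group from the CVM master node
run vxdg -s init cfsdg01 disk01 disk02

# ii) add the disk group to the cluster configuration (shared-write on all nodes)
run cfsdgadm add cfsdg01 all=sw

# iii) create a volume and a VxFS file system on it
run vxassist -g cfsdg01 make vol01 10g
run mkfs -F vxfs /dev/vx/rdsk/cfsdg01/vol01

# iv) add the mount point to the cluster configuration
run cfsmntadm add cfsdg01 vol01 /mnt/cfs01 all=cluster

# v) mount the cluster file system on all nodes
run cfsmount /mnt/cfs01
```

On a real cluster you would drop the wrapper and run the commands from the CVM master, then confirm the mounts with "cfscluster status".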
Thanks for your help in advance.
Mani
11-15-2012 03:28 AM
In 5.1 you had to manually populate the cluster configuration on the remote cluster; GCO would not do this for you, and I see nothing in the 6.0 or 6.0.1 release notes to suggest that this feature has been introduced.
One important step which is often missed in CFS configurations with GCO and hardware replication is to configure the import, deport, and vxdctlenable actions on the CVMVolDg agent - see this extract from the HORCM VCS agent install guide:
To configure the agent in a Storage Foundation for Oracle RAC or SFCFS environment:
1. Configure the SupportedActions attribute for the CVMVolDg resource.
2. Add the following keys to the list: import, deport, vxdctlenable.
3. Use the following commands to add the entry points to the CVMVolDg resource:
haconf -makerw
hatype -modify CVMVolDg SupportedActions import deport vxdctlenable
haconf -dump -makero
Note that SupportedActions is a resource type attribute and defines a list of action tokens for the resource.
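The guide's sequence, plus a quick check afterwards, can be sketched like this (the "run" wrapper only echoes, so nothing is executed; on a live node you would drop it and inspect the real hatype -display output):

```shell
#!/bin/sh
# Dry-run sketch of adding the action tokens to CVMVolDg.
run() { echo "+ $*"; }

run haconf -makerw                     # open the cluster config read-write
run hatype -modify CVMVolDg SupportedActions import deport vxdctlenable
run haconf -dump -makero               # save the config and close it again

# afterwards, the new tokens should show up in the type's attributes
run hatype -display CVMVolDg
```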
Mike
11-18-2012 12:19 PM
Thanks Mike,
It will be a bit of a tedious job for the support team to always update the remote cluster configuration :( with manual steps as they add additional CFS mounts. I was expecting some nice features in SFCFSHA 6.0.1 - something better for GCO manageability.
Any further help will be appreciated.
Cheers!
Mani
11-18-2012 02:49 PM
If you use cfsdgadm/cfsmntadm to add CFS disk groups and mounts, then you can just rerun the same commands on the remote cluster. You can also cut and paste resources in the GUI, or you can use main.cmd. To use main.cmd, after adding resources to the prod site:
On the prod site, change to the conf directory and make sure the config is saved - the haconf command will fail if the config is already closed and saved, so just ignore the error in that case:
cd /etc/VRTSvcs/conf/config
haconf -dump -makero
After the last command, allow a few seconds for it to save main.cf and create a main.cmd file, and then grep out the resource names; depending on your naming convention, you may be able to grep out several mounts at once:
grep "resource_name_pattern" main.cmd > add_cfs.sh
Have a look at the file created: it should contain "hares -add" lines followed by "hares -modify" lines, followed by "hares -link" lines for the resource(s) matched by your grep. You can then copy this file to the remote site and execute it to create the resources and links on your remote site.
You can also edit the "add_cfs.sh" file to create new resources on prod by doing global substitutions, if your new mounts are similarly named to existing mounts (instead of using cfsdgadm); or, if you are good with scripting, you can create your own scripts which create SF mounts and add them to VCS.
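The whole main.cmd workflow can be sketched end to end on a toy fragment. The resource name cfsmount1 and the attribute values below are hypothetical stand-ins for what "haconf -dump -makero" would write into /etc/VRTSvcs/conf/config/main.cmd on a real cluster; the grep and sed steps are exactly the ones described above:

```shell
#!/bin/sh
# Work in a scratch directory so nothing real is touched.
cd "$(mktemp -d)"

# Toy main.cmd fragment, shaped like the output of "haconf -dump -makero".
cat > main.cmd <<'EOF'
hares -add cfsmount1 CFSMount cvm
hares -modify cfsmount1 MountPoint "/mnt/cfs01"
hares -modify cfsmount1 BlockDevice "/dev/vx/dsk/cfsdg01/vol01"
hares -link cfsmount1 cvmvoldg1
EOF

# 1) grep the resource out into a script to replay on the remote cluster
grep "cfsmount1" main.cmd > add_cfs.sh

# 2) global substitutions to clone it as a similarly named new resource
sed -e 's/cfsmount1/cfsmount2/g' -e 's/cfs01/cfs02/g' -e 's/vol01/vol02/g' \
    add_cfs.sh > add_cfs2.sh

# show the cloned hares commands for the new cfsmount2 resource
cat add_cfs2.sh
```

On the remote (or prod) site you would bracket the replayed script with "haconf -makerw" and "haconf -dump -makero" so the hares commands land in an open configuration.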
Mike