Migration of VCS Cluster to new servers
Hello,
I have a two-node Veritas cluster with SQL and file share resources.
The goal is to migrate the data and resources to new servers.
How can I first create a standalone node, transport the cluster's data and configuration,
and then recreate the cluster as the original one?
Thank you in advance for your help.
Hello Jim,
I believe the steps I gave earlier are still valid; the only change is that you would now start with a single-node setup.
1. Set up the OS, Storage Foundation High Availability, SQL, etc. on one node. Make sure all the applications and dependency packages are installed on the new machine.
2. Once VCS is tested on the new node (as a single-node cluster), shut the cluster components down on the new server.
3. Copy all the data from the old cluster's shared storage and ship it to the new shared storage. Make sure you create similar filesystems so that the cluster configuration can be imported straight away. If drive configurations or application directories are changing, the VCS configuration will need to be modified as well.
4. Once the shared storage on the new setup is ready with all data copied, copy the VCS configuration files from the old cluster to the new cluster:
/etc/VRTSvcs/conf/config/main.cf
/etc/VRTSvcs/conf/config/types.cf
/etc/llttab (ONLY if any specific tunables are configured; otherwise leave this file)
5. Once all the above files are copied, shut down the cluster on the old machines. Ensure all the IP addresses used are removed so that no IP conflicts occur.
6. Start the cluster on the new single node (hastart -onenode). This should start the cluster on the new server as a single-node cluster, using the data from shared storage.
7. After a successful run, you can set up the 2nd node and join it to the cluster with VCS.
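Steps 4-6 above can be sketched as a dry-run shell checklist. It only echoes each command so you can review before running anything; "oldnode" is a placeholder for the old cluster node, and the paths are the standard VCS config locations:

```shell
#!/bin/sh
# Dry-run checklist for steps 4-6: copy the VCS config, verify it,
# then start VCS as a single-node cluster. Commands are echoed only;
# run each one by hand after review. "oldnode" is a placeholder.

CONF=/etc/VRTSvcs/conf/config

echo "scp oldnode:$CONF/main.cf  $CONF/main.cf"
echo "scp oldnode:$CONF/types.cf $CONF/types.cf"
echo "# copy /etc/llttab only if tunables were set on the old cluster"
echo "hacf -verify $CONF"    # syntax-check main.cf before starting VCS
echo "hastart -onenode"      # start VCS in single-node mode
echo "hastatus -sum"         # confirm the service groups come online
```

Running hacf -verify before hastart catches main.cf edits (changed node names, paths) before they stop the cluster from starting.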
Let me know if you have any doubts.
I think the steps are:
- Agree with Gaurav on setting up the OS, Storage Foundation High Availability, and SQL, and it's better to start off with a 1-node to 1-node GCO cluster (or a 2-node RDC) rather than adding it later. But you need to think about what name to install SQL under so it doesn't conflict with the existing cluster - a few things you could try:
a. Use same SQL server name, but use a local host entry to use a different IP
b. Isolate new cluster on the network
c. The SQL install procedure in a cluster used to be to change the virtual name as the last step, by running an SQL command to delete the physical node name and add the virtual name, so you may be able to use a temporary name and change it to the real name when you go live
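Option (c) can be sketched as below. SQLVIRT is a hypothetical virtual name, and the exact rename procedure should be confirmed against your SQL Server version's documentation before you run anything:

```shell
#!/bin/sh
# Sketch of option (c): after installing SQL under a temporary name,
# swap the registered server name to the virtual name before go-live.
# SQLVIRT is a hypothetical name; commands are echoed for review only.

VNAME=SQLVIRT       # hypothetical virtual server name
PHYS=$(hostname)    # physical node name currently registered in SQL

echo "sqlcmd -S . -Q \"EXEC sp_dropserver '$PHYS'\""
echo "sqlcmd -S . -Q \"EXEC sp_addserver '$VNAME', 'local'\""
echo "# restart the SQL Server service for the rename to take effect"
```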
- How you configure VCS depends on how comfortable you are editing main.cf and how wizard-driven the current SQL installation is. I haven't set up an SQL cluster since 5.x, and back then you could create main.cf manually rather than using the wizards. That was the better option when creating multiple instances, as you could create the first one with the wizard and then just copy the main.cf entries for subsequent instances. So personally I would copy the main.cf entries from the old cluster to the new cluster rather than re-running the wizard, but do whichever you feel more comfortable doing.
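As an illustration of copying main.cf entries, the skeleton of a SQL service group might look roughly like the fragment below (resource types and attributes vary by agent version, and NEWNODE is a placeholder). The SystemList/AutoStartList node names, plus any paths or IPs that changed, are what you edit for the new cluster:

```
group SQL_SG (
    SystemList = { NEWNODE = 0 }
    AutoStartList = { NEWNODE }
    )
```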
Note also that the main.cf will be different, as you will now be using array-based replication, so you will need to consult the SnapMirror agent docs together with the SQL solutions guide for DR.
- For migrating storage from the old SAN to the new SAN, you can copy the data (or back up and restore) if downtime is not an issue. Or, if you need as little downtime as possible, you could install a temporary VVR licence, replicate over IP, and remove VVR once the migration is complete.
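The temporary-VVR approach could be sketched roughly as follows; the disk group, RVG, volume, and host names are all placeholders, and you should confirm the exact syntax against the VVR administrator's guide for your version:

```shell
#!/bin/sh
# Sketch of a temporary VVR replication from old SAN to new SAN.
# sqldg, sql_rvg, datavol, srlvol, oldhost, newhost are placeholders.
# Commands are echoed for review only; verify syntax for your release.

DG=sqldg

echo "vradmin -g $DG createpri sql_rvg datavol srlvol"  # primary RVG on old cluster
echo "vradmin -g $DG addsec sql_rvg oldhost newhost"    # add new server as secondary
echo "vradmin -g $DG startrep sql_rvg newhost"          # start replication over IP
echo "# once in sync: cut over, then remove VVR and the temp licence"
```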
For the migration I would do a test run first: do a hot backup, restore to the new cluster, and then test failing over, etc.
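Once the 2nd node has joined, the failover part of the test run comes down to a couple of VCS commands; SQL_SG, newnode1, and newnode2 are placeholders here:

```shell
#!/bin/sh
# Failover smoke test after the 2nd node joins the new cluster.
# SQL_SG / newnode1 / newnode2 are placeholders; echoed for review only.

GROUP=SQL_SG

echo "hagrp -online $GROUP -sys newnode1"   # bring the group online on node 1
echo "hagrp -switch $GROUP -to newnode2"    # fail it over to node 2
echo "hastatus -sum"                        # verify group state after the switch
```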
Mike