09-02-2013 11:28 PM
09-03-2013 12:06 AM
The exact steps depend on the VCS version (which you have not provided) and your configuration.
The VCS Installation Guide will have a section "Upgrading VCS" with procedures and supported upgrade paths - suggest you look at this first then ask about specific points if you are still not clear.
e.g., VCS 5.1SP1 (Solaris) Installation Guide
https://sort.symantec.com/public/documents/sfha/5.1sp1/solaris/productguides/html/vcs_install/
If you are using a different version of VCS, go to https://sort.symantec.com/documents and select the product guide(s) for your version.
09-03-2013 01:00 AM
Your procedure will not work: you cannot upgrade all VCS components one node at a time, as the "had" daemon in VRTSvcs will not communicate across different versions. If you are just upgrading VCS (not SF), then you can force-stop VCS on all nodes (hastop -all -force), leaving applications running, unload GAB and LLT, and upgrade VCS (including GAB and LLT) on ALL nodes.
If you have SF, then you can use your procedure to upgrade SF after upgrading VCS - see https://www-secure.symantec.com/connect/forums/cluster-upgrade-version-5-6 for a similar discussion.
Mike
09-03-2013 01:17 AM
09-03-2013 01:49 AM
The procedure could be used to patch the O/S and SF, but not VRTSvcs: if you upgrade VRTSvcs on one node first, the "had" daemon on the upgraded node will no longer be able to communicate with the "had" daemon on the other node running the older version.
Mike
09-04-2013 12:54 AM
09-04-2013 01:55 AM
Correct - if you run "hastop -all -force", this stops the "had" daemon and the VCS agents, but does NOT stop applications, so they remain running. You can then stop GAB and LLT (gabconfig -U and lltconfig -U) and unload them from the kernel (modinfo to get the module IDs, then modunload).
Then you can upgrade VCS while the applications are running, and once the VCS upgrade is complete, just start LLT, GAB and VCS.
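The force-stop sequence above can be sketched as a shell script. This is a review-only sketch, not a tested procedure: by default it just prints the commands (DRY_RUN=1), and the modinfo column layout and module names are the usual Solaris ones but should be checked on your own system.

```shell
#!/bin/sh
# Sketch of the minimum-downtime VCS upgrade sequence (run on each node
# as appropriate). DRY_RUN=1 (the default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1. Stop the had daemon and agents on all nodes; applications keep running.
run hastop -all -force

# 2. Unconfigure GAB, then LLT (GAB sits on top of LLT).
run gabconfig -U
run lltconfig -U

# 3. Unload the kernel modules; take the real IDs from modinfo output.
#    Column 6 of modinfo is normally the module name on Solaris.
for mod in gab llt; do
    id=$(modinfo 2>/dev/null | awk -v m="$mod" '$6 == m {print $1}')
    if [ -n "$id" ]; then
        run modunload -i "$id"
    fi
done

# 4. Upgrade the VCS packages on ALL nodes while applications run,
#    then bring the stack back up in dependency order.
run lltconfig -c
run gabconfig -c -n 2    # -n 2: seed a 2-node cluster
run hastart
```

The dry-run wrapper is only there so the ordering can be inspected safely; on a real cluster you would follow the installer's own procedure from the VCS install guide.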
Mike
09-04-2013 02:37 AM
Hi Mike,
Please tell me if my patching procedure is fine on a 2-node VCS cluster.
- hastop -local -evacuate ;stop vcs on node a and evacuate its groups to node b, which keeps running vcs
- init 1 ;node a in single-user mode
- apply patch
- reboot node a
- hastart ;start vcs on node a
- hastop -local -evacuate ;stop vcs on node b and evacuate its groups to node a
- init 1 ;node b in single-user mode
- apply patch
- reboot node b
- hagrp -switch <group> -to <node_a> ;switch back the group that normally runs on node a
Or maybe:
hastop -all -force, then apply patches on both nodes in single-user mode, then reboot the nodes with vcs started.
tnx so much,
marius
09-04-2013 03:25 AM
Which of the following are you patching:
I think you are patching 1 and 3 - but please clarify.
Are you trying to patch with minimum down time, or does down time not matter?
For VCS patching, what version do you have now and what version are you patching to?
For O/S patching - the same question - what version do you have now and what version are you patching to?
Mike
09-04-2013 07:22 AM
09-04-2013 08:02 AM
Neither VCS 5.1 nor VCS 5.1 with the latest SP and RP supports Solaris 11, so if you want to patch to Solaris 11, you will need to upgrade VCS to 6.x first and then upgrade each node to Solaris 11.
If downtime doesn't matter, then just use the VCS installer scripts to upgrade to 6.0 and read the VCS install guide for any additional steps.
Mike
09-04-2013 08:15 AM
09-04-2013 09:06 AM
For minimum downtime:
Run "hastop -all -force"; this stops the "had" daemon and the VCS agents, but does NOT stop applications, so they remain running. You can then stop GAB and LLT (gabconfig -U and lltconfig -U) and unload them from the kernel (modinfo to get the module IDs, then modunload).
Then you can upgrade VCS to 6.x while the applications are running, and once the VCS upgrade is complete, just start LLT, GAB and VCS.
Then patch the O/S - choose the node running the fewest service groups first (for example, in an active-passive cluster, choose the passive node) - suppose this is node a - then:
- hastop -local -evacuate ;stop vcs on node a and evacuate its groups (if any are running) to node b
- init 1 ;node a in single-user mode
- apply patch
- reboot node a
- hastart ;start vcs on node a
- hastop -local -evacuate ;stop vcs on node b and evacuate its groups to node a
- init 1 ;node b in single-user mode
- apply patch
- reboot node b
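The per-node steps above can be sketched as a small helper. This is a dry-run illustration only: it prints the commands rather than executing them, and the patch location (/var/tmp/patches) is a hypothetical placeholder.

```shell
#!/bin/sh
# Rolling O/S patch sketch for a 2-node cluster (nodes "a" and "b" as in
# the steps above). Commands are printed, not executed, for review.
run() { echo "would run: $*"; }

patch_node() {
    # Run these on the node being patched ($1); the peer keeps the groups.
    echo "# patching node $1"
    run hastop -local -evacuate    # fail its groups over to the peer node
    run init 1                     # drop to single-user mode
    run patchadd /var/tmp/patches  # hypothetical patch location
    run reboot
    run hastart                    # after the node is back up
}

patch_node a    # passive node first (fewest service groups)
patch_node b    # then the other node, once "a" is back in the cluster
```

In practice the hastart step happens after the reboot completes, so each node's sequence is two sessions, not one script; the sketch only shows the ordering.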
Mike
09-07-2013 11:40 PM
09-08-2013 01:23 AM
With the active node being node "a":
If you choose "a" first, you create an outage straight away when you switch the group to "b", and then, after you have upgraded "a", you create a second outage as you switch the group back to "a" so you can upgrade "b".
Whereas if you upgrade "b" first, there is no initial outage, and then, after you have upgraded "b", you create your first and only outage as you switch to "b" so you can upgrade "a". Note there should be no need to switch the group from "b" back to "a" after upgrading "a", as your service groups should be equally capable of running on either node.
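The passive-first ordering can be written out as a command sequence. Group and node names ("grp1", "node_b") are hypothetical placeholders, and the commands are only printed here for review.

```shell
#!/bin/sh
# Single-outage upgrade order: "a" is active, "b" is passive.
# Commands are printed, not executed.
run() { echo "would run: $*"; }

run hastatus -sum                     # confirm which node is active
# 1. Upgrade passive node "b" first -- no outage yet.
# 2. The one and only outage: switch the group to the upgraded node.
run hagrp -switch grp1 -to node_b
# 3. Upgrade "a". No switch back is needed afterwards, since the
#    groups can run on either node.
```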
Mike
09-13-2013 11:05 PM
09-25-2013 07:24 AM
Hi marius,
There is no way to upgrade a Solaris 10 host to Solaris 11.
Oracle does not provide an upgrade path for this; a Solaris 11 install has to be a fresh install.