Clarification on what happens during a rolling SFHA upgrade
Prior to 5.1SP1, the way I upgraded SFHA was to:
- Force stop VCS leaving applications running, unload GAB and LLT, and upgrade VCS on ALL nodes
- Upgrade SF on the inactive node
- Switch SGs from a live node to the upgraded node and upgrade the node the SGs were moved from (roughly the command sequence sketched below)
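For what it's worth, the commands I used were roughly the following - this is a sketch from memory rather than the documented procedure; node1/node2 and appsg are placeholder names, and the exact GAB/LLT unload steps vary by OS:

hastop -all -force              # stop HAD cluster-wide, leave applications running
gabconfig -U                    # on each node: unconfigure GAB
lltconfig -U                    # on each node: unconfigure LLT
# ...upgrade the VCS packages on all nodes, then upgrade SF on the inactive node...
hagrp -switch appsg -to node2   # move the SGs onto the upgraded node
# ...then upgrade SF on node1 while it is inactive and restart the stack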
The problems with this are that:
- If there was an issue with the VCS upgrade, then because all nodes are upgraded you may have to back out, whereas if you could upgrade VCS one node at a time you could switch services to a non-upgraded node if there was an issue with the new VCS version.
- This procedure didn't work with CVM, as you can't unload GAB and LLT (a quick check for CVM is sketched below).
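On the CVM point, the quick check I use to see whether CVM is configured is the GAB port membership - from memory ports v and w belong to CVM, but please double-check against your own docs:

gabconfig -a    # port a = GAB, port h = had; ports v/w present means CVM is configured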
The way rolling upgrades were explained to me by Symantec Product Management when 5.1SP1 came out was that VCS now had the ability to communicate across different versions, so for instance a VCS node on 5.1SP1 could co-exist in a cluster with a node at 5.1SP1RP1, meaning you could upgrade the whole stack, including VCS, one node at a time.
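For anyone comparing nodes, the engine version each node is running can be checked with something like the commands below - both should exist on 5.1SP1 as far as I recall, but treat them as a hint rather than gospel:

had -version                    # prints the VCS engine (HAD) version on the local node
haclus -value EngineVersion     # the EngineVersion cluster attribute, once the node has joined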
However, I have a customer who applied RP3 to 5.1SP1RP1 on Solaris one node at a time, and he got the error:
Local node leaving cluster before joining as engine versions differ. Local engine version: 0x5010a1e; Current cluster version: 0x5010a00
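If I'm decoding those hex values correctly - and it is only my guess that it's one byte per version field, I haven't confirmed the encoding - they break down as:

0x5010a00 = 05.01.0a.00 -> 5.1.10.0  (the running cluster)
0x5010a1e = 05.01.0a.1e -> 5.1.10.30 (the local node with RP3 applied)

i.e. only the last field differs, which would match the "engine versions differ" message.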
So I am now wondering if only LLT and GAB support communicating across different versions and VCS does not, and therefore the rolling upgrade procedure is:
- Upgrade LLT, GAB, VxFS and VxVM using "installer -upgrade_kernelpkgs inactive_node_name", one node at a time while that node is inactive
- Upgrade VCS on all nodes using "installer -upgrade_nonkernelpkgs node1 node2 ..." at the same time, where I am guessing VCS is force-stopped to leave applications running (roughly as sketched below)
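In other words, something along the lines of the following, where node1/node2 and appsg are placeholders and I am guessing at the exact flow:

# Phase 1 - kernel packages (LLT, GAB, VxVM, VxFS), one inactive node at a time
hagrp -switch appsg -to node2                  # evacuate node1 first
./installer -upgrade_kernelpkgs node1
# ...switch SGs back to node1 and repeat for node2...
# Phase 2 - non-kernel packages (VCS etc.) on all nodes together
./installer -upgrade_nonkernelpkgs node1 node2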
Can anyone clarify?
Thanks
Mike
Indeed - only LLT/GAB handle the rolling upgrade; VRTSvcs does not.
I will look into the 5.1 SP1 release notes and see if it needs more clarification.