12-13-2017 08:41 PM - edited 12-13-2017 08:46 PM
We have a two-node cluster. One of our nodes (the passive node) crashed completely in the production environment; only the disks survived. We successfully moved the crashed node's disks to a new machine. Now we plan to attach this new node to the existing cluster. I am assuming we need to do the activities below. Please share your insight on this.
- Connect both heartbeat links to the new node physically.
- Freeze the cluster.
- Modify the /etc/llttab file on both nodes and make sure the heartbeats are UP (verify via lltstat -nvv).
- Update the NIC details of the resources.
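The steps above can be sketched roughly as follows. The node names, device names and MAC addresses are hypothetical examples, the edit is done on a scratch copy rather than the live /etc/llttab, and the llttab link-directive format varies by platform and product version:

```shell
# Freeze before touching LLT (VCS commands shown as comments, since
# they need a live cluster; "node2" is a hypothetical system name):
#   haconf -makerw
#   hasys -freeze -persistent node2
#   haconf -dump -makero

# Example llttab with MAC-based link directives (illustrative format):
cat > /tmp/llttab.example <<'EOF'
set-node node2
set-cluster 100
link eth1 eth-00:11:22:33:44:55 - ether - -
link eth2 eth-00:11:22:33:44:66 - ether - -
EOF

# Swap in the replacement NIC's MAC address:
sed -i 's/00:11:22:33:44:55/aa:bb:cc:dd:ee:01/' /tmp/llttab.example
grep 'eth-aa' /tmp/llttab.example
```

After editing the real /etc/llttab, LLT has to be restarted for the new link definitions to take effect.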
12-13-2017 09:04 PM - edited 12-13-2017 09:07 PM
I would also add the new system to the AutoStartList of the relevant service groups and modify /etc/llthosts, but this depends on whether you are implementing the replacement system with the old host name or a new one.
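For reference, a minimal sketch of those two items; the service-group name "appsg" and the node names are hypothetical, and the hagrp/haconf commands are shown as comments because they need a running cluster:

```shell
# Add the system to a service group's AutoStartList:
#   haconf -makerw
#   hagrp -modify appsg AutoStartList node1 node2
#   haconf -dump -makero

# /etc/llthosts simply maps LLT node IDs to host names and must be
# identical on all cluster nodes, e.g.:
cat <<'EOF'
0 node1
1 node2
EOF
```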
These articles cover the procedure pretty well https://www.veritas.com/support/en_US/article.000006725 and https://www.veritas.com/support/en_US/article.000033371
When I am lazy I'm using the installer option:
# cd /opt/VRTS/install
# ./installsfha -addnode
This adds the node correctly as well.
12-13-2017 09:26 PM
I am planning to go with old node name :) (Thanks for your quick response)
12-14-2017 09:09 PM - edited 12-14-2017 09:10 PM
Activity is completed.
These are the activities I performed:
1- Plugged in the heartbeat and public Ethernet cables. Modified the MAC addresses under /etc/llttab for all NICs. Ran /etc/init.d/llt restart. Verified via lltstat -nvv | less as well as from Java Console > Cluster properties > System connectivity tab (this shows all links in green).
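A quick way to confirm step 1 from the command line is to check that no link in the lltstat output is DOWN. The output below is an illustrative sample only (the exact layout of lltstat -nvv varies by version), written to a scratch file so the check itself can be shown:

```shell
# Hypothetical lltstat -nvv output for a healthy two-node cluster:
cat > /tmp/lltstat.out <<'EOF'
LLT node information:
    Node       State    Link  Status  Address
     0 node1   OPEN     eth1  UP      00:11:22:33:44:55
                        eth2  UP      00:11:22:33:44:66
   * 1 node2   OPEN     eth1  UP      aa:bb:cc:dd:ee:01
                        eth2  UP      aa:bb:cc:dd:ee:02
EOF

# All heartbeat links should report UP, none DOWN:
if grep -q 'DOWN' /tmp/lltstat.out; then
  echo "some links down"
else
  echo "all links UP"
fi
```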
2- Connected the shared SAN storage cables and verified the SAN via the vxdisk list command.
3- Ran the hastart command, which cleared the red fault from the down/faulted node instantly; hastatus now also shows the expected result.
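After hastart, hastatus -sum should show both systems RUNNING and no FAULTED resources. The sample output below is illustrative only (node names hypothetical), written to a scratch file to demonstrate the check:

```shell
# Hypothetical "hastatus -sum" system-state section after the rejoin:
cat > /tmp/hastatus.out <<'EOF'
-- SYSTEM STATE
-- System         State          Frozen
A  node1          RUNNING        0
A  node2          RUNNING        0
EOF

# Both systems should be RUNNING and nothing FAULTED:
grep -c 'RUNNING' /tmp/hastatus.out
```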
12-20-2017 01:50 AM
It seems you had a similar question a couple of months ago?
12-29-2017 08:36 PM
By default, VCS (had) is started automatically on system reboot, so why did you need to run hastart to start had? If the cluster is configured to require a manual start of had, that is not a recommended configuration: if the system crashes or reboots because of a fault (e.g. lost power) and no engineer is around, the cluster node would not be restarted automatically to resume HA as soon as possible.
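As a hedged sketch of how to check this on Linux: startup of the stack at boot is typically controlled by flags such as VCS_START=1 in /etc/sysconfig/vcs (the exact path and variable names vary by platform and product version). The check below runs against a sample file rather than the real one:

```shell
# Sample sysconfig file; on a real node inspect /etc/sysconfig/vcs
# (and the llt/gab equivalents) instead:
cat > /tmp/vcs.sysconfig <<'EOF'
VCS_START=1
EOF

if grep -q '^VCS_START=1' /tmp/vcs.sysconfig; then
  echo "had starts at boot"
else
  echo "had must be started manually"
fi
```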