OS = Windows2008R2
SQL Server = 2008
One of our clients has a two-node cluster for SQL Server. The Passive Node's OS is misbehaving while rebooting, and the client does not seem able to fix it. The client is willing to re-install the OS on the Passive Node. Below are the steps which seem necessary for this activity.
1.) Run the Cluster Configuration Wizard.
2.) Delete the Passive Node.
3.) Install OS, sfha (same patch level as Active Node).
4.) Install SQL Server (same patch level as Active Node). # at this step we additionally perform my idea's steps, described below
5.) Run the Cluster Configuration Wizard on Active Node.
6.) Add the Passive Node.
When we do step # 4 in a brand-new, fresh environment, during SQL installation we point the data files (.mdf and .ldf) at the central storage/SAN and then click Next to continue the SQL Server installation. In my situation, the data files have already been created on the SAN drives, and those drives are mounted on the Active Node. How can we locate the existing .mdf and .ldf files at the SQL installation step where we need to define the data files? Recommendation required, please.
For now, define the data files (.mdf and .ldf) anywhere else. When the SQL installation completes, the steps below seem to be needed:
Offline Service Group on Active Node.
Import the DiskGroup on Passive Node from VEA (Veritas Enterprise Administrator).
Have the SQL Server DBA update the path settings in SQL Server so that it points to the data files on the DiskGroup volumes.
Deport the DiskGroup from the Passive Node.
Online Service Group on Active Node again.
Run steps # 5 and # 6.
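For reference, the same sequence can be sketched with the SFW/VCS command-line tools instead of the GUIs. This is only a hedged illustration: the service group name (SQL_SG), disk group name (SQLDG), and node names (NODE1 = active, NODE2 = passive) are placeholders, and the exact CLI syntax should be checked against your SFW-HA version's documentation.

```
# Hedged sketch of the steps above, using placeholder names.
hagrp -offline SQL_SG -sys NODE1    # offline the service group on the Active Node
# On the Passive Node (CLI equivalent of the VEA import):
vxdg -gSQLDG import                 # import the disk group
# ... SQL Server DBA re-points the instance at the data files on the volumes ...
vxdg -gSQLDG deport                 # deport the disk group from the Passive Node
hagrp -online SQL_SG -sys NODE1     # bring the service group online on the Active Node again
```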
I normally assign small LUNs to the inactive node and mount them using the same drive letters (or folder mounts) as used on the active node. I keep these small LUNs around because they are also useful when patching SQL: for patching you want to upgrade the binaries on both nodes but upgrade the shared databases only once. Since a SQL upgrade does both, on one node you can upgrade the binaries against the empty databases from the fresh install on the small LUNs.
Since you are removing the node to be rebuilt from the existing SQL 2008 service group and reinstalling the OS, you can take advantage of a new feature in SFW-HA's SQL 2008 Service Group Configuration Wizard: all nodes no longer have to be installed with the same path to the master database files. The SG wizard will reconfigure the nodes to match the node where the wizard is being run.
This means that on the node where you rebuilt the OS, all you have to do is install SQL with the correct SPs and patches to match the other node. Then, once the rebuilt node is back in the cluster, run the SFW-HA SQL SG Configuration Wizard on the active node to add the rebuilt node to the service group. The wizard will automatically adjust SQL on the rebuilt node to use the master database path that the cluster is using.
Thanks both guys.
What I have come to understand is below:
- Re-install a new OS on the Passive Node at the same patch level as the Active Node.
- Install SFHA on the Passive Node at the same patch level as the Active Node.
- Install SQL Server on the Passive Node at the same patch level as the Active Node, with the default database (.mdf and .ldf) file location.
- Add the Passive Node into the existing cluster.
- Run the SFHA SQL 2008 Service Group Configuration Wizard on the Active Node and select the option for adding the rebuilt node to the service group (or similar).
Is the above correct, please?
That covers the basics of what needs to be done. The only thing I would add is that the rebuilt node's information needs to be cleaned out of the cluster and service group configuration before you add the passive node back to the cluster. Depending on why the node's OS is being reinstalled, this could be a manual cleanup on the active node, or it could be removing the node to be rebuilt from the cluster using the configuration wizards.
When running the SQL 2008 service group wizard, you will select the "Modify existing service group" option and add the rebuilt node. The wizard should then produce a popup stating that it discovered the SQL configuration is different on that node and ask if you want it to correct the other node. If you select "Yes", the wizard modifies the other node so that it will look to the shared disks for the master data files. The rest of the wizard is pretty much click Next, except for the NIC selection screen for the rebuilt node.
The picture is clearer for me now. Your post above has two paragraphs. The second one talks about the Service Group Configuration Wizard, which is described in detail.
But in the first paragraph you are talking about adding the new/rebuilt node to the existing cluster. What I understand is that when I add the new/rebuilt node to the existing cluster, I first have to remove this node's configuration from the existing cluster. In my case I can remove/delete this node with the Cluster Configuration Wizard, so removing the node will also delete its configuration from the existing cluster. What if I don't have this option (removing/deleting the node from the Cluster Configuration Wizard)? In other words, what would be the manual procedure for removing the rebuilt node's existing configuration from the cluster?
If you are not able to remove the node from the surviving cluster configuration with the wizard and you have to do it manually, you will need to modify 3 configuration files on each of the surviving nodes. Here is a basic rundown of what needs to be changed.
Gabtab - decrease the cluster node count by 1.
LLThosts - remove the host entry of the node being removed.
Main.cf - edit the entries in the main.cf that refer to the node being removed. Some lines can be deleted, and others need to be modified so they no longer include the node. The editing really depends on the specific cluster configuration. Use caution when performing this step.
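As a hedged illustration only, here is roughly what those edits might look like for a two-node cluster with placeholder names NODE1 (surviving), NODE2 (being removed), and SQL_SG (service group); the actual file contents and locations depend on your SFW-HA installation:

```
# gabtab - seed count drops from 2 to 1:
gabconfig -c -n1        # was: gabconfig -c -n2

# llthosts - delete the removed node's entry:
0 NODE1
# (removed: 1 NODE2)

# main.cf - remove NODE2 from system definitions and group attributes, e.g.:
system NODE1 (
    )
# (removed: system NODE2)
group SQL_SG (
    SystemList = { NODE1 = 0 }      # was: { NODE1 = 0, NODE2 = 1 }
    AutoStartList = { NODE1 }       # was: { NODE1, NODE2 }
    )
```

After editing, it is worth running `hacf -verify` against the configuration directory to syntax-check the modified main.cf before restarting the cluster.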
Once these edits are done, HAD, GAB, and LLT need to be stopped on all surviving nodes and then restarted. If there is more than one surviving node, care needs to be taken so that the cluster builds from the modified main.cf.
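A hedged sketch of that stop/restart order follows; the exact service names and commands for stopping GAB and LLT on Windows SFW-HA may differ, so treat this as an outline rather than an exact procedure:

```
# On the surviving node(s), after verifying the edited main.cf:
hacf -verify .          # run in the directory containing the modified main.cf
hastop -all -force      # stop HAD on all nodes, leaving applications running
# Stop GAB and then LLT (e.g. via the Windows service manager), then restart
# them in reverse order: LLT first, then GAB, then HAD:
hastart
# With multiple surviving nodes, start HAD first on the node holding the
# corrected main.cf so the cluster builds its configuration from that copy.
```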