SQL Server 2008 on Veritas Cluster
Hello,
To test the new version of Veritas Cluster, I created 2 nodes that contain the SQL Server binaries.
These 2 nodes are identical in configuration and in their attachment to the SAN over iSCSI.
The official documentation recommends placing the DATA directories of the first node's instance on the shared storage.
For the second node, the DATA directories can reside on the local disk.
I did this, but when I was configuring the service group and selecting the systems (nodes) that can participate in the cluster service group, an error message appeared telling me that the nodes are not identical and that the second node can't find the instance.
Can you please help me with this problem?
Thank you.
You may want to download the SQL 2008 guide from here:
https://sort.symantec.com/documents/doc_details/sfha/6.1/Windows/ProductGuides/
Symantec Cluster Server Implementation Guide for Microsoft SQL Server 2008 and 2008 R2 - English
See Chapter 3, Installing SQL Server:
Installing SQL Server on the first system
Installing SQL Server on the additional systems

So, on the first system, you install the instance root directories on a local disk (e.g. C:\Program Files) and the data directories on the shared drive.
Make a note of the SQL instance name and instance ID. You must use the same instance name and instance ID when you install the SQL Server instance on the additional systems.

When we get to the 'additional systems' section, I must admit that this part of the manual is very confusing:
If you choose to install the SQL database files to a shared disk (in the case of a shared storage configuration), ensure that the shared storage location is not the same as the one used while installing SQL on the first system. Do not overwrite the database directories created on the shared disk by the SQL installation on the first system.

Actually, the installation path MUST match the original system.
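To make the "must match" point concrete, here is a sketch of an unattended SQL Server 2008 install using the documented setup.exe parameters. The instance name, data path, and account names are placeholders, not values from this thread; the point is that /INSTANCENAME, /INSTANCEID, and /INSTALLSQLDATADIR must be identical on every node.

```shell
REM Sketch only - drive letter, instance name, and accounts are placeholders.
REM Run on the first node with the shared F: imported, then again on each
REM additional node so that all paths and identifiers match exactly.
setup.exe /Q /ACTION=Install /FEATURES=SQLENGINE ^
  /INSTANCENAME=INST1 /INSTANCEID=INST1 ^
  /INSTALLSQLDATADIR="F:\MSSQL\Data" ^
  /SQLSVCACCOUNT="DOMAIN\sqlsvc" /SQLSVCPASSWORD="..." ^
  /SQLSYSADMINACCOUNTS="DOMAIN\dbadmins" ^
  /IACCEPTSQLSERVERLICENSETERMS
```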
What I have done with SQL installations is: install on node 1, stop SQL, set the services to Manual, deport the disk group on node 1, import the disk group on node 2, and map the volume(s) to the same drive letter(s) as on node 1.
Rename the data directories on the shared drive that were created by the node 1 installation, then repeat the installation on node 2 to match the root and data directories.

Because everything is new, you can choose which data directories to keep. The contents should be identical.
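The sequence above can be sketched roughly as follows. The service name and disk group name are assumptions, and the exact Storage Foundation for Windows syntax (especially for drive-letter assignment) varies by version, so check the vxdg/vxassist help for your release before running anything.

```shell
REM On node 1: stop the SQL instance and set its service to Manual start
REM (service and disk group names below are placeholders)
net stop "MSSQL$INST1"
sc config "MSSQL$INST1" start= demand
REM Deport the dynamic disk group that holds the data volumes
vxdg -gSQL_DG deport

REM On node 2: import the disk group and map the volume to the same letter
vxdg -gSQL_DG import
REM Drive-letter assignment syntax is version-dependent; verify locally
vxassist -gSQL_DG assign Volume1 F:
```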
Where one should be very careful is when additional nodes are added after the instance has gone into production with live data on the shared volumes.
Here a 'dummy' or temporary volume should be used for the initial installation on the new node.
So, if the production instance has data on the F drive, then on the new node, mount a temporary volume as F so that the instance can be installed to match the production instance paths exactly.
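One way to stage that temporary F drive on the new node is with the built-in mountvol command. The volume GUID below is a placeholder; running mountvol with no arguments lists the real GUIDs on that machine.

```shell
REM List the volume GUID paths available on this node
mountvol
REM Mount a scratch volume as F: for the duration of the install
REM (the GUID here is a placeholder)
mountvol F: \\?\Volume{00000000-0000-0000-0000-000000000000}\
REM ... run the SQL Server installation against F: ...
REM Remove the temporary mount point afterwards
mountvol F: /D
```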
After installation has completed, this dummy/temporary F drive can be removed, because the actual production data will be mounted on F when the service group fails over to this node.

Hope this helps.