SQL Server 2008 on Veritas Cluster

Jimb2k
Level 4

Hello,

To test the new version of Veritas Cluster Server, I created two nodes that contain the SQL Server binaries.

These two nodes have identical configurations and are both attached to the SAN over iSCSI.

The official documentation recommends placing the DATA directories of the instance on the first node on shared storage, while for the second node the DATA directories can reside on a local disk.

I did this, but while configuring the service group, when selecting the systems (nodes) that can participate in the service group, an error message appeared telling me that the nodes are not identical and that the second node cannot find the instance.

Can you please help me with this problem?

Thank you. 

 

 

 


8 REPLIES

spin999
Level 3
Employee Accredited

Simple question first: does the second node have access to the iSCSI LUN? On NetApp it's an igroup, where you put the WWNs of both nodes in the igroup, and that determines which hosts can mount the LUN. Is this working?

VCS is going to have to know how to bring the data LUNs online on the secondary system before it can update the SQL configuration on node 2 to say that databases now exist on drive X.
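
A quick way to verify this from each node (assuming you are using the built-in Microsoft iSCSI initiator that ships with Windows Server 2008; the iscsicli tool is part of it):

rem List the iSCSI targets this node has discovered
iscsicli ListTargets

rem List active sessions and the devices exposed through them
iscsicli SessionList

If the LUN shows up in a session on node 1 but not on node 2, fix the initiator/target mapping before touching the cluster configuration.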

Jimb2k
Level 4

Thank you, spin, for your quick answer.

For your information, I am using Openfiler for storage (for testing purposes), and both nodes have access to the iSCSI LUN.

Am I doing anything wrong in this configuration?

 

spin999
Level 3
Employee Accredited

It makes sense, but there's not much information to go on. When you were deploying, did you review the deployment guide for SQL Server:

http://static-sort.symanteccloud.com/public/documents/sfha/6.0.2/windows/productguides/pdf/VCS_NetApp-SQL2012_6.0.2.pdf
 

Chapter 3 has a lot of information on how to deploy SQL Server on both nodes, how to deploy SF and VCS, and how to use iSCSI. Don't let the fact that they mention NetApp deter you; it will probably work very similarly in your environment.

 

Hope that helps.  

 

Chris

 

Jimb2k
Level 4

I think so; I followed the guide as recommended.

But this error is confusing me: if I select only the first node, the service group can be brought online.

As soon as I add the second node, I get the error message.

Marianne
Moderator
Partner    VIP    Accredited Certified

You may want to download the SQL 2008 guide from here:

https://sort.symantec.com/documents/doc_details/sfha/6.1/Windows/ProductGuides/ 

  Symantec Cluster Server Implementation Guide for Microsoft SQL Server 2008 and 2008 R2 - English 

See Chapter 3, Installing SQL Server:

Installing SQL Server on the first system 
Installing SQL Server on the additional systems 

So, on the first system, you install the instance root directories on a local disk (e.g. C:\Program Files) and the data directories on the shared drive.
Make a note of the SQL instance name and instance ID. You must use the same instance name and instance ID when you install the SQL Server instance on additional systems.
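
If you need to double-check them afterwards, the instance names and IDs can be read from the registry (this is the standard SQL Server 2008 location; run it on each node):

rem Lists installed SQL instance names and their instance IDs
reg query "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"

The value name is the instance name and the value data is the instance ID; these must match across all cluster nodes.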

When we get to the 'additional systems' section, I must admit that this section of the manual is very confusing:

If you choose to install the SQL database files to a shared disk (in case of a shared storage configuration), ensure that the shared storage location is not the same as that used while installing SQL on the first system. Do not overwrite the database directories created on the shared disk by the SQL installation on the first system.

Actually, the installation path MUST match the original system.

What I have done with SQL installations is: install on node 1, stop SQL, set the services to Manual, deport the disk group on node 1, import the disk group on node 2, and map the volume(s) to the same drive letter(s) as on node 1.
Rename the data directories on the shared drive that were created by the node 1 installation, then repeat the installation on node 2 so that the root and data directories match.
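
In command form, the first part of that sequence looks roughly like this (a sketch only: MSSQL$INST1 stands for a named instance called INST1, SQLDG is a placeholder disk group name, and I'm assuming the SFW vxdg command-line syntax):

rem On node 1: stop SQL and set the service to manual start
net stop MSSQL$INST1
sc config MSSQL$INST1 start= demand

rem On node 1: deport the dynamic disk group
vxdg -gSQLDG deport

rem On node 2: import the disk group, then assign the same drive
rem letter(s) to the volume(s) (via Disk Management or mountvol)
vxdg -gSQLDG import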

Because everything is new, you can choose which set of data directories to keep; the contents should be identical.

Where one should be very careful is when additional nodes are added after the instance has gone into production with live data in the shared volumes.
In that case a 'dummy' or temporary volume should be used for the initial installation on the new node.
So, if the production instance has data on the F: drive, then on the new node, mount a temporary volume as F: so that the instance can be installed with paths that exactly match the production instance.
After the installation has completed, this dummy/temporary F: drive can be removed, because the actual production data will be mounted on F: when the service group fails over to this node.
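
The temporary mapping itself can be scripted with diskpart (a sketch; 'volume 5' is a placeholder for whichever spare volume you use):

rem assign-f.txt - run with: diskpart /s assign-f.txt
select volume 5
assign letter=F

rem remove-f.txt - run after the install with: diskpart /s remove-f.txt
select volume 5
remove letter=F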

Hope this helps.

 

Jimb2k
Level 4

Hello Marianne,

Thank you for your answer; this seems interesting.

I just want to be sure whether or not the temporary drive is necessary to complete the installation.

 

Marianne
Moderator
Partner    VIP    Accredited Certified

When the installation is brand new on all nodes, there is no need for a temporary drive/volume.
All databases are new, so there is no risk of overwriting production data.

The temp volume is only needed when you want to add another node once the instance has gone into production and you need to install to the same path without the risk of overwriting production databases.

Jimb2k
Level 4

Thank you, Marianne, that's working like a charm.

Thank you again