
VCS for SQL Server

Arshpreet
Level 2

Hello, 

We have a 3-node VCS cluster running three SQL Server 2008 Standard Edition instances in an A/A/P role. I want to add another SQL instance to this existing cluster configuration. I have read most of the forums and documentation guides, and I would like to ask a few questions about this.

 

My plan for adding another SQL instance to this configuration would be:

1. Storage has been added to the cluster nodes for this new instance by our storage admin (he set it up under a separate service group).

2. What would be my installation order for SQL Server? Should I always run SQL Setup on the active node (since the active node hosting this instance's service group will always hold the shared disk resources)? Can I install SQL Server on a passive node as well?

3. Can I apply service packs and patches to the SQL installation as I install it on each individual node, or do the SP and patches have to wait until the instance is ready on all nodes at the RTM level first?

4. Once the installation is completed on NODE1, stop the SQL-related services and set them to Manual startup.

5. Switch the service group for this instance to NODE2, which will fail over all the shared storage to NODE2 as well (see the command sketch after this list).

6. Run the SQL installation again on NODE2 with the same instance name.

Confusing part: under VCS, the SQL Server installation is done as a standalone installation only (not a failover cluster installation, as it would be under Windows clustering). Won't this standalone installation on the same mount replace the existing system data and log files from the previous installation on NODE1? How do you install this SQL instance without affecting the existing files and directories already out there?

I have read somewhere that you need to rename the existing directories and make sure you do not overwrite them. But my question is: won't this rename affect SQL when it runs back on NODE1?

Also, when I install on NODE3, I would have to rename both the NODE1 and NODE2 installation directories, so at the end the shared storage would hold three sets of DATA and LOG files, one per installation, under different directory structures. I am not sure how the last step (the VCS SQL Configuration Wizard) would then reduce these three directory structures on the shared storage to only one. What is the trick here? I am sorry if I am overthinking this, but it is not documented anywhere.

7. Last step: when done installing SQL on all nodes, simply run the VCS SQL Configuration Wizard to create the new SQL service group.
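
For step 5, this is roughly what I have in mind from the command line (just a sketch; SQL_NEWINST_SG is a made-up placeholder for whatever the new service group ends up being called in our configuration):

REM Check the current state of the service groups in the cluster
hastatus -sum

REM Move the new instance's service group (and its shared disks) over to NODE2
hagrp -switch SQL_NEWINST_SG -to NODE2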

 

PS: I am new to my company, the company does not have proper documentation on how this setup was done previously, and the VCS clustering solution is pretty much new to me.

 

Thank you for your help on this,

- Arshpreet 

 


Marianne
Level 6
Partner    VIP    Accredited Certified

This is how we installed multiple instances on a 3-node cluster:

Firstly, bear in mind that each instance will have a preferred startup node with one or more failover nodes.

Secondly, bear in mind that the new instance needs to be installed on all nodes that the instance can fail over to. Instance installation requires a reboot, and so does patch installation. So, if any other instance is running on the node, it needs to be failed over to another node first.

We used a drive letter per instance with mount points for data files and logs.
You have to ensure that drive letter/mount points are unique for each instance.
Drive letters are fine if you have a small number of instances, e.g.
Instance1: E:\data, F:\logs
Instance2: G:\data, H:\logs, etc.

To add a new instance, add the new LUN(s) to a new disk group.
Mount the LUNs to drive letters/mount points that are unique in the cluster.
Install as per the instructions in the VCS for MS SQL agent guide (binaries in Program Files, data files on the new drive letters/mount points). Reboot when prompted. Patch and reboot.
Set SQL startup to Manual. Stop SQL, unmount the volumes, deport the disk group and import it on the next system (see the sketch after these steps).
Mount the volumes to the same drive letters/mount points and clean out the data folders. You can rename the folders, but there is no point; the databases and logs are brand new, so there is no issue if they are recreated.
Install the same way as before.
Repeat for all nodes.
Have a new virtual hostname and IP address ready and run the SQL service group wizard on the node where the new instance is currently running.
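
To make the "set startup to Manual / stop SQL" step concrete before handing the storage over, it looks something like this from the command line (just a sketch; NEWINST is a placeholder for your new instance name, and the disk group deport/import itself can be done from the VEA console or the vxdg command line, so it is not shown here):

REM Set the new instance's services to Manual so VCS can control them later
sc config "MSSQL$NEWINST" start= demand
sc config "SQLAgent$NEWINST" start= demand

REM Stop the services before unmounting the volumes and deporting the disk group
sc stop "SQLAgent$NEWINST"
sc stop "MSSQL$NEWINST"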

More steps are needed, but this is all documented in the SQL agent guide.

Will find link to VCS for SQL Guide and post shortly. 
(Last time I did this was SF/HA 5.1!)

Oh! You forgot to mention your SF/HA version?

Arshpreet
Level 2

Thanks Marianne for the response,

Our VCS version is 5.1. In my case, the drives are laid out as mount points under the same drive letter:

E:\SQLData\VirtualHostName\DATA\ --Data mount is 200GB
E:\SQLData\VirtualHostName\LOGS\ --Logs mount is 100GB
E:\SQLData\VirtualHostName\RegRep\ --RegRep mount is 3GB

So what I understood from your post above is (re-writing the steps):

1. Start from the ACTIVE NODE (call it NODE1; fail the three other running instances over to either NODE2 or NODE3). Install as per the instructions in the VCS for MS SQL agent guide (binaries in Program Files and data files on the new drive letters/mount points).

That means when installing, the "Instance root directory" should be "E:\Program Files\Microsoft SQL Server\", where the E: drive is a local volume on each node.

and "DATA Directories" which includes 'Data root directory', 'system database directory', 'user database directory', 'use database log directory', 'tempdb directory', 'tempdb log directory', 'backup directory'. ---------> These  should be on shared volume/mount points E:\SQLDATA\VirtualHostName\Data, and E:\SQLDATA\VirtualHostName\Logs.

2. Reboot when prompted. Patch and reboot. (Before the reboot, fail the other running instances over to another node.)

3. Set SQL startup to Manual. Stop SQL.

4. Unmount the volumes, deport the disk group and import it on the next system.

5. Mount the volumes to the same drive letters/mount points on NODE2 (that makes NODE2 the active node now).

6. Clean out the data folders. You can rename the folders, but there is no point; the databases and logs are brand new, so there is no issue if they are recreated. -- That means there will be no issue if those directories get overwritten by the new installation? It sounds like the VCS configuration only uses the installation done on the last node for clustering purposes?

7. Install the same way as before. Repeat for all nodes.

8. Have a new virtual hostname and IP address ready and run the SQL service group wizard on the node where the new instance is currently running.
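
After each node's installation, I am planning to sanity-check the instance from the command line with something like this (just a sketch; NEWINST is a placeholder for the new instance name, run on whichever node currently holds the shared storage, using Windows authentication):

REM Confirm which node answered, the instance name and the patch level
sqlcmd -S .\NEWINST -E -Q "SELECT SERVERPROPERTY('ComputerNamePhysicalNetBIOS'), SERVERPROPERTY('InstanceName'), SERVERPROPERTY('ProductLevel')"

REM Confirm the system database files landed on the shared E:\SQLData mount points
sqlcmd -S .\NEWINST -E -Q "SELECT name, physical_name FROM sys.master_files ORDER BY database_id"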

What other steps are required besides the above? Can you please check whether the items above are all correctly understood? I am still not sure why storage has presented a separate mount for RegRep.

Any idea?

scogeb
Level 3

Arshpreet, I have several clusters with SQL on them.  I'm running 6.0.1 on Windows 2008 R2 Enterprise with SQL 2008 R2 SP2.  Below are the steps I take for my main 4 node A/A/A/A cluster.

 

1. Add storage and make sure it's available on all nodes.
2. Online disks on 1st node
3. Create Mount folders on 1st node
4. Create disk group on 1st node
5. Create volumes on 1st node
6. Install SQL on 1st node making certain everything is installed to shared storage
7. Install SQL SP on 1st node
8. Install SQL and SQL SP on all other nodes, making certain it's installed with the exact same instance name and instance ID. The disk groups and volumes should all be mounted on the 1st node at this time. Install SQL to the default location on the other nodes.
9. From the 1st node, create the SQL service group using the SQL Server 2008 Agent Configuration Wizard.
10. Test failover to each node in the cluster (see the sketch below).
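
For step 10, I just move the new group around from the command line and make sure it comes online on every node (rough sketch; SQL2008_SG and the node names are placeholders for whatever the wizard created in your cluster):

REM Confirm the cluster and the new group are healthy
hastatus -sum

REM Switch the new SQL service group to each node in turn and check it comes online
hagrp -switch SQL2008_SG -to NODE2
hagrp -switch SQL2008_SG -to NODE3
hagrp -switch SQL2008_SG -to NODE4
hagrp -switch SQL2008_SG -to NODE1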

Let me know if you have any questions, as I do this quite often and never have any issues.

-Scott