
Veritas Cluster 5.1 for Windows - How many NICs should I have...

Hello,

Our company would like to build a two-node cluster in an active/passive configuration.

OS: Windows 2003 32-bit

Clustering software: Symantec Veritas Cluster 5.1 for Windows

SAN: HP MSA 2000 for SAS


Database software: SQL Server 2005

Number of instances: 1

I would like to get some ideas first on the networking side. I have read the installation guide as well as the VCS and SQL guide, however I cannot picture the difference between private and public - for example, how many NICs do I need in total for private and public? Please consider the environment I described above.

 

Thank you,

Casper

1 Solution

Accepted Solution!

The minimum requirement is 2 NICs - in this case:

One NIC is used as a dedicated VCS heartbeat.

The second NIC is used as your public network AND as a low-priority VCS heartbeat. This means all cluster communication is sent over the dedicated VCS heartbeat (the cluster is very chatty), apart from a low-frequency "I am alive" on the low-priority heartbeat. Should you lose the dedicated VCS heartbeat, the low-priority heartbeat will be promoted, which will put additional traffic on the public network.

The recommended number of NICs is 4:

2 for your public network (usually in a "team")

2 dedicated heartbeats

and if the 2 public network NICs are completely independent of the 2 dedicated heartbeats, then you can make the public network a 3rd low-pri heartbeat.

All heartbeats, whether dedicated or low-pri, must be independent: they should not share a dual- or quad-port NIC card, and they should go through separate switches and ideally separate VLANs.
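To make the link roles concrete, here is a sketch of what the LLT configuration (llttab) looks like with two dedicated links plus the public network as a low-priority link. This is illustrative UNIX-style syntax; on VCS for Windows the cluster configuration wizard generates the equivalent for you, and the node name, cluster ID, and adapter names below are made-up placeholders:

```
set-node Computer1
set-cluster 7
# two dedicated heartbeat links (carry all cluster traffic)
link link1 Adapter0 - ether - -
link link2 Adapter1 - ether - -
# public network doubles as a low-priority heartbeat
link-lowpri lowpri1 Adapter2 - ether - -
```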

Mike


15 Replies


Hi Casper,

Mike is correct about the NIC cards.  Most customers are running 2 dedicated NICs and either a single public NIC or a public NIC team of 2 or more NICs as Mike indicated.

Other items that you should consider: for optimal SQL performance, the drives used by SQL should be formatted with a 64 KB cluster unit size/allocation unit size. 64 KB is the data size that SQL reads and writes in. By using 64 KB you minimize the number of I/O operations to the drives per SQL I/O.

Most volumes have a default cluster unit size of 4 KB. This means that for every SQL I/O operation the drive would process 16 separate 4 KB operations to handle the same 64 KB of data.
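For reference, here is one way to apply and verify the 64 KB allocation unit on a Windows volume; drive letter S: and the volume label are just example placeholders:

```
rem format the SQL data volume with a 64 KB allocation unit
format S: /FS:NTFS /A:64K /V:SQLDATA

rem verify - "Bytes Per Cluster" should report 65536
fsutil fsinfo ntfsinfo S:
```

Note that the allocation unit size can only be set at format time, so this should be done before loading any SQL data onto the volume.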

It would also be advisable to use SFW-HA so that you can create a striped volume for better disk I/O throughput, and have the ability to grow the volume size as needed when your databases grow.

Thanks,

Wally

Clarification on the number of NIC and its assignment...

Thanks Mike and Wally!

After reading your recommendations, let me know if I got it correctly:

We will need 4 NICs for each node:
  - Two NICs per node in a team for the public network
  - Two independent NICs for the private network (heartbeat)

This means if we translate it to actual implementation:

Node 1 (assume)
Computer Name: Computer1
IP Address of the host: Two NICs in a team, let's say 10.10.1.10

Node 2 (assume)
Computer Name: Computer2
IP Address of the host: Two NICs in a team, let's say 10.10.1.11

Cluster name: myCluster

1. What about the cluster IP address? Will it just be a virtual IP address
that I can assign, like 10.10.1.12?

Next question, about the heartbeat
2. For the heartbeat, the documentation says TCP/IP should be disabled.
Does this mean the two independent NICs will not have IP addresses assigned to them?

3. Can the two independent heartbeats be connected with crossover cables?

Thank you,
Casper


Hi Casper,

Here are the answers to your questions.

 

1. It depends. The ClusterService group is optional unless you are using GCO. But if you set up a ClusterService group, then yes, the IP address 10.10.1.12 will work, and it will be virtual, meaning the cluster will move it from node to node depending on where the ClusterService group is running. You will also need another virtual IP address for SQL.

2. LLT does not require TCP/IP, so you can uncheck IPv4 and IPv6 in the heartbeat NIC properties. However, if you want to use LLT over UDP then you will need to configure TCP/IP on the heartbeats.

3. You can use crossover cables, but keep in mind that if one node is shut down for maintenance, the other server will report the NICs as unplugged. This can cause some temporary issues with VCS heartbeats in this situation. If you are OK with the heartbeat on the active node reporting problems when the passive node is powered down, then use crossover cables. If you are concerned about the heartbeats being reported, then put in hubs or switches for the heartbeats. You can always implement crossover cables during testing, and if you don't like what you see, then switch to hubs or switches for the heartbeats.
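Whichever option you choose, you can check how VCS sees each heartbeat link from a command prompt during testing; for example:

```
rem show LLT link status, per node and per link
lltstat -nvv

rem show GAB port membership (confirms cluster communication is up)
gabconfig -a
```

These commands exist on both UNIX and Windows versions of VCS, so you can verify that both dedicated links and the low-priority link are seen as up before and after pulling a cable.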

 

Thanks,

Wally


Hello,

Thanks for responding to my questions.

I have prepared a network diagram based on what I understood. Do you think this is a good implementation? The corporate switch is the switch that the other computers are connected to, while the independent switch will be a new 5-port switch used solely for the heartbeat.

And also, I'm a little confused about the ClusterService group. From my experience with MS Clustering, this is mandatory, like setting the cluster name and the cluster virtual IP. For VCS, how does it differ, and what is the relevance of having GCO compared to MS Clustering? Thanks.

[Image: sample network implementation of two-node cluster]


Firstly, and most importantly, you must not put the heartbeats on the same switch - you must use 2 independent switches. Otherwise, failure of the switch will cause both heartbeats to fail; in this case, both nodes will assume the other has died, both will try to run your application service group, and you will get split brain, which can corrupt your data.

For VCS, you need a cluster name and a cluster ID (a number from 0 - 255, or maybe higher than 255 depending on O/S and version). For managing the cluster you can use the physical IPs of the nodes, and the management tools are intelligent: if the Java GUI or VOM is connected to node1 and node1 dies, they will try to connect to the cluster via another node, but this does require host resolution to be working.

So a cluster VIP (virtual IP) is not really required unless you have GCO. With GCO (Global Cluster Option) you have 2 or more clusters connected (usually for DR), so these clusters must be connected via a VIP to communicate with each other.
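For illustration only, a ClusterService group carrying a VIP looks roughly like this in main.cf. The resource names, subnet mask, and MAC addresses below are made-up placeholders; the VCS for Windows configuration wizard generates the real definitions:

```
group ClusterService (
    SystemList = { Computer1 = 0, Computer2 = 1 }
    AutoStartList = { Computer1, Computer2 }
    )

    IP csg_ip (
        Address = "10.10.1.12"
        SubNetMask = "255.255.255.0"
        MACAddress @Computer1 = "00-11-22-33-44-55"
        MACAddress @Computer2 = "00-11-22-33-44-66"
        )

    NIC csg_nic (
        MACAddress @Computer1 = "00-11-22-33-44-55"
        MACAddress @Computer2 = "00-11-22-33-44-66"
        )

    csg_ip requires csg_nic
```

The `csg_ip requires csg_nic` dependency is what lets VCS move the virtual address between nodes: the IP resource only comes online on a node whose public NIC is healthy.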

Mike


Thanks Mike!

Hello All,

I will update my network diagram to use two independent switches for the heartbeat.

We will be using the MSA2000 for SAS; do you have a suggestion as to whether to direct-attach it or put it on a separate switch?

If budget is a constraint, will direct attach be OK?

From the manufacturer side, it has the option for direct connect to server:
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01756253/c01756253.pdf 


Looks to me that the MSA2000 effectively contains 2 4-port switches, so if you were to connect via one external SAN switch you would introduce a single point of failure, and even if you connect via 2 external SAN switches you are still increasing the risk of an outage due to hardware failure.

So in terms of HA (high availability), it seems to me that it is better to connect direct; the reason to connect via a switch is if you need to connect multiple servers or multiple storage arrays that exceed the connections available.

The same is really true for the heartbeats - using switches introduces more hardware components that could fail, but you can only use crossover cables with 2 nodes, so many customers use switches to make it easier to add a 3rd node to the cluster.

Mike


This means a direct connection from the MSA2000 to the nodes is the suggested way to go, right?

In this case, I have to purchase a mini-SAS adapter and its cable for the HP ProLiant DL388 G7 and plug it into an available PCI slot - is this correct?


Yes, I would direct attach using dual paths, so ideally you should have 2 HBA cards in your servers. I can't see any HA advantage to connecting via switches, only the scalability advantage of adding more nodes and storage arrays.

Mike

What will be the role of Volume Manager if we have the MSA2000? How will the integration of the two come into play?

 

Thanks guys.


Hi Amcasperforu,

With just the VCS for Windows product (not SFW-HA) you cannot run dynamic disks.

SFW (Volume Manager) will allow you to create software-based RAID on the LUNs (disks) presented by the MSA2000.

If you ever run out of space, you can present more LUNs from the MSA2000 or increase the size of the LUNs you are already presenting. Then you can use SFW to expand the volumes on the fly, rather than having to back up, recreate the volume with a larger size, and restore the data as you would with just the MSA2000.
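As a sketch of that workflow (the disk group and volume names are placeholders), growing a volume online from the SFW command line looks roughly like:

```
rem grow volume SQLVol in dynamic disk group SQLDG by 10 GB, online
vxassist -gSQLDG growby SQLVol 10G
```

Check the SFW administrator's guide for the exact syntax on your version; the point is that the grow happens without unmounting the volume or restoring from backup.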

Thanks,

Wally


Volume Manager (SFW) will manage the multiple paths to the MSA2000 and let you increase volumes online.


Thanks Mike. Will there be a conflict if we install HP's MPIO driver for the MSA2000 on each node? Since the MSA2000 has its own driver for multipathing, and SFW does the same, that makes me think we should use only one... right?


Thanks too, Wally! I have checked the MSA2000 and, surprisingly, it gives the opportunity to configure the RAID level on volumes before presentation.

It can also do online resizing of volumes, after which you run diskpart on the Windows OS side to reflect the change. As I also mentioned, HP provides a software driver for multipathing (MPIO)... Do you see redundancy here? Thanks.