
Clustered NFS

mikebounds
Level 6
Partner Accredited

Can someone confirm my understanding of CNFS, namely that clients can access the same share from different nodes in the CNFS cluster?

So, for example, suppose you have 3 nodes, A, B and C, and filesystem "/myshare" which is CFS mounted on all nodes and NFS shared on all nodes. Then you could have 3 failover service groups, each containing a VIP: VIP1, VIP2, VIP3.

Then:

Client 1 can access /myshare via VIP1 running on Node A

Client 2 can access /myshare via VIP2 running on Node B

Client 3 can access /myshare via VIP3 running on Node C

If node C fails then VIP3 may fail over to Node A, and then clients 1 and 3 would both be accessing the share on Node A, but via different VIPs.

This is my understanding, but the example in the 5.1 CFS admin guide only shows one VIP service group, which means that in the example there is no concurrent access to the share, which I thought was the point of CNFS. Or is the point of CNFS that you don't have to unshare and re-share when you fail over, because the shares are always there, but there can only be one VIP for the share?

Mike

3 REPLIES

Douglas_Snyder
Level 5
Employee Accredited Certified

Mike,

There is (or should be) only 1 VIP for each CNFS service. The VIP is bound to each public NIC so each server in the CNFS cluster can handle requests. CNFS is a stateless application, so each NFS request can be served by a different node in the cluster. For example, client 1 may be accessing a file on /myshare on node A, but when the next file is accessed the request may be handled by node B.

You can read a bit more in the CNFS whitepaper here:

http://eval.symantec.com/mktginfo/enterprise/white_papers/b-veritas_storage_foundation_cluster_file_...

Before NFS clients connect to the CNFS server, the server’s virtual host name and IP address must be registered in the DNS server. The clients connect to any of the CNFS servers using the virtual IP addresses. By using DNS to direct clients to the CNFS cluster, the clients are load balanced between all the nodes of the CNFS farm. The load balancing is taken care of by DNS and typically employs a round-robin mechanism to do so.

mikebounds
Level 6
Partner Accredited

I still don't understand if you are supposed to have one or multiple Virtual IPs (VIPs). You say one VIP, but then in your post you use "virtual IP addresses" in the plural, as does the white paper, which more explicitly says "list of virtual IP addresses". I.e. you cannot round-robin with one VIP, so you must have more than one VIP.

Here is my understanding:

Before CNFS was available, you could only share a given CFS filesystem from ONE node at a time, and so you had to use ONE VIP. You therefore had one service group (SG) containing that VIP, and this service group could fail over between SFCFS systems without having to import diskgroups and mount filesystems. This is backed up by the SFCFS admin guide (5.1), which says:

In previous releases, the SFCFS stack only allowed an active/passive setup for
NFS serving due to the complexity of lock reclamation in an active/active configuration

My understanding of CNFS is that you can share a given filesystem from multiple nodes at the same time, and therefore to access it from multiple nodes you need multiple VIPs, each in its own service group. This is backed up by the SFCFS admin guide, which says:

This Clustered NFS feature allows the same file system mounted across multiple
nodes using CFS to be shared over NFS from any combination of those nodes
So my understanding is that for CNFS you have the /myshare CFS mount and a share resource in a parallel SG, and for a 3-node cluster you would have 3 failover SGs (a rough sketch of what I mean follows the list):
SG1 containing VIP1 which by default is on Node A
SG2 containing VIP2 which by default is on Node B
SG3 containing VIP3 which by default is on Node C
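 
A rough sketch of what I mean by the parallel SG part (the 3 failover SGs would each look like the vip1 group in the admin guide example further down; the resource and device names here are just made up by me, and the real cfsnfssg created for CNFS would contain more resources, e.g. for the NFS daemons and lock recovery):

group cfsnfssg (
    SystemList = { nodeA = 0, nodeB = 1, nodeC = 2 }
    Parallel = 1
    AutoStartList = { nodeA, nodeB, nodeC }
    )

    CFSMount cfsmount_myshare (
        MountPoint = "/myshare"
        BlockDevice = "/dev/vx/dsk/mysharedg/mysharevol"
        )

    Share share_myshare (
        PathName = "/myshare"
        )

    share_myshare requires cfsmount_myshare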
 
Suppose you then have 3 NFS clients; then my understanding is that you could either manually load balance, i.e.
client1 uses VIP1
client2 uses VIP2
client3 uses VIP3
 
or you use DNS round robin, so that you have, for example, DNS name "myshare_dns" which the clients use, and this name is registered in DNS with the 3 VIPs (see the zone sketch below) so that:
When client1 accesses "myshare_dns", DNS resolves this to VIP1
Next, client2 accesses "myshare_dns" and DNS resolves to the next IP, VIP2
Next, client3 accesses "myshare_dns" and DNS resolves to the next IP, VIP3
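 
For example, the DNS entries could look something like this (the addresses are just placeholders for the 3 VIPs):

myshare_dns    IN    A    10.182.111.161
myshare_dns    IN    A    10.182.111.162
myshare_dns    IN    A    10.182.111.163

Most DNS servers rotate or randomise the order of the A records between responses, so successive clients resolve the name to a different VIP first and the load is spread across the VIPs.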
 
Which of these methods you use is up to you, and it is my understanding that there is no requirement to use round-robin DNS, as DNS round robin is not even mentioned in the SFCFS admin guide.
 
Note that if you DNS round-robin the physical IPs of the VCS cluster nodes, this would work while all 3 nodes are up, but if a node goes down, then when DNS gives out that node's IP to a client the connection will fail. Hence you use VIPs, so that if a node fails, its VIP will fail over to another node (which will then have 2 VIPs on it).
 
If my understanding is wrong on any of the above, please point it out. If I am wrong and only one VIP is used in one service group, and this service group is running on Node A, then please explain how clients access the share on Nodes B and C when the one and only VIP is on Node A.
 
Thanks
 
Mike

 

mikebounds
Level 6
Partner Accredited

Is anyone able to answer this? In essence, the example in the 5.1 CFS admin guide shows one VIP service group (SG), and I thought the whole point of CNFS was that you could have 2 VIP SGs, so that you can access the NFS share from both nodes at the same time.

You can have 1 VIP SG with normal CFS + NFS, and if you can only have 1 VIP service group with CNFS, then the only difference between CNFS and CFS + (normal) NFS would seem to be that the share resource is in a failover group for CFS + NFS, but in a parallel group for CNFS.

Below is the VIP SG from the CFS admin guide example:

group vip1 (
    SystemList = { system01 = 0, system02 = 1 }
    AutoStartList = { system01, system02 }
    PreOnline @system01 = 1
    PreOnline @system02 = 1
    )

    IP vip1 (
        Device = bge0
        Address = "10.182.111.161"
        NetMask = "255.255.252.0"
        )

    NIC nic1 (
        Device = bge0
        )

    requires group cfsnfssg online local firm
    vip1 requires nic1
 
So I would also expect to see a "group vip2" which starts on system02 (and then you can set up DNS round robin to use vip1 and vip2).
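 
Something like the following is what I would expect for vip2 (just my guess, modelled on the vip1 group above; the address is made up):

group vip2 (
    SystemList = { system01 = 0, system02 = 1 }
    AutoStartList = { system02, system01 }
    PreOnline @system01 = 1
    PreOnline @system02 = 1
    )

    IP vip2 (
        Device = bge0
        Address = "10.182.111.162"
        NetMask = "255.255.252.0"
        )

    NIC nic2 (
        Device = bge0
        )

    requires group cfsnfssg online local firm
    vip2 requires nic2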
So is this a bad example to show what CNFS provides, or am I misunderstanding what CNFS provides?
 
Mike