I still don't understand whether you are supposed to have one or multiple Virtual IPs (VIPs) - you say one VIP, but then in your post you use "virtual IP addresses" in the plural, as does the white paper, which more explicitly says "list of virtual IP addresses" - i.e. you cannot round-robin with one VIP, so you must have more than one VIP.
Here is my understanding:
Before CNFS was available, you could only share a given CFS filesystem from ONE node at a time, so you had to use ONE VIP. You therefore had one service group (SG) containing that VIP, and this service group could fail over between SFCFS systems without having to import diskgroups and mount filesystems. This is backed up by the SFCFS admin guide (5.1), which says:
In previous releases, the SFCFS stack only allowed an active/passive setup for
NFS serving due to the complexity of lock reclamation in an active/active configuration
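For that old active/passive setup, I picture the failover SG containing just the NFS/share/VIP pieces (with the CFS mount itself in a parallel group). Roughly, in main.cf terms - this is only a sketch, and the resource names, device and address are made up:

    group nfs_failover_sg (
        SystemList = { nodea = 0, nodeb = 1, nodec = 2 }
        AutoStartList = { nodea }
        )

        NFS nfs_server (
            )

        Share myshare (
            PathName = "/myshare"
            )

        IP nfs_vip (
            Device = eth0
            Address = "192.168.1.100"
            NetMask = "255.255.255.0"
            )

        myshare requires nfs_server
        nfs_vip requires myshare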
My understanding of CNFS is that you can share a given filesystem from multiple nodes at the same time, and therefore, to access it from multiple nodes, you need multiple VIPs, each in its own service group. This is backed up by the SFCFS admin guide, which says:
This Clustered NFS feature allows the same file system mounted across multiple
nodes using CFS to be shared over NFS from any combination of those nodes
So my understanding is that for CNFS you have a /myshare CFS mount and a Share resource in a parallel SG, and for a 3-node cluster you would have 3 failover SGs (sketched below):
SG1 containing VIP1 which by default is on Node A
SG2 containing VIP2 which by default is on Node B
SG3 containing VIP3 which by default is on Node C
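To make that concrete, here is roughly how I picture it in main.cf terms (a sketch only - group/resource names, devices and addresses are invented, and I have left out things like NIC and lock-directory resources):

    group cfs_share_sg (
        SystemList = { nodea = 0, nodeb = 1, nodec = 2 }
        Parallel = 1
        AutoStartList = { nodea, nodeb, nodec }
        )

        CFSMount myshare_mnt (
            MountPoint = "/myshare"
            BlockDevice = "/dev/vx/dsk/sharedg/sharevol"
            )

        Share myshare (
            PathName = "/myshare"
            )

    group vip1_sg (
        SystemList = { nodea = 0, nodeb = 1, nodec = 2 }
        AutoStartList = { nodea }
        )

        IP vip1 (
            Device = eth0
            Address = "192.168.1.101"
            NetMask = "255.255.255.0"
            )

    // vip2_sg and vip3_sg would look the same, apart from Address
    // (192.168.1.102 / .103) and AutoStartList ({ nodeb } / { nodec })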
Supposing you then have 3 NFS clients, my understanding is that you could either load balance manually (see the client-side sketch below) - i.e.
client1 uses VIP1
client2 uses VIP2
client3 uses VIP3
or you use DNS round robin, where you have, for example, a DNS name "myshare_dns" that the clients use, and this name is registered in DNS with the 3 VIPs, so that:
When client1 accesses "myshare_dns", DNS resolves it to VIP1
Next, client2 accesses "myshare_dns" and DNS resolves to the next IP - VIP2
Next, client3 accesses "myshare_dns" and DNS resolves to the next IP - VIP3
Which of these methods you use is up to you, and it is my understanding that there is no requirement to use round-robin DNS, as DNS round robin is not even mentioned in the SFCFS admin guide.
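On the client side, I imagine the two methods look something like this (hostnames, addresses and the export path are all made up for illustration):

    # manual load balancing - each client mounts via a specific VIP
    client1# mount -t nfs vip1.example.com:/myshare /mnt/myshare
    client2# mount -t nfs vip2.example.com:/myshare /mnt/myshare
    client3# mount -t nfs vip3.example.com:/myshare /mnt/myshare

    # DNS round robin - one name with three A records (BIND-style zone excerpt)
    myshare_dns    IN A    192.168.1.101    ; VIP1
    myshare_dns    IN A    192.168.1.102    ; VIP2
    myshare_dns    IN A    192.168.1.103    ; VIP3

    # all clients then mount the same name and DNS hands out the VIPs in turn
    clientN# mount -t nfs myshare_dns.example.com:/myshare /mnt/myshare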
Note: if you DNS round-robin the physical IPs of the VCS cluster nodes, this would work while all 3 nodes are up, but if a node goes down, then when DNS gives out that node's IP to a client, the connection will fail. That is why you use VIPs: if a node fails, its VIP fails over to another node (which will then have 2 VIPs on it).
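And if I have that right, you should be able to check where each VIP is online, and test the failover behaviour, with something like the following (group and node names made up to match the sketch above):

    # where is each VIP group online right now?
    hagrp -state vip1_sg
    hagrp -state vip2_sg
    hagrp -state vip3_sg

    # manually move one VIP group to another node to test the behaviour
    hagrp -switch vip1_sg -to nodeb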
If my understanding is wrong on any of the above, please point it out. And if I am wrong and only one VIP is used in one service group, with that service group running on Node A, then please explain how clients access the share on Nodes B and C when the one and only VIP is on Node A.
Thanks
Mike