Cluster configuration - RDC or GCO?
Hi all, and thanks again for your help
In other discussions on this forum I was asking about the possibility of setting up a single-node cluster with the GCO option. As the analysis progressed, we realized that an RDC configuration was also possible. I will try to summarize the characteristics we currently have, and maybe after that you can help.
The customer informed us that the ISP will expand the bandwidth to 3 Mbps, and the distance between sites is about 110 km. The scheme will be "easier" because both sites reach the internet through the same component, so we can switch the same service IP between sites. The key factors for implementing this are latency, bandwidth, and the application's write behavior.
I ran a ping to get an idea of the latency between sites; I understand we must add more latency on top of this from other factors (replication, networking components, etc.):
bash-3.00# ping -s XXX.XXX.99.161 1472
PING XXX.XXX.99.161: 1472 data bytes
1480 bytes from XXX.XXX.99.161: icmp_seq=0. time=28.9 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=1. time=36.2 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=2. time=41.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=3. time=21.7 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=4. time=34.6 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=5. time=20.2 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=6. time=25.6 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=7. time=24.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=8. time=44.4 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=9. time=44.8 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=10. time=47.6 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=11. time=27.9 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=12. time=45.3 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=13. time=34.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=14. time=17.5 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=15. time=24.7 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=16. time=17.5 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=17. time=24.5 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=18. time=18.3 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=19. time=36.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=20. time=18.4 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=21. time=20.5 ms
The ping probes and replies are carried by the ICMP protocol, and ICMP is carried within an IP packet. The IP header adds 20 bytes of overhead and the ICMP header adds 8 bytes, for 28 bytes total within the standard 1500-byte Ethernet MTU. This leaves 1500 - 28 = 1472 bytes as the largest ping payload that can be sent without fragmentation.
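If you want to confirm that 1472 bytes really is the fragmentation threshold on your path, you can probe just above and below it. A quick sketch, assuming a Linux host on the same network (Linux ping has a -M do option to set the don't-fragment bit; Solaris ping does not):

ping -M do -c 3 -s 1472 XXX.XXX.99.161   # 1472 + 28 = 1500, should succeed
ping -M do -c 3 -s 1473 XXX.XXX.99.161   # 1501 bytes, should fail with "message too long"

If the second command fails as expected, the path MTU is the standard 1500 bytes, so the latency numbers above were measured with full-size packets.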
The plan is to replicate about 40 GB of data initially; the deltas do not exceed 5 GB. In fact, the average write rate does not exceed 95.90 kilobytes per second. We will have two NICs for heartbeats, one NIC for replication, and one NIC for the application service.
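As a rough sanity check on those numbers (a sketch only, assuming the full 3 Mbps is usable and ignoring TCP/VVR protocol overhead, so the real headroom will be somewhat smaller):

bash-3.00# echo "3000000 / 8 / 1024" | bc          # 3 Mbps expressed in KB/s
366
bash-3.00# echo "scale=2; 95.90 * 100 / 366" | bc  # average write rate vs. link
26.20

So the steady-state write load would use roughly a quarter of the link, leaving headroom for write bursts and for the replication log to drain after interruptions.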
People here on this forum have helped me understand what SF + VVR provides. With the basic info above, and considering that the replication will be asynchronous, can we deploy an RDC in this scenario?
Hi cmoreno1978,
It looks like you can deploy either an RDC or a GCO configuration. Note, though, that it is recommended to have multiple heartbeat links when configuring an RDC.
The main difference to keep in mind is the way the two of them fail over between the sites.
An RDC is configured as a single cluster and will fail the service group over between the two sites automatically. With this configuration and default settings, the remote site will always try to bring up the service group if communications to the primary site are lost. The default settings of the RVGPrimary resource will perform a takeover in this case, which may cause data loss if the replication link is not up to date.
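If that risk is a concern, the takeover behavior can be tuned on the RVGPrimary resource. A minimal sketch, assuming a resource named rvg_primary_res (hypothetical name; the AutoTakeover attribute is part of the RVGPrimary agent, but check the bundled agents guide for your SF version):

# Disable automatic takeover so a possibly stale secondary is never
# promoted without operator intervention (hypothetical resource name).
hares -modify rvg_primary_res AutoTakeover 0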
A GCO, by default, will not fail over between sites automatically. Site failover requires administrative intervention, so you will be aware of it before it happens. As a result, a takeover is only done if you give GCO the authority to move the group while the link is down.
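For reference, the cross-cluster move in a GCO setup is an explicit command. A sketch, assuming a global service group named app_sg and a remote cluster named site2_clus (both names hypothetical):

# Switch the global group to any eligible node in the remote cluster.
hagrp -switch app_sg -any -clus site2_clus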
The other item you need to be aware of with an RDC configuration is TCP/IP related. You mentioned that you are running 4 NICs and 1 WAN link, but not how you are going to set up IP on the 4 NICs. If all 4 links are on the same IP subnet, then Windows will only use 1 NIC for outbound traffic, based on the order the interfaces appear in the routing table. This is a function of how Microsoft implemented their TCP/IP stack and cannot be changed by SFW-HA. Multiple IP subnets can be configured if desired.
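To see which NIC Windows will actually use for outbound traffic, you can dump the routing table from a command prompt (standard Windows tooling, nothing SFW-HA specific); the interface list and the metric column show which route wins for a given destination:

C:\> route print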
Thanks,
Wally