VCS behavior using GCO between two distant sites with different public IPs

cmoreno1978
Level 3
Partner Accredited

After reviewing the requirements and recommendations for setting up a global cluster between two distant sites (more than 100 km apart), thanks in part to the contributions of this forum, we now have some doubts about the cluster's behavior when service groups fail over to the remote site: which VCS component is responsible for redirecting client requests to the remote site and its new public IP?

 

If we have different public IP addresses at each site, how can we redirect client traffic to the remote site? Our ISP owns the DNS servers and the routing components...

 

An important detail is that at the primary site we are using equipment that does NAT, but I think we can stop using that feature and use the public IPs directly.

 

As always, thanks in advance for your help


Wally_Heim
Level 6
Employee
(Accepted Solution)

Hi cmoreno1978,

 

You did not mention what platform you are using.  With SFW-HA (the Windows product), DNS updates can be done by setting attributes on the Lanman resource.  You need to enable the DNSUpdateRequired and DNSUpdateCriticalforOnline attributes to get DNS updated during failover.  You might also want to set the DNSOptions attribute to PurgeDuplicates, which erases the DNS entries from other sites, and the AdditionalDNSServers attribute to update more DNS servers than the ones configured on the server's public NIC.
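
For anyone following along, here is a minimal sketch of what such a Lanman resource might look like in main.cf. The resource names, VirtualName, IPResName, and the AdditionalDNSServers address are made up for illustration; the attribute names are the ones listed above, so verify the exact spelling and value types against the SFW-HA bundled agents guide for your release:

// Hypothetical SFW-HA main.cf fragment; all names and the DNS
// server address are illustrative, not from a real configuration.
Lanman app_lanman (
    VirtualName = "APPSERVER"
    IPResName = app_ip
    DNSUpdateRequired = 1
    DNSUpdateCriticalforOnline = 1
    DNSOptions = { PurgeDuplicates }
    AdditionalDNSServers = { "10.0.0.53" }
)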

 

If you are using the Linux or Solaris version of the product, there is also a DNS agent that can be used to update DNS records during a failover.  I'm not sure of the exact name of the agent, but I'm sure it has DNS in the name.

 

Thanks,

Wally

mikebounds
Level 6
Partner Accredited

The UNIX agent is called DNS.  Prior to 5.1 the DNS agent could only update CNAME records, so you would have:

dr-appname -> dr-ip

prod-appname -> prod-ip

and the DNS agent updates a global alias the clients use, "appname", to point to either prod-appname or dr-appname.

I believe that in 5.1 you can update DNS A records too.

Unless the agent has been updated very recently, the UNIX DNS agent cannot update a secure Windows DNS server.
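
To make the CNAME scheme concrete, here is a minimal sketch of the DNS resource on the production cluster. The attribute names (Domain, Alias, Hostname) are the 5.0-era ones as I remember them, so verify against the Bundled Agents Reference Guide for your release:

// Hypothetical main.cf fragment for the pre-5.1 CNAME scheme;
// resource name and zone are illustrative.
DNS app_dns (
    Domain = "customer.com"
    Alias = "appname"
    Hostname = "prod-appname"
)

// On the DR cluster the same resource would use
// Hostname = "dr-appname", so "appname" follows the online site.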

Mike

cmoreno1978
Level 3
Partner Accredited

Hi Wally and Mike, thanks for your help... and sorry for the delayed response.

We are using the UNIX platform: Solaris 10 (SunOS 5.10) with SF HA/DR 5.0 and the VVR option.

The customer just told us that their ISP can set up the link to extend the LAN between the sites (for example: main site -> IP 10.10.10.130/25, alternate site -> IP 10.10.10.20/25). That means we could have private IP addresses on the same subnet at each node, with another component (such as a load balancer) doing NAT for the requests coming from the WAN.

They may require the cluster to be configured with the virtual address 10.10.10.10, which is NATed to the public address xxx.xxx.xxx.xxx and reached through the URL www.customer.com.
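
If it comes to that, the site-independent virtual address would just be an ordinary IP resource in the service group. A minimal sketch for Solaris, assuming a hypothetical e1000g0 public interface (the device and resource names are illustrative):

// Hypothetical main.cf fragment; Device is an assumption, use the
// public NIC on each node. A /25 netmask is 255.255.255.128.
IP app_ip (
    Device = e1000g0
    Address = "10.10.10.10"
    NetMask = "255.255.255.128"
)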

cmoreno1978
Level 3
Partner Accredited

Hi all,

 

Any suggestions? With this kind of setup an RDC could be configured, right? Then the redirection of remote clients would not be VCS's responsibility.

My guess is that the bandwidth (1 Mbps) and the performance of the network would let us do something like this without exposing the platform to a split-brain condition. The ISP gives them few options =(

Wally_Heim
Level 6
Employee

Hi cmoreno1978,

 

I'm not sure what kind of suggestions you are looking for.  If the ISP can allow the same IP subnet to be used at two different sites, then VCS can switch the same IP address between the two sites, and the ISP will direct traffic to the correct site depending on where the IP is online.

 

RDC is different from GCO.  RDC is a Replicated Data Cluster: a single cluster that replicates data between the individual nodes of the cluster.  For this to work you need multiple heartbeats, each with latency under 500 ms.

 

GCO is a linking of multiple clusters where the distance between them does not always allow for sub-500 ms heartbeat latencies.  Again, you have multiple clusters, and configuration and administration are more complicated.

 

The same replication methods can be used in both GCO and RDC. 
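
For reference, GCO linking is expressed in each cluster's main.cf roughly as below. This is a minimal sketch from memory: the cluster names, node name, heartbeat addresses, and the choice of Manual failover policy are all illustrative, so adapt them from the GCO configuration wizard output for your environment:

// Hypothetical GCO fragments; all names and addresses are made up.
cluster prod_clus (
    ClusterAddress = "10.10.10.10"
)

remotecluster dr_clus (
    ClusterAddress = "10.10.10.20"
)

heartbeat Icmp (
    ClusterList = { dr_clus }
    Arguments @dr_clus = { "10.10.10.20" }
)

group app_sg (
    SystemList = { node1 = 0 }
    ClusterList = { prod_clus = 0, dr_clus = 1 }
    ClusterFailOverPolicy = Manual    // avoid unintended cross-site failover
)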

As for bandwidth, 1 Mbps seems low for replication, but it could be OK for GCO heartbeats.  Without knowing more about what you are trying to replicate, your expected data change rate, how far replication on the secondary can lag behind the primary and still meet your needs, and many other factors, it is impossible for us to say whether this is enough.

 

Thanks,

Wally

 

cmoreno1978
Level 3
Partner Accredited

Hi Wally

 

I have updates for this ...

Today the customer informed us that the ISP will expand the bandwidth to 3 Mbps, and the distance is about 100 km. The scheme will be "easier" because both sites reach the internet through the same component.

I ran a ping to get an idea of the latency between the sites; I understand that we must add more latency for other factors (firewalls, etc.):

bash-3.00# ping  -s XXX.XXX.99.161 1472
PING XXX.XXX.99.161: 1472 data bytes
1480 bytes from XXX.XXX.99.161: icmp_seq=0. time=28.9 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=1. time=36.2 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=2. time=41.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=3. time=21.7 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=4. time=34.6 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=5. time=20.2 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=6. time=25.6 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=7. time=24.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=8. time=44.4 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=9. time=44.8 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=10. time=47.6 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=11. time=27.9 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=12. time=45.3 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=13. time=34.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=14. time=17.5 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=15. time=24.7 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=16. time=17.5 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=17. time=24.5 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=18. time=18.3 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=19. time=36.1 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=20. time=18.4 ms
1480 bytes from XXX.XXX.99.161: icmp_seq=21. time=20.5 ms


Ping probes and replies are carried by the ICMP protocol, and ICMP is carried within an IP packet. The IP header adds 20 bytes of overhead and the ICMP header adds 8 bytes, for 28 bytes total within the maximum packet size of 1500 bytes. That leaves 1500 - 28 = 1472 bytes as the largest ping payload that can be sent without fragmentation.

The plan is to replicate about 40 GB of data initially; the deltas do not exceed 5 GB...
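
A rough sanity check, assuming the link can sustain the full 3 Mbps and that the 5 GB delta is per day: the initial sync of 40 GB is 40 x 8 = 320 Gbit, and 320,000 Mbit / 3 Mbps is about 107,000 seconds, roughly 30 hours. A 5 GB delta is 40 Gbit; averaged over 24 hours (86,400 s) that is about 0.46 Mbps, comfortably under 3 Mbps, though bursts in the change rate could still back up the VVR SRL if we size it too small.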

 

Wally_Heim
Level 6
Employee

Hi cmoreno1978,

 

If you have more questions, please open a new post since this one is already marked as answered.

 

Thanks,

Wally

cmoreno1978
Level 3
Partner Accredited

I'm sorry, Wally! You're right...

 

Thanks for the help, guys...