Veritas cluster MultiNICA, IP Conservation Mode (ICM), does not recover correctly

ediaz
Level 3

Hi all, I have a VCS 5.1 cluster on Red Hat (latest version and patches) with two network interfaces, eth0 and eth4, in a MultiNICA resource using IP Conservation Mode (these are external IPs and I don't want to pay for more addresses).

These are simulated IPs:

machine1 eth0 has 192.168.1.10 (in the Red Hat network configuration, eth0 is set to 192.168.1.10)

machine1 eth4 has 192.168.1.10 (in the Red Hat network configuration, eth4 has no IP)

machine2 eth0 has 192.168.1.11 (in the Red Hat network configuration, eth0 is set to 192.168.1.11)

machine2 eth4 has 192.168.1.11 (in the Red Hat network configuration, eth4 has no IP)

The service IP is 192.168.1.12.
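
For reference, the relevant part of my main.cf looks roughly like this. The resource name nic205 matches the monitor message below; the group and IP resource names are invented here, and the addresses are the simulated ones from above, so treat this as a sketch rather than my exact config:

    group extsg (
        SystemList = { machine1 = 0, machine2 = 1 }
        )

        // Both NICs share the same base IP per node: IP Conservation Mode.
        MultiNICA nic205 (
            Device @machine1 = { eth0 = "192.168.1.10", eth4 = "192.168.1.10" }
            Device @machine2 = { eth0 = "192.168.1.11", eth4 = "192.168.1.11" }
            NetMask = "255.255.255.0"
            )

        // The floating service address rides on whichever NIC is active.
        IPMultiNIC ip205 (
            Address = "192.168.1.12"
            NetMask = "255.255.255.0"
            MultiNICAResName = nic205
            )

        ip205 requires nic205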

Today I did some failover testing with the service running. Unplugging eth0 on machine1, the service and the IP moved to eth4... OK.

Unplugging eth4 on machine1, the service faulted and failed over to machine2... OK.

After plugging eth0 and eth4 back in on machine1, the resource stays faulted. This is the message:

MultiNICA:nic205:monitor:On eth0 current statistics 0 are same as that of Previous statistics 0
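
From that message, the agent seems to decide the NIC is dead because the interface packet counters have not moved between two monitor cycles. You can reproduce that kind of check by hand with standard Linux sysfs counters (nothing VCS-specific here; this is just my assumption about what the monitor is comparing):

    # Read the receive packet counter twice, a few seconds apart.
    # If the count never changes, the NIC is assumed to be down.
    cat /sys/class/net/eth0/statistics/rx_packets
    sleep 5
    cat /sys/class/net/eth0/statistics/rx_packets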

 

When I go to the machine and run ifconfig -a, I see that no IP is configured on any interface. If I restart the network, the system goes back to OK. But when I use Performance Mode instead, the system recovers fine and I don't need to restart the network.
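
For reference, this is the manual recovery I do (standard Red Hat commands, nothing VCS-specific):

    # Confirm that no addresses are plumbed on any interface.
    ifconfig -a
    # Re-read the ifcfg files and bring the base addresses back up.
    service network restart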

I don't want the agent to use the ip command to configure the IPs, which is why I set Options = "broadcast 192.168.1.255": I don't want it to flush the default addresses off the interfaces.
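
Concretely, the resource carries this extra attribute (same caveat as above: this is my reading of how the Options attribute is passed through to ifconfig, not behavior I have verified in the agent code):

    MultiNICA nic205 (
        // Options is appended to the ifconfig command line when the
        // agent plumbs an address, so the broadcast is set explicitly.
        Options = "broadcast 192.168.1.255"
        )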

Any idea why this is happening?

 

Regards!


AlanTLR
Level 5

This sounds like more of a Red Hat issue than a VCS one, if the IP addresses aren't coming back up after you plug the cables back in.

Do you get the same behavior on the second machine? From a VCS standpoint, and from what I can gather from your description, you're also failing over the MultiNICA resource, so it makes sense that your network isn't coming back up on the first node after you plug it in. Manually failing back to the first node should bring that network back up.
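
A manual failback would look something like this (the group name is made up; substitute your own):

    # Switch the service group back to the first node.
    hagrp -switch extsg -to machine1
    # Check that the resources come online.
    hagrp -state extsg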

If you want the network to stay up regardless, you may need to create a monitor (SNMP or something similar) for that interface that brings it back up, triggered when the service group fails over.
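
As a rough sketch of that idea, a small script run from cron (or hooked into a VCS trigger such as postonline) could re-plumb any interface that went down; the device list and recovery command here are assumptions, not a tested monitor:

    #!/bin/sh
    # Naive link watchdog: bring listed interfaces back up if down.
    for dev in eth0 eth4; do
        state=$(cat /sys/class/net/$dev/operstate 2>/dev/null)
        if [ "$state" = "down" ]; then
            # Re-apply the base configuration from the ifcfg files.
            ifup $dev
        fi
    done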

Can you post the MultiNICA portion of your main.cf? That will give a better idea of how your service groups are configured.
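
For example, something like this would pull out the relevant stanzas (the path is the standard VCS config location; adjust the context length to taste):

    # Print the MultiNICA and IPMultiNIC stanzas with some context.
    grep -A 8 -E "MultiNICA|IPMultiNIC" /etc/VRTSvcs/conf/config/main.cf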