10-27-2015 09:54 AM
Hi,
I want to set up a low-priority LLT network over a bond which is part of a bridge. I have a cluster with two nodes.
This is the network configuration (it is the same for both nodes):
node 2:
[root@node2 ]# cat /proc/version
Linux version 2.6.32-504.el6.x86_64 (mockbuild@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Tue Sep 16 01:56:35 EDT 2014
[root@node2 ]# ifconfig | head -n 24
bond0 Link encap:Ethernet HWaddr 52:54:00:14:13:21
inet6 addr: fe80::5054:ff:fe14:1321/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:761626 errors:0 dropped:0 overruns:0 frame:0
TX packets:605968 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1188025449 (1.1 GiB) TX bytes:582093867 (555.1 MiB)

br0 Link encap:Ethernet HWaddr 52:54:00:14:13:21
inet addr:10.10.11.102 Bcast:10.10.11.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe14:1321/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:49678 errors:0 dropped:0 overruns:0 frame:0
TX packets:50264 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:44061727 (42.0 MiB) TX bytes:28800387 (27.4 MiB)

eth0 Link encap:Ethernet HWaddr 52:54:00:14:13:21
UP BROADCAST RUNNING PROMISC SLAVE MULTICAST MTU:1500 Metric:1
RX packets:761626 errors:0 dropped:0 overruns:0 frame:0
TX packets:605968 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1188025449 (1.1 GiB) TX bytes:582093867 (555.1 MiB)
[root@node2 ]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.525400141321 no bond0

[root@node2 ]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:14:13:21
Slave queue ID: 0
node 1:
[root@node1]# ifconfig | head -n 24
bond0 Link encap:Ethernet HWaddr 52:54:00:2E:6D:23
inet6 addr: fe80::5054:ff:fe2e:6d23/64 Scope:Link
UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
RX packets:816219 errors:0 dropped:0 overruns:0 frame:0
TX packets:668207 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1194971130 (1.1 GiB) TX bytes:607831273 (579.6 MiB)

br0 Link encap:Ethernet HWaddr 52:54:00:2E:6D:23
inet addr:10.10.11.101 Bcast:10.10.11.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fe2e:6d23/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:813068 errors:0 dropped:0 overruns:0 frame:0
TX packets:640374 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1181039350 (1.0 GiB) TX bytes:604216197 (576.2 MiB)

eth0 Link encap:Ethernet HWaddr 52:54:00:2E:6D:23
UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1
RX packets:816219 errors:0 dropped:0 overruns:0 frame:0
TX packets:668207 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1194971130 (1.1 GiB) TX bytes:607831273 (579.6 MiB)

[root@node1]# brctl show
bridge name bridge id STP enabled interfaces
br0 8000.5254002e6d23 no bond0

[root@node1 ]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:2e:6d:23
Slave queue ID: 0
The LLT configuration files are the following:
[root@node2 ]# cat /etc/llttab
set-node node2
set-cluster 1042
link eth3 eth3-52:54:00:c3:a0:55 - ether - -
link eth2 eth2-52:54:00:35:f6:a5 - ether - -
link-lowpri bond0 bond0 - ether - -

[root@node1 ]# cat /etc/llttab
set-node node1
set-cluster 1042
link eth3 eth3-52:54:00:bc:9b:e5 - ether - -
link eth2 eth2-52:54:00:31:fb:31 - ether - -
link-lowpri bond0 bond0 - ether - -
However, this does not seem to be working. When I check the LLT status, each node reports the bond0 link as DOWN on the other node.
[root@node2 ]# lltstat -nvv | head
LLT node information:
Node State Link Status Address
0 node1 OPEN
eth3 UP 52:54:00:BC:9B:E5
eth2 UP 52:54:00:31:FB:31
bond0 DOWN
* 1 node2 OPEN
eth3 UP 52:54:00:C3:A0:55
eth2 UP 52:54:00:35:F6:A5
bond0 UP 52:54:00:14:13:21
[root@node2 ]# lltstat -nvv | head
LLT node information:
Node State Link Status Address
* 0 node1 OPEN
eth3 UP 52:54:00:BC:9B:E5
eth2 UP 52:54:00:31:FB:31
bond0 UP 52:54:00:2E:6D:23
1 node2 OPEN
eth3 UP 52:54:00:C3:A0:55
eth2 UP 52:54:00:35:F6:A5
bond0 DOWN
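To spot failed links quickly in output like the above, the status column can be filtered. A minimal sketch, assuming the `lltstat -nvv` output has been captured to a file (the sample lines below mirror this thread; the filename lltstat.out is hypothetical):

```shell
# Sample capture mirroring the per-link status lines in this thread;
# in practice you would redirect 'lltstat -nvv' into the file instead.
cat > lltstat.out <<'EOF'
eth3 UP 52:54:00:BC:9B:E5
eth2 UP 52:54:00:31:FB:31
bond0 DOWN
EOF
# On the per-link lines the link name is column 1 and the status is column 2.
awk '$2 == "DOWN" { print "link " $1 " is DOWN" }' lltstat.out
```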
Do you know if I have configured something wrong?
Is this a valid configuration?
Thanks,
Javier
10-27-2015 08:38 PM
Hi Javier,
What is the version of VCS you have on these systems? LLT bond support was introduced in 5.0MP3.
Did you add this dynamically, or did you reboot (or unload/reload) the LLT modules after the configuration change?
From the technote below, it appears that you have edited the llttab file correctly; however, were the LLT modules reloaded after the config was changed?
https://www.veritas.com/support/en_US/article.TECH62995
G
10-27-2015 09:07 PM
In addition, as mentioned in the technote provided by Gaurav, can you confirm that the bonded NICs on all the cluster hosts are connected to the same switch?
Please refer to the Note section in TECHNOTE 000035027.
You can configure NIC bonds (aggregated interfaces) as private links under LLT. LLT treats each aggregated interface as a single link, so you must configure the NICs that form the bond in such a way that they are connected to the same switch or hub.
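To check which physical ports sit behind a bond (and therefore which cabling must share a switch), the bonding driver's /proc entry lists the slaves. A sketch, parsing a sample modeled on the /proc/net/bonding/bond0 output shown earlier in this thread (on a live node you would read the /proc file directly instead of the sample file):

```shell
# Sample mirroring /proc/net/bonding/bond0 from this thread.
cat > bond0.sample <<'EOF'
Bonding Mode: fault-tolerance (active-backup)
Currently Active Slave: eth0
Slave Interface: eth0
EOF
# Print the active slave and every slave interface behind the bond.
grep -E '^(Slave Interface|Currently Active Slave)' bond0.sample
```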
Regards,
Sudhir
10-28-2015 02:40 AM
Hi,
My VCS version is the following:
[root@node1 litp-admin]# haclus -value EngineVersion
6.1.10.000
I did both. First I used the following commands to set up the LLT network:
lltconfig -u bond0
lltconfig -t bond0 -d bond0 -l
And then I modified the /etc/llttab files and restarted the nodes.
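For reference, the combined sequence looks like the sketch below, with the dynamic change plus the persistent llttab edit (to be run on each node; requires root and a running LLT, so it is shown here only as an outline using the commands from this thread):

```shell
lltconfig -u bond0               # drop the existing bond0 link, if configured
lltconfig -t bond0 -d bond0 -l   # re-add bond0 as a low-priority link (-l)
# then make the change persistent in /etc/llttab:
#   link-lowpri bond0 bond0 - ether - -
lltstat -nvv | head              # verify the link status on every node
```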
After this, the interface still shows as DOWN on the other node.
sudhirh, in this case I only have one Ethernet interface attached to the bond, so I think it should not be a problem with the switch.
I have found that if I unlink the bond from the bridge:
brctl delif br0 bond0
Then the interfaces are up:
[root@node2 ]# lltstat -nvv|head
LLT node information:
Node State Link Status Address
0 node1 OPEN
eth3 UP 52:54:00:BC:9B:E5
eth2 UP 52:54:00:31:FB:31
bond0 UP 52:54:00:2E:6D:23
* 1 node2 OPEN
eth3 UP 52:54:00:C3:A0:55
eth2 UP 52:54:00:35:F6:A5
bond0 UP 52:54:00:14:13:21

[root@node1 ]# lltstat -nvv|head
LLT node information:
Node State Link Status Address
* 0 node1 OPEN
eth3 UP 52:54:00:BC:9B:E5
eth2 UP 52:54:00:31:FB:31
bond0 UP 52:54:00:2E:6D:23
1 node2 OPEN
eth3 UP 52:54:00:C3:A0:55
eth2 UP 52:54:00:35:F6:A5
bond0 UP 52:54:00:14:13:21
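For anyone hitting the same symptom, a quick way to confirm whether bond0 is still enslaved to the bridge is to look for the sysfs 'brport' directory, which exists only while an interface is a bridge port. A small sketch (the helper function name is mine, not a standard tool):

```shell
# Returns success if the named interface is currently a bridge port.
is_bridge_port() {
    [ -d "/sys/class/net/$1/brport" ]
}

if is_bridge_port bond0; then
    echo "bond0 is a bridge port"
else
    echo "bond0 is not a bridge port"
fi
```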
01-07-2016 01:01 AM
Hi,
According to technotes https://www.veritas.com/support/en_US/article.000035027
and https://www.veritas.com/support/en_US/article.000015218,
using a bond as an LLT link is supported.
but:
...
If you have modified llttab and bond0 is still in the DOWN state, you can further check the system log.
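Checking the system log might look like the following sketch (RHEL 6 logs kernel and daemon messages to /var/log/messages; the path and the exact LLT message text can differ by distribution and VCS version, so treat this as an outline):

```shell
# Look for recent LLT link-state messages in the system log.
grep -i 'llt' /var/log/messages | tail -n 20
# Kernel-side messages may also appear in the ring buffer:
dmesg | grep -i 'llt' | tail -n 20
```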