05-02-2011 08:55 AM
I have two Solaris systems running VCS 5.1 MP1 on Solaris 10u9. vxfencing is configured, but when I test a network failure, both systems panic.
What could be the reason for this? Is there anything I'm doing wrong?
Following is the output of the fencing configuration:
node1# cat /etc/vxfentab
05-03-2011 06:20 AM
Have you run the same tests on your fencing disks?
/opt/VRTSvcs/vxfen/bin/vxfentsthdw -r -g oddfendg
If that also completes successfully, you might want to double-check DMP support/configuration.
Start by verifying the array settings (see the extract from the Hardware TechNote):
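Not from the thread itself, but a small sketch of how one might loop the non-destructive test over several disk groups and summarise the results. The wrapper function and the disk group names are illustrative; only the vxfentsthdw path and the -r/-g options come from the posts above:

```shell
# Sketch: wrap the non-destructive fencing test so it can be run over
# several disk groups in one pass. The first argument is the tester binary
# (on a real cluster, /opt/VRTSvcs/vxfen/bin/vxfentsthdw); the remaining
# arguments are disk group names. "datadg" below is a made-up example name.
run_fence_tests() {
  tester="$1"; shift
  for dg in "$@"; do
    if "$tester" -r -g "$dg" >/dev/null 2>&1; then
      echo "$dg PASS"
    else
      echo "$dg FAIL"
    fi
  done
}

# On a configured cluster this would be run as (commented out here):
# run_fence_tests /opt/VRTSvcs/vxfen/bin/vxfentsthdw oddfendg datadg
```

Any FAIL here points back at SCSI-3 PR support on the array or the path configuration rather than at VCS itself.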
05-02-2011 11:51 AM
Please post /var/adm/messages for both nodes.
Please post files as attachments.
05-02-2011 10:04 PM
05-02-2011 10:08 PM
Hi Marianne,
I have attached excerpts of the messages files from both systems, covering the moment one NIC between the two nodes fails, jeopardy membership forms, and a network partition follows. Let me know if more details are required.
Thanks.
05-03-2011 02:12 AM
I agree with Gaurav - you need to run vxfentsthdw tests to confirm that all data and fencing disks pass all tests.
This seems to be the problem (both nodes reported similar errors):
VxVM vxdmp V-5-0-0 dmp_pr_do_preempt: failed on path (30/0x140) with status = 0x2 sense
You can run vxfentsthdw on data as well as fencing disks in non-destructive mode, either one disk at a time or on all disks in a disk group (-r for non-destructive, -n to use rsh (drop this option if using ssh), -g for disk group):
/opt/VRTSvcs/vxfen/bin/vxfentsthdw -r -n -g <diskgroup>
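To see how widespread the preempt failures are, the messages files can be grepped for that DMP error. A minimal sketch; the sample log below is fabricated for illustration (only the "V-5-0-0 dmp_pr_do_preempt" text comes from the actual logs quoted above):

```shell
#!/bin/sh
# Sketch: count DMP SCSI-3 preempt failures in a messages file.
# messages.sample stands in for /var/adm/messages; its lines are fabricated.
cat > messages.sample <<'EOF'
May  2 21:55:01 node1 vxdmp: VxVM vxdmp V-5-0-0 dmp_pr_do_preempt: failed on path (30/0x140) with status = 0x2
May  2 21:55:02 node1 genunix: unrelated kernel message
EOF

# Each hit is a path on which the PERSISTENT RESERVE preempt was rejected.
grep -c 'dmp_pr_do_preempt: failed' messages.sample   # prints 1 for this sample
```

Running the same grep against /var/adm/messages on both nodes shows whether the failure is on one path, one node, or everywhere.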
05-03-2011 04:22 AM
05-03-2011 06:30 PM
Agree with Marianne ... get the storage configuration checked again (zoning, masking, etc.). The array should be tuned as per the Symantec requirements in the hardware technotes. I would also recommend a few more basic checks:
- You have pasted file output from one node only; check both nodes to confirm the file contents are the same. Check the following:
/etc/vxfenmode
/etc/vxfendg
/etc/vxfentab ---- (don't go by disk names alone; verify by serial number that the disks are the same. You can use the "vxdmpinq" or "vxscsiinq" commands located in /etc/vx/diag.d for this.) The file should list the same disks as visible from both nodes.
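The serial-number comparison can be sketched as below. The serial numbers and file names are made up for illustration; on the real nodes the two lists would come from running vxdmpinq or vxscsiinq against each disk in /etc/vxfentab:

```shell
#!/bin/sh
# Sketch: confirm both nodes see the same coordinator disks by serial number.
# node1.serials / node2.serials stand in for per-node vxdmpinq output;
# the serial numbers below are fabricated.
printf '600A0B800012B5A1\n600A0B800012B5A2\n600A0B800012B5A3\n' > node1.serials
printf '600A0B800012B5A3\n600A0B800012B5A1\n600A0B800012B5A2\n' > node2.serials

# Disk ordering may differ per node, so sort before comparing.
sort node1.serials > node1.sorted
sort node2.serials > node2.sorted
if cmp -s node1.sorted node2.sorted; then
  echo "coordinator disks match"
else
  echo "MISMATCH: nodes see different disks" >&2
fi
```

A mismatch here (a disk visible from one node but not the other) is exactly the kind of zoning/masking problem that makes fencing misbehave during a network partition.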
G