12-04-2011 12:20 AM
Hi,
I have a two-node cluster, and today I ran a Virtual Fire Drill to check that everything is OK.
I noticed errors such as:
* udid.vfd: Failed to get disk information for disk san_vc0_13.
* udid.vfd: UDIDS for device <san_vc_8 do not match on cluster nodes.
[...]
Anyway, I connected to both servers, and san_vc0_13 doesn't exist on the second node. As for the other errors, the disk names don't match between the nodes.
Looking into this, I found that the first node has two extra temporary disks that have nothing to do with the cluster (which in itself is fine). The problem is that these two disks took the numbers 2 and 3 (san_vc0_2/3), so the disk names are now shifted between the nodes and it looks as if a disk is missing on the second node.
The cluster disk groups are visible to both nodes and are fine (even within the cluster); the only problem is that the names don't match, and the Virtual Fire Drill warns about it.
Is there any way to change the disk names and make them permanent (based on the UDID or something similar)?
Thanks,
12-04-2011 08:47 AM
Let us know the OS & SF versions ....
When you say a disk is missing on the second node, is it missing by name, or do you actually count one disk less in the configuration itself? Are any failures reported in "vxdisk list"?
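Something along these lines from both nodes would help (just the standard listings):

# vxdisk -o alldgs list
# vxdisk list | grep -i error

i.e. the full device listing from each node, plus whether any device is sitting in an error state.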
G
12-04-2011 10:34 AM
Are the 'temporary' disks in the same diskgroup or in a different diskgroup?
If they are in the same diskgroup, the error is justifiable...
12-04-2011 11:08 PM
There are no disks missing on the second node; there are some extra disks attached to the first server (not related to the cluster), and that, I believe, is why the names are out of sync.
I'll explain with examples...
[root@server-01 ~]# vxdisk -o alldgs list
DEVICE       TYPE            DISK           GROUP          STATUS
disk_0       auto:none       -              -              online invalid
disk_1       auto:none       -              -              online invalid
san_vc0_0    auto:cdsdisk    -              (dg_fencing)   online
san_vc0_1    auto:cdsdisk    -              (dg_fencing)   online
san_vc0_2    auto:cdsdisk    -              (dg_fencing)   online
san_vc0_3    auto:cdsdisk    dg_oracle01    dg_oracle      online
san_vc0_4    auto:cdsdisk    dg_test101     dg_test1       online
san_vc0_5    auto:cdsdisk    dg_oradump01   dg_oradump     online
san_vc0_6    auto:cdsdisk    dg_oratmp01    dg_oratmp      online
san_vc0_7    auto:cdsdisk    dg_oraundo01   dg_oraundo     online
san_vc0_8    auto:cdsdisk    dg_oradata101  dg_oradata1    online
san_vc0_9    auto:cdsdisk    dg_oradata201  dg_oradata2    online
san_vc0_10   auto:cdsdisk    dg_oradata301  dg_oradata3    online
san_vc0_11   auto:cdsdisk    dg_oradata401  dg_oradata4    online
san_vc0_12   auto:cdsdisk    dg_oradata501  dg_oradata5    online
san_vc0_13   auto:cdsdisk    dg_oraarch01   dg_oraarch     online
san_vc0_14   auto:cdsdisk    -              -              online
san_vc0_15   auto:cdsdisk    dg_temp101     dg_temp1       online
[root@server-02 ~]# vxdisk -o alldgs list
DEVICE       TYPE            DISK   GROUP           STATUS
disk_0       auto:none       -      -               online invalid
disk_1       auto            -      -               error
san_vc0_0    auto:cdsdisk    -      (dg_fencing)    online
san_vc0_1    auto:cdsdisk    -      (dg_fencing)    online
san_vc0_2    auto:cdsdisk    -      (dg_fencing)    online
san_vc0_3    auto:cdsdisk    -      (dg_oracle)     online
san_vc0_4    auto:cdsdisk    -      (dg_oradump)    online
san_vc0_5    auto:cdsdisk    -      (dg_oratmp)     online
san_vc0_6    auto:cdsdisk    -      (dg_oraundo)    online
san_vc0_7    auto:cdsdisk    -      (dg_oradata1)   online
san_vc0_8    auto:cdsdisk    -      (dg_oradata2)   online
san_vc0_9    auto:cdsdisk    -      (dg_oradata3)   online
san_vc0_10   auto:cdsdisk    -      (dg_oradata4)   online
san_vc0_11   auto:cdsdisk    -      (dg_oradata5)   online
san_vc0_12   auto:cdsdisk    -      (dg_oraarch)    online
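For reference, the UDID that the fire drill compares shows up in the long disk listing, so the mismatch can be checked by hand. For example, dg_oradata1 sits on san_vc0_8 on the first node but on san_vc0_7 on the second, so something like:

[root@server-01 ~]# vxdisk list san_vc0_8 | grep -i udid
[root@server-02 ~]# vxdisk list san_vc0_7 | grep -i udid

should return the same udid line for the same LUN, while comparing san_vc0_8 on both nodes does not.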
12-05-2011 08:51 PM
Please upgrade your SF/HA - lots of enhancements/fixes were included in newer versions (including Fire Drill).
Version 4.x reached EOSL earlier this year. See this: https://www-secure.symantec.com/connect/forums/heads-technical-support-storage-foundation-4x-ends-july-31-2011-forum-copy
12-05-2011 11:27 PM
Sorry, my bad... I'm using version 5.1 and not 4.0 (what I saw was VRTSsfmh and not Storage Foundation).
By the way, according to the article you linked, Linux is still supported.
12-06-2011 05:42 AM
Both systems' information:
Red Hat Enterprise Linux 5.6 x86_64
Veritas Storage Foundation High Availability 5.1SP1
12-06-2011 06:11 AM
I can see the disk names are inconsistent across the nodes, as the Server 1 and Server 2 listings above show.
The device naming scheme is what controls this. From the vxddladm documentation on set namingscheme:

set namingscheme
Bases the name of a device on the enclosure name (ebn), or on the device name that is used by the operating system (osn). The change is immediate, and does not require vxconfigd to be restarted.
For TPD devices, the effect of the device naming scheme also depends on the setting of the enclosure-specific attribute tpdmode. If tpdmode is set to pseudo, the TPD naming is used regardless of the device naming scheme. To use enclosure-based naming for TPD devices, set tpdmode to native for the enclosure. Set the tpdmode attribute with the vxdmpadm setattr command. See the vxdmpadm(1M) man page for details.
The following options can be set: persistence, lowercase, and use_avid.
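For reference, a minimal sketch of how that would be applied (syntax as documented for VxVM 5.1; please verify against the vxddladm(1M) man page on your release before running it):

# vxddladm set namingscheme=ebn persistence=yes use_avid=yes

This switches to enclosure-based naming, keeps the names persistent across reboots, and uses the Array Volume ID (AVID) in the device name where the array provides one. It would be run on both nodes, after which "vxdisk -o alldgs list" can be compared again.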
12-06-2011 10:38 AM
Gaurav's information is spot on. Specifically, the feature that makes disk names match across cluster nodes relies on the USE_AVID functionality of the device-naming subsystem.
Unfortunately the SANVC devices do not present an AVID value, so Volume Manager generates the device numbers on an as-discovered basis for this array.
This means you cannot expect device names to match across cluster nodes when using the SANVC.
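A quick way to confirm what the naming layer is doing on each node (a sketch only; both commands are standard in 5.1, though the exact output layout may differ by patch level):

# vxddladm get namingscheme
# vxdmpadm listenclosure all

The first shows whether enclosure-based naming, persistence and use_avid are in effect; the second lists the san_vc0 enclosure and its array type. With no Array Volume ID coming from the array, the index in san_vc0_<n> simply follows discovery order on each node, which is why the two nodes end up with different numbers for the same LUN.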
12-11-2011 01:48 AM
Thank you Gaurav, this was it!