Disk names do not match within two-node cluster

tzabary
Level 3

Hi,

 

I have a two-node cluster, and today I ran a Virtual Fire Drill to check that everything is OK.

I've noticed I'm getting errors such as:

* udid.vfd: Failed to get disk information for disk san_vc0_13.

* udid.vfd: UDIDS for device <san_vc_8 do not match on cluster nodes.

[...]

 

Anyway, I've connected to both servers; san_vc0_13 doesn't exist on the second node.

As for the other errors, the disk names do not match.

 

Reviewing this case, I've seen that the first node has two additional temporary disks that have nothing to do with the cluster (this situation is OK).

The problem is that these two disks took numbers 2 and 3 (san_vc0_2/3), so now the disk names are mismatched and it looks as if I'm missing a disk on the second node.

The cluster disk groups are visible to both nodes and are OK (even within the cluster); the only problem is that the names don't match and the Virtual Fire Drill warns about it.

Is there any way to change the disk names and make them permanent (by UDID or similar)?

 

Thanks,


10 REPLIES

Gaurav_S
Moderator
   VIP    Certified

Let us know the OS & SF version...

When you say a disk is missing on the second node, is the disk missing by its name, or can you actually count one less disk in the configuration itself? Are any failures reported in "vxdisk list"?

 

G

Marianne
Moderator
Partner    VIP    Accredited Certified

Are the 'temporary' disks in the same diskgroup or different diskgroup?

If they are in the same diskgroup, the error is justifiable...

tzabary
Level 3

There are no missing disks on the second node; there are some other disks attached to the first server (not related to the cluster), which (I believe) is why the names are messed up.

I'll explain with examples...

 

[root@server-01 ~]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
disk_1 auto:none - - online invalid
san_vc0_0 auto:cdsdisk - (dg_fencing) online
san_vc0_1 auto:cdsdisk - (dg_fencing) online
san_vc0_2 auto:cdsdisk - (dg_fencing) online
san_vc0_3 auto:cdsdisk dg_oracle01 dg_oracle online
san_vc0_4 auto:cdsdisk dg_test101 dg_test1 online
san_vc0_5 auto:cdsdisk dg_oradump01 dg_oradump online
san_vc0_6 auto:cdsdisk dg_oratmp01 dg_oratmp online
san_vc0_7 auto:cdsdisk dg_oraundo01 dg_oraundo online
san_vc0_8 auto:cdsdisk dg_oradata101 dg_oradata1 online
san_vc0_9 auto:cdsdisk dg_oradata201 dg_oradata2 online
san_vc0_10 auto:cdsdisk dg_oradata301 dg_oradata3 online
san_vc0_11 auto:cdsdisk dg_oradata401 dg_oradata4 online
san_vc0_12 auto:cdsdisk dg_oradata501 dg_oradata5 online
san_vc0_13 auto:cdsdisk dg_oraarch01 dg_oraarch online
san_vc0_14 auto:cdsdisk - - online
san_vc0_15 auto:cdsdisk dg_temp101 dg_temp1 online
 
and server 2...
 
[root@server-02 ~]# vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
disk_0 auto:none - - online invalid
disk_1 auto - - error
san_vc0_0 auto:cdsdisk - (dg_fencing) online
san_vc0_1 auto:cdsdisk - (dg_fencing) online
san_vc0_2 auto:cdsdisk - (dg_fencing) online
san_vc0_3 auto:cdsdisk - (dg_oracle) online
san_vc0_4 auto:cdsdisk - (dg_oradump) online
san_vc0_5 auto:cdsdisk - (dg_oratmp) online
san_vc0_6 auto:cdsdisk - (dg_oraundo) online
san_vc0_7 auto:cdsdisk - (dg_oradata1) online
san_vc0_8 auto:cdsdisk - (dg_oradata2) online
san_vc0_9 auto:cdsdisk - (dg_oradata3) online
san_vc0_10 auto:cdsdisk - (dg_oradata4) online
san_vc0_11 auto:cdsdisk - (dg_oradata5) online
san_vc0_12 auto:cdsdisk - (dg_oraarch) online
 
All of the cluster-related disks are dg_ora*.
dg_fencing is used for fencing, and disk_0/1 are internal.
 
On the first server, dg_test, dg_temp and disk 14 are used for some testing/benchmarking.
Storage Foundation version is 4.0.1.

Marianne
Moderator
Partner    VIP    Accredited Certified

Please upgrade your SF/HA - lots of enhancements/fixes were included in newer versions (including Fire Drill).

Version 4.x reached EOSL earlier this year. See this: https://www-secure.symantec.com/connect/forums/heads-technical-support-storage-foundation-4x-ends-july-31-2011-forum-copy

tzabary
Level 3

Sorry, my bad... I'm using version 5.1 and not 4.0 (the 4.0.1 was from VRTSsfmh, not Storage Foundation).

By the way, the article you linked says Linux is still supported.

Marianne
Moderator
Partner    VIP    Accredited Certified

You never mentioned your OS....

Patch version on 5.1?

tzabary
Level 3

Information for both systems:

Red Hat Enterprise Linux 5.6 x86_64

Veritas Storage Foundation High Availability 5.1SP1

Gaurav_S
Moderator
   VIP    Certified

I can see the disk names are inconsistent across the nodes, for example:

Server 1

san_vc0_4 auto:cdsdisk dg_test101 dg_test1 online
san_vc0_5 auto:cdsdisk dg_oradump01 dg_oradump online
 
while server 2
 
san_vc0_4 auto:cdsdisk - (dg_oradump) online
san_vc0_5 auto:cdsdisk - (dg_oratmp) online
 
Also, server 1 shows more disks than server 2; however, considering your statement that "in the first server, dg_test, dg_temp and disk 14 are used for some testing/benchmarking...", the number of disks that matter is equal on the two nodes, but the names are not consistent across them.
 
To make the naming scheme consistent across the nodes, you need to use the vxddladm command.
 
Man page: https://sort.symantec.com/public/documents/sfha/5.1sp1/linux/manualpages/html/man/volume_manager/html/man1m/vxddladm.1m.html
 
There is an option with vxddladm:
 
vxddladm set namingscheme={ebn|osn} [persistence={yes|no}]
    [lowercase={yes|no}] [use_avid={yes|no}]
 
In the above command, if you use "persistence=yes" and "use_avid=yes", it should assign names based on the Array Volume ID, which would be consistent across the nodes.
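
As a rough sketch of how that could look (run it on each node; not tested on your setup, and it assumes "vxddladm get namingscheme" is available on 5.1SP1 for checking the result):

# switch to enclosure-based naming with persistent, AVID-based device names
vxddladm set namingscheme=ebn persistence=yes use_avid=yes

# confirm which naming settings are now in effect (assumed available on 5.1SP1)
vxddladm get namingscheme

# re-check the device names afterwards
vxdisk -o alldgs list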
 
Some more notes on using this option that may help:
 
set namingscheme
  Bases the name of a device on the enclosure name (ebn), or on the device name that is used by the operating system (osn). The change is immediate, and does not require vxconfigd to be restarted.
  For TPD devices, the effect of the device naming scheme also depends on the setting of the enclosure-specific attribute tpdmode. If the tpdmode is set to pseudo, the TPD naming is used, regardless of the device naming scheme. To use enclosure-based naming for TPD devices, set the tpdmode to native for the enclosure. Set the tpdmode attribute with the vxdmpadm setattr command. See the vxdmpadm(1) man page for details.
  The following options can be set:

persistence
  Specifies whether the names of disk devices that are displayed by VxVM remain unchanged after disk hardware has been reconfigured and/or the system rebooted.
  If persistence is on, the DDL assigns device names from the persistent device name database, rather than generating new names according to the OSN or EBN naming scheme.
  If the naming scheme is OSN, name persistence is off by default. The generated names are not likely to differ from the names in the persistent name database, unless a change causes the OS to assign a new path name for a device.
  If the naming scheme is EBN, name persistence is on by default. Certain configuration changes on the array side could cause the generated name to be different from the name in the persistent name database. When name persistence is on, the name from the persistent names repository is used for the DMP meta-device, unless the user changes it.

lowercase
  By default, the names of the enclosure are converted to lowercase, regardless of the case of the name specified by the ASL. The EBN names are therefore in lowercase. Use the lowercase=no option to suppress the conversion to lowercase.

use_avid
  For the EBN scheme, indicates that the Array Volume ID (AVID) is used together with the enclosure name for the disk device name. The disk devices are named as enclosure_avid. The default value is yes. If use_avid is set to no, the devices are named as enclosure_name_index_of_device, where the index is obtained by sorting the DMP devices based on the LUN_SERIAL_NO.
 
hope this helps
 
Gaurav

NathanBuck_Symc
Level 2
Employee Certified

Gaurav's information is spot on; specifically, the feature that would make disk names match across cluster nodes relies on the use_avid functionality of the device naming subsystem.

Unfortunately, the SANVC devices do not present an AVID value, so Volume Manager generates the device numbers on an as-discovered basis for this array.

This means that you cannot expect device names to match across cluster nodes when using the SANVC.
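
If it helps to manually map devices between the two nodes despite the differing names, comparing UDIDs is one rough way to do it. This is only a sketch and assumes that "vxdisk list <device>" prints a "udid:" line on this release; adjust the grep pattern if the field name differs:

# on each node, print every device name together with its UDID (skipping the header row)
for d in $(vxdisk list | awk 'NR>1 {print $1}'); do
    echo "== $d =="
    vxdisk list "$d" | grep -i '^udid'
done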

 

tzabary
Level 3

Thank you Gaurav, this was it!