
Enclosure Based consistent naming, or OS Native?

Gotham_Gerry
Level 4

Using VxVM 5.0 on Solaris. Intend to upgrade soon. Currently all our servers are using OS Native disk naming:

#> vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID      
============================================================
OS Native           No             Yes            Yes

Despite this, the disk names we get are something like "emc0_1234", or "disk_12" depending on what array the server is attached to on the SAN.  I assume this is because VxVM will not use the very long WWN disk names that the OS uses. 

Problem is, sometimes the servers with the generic "disk_XX" Veritas disk names get all their disks renumbered due to disks being removed from VxVM and from the server.  When this happens, the disk groups get all mixed up, and we have to rebuild them from backups. As much fun as it is to rebuild the disk groups, I should probably prevent this from happening again if I can. I think that using persistent enclosure based names will resolve this.

Any reason I should not run  'vxddladm set namingscheme=ebn' on all our servers? 

If I remove a disk from a disk group, then remove it from VxVM and finally from the server, will the /etc/vx/disk.info file update? 
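For reference, the removal sequence I have in mind looks roughly like this (mydg/mydg01 are placeholder names; disk_12 is one of the generic Veritas names I mentioned, and vxdiskunsetup lives under /etc/vx/bin on our systems):

#> vxdg -g mydg rmdisk mydg01            # remove the disk from the disk group
#> /etc/vx/bin/vxdiskunsetup disk_12     # deinitialize the disk (optional if the LUN is being destroyed anyway)
#> vxdisk rm disk_12                     # drop it from the VxVM device list
#> devfsadm -Cv                          # clean up the Solaris device tree after unmapping the LUN
#> vxdctl enable                         # rescan, which is when I would expect disk.info to be refreshed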

Thanks. 

9 REPLIES

Gaurav_S
Moderator
VIP Certified

Hi Gotham,

In my opinion, the basic benefit of enclosure based naming is easy identification of disks. There can be multiple arrays attached to a server, and with OSN implemented it becomes very hard to identify which c#t#d# device comes from which array. EBN exists to ease that.

As per 5.0 Guide, here is what EBN does while naming devices

Enclosure-based naming

Enclosure-based naming operates as follows:

  • Devices with very long device names (for example, Fibre Channel devices that include worldwide name (WWN) identifiers) are always represented by enclosure-based names.
  • All fabric or non-fabric disks in supported disk arrays are named using the enclosure_name_# format. For example, disks in the supported disk array, enggdept are named enggdept_0, enggdept_1, enggdept_2 and so on. You can use the vxdmpadm command to administer enclosure names.

    See "Administering DMP using vxdmpadm" on page 150.

    See the vxdmpadm(1M) manual page.

  • Disks in the DISKS category (JBOD disks) are named using the Disk_# format.
  • Disks in the OTHER_DISKS category (disks that are not multipathed by DMP) are named as follows:
    • Non-fabric disks are named using the c#t#d#s# format.
    • Fabric disks are named using the fabric_# format.
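To give an example of the vxdmpadm administration the guide refers to, enclosure names can be listed and renamed roughly like this (a sketch; emc0 is the enclosure name from your output and prodarray1 is just an example new name):

#> vxdmpadm listenclosure all                       # show enclosures, array types & serial numbers
#> vxdmpadm setattr enclosure emc0 name=prodarray1  # rename the enclosure; disk names follow the new prefix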

 

Now coming to the concern about removing devices from servers: ideally, if a disk is removed from a diskgroup and then ultimately removed from the server itself, it shouldn't impact the numbering of the remaining disks, and with OSN it shouldn't change the c#t#d# numbers. The c#t#d# numbers are set in VxVM based on the order in which the disks were scanned by the operating system. If you implement EBN in this case, EBN will also have a numbering which, I suspect, may change as well if the device order changes. Also, to my knowledge, disk.info is updated at the time of every rescan of disks. However, up to 5.0 we had many issues where the device tree had to be rebuilt; one of the steps of a device tree rebuild is to move disk.info to disk.info.old, and once we restart the vxconfigd daemon the disk.info file is regenerated.
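The regeneration step I am referring to is roughly this (a sketch of the manual procedure; vxconfigd -k restarts the daemon and is disruptive, so do it in a maintenance window):

#> mv /etc/vx/disk.info /etc/vx/disk.info.old
#> vxconfigd -k        # kill & restart vxconfigd; disk.info is regenerated on startup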

I would suggest double-checking at the OS level whether the device order changes once you remove disks from the server. If device major or minor numbers are changing, that may change the order at the VxVM level as well.
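A simple way to check is to snapshot the major/minor numbers before the removal and compare afterwards (c1t0d0s2 is only a placeholder device):

#> ls -lL /dev/rdsk/c1t0d0s2                         # the two numbers before the date are major, minor
#> ls -lL /dev/rdsk/*s2 > /var/tmp/minors.before     # snapshot all disks; diff against a second snapshot after the removal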

Upgrading to 6.x, or at least to 5.0MP3, is a recommendation I would add very strongly.

 

G

Hari_Krishna_V
Level 3
Employee

The "Enclosure based naming" is found to be more useful to OS native naming scheme due to the following reasons:

  1. The user can distinguish the physical enclosure from which the LUN is provisioned, allowing for better decision making while provisioning or using storage.
  2. With the patented enhancement introduced in 5.0MP3, called 'array volume id (avid)', enclosure based naming uses the same index value as the storage array to generate the index in the name. This has the advantage that the name is unique: two LUNs from the same storage array cannot have the same name, so there is no issue with re-ordering, and the names are consistent across nodes in a cluster (the switch itself is sketched after this list).
  3. If the array does not provide the avid value, we employ persistent naming and generate the index by sorting the LUN attributes, which to a reasonable extent provides cluster-wide naming consistency and prevents re-ordering of names.
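The switch itself is a single command (a sketch using the options documented for 5.0MP3; verify with the get subcommand afterwards):

#> vxddladm set namingscheme=ebn persistence=yes use_avid=yes
#> vxddladm get namingscheme      # should now report Enclosure Based with persistence enabled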

I strongly recommend switching to the new default EBN naming scheme and experiencing the value it brings by making storage management easier. The article http://www.symantec.com/connect/articles/vxdmp-storage-device-manager gives more details.

But if you still want to use OSN scheme, you can switch back by running the vxddladm CLI you mentioned above.

 

Clifford_Barcli
Level 3
Employee Accredited Certified

Regarding native device names:

When using DMP and VxVM, one should never have to worry about a server re-ordering the native names, as Symantec stores the DG info on every disk.  As long as the LUN from the array has not been changed by the storage team, then no matter how the LUN is presented (path changes due to new LUNs added, path changes due to HBA cards being moved, path changes due to zone reconfig, etc.), the DGs should always be visible.  Once a LUN is initialized in VxVM, the /dev/dsk/cXtYdZ names no longer matter.  When using enclosure or AVID naming, we also make sure the name is persistent across servers, regardless of O/S.

Only when the label has been corrupted should one need to restore the label via vx commands, or when a disk is missing from a DG and the DG must be modified.  Otherwise, an O/S scan of the buses should always return the appropriate DG names regardless of the native name.

 

Personally, I like ENC based names as they make things very apparent when having to migrate storage, or when sorting large reports.

Hope this helps.

Cheers

Gotham_Gerry
Level 4

Thank you for the response.  Just to be clear -- I apparently do not have enclosure based naming by default, but I get enclosure based names anyway, because that is what VxVM gives me when the native names are very long & contain the disk WWN.  When I remove disks from the server, the native OS names do not renumber, it is the Veritas "disk_01" names that sometimes change.  We've had this happen several times, and it does cause the disk groups to go into an error state, or fail to import.  So I guess it is not really enclosure based names that I need (since I already have them), but persistent vx disk names.    

Gotham_Gerry
Level 4

Thank you, this was helpful.  I wondered what "USE_AVID" was, and now I know.  Thing is, we have the "USE_AVID" flag set, and it did not help us:

 #> vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID      
============================================================
OS Native           No             Yes            Yes  

Our vx disk names still got renumbered, and the disk groups still became confused and would not import.  I am using VxVM 5.0 MP3.  There is no array support library for our EMC Invista disks; does this make AVID useless?
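For what it's worth, these are the commands I used to check which ASL, if any, claims the Invista LUNs (commands only; I have left out our output):

#> vxddladm listsupport all        # list the array support libraries installed on the host
#> vxdmpadm listenclosure all      # show which enclosure / array type each set of LUNs is claimed as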

My intention is to set the naming scheme to EBN, just to get persistence.  (We already have "disk_XX" names despite our naming scheme.)

Please note that we bypass VxDMP by using Solaris MPxIO, so it appears to VxVM that we only have one path to each SAN disk.  I hope this does not matter.

Clifford_Barcli
Level 3
Employee Accredited Certified

Hi Gotham.

Your VxVM names should always be persistent when using both DMP and VxVM, with or without EBN.

 

Without DMP, but using VxVM/VxFS, your VxVM objects should always be consistent.   Your O/S names may change.  But since we store all DG information on each disk that is part of a disk group, one just needs to scan to find all the participants in a DG, then import, activate (if needed) and mount; see the sketch below.  You should never have to "restore" anything.  If that is the case, then you should open a support case.
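In practice that sequence is roughly as follows (a sketch only; mydg and vol01 are placeholder names):

#> vxdctl enable                     # rescan devices and rebuild the VxVM device list
#> vxdisk -o alldgs list             # confirm all disks and their disk groups are visible
#> vxdg import mydg                  # import the disk group
#> vxvol -g mydg startall            # start (activate) the volumes
#> mount -F vxfs /dev/vx/dsk/mydg/vol01 /mnt/vol01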

 

Hope that helps.

 

Cheers

Gotham_Gerry
Level 4

Thanks. Even though we have it set to use OSN, we get EBN because the disk names contain the WWN and are therefore too long for Veritas.  It is these "disk_XX" names that get renumbered and cause the disk groups not to import.  When I pull disks from the OS, I do not believe that the disk minor numbers all get renumbered, because I currently see gaps in the numbers.  Rebooting or running 'devfsadm -Cv' does not change existing device file minor numbers, as far as I can tell.

Gotham_Gerry
Level 4

Thanks. Yes, I did open a support case, and the disk groups did need to be rebuilt, because they would not import by any means.  This problem is reproducible.  What happens is that the vx disk names change, and 'vxprint' shows disk media names ("disk_XX") that no longer exist and are not shown in 'vxdisk list'.  I understand that the disk private regions maintain disk group membership for each disk, but what is missing is a valid configuration copy for the disk group.

Hari_Krishna_V
Level 3
Employee

The reason you see long names is that you are using MPxIO for multipathing instead of VxDMP. You may consider running 'stmsboot -d' to disable MPxIO and verify that the names are more manageable when VxDMP is doing the multipathing.
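The basic switch-over would be something like this (a sketch; stmsboot -d requires a reboot and changes the OS device paths, so plan for /etc/vfstab and application updates beforehand):

#> stmsboot -d        # disable MPxIO on the fibre-channel controller ports; prompts for a reboot
#> vxdctl enable      # after the reboot, let VxVM rescan so DMP claims the individual paths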

Also, use_avid is effective only when the naming scheme is EBN (enclosure based).