01-16-2014 09:47 AM
Using VxVM 5.0 on Solaris. Intend to upgrade soon. Currently all our servers are using OS Native disk naming:
#> vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID
============================================================
OS Native           No             Yes            Yes
Despite this, the disk names we get are something like "emc0_1234", or "disk_12" depending on what array the server is attached to on the SAN. I assume this is because VxVM will not use the very long WWN disk names that the OS uses.
Problem is, sometimes the servers with the generic "disk_XX" Veritas disk names get all their disks renumbered due to disks being removed from VxVM and from the server. When this happens, the disk groups get all mixed up, and we have to rebuild them from backups. As much fun as it is to rebuild the disk groups, I should probably prevent this from happening again if I can. I think that using persistent enclosure based names will resolve this.
Any reason I should not run 'vxddladm set namingscheme=ebn' on all our servers?
If I remove a disk from a disk group, then remove it from VxVM and finally from the server, will the /etc/vx/disk.info file update?
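For reference, the change I have in mind is along these lines (a sketch, assuming 5.x vxddladm syntax; the guard keeps it inert on hosts without VxVM installed):

```shell
# Sketch: switch to persistent enclosure-based naming (VxVM 5.x syntax
# assumed). Guarded so the VxVM commands are only run where they exist.
if command -v vxddladm >/dev/null 2>&1; then
    vxddladm set namingscheme=ebn persistence=yes   # persistent EBN
    vxddladm get namingscheme                       # PERSISTENCE should now read Yes
    vxdisk scandisks                                # rescan; /etc/vx/disk.info is rewritten
else
    echo "VxVM not installed; commands shown are illustrative only"
fi
```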
Thanks.
01-16-2014 06:46 PM
Hi Gotham,
In my opinion, the basic benefit of enclosure based naming is easy identification of disks. There may be multiple arrays attached to a server, and with OSN implemented it becomes very hard to identify which c#t#d# device is coming from which array. EBN comes into the picture to ease this.
As per the 5.0 Guide, here is what EBN does when naming devices.
Enclosure-based naming operates as follows:
- Disks in supported disk arrays are named using the enclosure_# format. For example, disks in the supported disk array enggdept are named enggdept_0, enggdept_1, enggdept_2, and so on. You can use the vxdmpadm command to administer enclosure names.
- Disks in the DISKS category (JBOD disks) are named using the Disk_# format.
- Disks in the OTHER_DISKS category (disks that are not multipathed by DMP) are named using their native c#t#d#s# format.
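As a toy illustration of the enclosure_# pattern described above (the enclosure name enggdept is the guide's example; this just prints the names EBN would assign, it does not touch VxVM):

```shell
# Print the device names the enclosure_# scheme would produce for the
# first three disks of an enclosure named "enggdept" (guide's example).
enclosure=enggdept
i=0
while [ "$i" -lt 3 ]; do
    echo "${enclosure}_${i}"
    i=$((i + 1))
done
```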
Now, coming to the concern of removing devices from servers: ideally, if a disk is removed from a diskgroup and then ultimately removed from the server itself, it shouldn't impact the numbering of the remaining disks; if you are using OSN, it shouldn't change the c#t#d# numbers. The c#t#d# numbers are set in VxVM depending on the order in which the disks were scanned by the operating system. If you implement EBN in this case, EBN will also have a numbering, which I suspect may change as well if the device order is changing. Also, to my knowledge, disk.info is updated at every rescan of disks. However, up to 5.0 we had many issues where the device tree needed to be rebuilt; one of the steps of a device tree rebuild is to move disk.info to disk.info.old, and once the vxconfigd daemon is restarted the disk.info file is regenerated.
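The rebuild steps mentioned above would look roughly like this (a dry-run sketch; the paths are the ones from the post, and with DRYRUN set the commands are only printed, since actually running them requires root on a VxVM host and ideally a support case):

```shell
# Dry-run sketch of the device tree rebuild steps. With DRYRUN set, each
# command is printed instead of executed; unset DRYRUN to actually run them.
run() { ${DRYRUN:+echo would run:} "$@"; }
DRYRUN=1
run mv /etc/vx/disk.info /etc/vx/disk.info.old
run vxconfigd -k    # restarting vxconfigd regenerates /etc/vx/disk.info
```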
I would suggest double-checking at the OS level whether the device order changes once you remove disks from the server. If the device major or minor numbers change, this may change the order at the VxVM level as well.
Upgrading to 6.x, or at least to 5.0 MP3, is a recommendation I would add very strongly.
G
01-16-2014 09:15 PM
Enclosure based naming is generally found to be more useful than the OS native naming scheme.
I strongly recommend switching to the new default EBN naming scheme and experiencing the value it brings by making storage management easy. The article http://www.symantec.com/connect/articles/vxdmp-storage-device-manager gives more details.
But if you still want to use OSN scheme, you can switch back by running the vxddladm CLI you mentioned above.
01-17-2014 08:56 AM
Regarding native device names:
When using DMP and VxVM, one should never have to worry about a server re-ordering the native names, as Symantec stores the DG info on every disk. As long as the LUN itself has not been changed by the storage team, the DGs should always be visible no matter how the LUN is presented (path changes due to new LUNs added, HBA cards moved, zone reconfiguration, etc.). Once a disk is initialized for VxVM, the /dev/dsk/cXtYdZ names no longer matter. When using enclosure or AVID naming, we also make sure the name is persistent across servers, regardless of O/S.
Only when the label has been corrupted should one need to restore the label via vx commands, or when a disk is missing in a DG and the DG must be modified. Otherwise, an O/S scan of the buses should always return the appropriate DG names regardless of the native name.
Personally, I like enclosure based names, as they make it very apparent what is what when migrating storage, or when sorting large reports.
Hope this helps.
Cheers
01-22-2014 08:12 AM
Thank you for the response. Just to be clear -- I apparently do not have enclosure based naming by default, but I get enclosure based names anyway, because that is what VxVM gives me when the native names are very long and contain the disk WWN. When I remove disks from the server, the native OS names do not renumber; it is the Veritas "disk_01"-style names that sometimes change. We've had this happen several times, and it does cause the disk groups to go into an error state, or fail to import. So I guess it is not really enclosure based names that I need (since I already have them), but persistent vx disk names.
01-22-2014 08:22 AM
Thank you, this was helpful. I wondered what "USE_AVID" was, and now I know. Thing is, we have the "USE_AVID" flag set, and it did not help us:
#> vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID
============================================================
OS Native           No             Yes            Yes
Our vx disk names still got renumbered, and the disk groups still became confused and would not import. I am using VxVM 5.0 MP3. There is no Array Support Library (ASL) for our EMC Invista disks; does this make AVID useless?
My intention is to set the naming scheme to EBN, just to get persistence. (We already have "disk_XX" names despite our naming scheme.)
Please note that we bypass VxDMP by using Solaris MPxIO, so VxVM sees only one path to each SAN disk. I hope this does not matter.
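For what it's worth, the status output above can also be checked programmatically; a minimal sketch, parsing a captured copy of the output so it runs even without VxVM installed ("OS Native" splits into two awk fields, so the data row has five fields):

```shell
# Parse 'vxddladm get namingscheme' output. The sample is captured from
# the post above; the data row is line 3, and USE_AVID is the last field.
sample='NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID
============================================================
OS Native           No             Yes            Yes'
printf '%s\n' "$sample" | awk 'NR == 3 { print "PERSISTENCE =", $3, "/ USE_AVID =", $NF }'
```

On a live host one would pipe the real command into the same awk instead of the captured sample.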
01-22-2014 08:24 AM
Hi Gotham.
Your VxVM names should always be persistent when using both DMP and VxVM, with or without EBN.
Without DMP, but using VxVM/VxFS, your VxVM objects should always be consistent. Your O/S names may change. But since we store all DG information on each disk that is part of a disk group, one just needs to scan to find all the objects, "find" all the participants in a DG, import, activate (if needed), and mount. You should never have to "restore" anything. If that is the case, then you should open a support case.
Hope that helps.
Cheers
01-22-2014 08:36 AM
Thanks. Even though we have it set to use OSN, we get EBN-style names because the disk names contain the WWN and are thus too long for Veritas. It is these "disk_XX" names that renumber and cause the disk groups not to import. When I pull disks from the OS, I do not believe the disk minor numbers all renumber, because I currently see gaps in the numbering. Rebooting or running 'devfsadm -Cv' does not change existing device file minor numbers as far as I can tell.
01-22-2014 09:54 AM
Thanks. Yes, I did open a support case, and the disk groups did need to be rebuilt, because they would not import by any means. This problem is reproducible. What happens is that the vx disk names change, and 'vxprint' shows disk media names ("disk_XX") that no longer exist and are not shown in 'vxdisk list'. I understand that the disk private regions maintain disk group membership for each disk, but what is missing is a valid configuration copy for the disk group.
01-22-2014 09:23 PM
The reason you see long names is that you are using MPxIO for multipathing instead of VxDMP. You may consider running 'stmsboot -d' to disable MPxIO and verify that the names are more manageable when VxDMP is doing the multipathing.
Also, use_avid is effective only when the naming scheme is EBN (enclosure based).
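The 'stmsboot -d' step suggested above would look something like this (a Solaris-only sketch, guarded so it is inert on other systems; the change takes effect only after a reboot):

```shell
# Sketch: disable MPxIO so that VxDMP takes over multipathing (Solaris
# only). Guarded so nothing happens on hosts without stmsboot.
if command -v stmsboot >/dev/null 2>&1; then
    stmsboot -d     # prompts for confirmation; requires a reboot to take effect
else
    echo "stmsboot not found; Solaris-only sketch"
fi
```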