10-15-2013 03:57 PM
I am currently preparing to replace two old SLES 9 systems with new SLES 11 machines. The new ones have SF 6.0 Basic and are able to see and read (dd) the disks currently in production on the SLES 9 systems (SAN FC LUNs).
The disk groups are version 120, originally created and still in use by SF 4.1:
# vxdisk list isar1_sas_2
Device: isar1_sas_2
devicetag: isar1_sas_2
type: simple
hostid: example4
disk: name=isar1_sas_2 id=1341261625.7.riser5
group: name=varemadg id=1339445883.17.riser5
flags: online ready private foreign autoimport imported
pubpaths: block=/dev/disk/by-name/isar1_sas_2 char=/dev/disk/by-name/isar1_sas_2
version: 2.1
iosize: min=512 (bytes) max=1024 (blocks)
public: slice=0 offset=2049 len=33552383 disk_offset=0
private: slice=0 offset=1 len=2048 disk_offset=0
update: time=1372290815 seqno=0.83
ssb: actual_seqno=0.0
headers: 0 248
configs: count=1 len=1481
logs: count=1 len=224
Defined regions:
config priv 000017-000247[000231]: copy=01 offset=000000 enabled
config priv 000249-001498[001250]: copy=01 offset=000231 enabled
log priv 001499-001722[000224]: copy=01 offset=000000 enabled
# vxdg list varemadg | grep version
version: 120
But when I look on the new systems, SF 6.0 does not recognize the disk groups at all:
# vxdisk list isar1_sas_2
Device: isar1_sas_2
devicetag: isar1_sas_2
type: auto
info: format=none
flags: online ready private autoconfig invalid
pubpaths: block=/dev/vx/dmp/isar1_sas_2 char=/dev/vx/rdmp/isar1_sas_2
guid: -
udid: Promise%5FVTrak%20E610f%5F49534520000000000000%5F22C90001557951EC
site: -
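As an aside, the udid shown by vxdisk is just a percent-encoded string; decoding it shows the pieces DDL concatenates to build it (vendor, product, cabinet serial number, LUN serial number). A minimal sketch, handling only the two escapes that occur in this particular udid:

```shell
# Decode the udid from the vxdisk list output above.
# Only %5F ('_') and %20 (' ') appear here, so plain sed suffices.
echo 'Promise%5FVTrak%20E610f%5F49534520000000000000%5F22C90001557951EC' \
  | sed -e 's/%5F/_/g' -e 's/%20/ /g'
# -> Promise_VTrak E610f_49534520000000000000_22C90001557951EC
```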
When I do a hexdump of the first few sectors, it looks pretty much the same on both machines. According to articles like TECH174882, SF 6.0 should be more than happy to recognize any disk layout between version 20 and 170.
Any hints what I might be doing wrong?
10-16-2013 01:48 AM
What type/model of array is being used?
It appears the disk(s) have been added to the SF 4.1 system as foreign devices - so they may be controlled by a TPD not recognised/supported by DMP - and may need to be added as foreign disks for the 6.0 install to recognise them properly?
see DMP 6.0 (Linux) Administrator's Guide -> Administering Disks -> Discovering and configuring newly added disk devices -> How to administer the Device Discovery Layer -> Foreign Devices
https://sort.symantec.com/public/documents/dmp/6.0/linux/productguides/html/dmp_admin/ch04s02s04s15.htm
----------
Foreign devices
DDL may not be able to discover some devices that are controlled by third-party drivers, such as those that provide multi-pathing or RAM disk capabilities. For these devices it may be preferable to use the multi-pathing capability that is provided by the third-party drivers for some arrays rather than using Dynamic Multi-Pathing (DMP). Such foreign devices can be made available as simple disks to VxVM by using the vxddladm addforeign command. This also has the effect of bypassing DMP for handling I/O. The following example shows how to add entries for block and character devices in the specified directories:
# vxddladm addforeign blockdir=/dev/foo/dsk \
chardir=/dev/foo/rdsk
If a block or character device is not supported by a driver, it can be omitted from the command as shown here:
# vxddladm addforeign blockdir=/dev/foo/dsk
By default, this command suppresses any entries for matching devices in the OS-maintained device tree that are found by the autodiscovery mechanism. You can override this behavior by using the -f and -n options as described on the vxddladm(1M) manual page.
After adding entries for the foreign devices, use either the vxdisk scandisks or the vxdctl enable command to discover the devices as simple disks. These disks then behave in the same way as autoconfigured disks.
The foreign device feature was introduced in VxVM 4.0 to support non-standard devices such as RAM disks, some solid state disks, and pseudo-devices such as EMC PowerPath.
Foreign device support has the following limitations:
• A foreign device is always considered as a disk with a single path. Unlike an autodiscovered disk, it does not have a DMP node.
• It is not supported for shared disk groups in a clustered environment. Only standalone host systems are supported.
• It is not supported for Persistent Group Reservation (PGR) operations.
• It is not under the control of DMP, so enabling of a failed disk cannot be automatic, and DMP administrative commands are not applicable.
• Enclosure information is not available to VxVM. This can reduce the availability of any disk groups that are created using such devices.
• The I/O Fencing and Cluster File System features are not supported for foreign devices.
If a suitable ASL is available and installed for an array, these limitations are removed.
----------
Note the following known issue from the DMP 6.0 release notes:
https://sort.symantec.com/public/documents/dmp/6.0/linux/productguides/html/dmp_notes/ch01s08s08.htm
----------
Adding a DMP device or its OS device path as a foreign disk is not supported (2062230)
When DMP native support is enabled, adding a DMP device or its OS device path as a foreign disk using the vxddladm addforeign command is not supported. Using this command can lead to unexplained behavior.
----------
10-16-2013 04:23 AM
I agree with previous comment from g_lee
Look at the flags of disk from SLES9 machine
flags: online ready private foreign autoimport imported >>> added as foreign device
This may be causing unexpected behavior on 6.0. Do you have any other array in the environment which can be presented to the old systems?
Also, let us know which array you use & the output of the commands below:
# vxddladm listsupport all
# vxddladm listapm all
G
10-17-2013 04:57 AM
First of all: thank you both for your excellent observations, and sorry that I didn't tell the full story here (for brevity). Yes, you are right: on SLES 9 we do not use VxDMP, since it simply doesn't work.
Before I dig into details: what do you mean by:
"so it may be controlled by a TPD not recognised/supported by DMP"
The arrays used are Promise VTrak E610fD (dual controller, ALUA, 4x 4 GBit FC). Since the ALUA feature of these arrays is not correctly handled by all DMP versions, it is a real pain - DMP keeps enabling paths all the time, then recognizing they are not available and disabling them again. It is OK in SF 5.1 and 6.0, since there is an ASL for Promise, albeit without ALUA support, but it did not work in SF 4.x.
On SLES 9 I use the native Linux dm-multipath, which seems to work fine and even handles ALUA correctly. Those dm disks had then been added as foreign devices.
In 6.0 (and 5.1) the Promise arrays are recognized by VRTSaslapm
# vxddladm listsupport all
LIBNAME              VID        PID
=================================================================================================
[...]
libvxpromise.so      Promise    VTrak E610f, VTrak E310f, VTrak E610s, VTrak E310s
[...]
So no need to add those disks as foreign devices:
# vxdmpadm list dmpnode all
[...]
dmpdev          = isar1_sas_2
state           = enabled
enclosure       = promise_e610f0
cab-sno         = 49534520000000000000
asl             = libvxpromise.so
vid             = Promise
pid             = VTrak E610f
array-name      = Promise_E610f
array-type      = A/A
iopolicy        = MinimumQ
avid            = -
lun-sno         = 22C90001557951EC
udid            = Promise%5FVTrak%20E610f%5F49534520000000000000%5F22C90001557951EC
dev-attr        = -
lun_type        = std
scsi3_vpd       = 22C90001557951EC
replicated      = no
num_paths       = 4
###path         = name state type transport ctlr hwpath aportID aportWWN attr
path            = sdac enabled(a) - FC c4 c4 - 26:01:00:01:55:61:23:32 -
path            = sdt  enabled(a) - FC c3 c3 - 26:02:00:01:55:61:23:32 -
path            = sdk  enabled(a) - FC c3 c3 - 26:00:00:01:55:61:23:32 -
path            = sdaf enabled(a) - FC c4 c4 - 26:03:00:01:55:61:23:32 -
I had a feeling that those different multipathing layers might shift some sectors/blocks/bytes at the beginning of the disk so it looks different, but as said above, when looking at a hexdump it looks the same from both machines.
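A repeatable way to do that comparison is to dump the first sectors from each host and diff the dumps. The sketch below runs against a scratch file so it is executable anywhere; on the real systems the input would be the block device as each host sees it (e.g. /dev/disk/by-name/isar1_sas_2 on SLES 9, /dev/vx/rdmp/isar1_sas_2 on SLES 11):

```shell
# Create a scratch stand-in for the LUN (real use: read the device directly).
dd if=/dev/zero of=/tmp/lun_image bs=512 count=16 2>/dev/null

# Dump the first 4 sectors (the private region header area) as seen from
# "both hosts" and compare; identical dumps mean the on-disk metadata
# really is byte-for-byte the same from both sides.
dd if=/tmp/lun_image bs=512 count=4 2>/dev/null | od -A d -t x1 > /tmp/dump_sles9
dd if=/tmp/lun_image bs=512 count=4 2>/dev/null | od -A d -t x1 > /tmp/dump_sles11
diff /tmp/dump_sles9 /tmp/dump_sles11 && echo "first sectors identical"
```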
Of course I could try disabling DMP again and switching back to dm-multipath even for SF 6.0, but I would like to stick with DMP :(
10-17-2013 05:34 AM
so it may be controlled by a TPD not recognised/supported by DMP
The disks are controlled by a Third Party [multipathing] Driver on SLES9/SF 4.1 (dm-multipath) ie: not DMP - because as you said, DMP does not support ALUA so it disables the inactive paths all the time.
Above, you've mentioned the array is a Promise VTrak E610fD and it's ALUA
Although SF 6.0 does have an ASL for Promise arrays, it's only for the following models/arrays:
VTrak E610f, VTrak E310f, VTrak E610s, VTrak E310s
and the supported mode is A/A not ALUA
ref: https://sort.symantec.com/asl/details/605 (6.0) and https://sort.symantec.com/asl/details/635 (6.0.1)
So it appears your particular array is still not supported for 6.0, as it's not listed, and it's ALUA not A/A
You may want to log a support call to see if there is an updated/separate ASL that can be downloaded for SLES11/SF6.0, as the ASL page is not always 100% up to date. If there is not, then it appears you will need to use a different multipathing solution and either use DMP on top (if it coexists - see DMP administrator's guide -> Administering disks -> Discovering and configuring newly added disk devices -> Third-party driver coexistence), or add the disks as foreign devices again to use this array on 6.0.
10-17-2013 07:17 AM
Can you provide output of the following commands?
# vxdisk -o alldgs list
# vxddladm listforeign
# /etc/vx/bin/vxprtvtoc /dev/vx/rdsk/<diskname>
If there are any free disks on that array, can you try to initialize them on the new machine?
# vxdisksetup -i <diskdevice>
10-30-2013 04:57 AM
Hello g_lee,
sorry, I have been away for a week.
No, it is not solved, of course. It would only be solved if this were an official support call, and then of course the correct answer would be "not supported".
Unfortunately we have neither the money to implement only fully supported solutions nor to pay for full support; hence this community question :).
Our productive and fully licensed SF5.1 runs more or less perfectly with those Promise Arrays, even though DMP might hate me for constantly fiddling with the paths, but I am pretty relaxed about that :).
I have a feeling Symantec will not enhance libvxpromise.so just for me, although it would seem reasonable, since Promise DOES NOT work in A/A mode. Of course you can enable that, but it diminishes performance down to 5%, and Promise strongly recommends using only ALUA. So writing a Promise ASL with A/A does not make any sense at all.
10-30-2013 06:14 AM
ursi,
The Community Manager marked this as solved as your question was "Why doesn't SF 6.0 recognize SF 4.1 Version 120 simple disk?", and the answer provided was that the configuration is not supported (non-supported array model and mode), so the options given were:
- add the disk as foreign disk
- log support case and check if there's an updated ASL, and/or log a request/RFE to see if support can be added for the model/mode being used.
So in that sense it is "solved", as it explains why the disk isn't being recognised, and provides you with a workaround. Evidently you don't like this answer as it doesn't help you use the LUNs from this array with SF6.0; however as the doc points to it not being supported, there's not much else that can be done without engaging Symantec support in some fashion.
I cannot comment as to why the ASL supports A/A when Promise recommends ALUA - I will contact the Community Manager to see if she can get in touch with someone on the product management side and ask whether ALUA support is on the roadmap; there may be vendor or other constraints preventing this, so there are no guarantees, but we can at least ask.
regards,
Grace
10-30-2013 10:09 AM
Hi @ursi and Grace,
First off, Grace, thanks for bringing this to my attention, and @ursi, my apologies for marking solved - I've cleared the solution marker.
I am also forwarding this thread to our Product Management team, and see if they can add any assistance here.
Best,
Kimberley
10-30-2013 01:04 PM
@Kimberley: Thank you for forwarding that, I do really appreciate this :)
@Daniel:
To me SF 6.0 seems to work fine. I can see disks initialized with SF 5.1 (I just mapped one for testing) and initialize new disks. Of course the SF 5.1 ones had been made with VxDMP ...
# vxdisk -o alldgs list
DEVICE            TYPE         DISK   GROUP      STATUS
isar1_sas_2       auto:none    -      -          online invalid
isar1_sas_3       auto:none    -      -          online invalid
isar1_sas_5       auto:none    -      -          online invalid
isar1_sas2_1      auto:none    -      -          online invalid
isar2_sas_2       auto:none    -      -          online invalid
isar2_sas_3       auto:none    -      -          online invalid
isar2_sas_5       auto:none    -      -          online invalid
isar2_sas2_1      auto:none    -      -          online invalid
promise_e610f0_8  auto:none    -      -          online invalid
promise_e610f0_9  auto:sliced  -      (testdg)   online media_mismatch

# /opt/VRTS/bin/vxdisksetup -ie promise_e610f0_8 format=sliced
# vxdisk list promise_e610f0_8
Device: promise_e610f0_8
devicetag: promise_e610f0_8
type: auto
hostid:
disk: name= id=1383161879.13.ema1
group: name= id=
info: format=sliced,privoffset=1,pubslice=6,privslice=5
flags: online ready private autoconfig autoimport
pubpaths: block=/dev/vx/dmp/promise_e610f0_8s6 char=/dev/vx/rdmp/promise_e610f0_8s6
privpaths: block=/dev/vx/dmp/promise_e610f0_8s5 char=/dev/vx/rdmp/promise_e610f0_8s5
guid: -
udid: Promise%5FVTrak%20E610f%5F49534520000000000000%5F229B000155DA58FA
site: -
version: 2.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=6 offset=0 len=2026481 disk_offset=305
private: slice=5 offset=1 len=65562 disk_offset=2026847
update: time=1383161879 seqno=0.2
ssb: actual_seqno=0.0
headers: 0 248
configs: count=1 len=51598
logs: count=1 len=4096
Defined regions:
 config priv 000017-000247[000231]: copy=01 offset=000000 disabled
 config priv 000249-051615[051367]: copy=01 offset=000231 disabled
 log priv 051616-055711[004096]: copy=01 offset=000000 disabled
Multipathing information:
numpaths: 4
sdak state=disabled
sdah state=enabled
sdai state=disabled
sdaj state=enabled

# vxdisk list promise_e610f0_9
Device: promise_e610f0_9
devicetag: promise_e610f0_9
type: auto
hostid: salt
disk: name= id=1337420728.46.salt
group: name=testdg id=1342568457.43.salt
info: format=sliced,privoffset=1,pubslice=6,privslice=5
[...]
Concerning your other questions above: there are no foreign devices on the SF 6.0 machine, since they are foreign only on SF 4.x:
# vxddladm listforeign
The Paths included are
-----------------------
Based on Directory names:
-----------------------
Based on Full Path:
--------------------

# fdisk -l /dev/vx/dmp/promise_e610f0_8
Disk /dev/vx/dmp/promise_e610f0_8: 1073 MB, 1073741824 bytes
34 heads, 61 sectors/track, 1011 cylinders, total 2097152 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002414e

                        Device Boot    Start      End  Blocks   Id  System
/dev/vx/dmp/promise_e610f0_8p4            61  2092665 1046302+   5  Extended
/dev/vx/dmp/promise_e610f0_8p5       2026847  2092665   32909+  7f  Unknown
/dev/vx/dmp/promise_e610f0_8p6           305  2026785 1013240+  7e  Unknown

Partition table entries are not in disk order

# fdisk -l /dev/vx/dmp/isar1_sas_2
Disk /dev/vx/dmp/isar1_sas_2: 17.2 GB, 17179869184 bytes
64 heads, 32 sectors/track, 16384 cylinders, total 33554432 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
So maybe it is the simple disk format that puzzles SF 6.x. I cannot remember why I made them simple disks, but I assume I had some reason for it.
10-30-2013 01:18 PM
hmmm ... the reason I had on 4.x seems to be that nothing besides "simple" works. vxdisk refuses to take anything besides simple and uses simple as the default, and vxdisksetup seems unable to understand foreign devices (it always looks into the VxDMP directory):
# /etc/vx/bin/vxdisksetup -i isar1_ematest2 format=sliced
VxVM ERROR V-5-3-000: Can't open device /dev/vx/dmp/isar1_ematest2
# vxdisk init isar1_ematest2 type=sliced
VxVM vxdisk ERROR V-5-1-5433 Device isar1_ematest2: init failed:
        Attribute cannot be changed with a reinit
# vxdisk -f init isar1_ematest2 type=cdsdisk
VxVM vxdisk ERROR V-5-1-5433 Device isar1_ematest2: init failed:
        Attribute cannot be changed with a reinit
# vxdisk -f init isar1_ematest2 type=simple
# vxdisk list isar1_ematest2
Device: isar1_ematest2
devicetag: isar1_ematest2
type: simple
So I had to do simple ...
Anyway, if we cannot solve this I will simply have to copy the data onto new disks, but I still do not understand where the problem is located that hinders 6.x from recognizing these disks.
10-31-2013 04:00 AM
ursi,
From the initial response here: https://www-secure.symantec.com/connect/forums/sf-60-does-not-recognize-sf-41-version-120-simple-disk#comment-9333761
(bold added for emphasis since it was obviously missed initially)
Foreign devices
DDL may not be able to discover some devices that are controlled by third-party drivers, such as those that provide multi-pathing or RAM disk capabilities. For these devices it may be preferable to use the multi-pathing capability that is provided by the third-party drivers for some arrays rather than using Dynamic Multi-Pathing (DMP). Such foreign devices can be made available as simple disks to VxVM by using the vxddladm addforeign command. This also has the effect of bypassing DMP for handling I/O. The following example shows how to add entries for block and character devices in the specified directories:
# vxddladm addforeign blockdir=/dev/foo/dsk \
chardir=/dev/foo/rdsk
If a block or character device is not supported by a driver, it can be omitted from the command as shown here:
# vxddladm addforeign blockdir=/dev/foo/dsk
By default, this command suppresses any entries for matching devices in the OS-maintained device tree that are found by the autodiscovery mechanism. You can override this behavior by using the -f and -n options as described on the vxddladm(1M) manual page.
After adding entries for the foreign devices, use either the vxdisk scandisks or the vxdctl enable command to discover the devices as simple disks. These disks then behave in the same way as autoconfigured disks.
ie: the disks have been added as foreign devices on SF 4.1, so this is why they're only available as simple disks.
simple is a valid format for SF6.0 - the issue still probably lies with the DDL/ASL problems mentioned earlier.
edit: you mentioned the disks do work with DMP on SF5.1, however you also hinted some manual intervention/changes seem to have been required to get this working:
Our productive and fully licensed SF5.1 runs more or less perfectly with those Promise Arrays, even though DMP might hate me for constantly fiddling with the paths, but I am pretty relaxed about that :).
What "fiddling" has/is being done on the 5.1 system that differs/allows it to "work"?
10-31-2013 04:37 AM
Yes, I do fully understand that I could completely drop VxDMP and add those disks as foreign devices, but this still does not answer my question WHY the VxVM layer is unable to recognize those disks.
Those disks are fully accessible via DMP, so I do __not__ suspect DMP to be the problem.
Is there any way to debug VxVM's disk-scanning feature? Maybe debugging vxconfigd?
No, no - no fiddling at all for the sliced SF 5.1 disks; they simply work as expected.
10-31-2013 06:34 AM
might be this one:
10/31 14:06:37: VxVM vxconfigd DEBUG V-5-1-19067 vold_disk_check_udid_mismatch: Device isar1_sas_2 is invalid, can't get UDID attribute
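For anyone repeating this, the relevant lines can be pulled out of a vxconfigd debug log with a simple filter. The sketch below recreates the message above in a scratch file so it is runnable anywhere; on a live system, point grep at wherever your vxconfigd debug output actually goes (the log location depends on how debugging was enabled - check vxdctl(1M) / vxconfigd(1M)):

```shell
# Recreate the debug line quoted above in a scratch log file.
cat > /tmp/vxconfigd_debug.log <<'EOF'
10/31 14:06:37: VxVM vxconfigd DEBUG V-5-1-19067 vold_disk_check_udid_mismatch: Device isar1_sas_2 is invalid, can't get UDID attribute
EOF

# Filter for UDID-related discovery failures to list affected devices.
grep -i 'udid' /tmp/vxconfigd_debug.log
```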
11-01-2013 04:30 AM
Not sure how many more ways it's possible to say: unsupported configuration = unpredictable/unexpected/unintended results (hence: not supported).
see if there's a response from support/PM team.
11-01-2013 04:55 AM
g_lee, please excuse me for being rude, but I am not interested in your "it is unsupported" answers. I am trying to technically understand how VxVM scans new disks and what hinders it from recognizing these ones. I can perfectly well circumvent the problem, but since I am always consulting very big customers using SF, I am interested in getting down to the real reason for any such problem.
Sorry.
11-01-2013 05:56 AM
see: Device Discovery Layer (DDL)
[edited to remove links]
11-01-2013 02:09 PM
Ursi,
It appears that when moving from a foreign disk to a DMP-managed disk, the type definition changed from 'simple' to 'auto', as shown in your original posting.
To redefine the DMP devices as simple, rather than auto, you could try:
# vxdisk rm isar1_sas_2
# vxdisk define isar1_sas_2 type=simple
# vxdctl enable
At that point the type should be listed as simple, and you could try to import the diskgroup.
11-04-2013 01:24 AM
Chad,
I am astonished. Unbelievable. I didn't take any notice of that TYPE field in the vxdisk list output. I always thought VxVM would recognize the disk type during disk scanning and pick the right type based on some magical algorithm. Seems it doesn't :).
ema1:~ # vxdisk list
DEVICE       TYPE       DISK   GROUP   STATUS
isar1_sas_2  auto:none  -      -       online invalid
[...]
ema1:~ # vxdisk rm isar1_sas_2
ema1:~ # vxdisk define isar1_sas_2 type=simple
ema1:~ # vxdctl enable
ema1:~ # vxdisk -o alldgs list
DEVICE       TYPE    DISK   GROUP       STATUS
isar1_sas_2  simple  -      (varemadg)  online
ema1:~ # vxdisk list isar1_sas_2
Device: isar1_sas_2
devicetag: isar1_sas_2
type: simple
hostid: riser4
disk: name= id=1341261625.7.riser5
group: name=varemadg id=1339445883.17.riser5
info: privoffset=1
flags: online ready private autoimport
pubpaths: block=/dev/vx/dmp/isar1_sas_2 char=/dev/vx/rdmp/isar1_sas_2
guid: -
udid: Promise%5FVTrak%20E610f%5F49534520000000000000%5F22C90001557951EC
site: -
version: 2.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=0 offset=2049 len=33552383 disk_offset=0
private: slice=0 offset=1 len=2048 disk_offset=0
update: time=1372290815 seqno=0.83
ssb: actual_seqno=0.0
headers: 0 248
configs: count=1 len=1481
logs: count=1 len=224
Defined regions:
 config priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config priv 000249-001498[001250]: copy=01 offset=000231 enabled
 log priv 001499-001722[000224]: copy=01 offset=000000 enabled
Multipathing information:
numpaths: 4
sdac state=enabled
sdt state=disabled
sdk state=enabled
sdaf state=disabled
Thank you for that great hint.
Problem solved.