unable to remove a disabled dmp path without reboot on solaris 10

IdaWong
Level 4

Here is my problem: I have a DMP node, and one of its 2 WWNs has been rearranged from the array side, so it generated some disabled paths.

My problem is how to remove these disabled paths without disrupting the current VxFS mounts.

At the moment, cfgadm sees these paths as failing even though format sees them as offline. luxadm -e offline $path didn't help.

 

6 REPLIES

Marianne
Level 6
Partner    VIP    Accredited Certified

IdaWong
Level 4

Is it safe to do this when you have VxFS file systems mounted? It looks very destructive.

IdaWong
Level 4

I don't think this would work at all. First of all,

 # devfsadm -Cv 

would fail: as I mentioned above, cfgadm shows the disk as failing (not unusable) because the disk's disk group is imported.

 

lex13
Level 2
Employee

The paths can be disabled using vxdmpadm disable path=<pathname>. The only caveat is that if the disk is in use, Veritas will not remove the last functioning path. The previous statement regarding device tree cleanup is correct: you will have to go into /dev/vx/dmp and /dev/vx/rdmp and clean up the devices, then rescan and check whether the paths are removed. If you need to keep persistence on, use vxddladm set namingscheme=[osn|ebn] persistence=yes

[root@server102 rdmp]# vxddladm set namingscheme=ebn persistence=yes
[root@server102 rdmp]# vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID
============================================================
Enclosure Based     Yes            Yes            Yes
[root@server102 rdmp]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:none       -            -            online invalid
ibm_shark0_0 auto:cdsdisk    dev1         lex          online
ibm_shark0_1 auto:cdsdisk    -            -            online
ibm_shark0_2 auto:cdsdisk    -            -            online
ibm_shark0_3 auto:cdsdisk    -            -            online
ibm_shark0_4 auto:cdsdisk    -            -            online
ibm_shark0_5 auto:cdsdisk    -            -            online
ibm_shark0_6 auto:cdsdisk    dev3         lex          online
ibm_shark0_7 auto:cdsdisk    ibm_shark0_7  vxfendg      online
ibm_shark0_8 auto:cdsdisk    ibm_shark0_8  vxfendg      online
ibm_shark0_9 auto:cdsdisk    ibm_shark0_9  vxfendg      online
 

[root@server102 dcli]# vxdisk rm ibm_shark0_5
 

[root@server102 dcli]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:none       -            -            online invalid
ibm_shark0_0 auto:cdsdisk    dev1         lex          online
ibm_shark0_1 auto:cdsdisk    -            -            online
ibm_shark0_2 auto:cdsdisk    -            -            online
ibm_shark0_3 auto:cdsdisk    -            -            online
ibm_shark0_4 auto:cdsdisk    -            -            online   

ibm_shark0_6 auto:cdsdisk    dev3         lex          online
ibm_shark0_7 auto:cdsdisk    ibm_shark0_7  vxfendg      online
ibm_shark0_8 auto:cdsdisk    ibm_shark0_8  vxfendg      online
ibm_shark0_9 auto:cdsdisk    ibm_shark0_9  vxfendg      online
[root@server102 dcli]#

The disk is removed from the vxdisk list output, but the path is still there, and it will come back on your next scan:

[root@server102 /]# vxdmpadm getsubpaths
NAME         STATE[A]   PATH-TYPE[M] DMPNODENAME  ENCLR-NAME   CTLR   ATTRS
================================================================================
c1t0d0s2     ENABLED(A)   -          disk_0       disk         c1        -
c2t2d0s2     ENABLED(A)   -          ibm_shark0_0 ibm_shark0   c2        -
c2t3d0s2     ENABLED(A)   -          ibm_shark0_1 ibm_shark0   c2        -
c2t4d0s2     ENABLED(A)   -          ibm_shark0_2 ibm_shark0   c2        -
c2t5d0s2     ENABLED(A)   -          ibm_shark0_3 ibm_shark0   c2        -
c2t6d0s2     ENABLED(A)   -          ibm_shark0_4 ibm_shark0   c2        -
c2t7d0s2     ENABLED(A)   -          ibm_shark0_5 ibm_shark0   c2        -  <<<<
c2t8d0s2     ENABLED(A)   -          ibm_shark0_6 ibm_shark0   c2        -
c2t9d0s2     ENABLED(A)   -          ibm_shark0_7 ibm_shark0   c2        -
c2t10d0s2    ENABLED(A)   -          ibm_shark0_8 ibm_shark0   c2        -
c2t11d0s2    ENABLED(A)   -          ibm_shark0_9 ibm_shark0   c2        -
[root@server102 /]#

======================

For EMC devices, use powermt check and powermt remove dev=<disk_name>

For Oracle/Solaris -

# luxadm remove_device -F /dev/rdsk/ctd

# vxdiskadm, option 3

# cfgadm -f -o unusable_FCP_dev -c unconfigure c3::50060e8004274d30

# luxadm -e offline /dev/dsk/c3t50060E8004274D30d3s2

(i.e. luxadm -e offline <device path for the LUN in 'failing' state from cfgadm>)

Then run cfgadm -al -o show_FCP_dev to check that the LUN state has changed from "failing" to "unusable".

References

luxadm - http://xteams.oit.ncsu.edu/iso/lun_removal

Oracle luxadm - http://docs.oracle.com/cd/E23824_01/html/821-1462/luxadm-1m.html

Oracle hotplug devices - http://docs.oracle.com/cd/E19683-01/816-5074-10/hotplug.html


When you are using the DMP paths, you need to remove the disk from both directories, /dev/vx/dmp and /dev/vx/rdmp. Once it is removed from DMP control, remove it from operating system device control.
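A minimal sketch of that order of operations, assuming a hypothetical stale device named ibm_shark0_5 (the name is taken from the transcript above; substitute your own). The function only prints the commands so the plan can be reviewed before anything is actually run:

```shell
# Sketch of the cleanup order described above. "ibm_shark0_5" is a
# hypothetical device name; substitute your own. plan_cleanup only
# PRINTS the commands, it does not execute them.
plan_cleanup() {
    dev=$1
    echo "vxdisk rm $dev"                          # drop the disk from VxVM's view
    echo "rm /dev/vx/dmp/$dev /dev/vx/rdmp/$dev"   # remove the stale DMP device nodes
    echo "devfsadm -Cv"                            # clean up the OS device tree
    echo "vxdctl enable"                           # have VxVM rescan for devices
}

plan_cleanup ibm_shark0_5
```

Each printed line can then be executed by hand once the plan looks right for your environment.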

I was not sure what your statement, "the 2 WWN has been rearranged from the array side", meant, so I added FC-AL, Fibre and hotplug.
 

IdaWong
Level 4

Hi lex13,

If the disk is imported, luxadm -e offline $path will not work as long as the disk is online, even though the path is DISABLED. It is not just luxadm; Linux has the same issue.

As I mentioned before, there are other valid ENABLED paths, so why would Veritas hold on to the disabled ones? This means it requires downtime to resolve any disabled paths.

I have no trouble removing DISABLED paths if the disk is not imported.

 

Daniel_Matheus
Level 4
Employee Accredited Certified

IdaWong,

 

You say 2 WWNs have been rearranged on the array.

What exactly do you mean by that?

Does the host not see these paths anymore?

In that case, devfsadm should be able to remove the paths.

 

I don't think this would work at all. First of all,

 # devfsadm -Cv 

would fail: as I mentioned above, cfgadm shows the disk as failing (not unusable) because the disk's disk group is imported.

This is not true; devfsadm doesn't care whether a path is part of a disk group (imported or not).

You have 3 possibilities here:

- as per Lex, disable the path as a workaround

- as per Marianne, perform a device tree cleanup; this does not affect the mounted file systems, but if this is a clustered environment you should freeze the service groups that contain the disk groups before this action (if you have lots of disks and volumes, the monitor might run into a timeout during the refresh)

- a node reboot, which will clear the device tree for sure
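For the clustered case in the second option, the bracketing might look like the sketch below. It assumes VCS; the service group name websg is hypothetical (list yours with hagrp -list), and again the function only prints the plan rather than executing it:

```shell
# Sketch of bracketing the device tree cleanup with a VCS freeze.
# "websg" is a hypothetical service group name. A temporary freeze
# (no -persistent flag) is used so no haconf step is needed; note
# that a temporary freeze is lost if HAD restarts.
freeze_and_clean() {
    grp=$1
    echo "hagrp -freeze $grp"     # stop VCS from reacting while paths disappear
    echo "devfsadm -Cv"           # clean the stale OS device paths
    echo "vxdctl enable"          # rescan so DMP drops the stale nodes
    echo "hagrp -unfreeze $grp"   # resume normal monitoring
}

freeze_and_clean websg
```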

 

As i mentioned before, there are other valid ENABLED paths, why would veritas hold on to the disabled paths? This means it requires down time to resolve any disabled paths.

The DMP nodes are built from the devices found in the OS device tree; as long as you have stale devices in the OS device tree, you will have DMP nodes pointing to these stale devices.
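A quick way to see which subpaths are in the DISABLED state before cleaning the device tree is to filter the getsubpaths output. This is a sketch that assumes the column layout shown in the transcript earlier in the thread:

```shell
# Reads "vxdmpadm getsubpaths" output on stdin and prints the path
# name and DMP node of every DISABLED subpath. Column positions
# (NAME in $1, STATE in $2, DMPNODENAME in $4) follow the transcript
# shown earlier in the thread.
list_disabled_paths() {
    awk '$2 ~ /^DISABLED/ { print $1, $4 }'
}
```

Pipe the live output through it, e.g. vxdmpadm getsubpaths | list_disabled_paths, then cross-check each printed path against cfgadm before removing anything.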

 

Thanks,
Dan