cancel
Showing results for 
Search instead for 
Did you mean: 

Duplicate (both ENABLED and DISABLED) states listed under vxdmpadm

jkVeritasAcct
Level 2
Running Volume Manager 3.5 under Solaris 9.  Had a root drive failure and when taking the
failed drive down the "luxadm -e offline" command wasn't successful.  I think this has left
veritas in a strange state.  After replacing the failed drive I am unable to add it via vxdiskadm (option 5)
as it errors out.

prd02# vxdmpadm getsubpaths ctlr=c1
NAME         STATE         PATH-TYPE  DMPNODENAME  ENCLR-TYPE   ENCLR-NAME
======================================================================
c1t0d0s2     DISABLED       -        c1t0d0s2     Disk         Disk
c1t1d0s2     ENABLED        -        c1t1d0s2     Disk         Disk
c1t0d0s2     ENABLED        -        c1t0d0s2     Disk         Disk

Having c1t0d0s2 appear as both DISABLED and ENABLED seems a bit odd and I think I need to get rid of one
of these entries.

prd02# vxdctl enable
prd02# vxdisk list
DEVICE       TYPE      DISK         GROUP        STATUS
c1t0d0s2     sliced    -            -            error
c1t0d0s2     sliced    -            -            error
c1t1d0s2     sliced    rootcopy_1   rootdg       online nohotuse
c2t50d0s2    sliced    -            -            online
c2t50d1s2    sliced    d2-node2     node2-dg     online nohotuse
c3t40d0s2    sliced    -            -            online
c3t40d1s2    sliced    d1-node2     node2-dg     online nohotuse
-            -         rootdisk_1   rootdg       removed nohotuse was:c1t0d0s2

The system recognizes the new drive, and the links in /dev and /devices include the new device path.

I can do a "vxdisk rm c1t0d0s2" twice to get rid of the listings in "vxdisk list" but it doesn't get rid of the entries in
vxdmpadm.

I think I need to get rid of one (or both) of the entries in vxdmpadm but want to check to see how,
and if anyone else has seen this.

Thanks,
Jeff

2 REPLIES 2

jkVeritasAcct
Level 2
Tried a bunch of different ways to get the devices straightened up so that veritas would be happy... in the
end I just moved the resources onto another cluster node and did a "boot -r" on the node with the failed drive.
This seems to have fixed things up as they should be.

sunshine_2
Level 4

one possible way of fixing this would be to run this command to avoid reboot

 

 

# vxconfigd -k