Unable to add LUNs to Veritas - vxdisk shows LUNs in error state

mimiandi
Level 2

Hi,

Running Solaris 10 (64-bit), Update 7, kernel patch 142900-12.

We first started with a single 1.7TB LUN and couldn't get it online, so we changed the label to EFI, following one of the pages about adding LUNs larger than 1TB.

That didn't work, so we split the 1.7TB into multiple smaller LUNs and tried again, but we're still having issues.

"vxdisk list" still shows:

emcpower157s2 auto            -            -            error
emcpower158s2 auto            -            -            error
emcpower159s2 auto            -            -            error

And the older LUNs that were initially added to Veritas still show up as:

emcpower152s2 auto            -            -            error
emcpower153s2 auto            -            -            error
emcpower154s2 auto            -            -            error
emcpower155s2 auto            -            -            error

even though they don't show up in format.

We are using PowerPath.

Any tips?

Thanks

8 REPLIES

Gaurav_S
Moderator
   VIP    Certified

Hi,

Are the devices labelled from the format menu? Can you read a VTOC from one of them?

# prtvtoc /dev/rdsk/emcpower157s2

If it is not labelled, can you label it and then run a "vxdctl enable" to see if anything changes? If this doesn't help, please provide the outputs below.

How does PowerPath see the underlying paths?

Can you attach the output of:

# powermt display dev=all

# modinfo  |grep -i vx

# vxdmpadm listctlr all

# vxdmpadm listenclosure all

# vxdisk list emcpower157s2

 

If these are new LUNs, they should appear in the "online invalid" state before we initialize them in Veritas.
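
For reference, a fresh uninitialized LUN normally shows up in "vxdisk list" something like this after a "vxdctl enable" (an illustrative sketch only - your device names will differ):

# vxdctl enable
# vxdisk list | grep emcpower157
emcpower157s2 auto:none       -            -            online invalid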

 

G

Marianne
Level 6
Partner    VIP    Accredited Certified

Try the following:

Format the LUN using the /dev/vx/rdmp name, set the size of partition 0 to 0tb, and label the disk.

# format -e /dev/vx/rdmp/emcpower##

 

Run vxdisksetup against the disk.

# /etc/vx/bin/vxdisksetup -i -f emcpower## format=sliced

 

Run vxdisk init.

# vxdisk -f init emcpower## format=sliced

 

Then create a dg using that disk as the first disk.

# vxdg init test_dg cds=off testd1=emcpower##
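
If the vxdg init succeeds, a quick check should show the disk online and in the new disk group (a sketch - "test_dg" and emcpower## are just the placeholders from the steps above):

# vxdisk -o alldgs list | grep emcpower##
# vxdg list test_dg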

mimiandi
Level 2

Hi,

I did try "format -e" against the rdmp device, but no luck.

Btw, the LUNs that are in error are NOT more than 1TB now - they were split to be under 1TB so we wouldn't have to use an EFI label.

 

$ prtvtoc /dev/rdsk/emcpower157c
* /dev/rdsk/emcpower157c partition map
*
* Dimensions:
*     512 bytes/sector
* 1262485504 sectors
* 1262485437 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34 1262469053 1262469086
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       8     11    00  1262469087     16384 1262485470

 

$ powermt display dev=emcpower157a

<snip>

Pseudo name=emcpower157a
state=alive; policy=CLAROpt; priority=0; queued-IOs=0
Owner: default=SP B, current=SP B       Array failover mode: 1
==============================================================================
---------------- Host ---------------   - Stor -   -- I/O Path -  -- Stats ---
###  HW Path                I/O Paths    Interf.   Mode    State  Q-IOs Errors
==============================================================================
3072 pci@1,700000/SUNW,qlc@0/fp@0,0 c2t5006016310601F5Ed3s0 SP A3     active  alive      0      0
3072 pci@1,700000/SUNW,qlc@0/fp@0,0 c2t5006016A10601F5Ed3s0 SP B2     active  alive      0      0
3080 pci@1,700000/SUNW,qlc@0,1/fp@0,0 c3t5006016210601F5Ed3s0 SP A2     active  alive      0      0
3080 pci@1,700000/SUNW,qlc@0,1/fp@0,0 c3t5006016B10601F5Ed3s0 SP B3     active  alive      0      0
3075 pci@11,700000/SUNW,qlc@0/fp@0,0 c6t5006016010601F5Ed3s0 SP A0     active  alive      0      0
3075 pci@11,700000/SUNW,qlc@0/fp@0,0 c6t5006016910601F5Ed3s0 SP B1     active  alive      0      0
3076 pci@11,700000/SUNW,qlc@0,1/fp@0,0 c7t5006016110601F5Ed3s0 SP A1     active  alive      0      0
3076 pci@11,700000/SUNW,qlc@0,1/fp@0,0 c7t5006016810601F5Ed3s0 SP B0     active  alive      0      0

</snip>

$ modinfo |grep -i vx
 49 7bebe000  3f368 284   1  vxdmp (VxVM 5.0MP3RP3: DMP Driver)
 51 7ba00000 210048 285   1  vxio (VxVM 5.0MP3RP3 I/O driver)
 53  13ff308    c78 286   1  vxspec (VxVM 5.0MP3RP3 control/status d)
227 7b753268    cb0 282   1  vxportal (VxFS 5.0_REV-5.0MP3RP3i_sol por)
228 7a600000 1dd0b0  21   1  vxfs (VxFS 5.0_REV-5.0MP3RP3i_sol Sun)
243 7a7e6000   ab30 283   1  fdd (VxQIO 5.0_REV-5.0MP3RP3i_sol Qu)

$ vxdmpadm listctlr all
CTLR-NAME       ENCLR-TYPE      STATE      ENCLR-NAME
=====================================================
c0              Disk            ENABLED      disk
c1              Disk            ENABLED      disk
emcp            PP_EMC          ENABLED      pp_emc0
emcp            PP_EMC_CLARiiON ENABLED      pp_emc_clariion0

 

$ vxdmpadm listenclosure all
ENCLR_NAME        ENCLR_TYPE     ENCLR_SNO      STATUS       ARRAY_TYPE     LUN_COUNT
===================================================================================
disk              Disk           DISKS                CONNECTED    Disk        4
pp_emc0           PP_EMC         000190104637         CONNECTED    A/A        151

$ vxdisk list emcpower157s2
Device:    emcpower157s2
devicetag: emcpower157
type:      auto
flags:     error private autoconfig
pubpaths:  block=/dev/vx/dmp/emcpower157s2 char=/dev/vx/rdmp/emcpower157s2
guid:      -
udid:      DGC%5FRAID%205%5FAPM00045000447%5F60060160E0D71200DA802C86475AE011
site:      -
Multipathing information:
numpaths:   1
emcpower157c    state=enabled

Gaurav_S
Moderator
   VIP    Certified

That is strange - I don't see anything bad there... The PowerPath paths are alive, and all the EMC controllers and enclosures look good.

Have you tried running a "vxdctl enable" to see if there is any change in status?

Or else try changing the disk back to a normal (SMI) label from the format menu and then running a "vxdctl enable"? See the sketch below.
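
Something along these lines (a sketch only, assuming a SPARC host where the expert-mode label menu offers a choice between SMI and EFI - substitute your own emcpower device):

# format -e /dev/vx/rdmp/emcpower157
  (choose "label", then select the SMI label when prompted)
# vxdctl enable
# vxdisk list emcpower157s2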

 

Gaurav

Marianne
Level 6
Partner    VIP    Accredited Certified

I had another look at your original post.

You were saying "they don't show up in format". That tells us that 'something somewhere' is not right...

Let's start from scratch - clean up the device table and start again:

1.  mv /etc/vx/array.info /etc/vx/array.info.old
2.  mv /etc/vx/disk.info /etc/vx/disk.info.old
3.  rm /dev/vx/dmp/*
4.  rm /dev/vx/rdmp/*
5. rm /dev/dsk/c#t#d#s# <------Remove all entries except for internal boot devices.
6. rm /dev/rdsk/c#t#d#s# <-------Remove all entries except for internal boot devices.
7. devfsadm -Cv
8. vxconfigd -k

Supporting technote: http://www.symantec.com/docs/TECH57018
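
After step 8, it may be worth confirming what VxVM rediscovers (a sketch - adjust the grep pattern and controller name to your environment; "emcp" is taken from your earlier listctlr output):

# vxdisk -o alldgs list | grep emcpower
# vxdmpadm getsubpaths ctlr=emcp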

mimiandi
Level 2

I tried both methods and I still see the devices in error. I have spoken with a Symantec engineer and she recommended re-adding the LUNs with different LUN IDs, but that didn't work either. The other option she recommended was a reboot, which I would like to avoid. I think Veritas is marking certain pseudo names / logical device names as error, though I'm not sure why. Btw, is it common not to have disk.info? Because I didn't see it.

Marianne
Level 6
Partner    VIP    Accredited Certified

"is it common not to have disk.info?"

vxconfigd -k will recreate it.

You haven't told us your SF version yet.
It might be worth double-checking the array settings. If we know your SF version, we can give you the URL for the Hardware TechNote for that version. One quick way to check the version is sketched below.
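
A quick way to check the installed version (a sketch, assuming the standard Solaris package name VRTSvxvm):

# pkginfo -l VRTSvxvm | grep -i version
# modinfo | grep vxio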

g_lee
Level 6

From the modinfo output in your earlier post, you appear to be on 5.0MP3RP3. I believe that in either 5.0MP3 or one of its rolling patches the naming scheme defaults to non-persistent; if that is the case, you won't have a disk.info file.

Check if persistent naming is set by running the following:

# vxddladm get namingscheme
NAMING_SCHEME       PERSISTENCE    LOWERCASE      USE_AVID
============================================================
Enclosure Based     Yes            Yes            Yes

If Persistence is set to yes, then you should have a disk.info file. If it's set to no, then there won't be a disk.info file, but that is expected/intended behaviour.
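
If you do want persistent naming (and therefore a disk.info file), it can normally be switched on with vxddladm - a sketch only, so please check the admin guide for your exact 5.0MP3 release before running it:

# vxddladm set namingscheme=ebn persistence=yes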

Regarding the original/main problem of your disks not showing up correctly: if you removed and re-added the LUNs without first removing them from VxVM cleanly (e.g. "vxdisk rm <disk>" before unpresenting/re-presenting the disk), then it is quite possible that stale device information is stuck somewhere and the wrong LUN details are being picked up; unfortunately, in most cases where this occurs, the only way to resolve it is a reboot to clear out the stale details.

To avoid this situation in future, ensure the disks are removed cleanly from VxVM before unpresenting LUNs from the system, particularly if the same LUN IDs may be reused / re-presented (a clean-removal sketch is at the end of this post). Also ensure vxesd (the event source daemon) is disabled to prevent the devices from being re-discovered automatically - see the following technote for instructions:

(the symptoms in this particular technote are different; the relevant part here is the section with steps to disable vxesd)
http://www.symantec.com/business/support/index?page=content&id=TECH52756

The following technote may also be helpful for future reference:

Best practice using PowerPath with Storage Foundation/Volume Manager
http://www.symantec.com/business/support/index?page=content&id=TECH125662
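
For illustration, a minimal clean-removal sequence might look something like this (a sketch only - it assumes the disk is not in any disk group and reuses the emcpower names from your earlier output; adapt to your environment):

# vxdisk rm emcpower157s2 <------ remove the disk access record from VxVM
# powermt remove dev=emcpower157a <------ remove the PowerPath pseudo device
# devfsadm -Cv <------ clean up stale /dev/dsk and /dev/rdsk entries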