
vxdmpadm

giacomopane
Level 4
Partner Accredited Certified

hi all,

I would like to submit a problem I am working on.

The storage team brought in a new array (HP P9500) and, after all the appropriate configuration, presented the LUNs to a Solaris 9 host running SFHA 5.0 MP3.

From vxdisk list I see:
 
DEVICE       TYPE            DISK         GROUP        STATUS
Disk_0       auto:none       -            -            online invalid
Disk_1       auto:cdsdisk    dgi_infr01   dgi_infr     online
Disk_2       auto:none       -            -            online invalid
Disk_3       auto:cdsdisk    dgi_infr02   dgi_infr     online
IBM_DS8x000_0 auto:cdsdisk    dge_tbeare_ems01  dge_tbeare_ems online
IBM_DS8x000_1 auto:cdsdisk    dge_tbeare_VG201  dge_tbeare_VG2 online
IBM_DS8x000_2 auto:cdsdisk    -            -            online
IBM_DS8x000_3 auto:cdsdisk    dge_tbeare_ipr01  dge_tbeare_ipr online
IBM_DS8x000_20 auto:cdsdisk    -            -            online
IBM_DS8x001_0 auto:cdsdisk    mir_dge_tbeare_ipr01  dge_tbeare_ipr online
IBM_DS8x001_1 auto:cdsdisk    mir_dge_tbeare_VG201  dge_tbeare_VG2 online
IBM_DS8x001_2 auto:cdsdisk    -            -            online
IBM_DS8x001_3 auto:cdsdisk    -            -            online
IBM_DS8x001_4 auto:cdsdisk    mir_dge_tbeare_VG202  dge_tbeare_VG2 online
IBM_DS8x001_5 auto:cdsdisk    -            -            online
IBM_DS8x001_6 auto:cdsdisk    mir_dge_tbeare_ems01  dge_tbeare_ems online
XP10K-12K0_0 auto            -            -            error
XP10K-12K0_1 auto            -            -            error
XP10K-12K0_2 auto            -            -            error
XP10K-12K0_3 auto            -            -            error
XP10K-12K0_4 auto            -            -            error
 
 
and from vxdmpadm listctlr, I see:

CTLR-NAME       ENCLR-TYPE      STATE      ENCLR-NAME
=====================================================
c1              Disk            ENABLED      Disk
c0              Disk            ENABLED      Disk
c6              IBM_DS8x00      ENABLED      IBM_DS8x001
c4              IBM_DS8x00      ENABLED      IBM_DS8x001
c6              IBM_DS8x00      ENABLED      IBM_DS8x000
c4              IBM_DS8x00      ENABLED      IBM_DS8x000
c6              XP10K-12K       DISABLED     XP10K-12K0
c4              XP10K-12K       DISABLED     XP10K-12K0
 
Now I start tail -f /etc/vx/dmpevents.log and then run:
 
vxdmpadm enable ctlr=c4 enclosure=XP10K-12K0
vxdmpadm enable ctlr=c5 enclosure=XP10K-12K0
 
Below are the logs:
 
root@es088sg3 # tail -f /etc/vx/dmpevents.log
Tue Nov 29 15:16:08.400: Disabled Path c4t50060E80164DA510d2s2 belonging to Dmpnode XP10K-12K0_1
Tue Nov 29 15:16:08.405: Disabled Path c6t50060E80164DA500d2s2 belonging to Dmpnode XP10K-12K0_1
Tue Nov 29 15:16:08.405: Disabled Dmpnode XP10K-12K0_1
Tue Nov 29 15:16:08.415: Disabled Path c4t50060E80164DA510d3s2 belonging to Dmpnode XP10K-12K0_3
Tue Nov 29 15:16:08.421: Disabled Path c6t50060E80164DA500d3s2 belonging to Dmpnode XP10K-12K0_3
Tue Nov 29 15:16:08.421: Disabled Dmpnode XP10K-12K0_3
Tue Nov 29 15:16:08.431: Disabled Path c4t50060E80164DA510d4s2 belonging to Dmpnode XP10K-12K0_0
Tue Nov 29 15:16:08.439: Disabled Disk array XP10K-12K0
Tue Nov 29 15:16:08.439: Disabled Path c6t50060E80164DA500d4s2 belonging to Dmpnode XP10K-12K0_0
Tue Nov 29 15:16:08.439: Disabled Dmpnode XP10K-12K0_0
Tue Nov 29 15:20:27.000: Reconfiguration is in progress
Tue Nov 29 15:20:27.000: Reconfiguration has finished
Tue Nov 29 15:20:27.391: Enabled Disk array XP10K-12K0
Tue Nov 29 15:20:27.391: Enabled Path c4t50060E80164DA510d4s2 belonging to Dmpnode XP10K-12K0_0
Tue Nov 29 15:20:27.391: Enabled Dmpnode XP10K-12K0_0
Tue Nov 29 15:20:27.391: Enabled Path c4t50060E80164DA510d2s2 belonging to Dmpnode XP10K-12K0_1
Tue Nov 29 15:20:27.391: Enabled Dmpnode XP10K-12K0_1
Tue Nov 29 15:20:27.391: Enabled Path c4t50060E80164DA510d0s2 belonging to Dmpnode XP10K-12K0_2
Tue Nov 29 15:20:27.391: Enabled Dmpnode XP10K-12K0_2
Tue Nov 29 15:20:27.391: Enabled Path c4t50060E80164DA510d3s2 belonging to Dmpnode XP10K-12K0_3
Tue Nov 29 15:20:27.391: Enabled Dmpnode XP10K-12K0_3
Tue Nov 29 15:20:27.391: Enabled Path c4t50060E80164DA510d1s2 belonging to Dmpnode XP10K-12K0_4
Tue Nov 29 15:20:27.391: Enabled Dmpnode XP10K-12K0_4
Tue Nov 29 15:20:27.424: Enabled Controller c4 belonging to Disk array XP10K-12K0
Tue Nov 29 15:20:27.703: Disabled Path c4t50060E80164DA510d4s2 belonging to Dmpnode XP10K-12K0_0
Tue Nov 29 15:20:27.703: Disabled Dmpnode XP10K-12K0_0
Tue Nov 29 15:20:27.712: Disabled Path c4t50060E80164DA510d3s2 belonging to Dmpnode XP10K-12K0_3
Tue Nov 29 15:20:27.712: Disabled Dmpnode XP10K-12K0_3
Tue Nov 29 15:20:27.720: Disabled Path c4t50060E80164DA510d2s2 belonging to Dmpnode XP10K-12K0_1
Tue Nov 29 15:20:27.720: Disabled Dmpnode XP10K-12K0_1
Tue Nov 29 15:20:27.728: Disabled Path c4t50060E80164DA510d1s2 belonging to Dmpnode XP10K-12K0_4
Tue Nov 29 15:20:27.728: Disabled Dmpnode XP10K-12K0_4
Tue Nov 29 15:20:27.737: Disabled Disk array XP10K-12K0
Tue Nov 29 15:20:27.737: Disabled Path c4t50060E80164DA510d0s2 belonging to Dmpnode XP10K-12K0_2
Tue Nov 29 15:20:27.737: Disabled Dmpnode XP10K-12K0_2
Tue Nov 29 15:21:04.432: Marked as failing Path c6t50060E80164DA500d2s2 belonging to Dmpnode XP10K-12K0_1
Tue Nov 29 15:21:04.432: Marked as failing Path c6t50060E80164DA500d4s2 belonging to Dmpnode XP10K-12K0_0
Tue Nov 29 15:21:04.433: Marked as failing Path c6t50060E80164DA500d3s2 belonging to Dmpnode XP10K-12K0_3
Tue Nov 29 15:21:04.433: Marked as failing Path c4t50060E80164DA510d3s2 belonging to Dmpnode XP10K-12K0_3
Tue Nov 29 15:21:04.433: Marked as failing Path c6t50060E80164DA500d1s2 belonging to Dmpnode XP10K-12K0_4
Tue Nov 29 15:21:04.434: Marked as failing Path c6t50060E80164DA500d0s2 belonging to Dmpnode XP10K-12K0_2
Tue Nov 29 15:21:04.434: Marked as failing Path c4t50060E80164DA510d4s2 belonging to Dmpnode XP10K-12K0_0
Tue Nov 29 15:21:04.434: Marked as failing Path c4t50060E80164DA510d2s2 belonging to Dmpnode XP10K-12K0_1
Tue Nov 29 15:21:04.435: Marked as failing Path c4t50060E80164DA510d1s2 belonging to Dmpnode XP10K-12K0_4
Tue Nov 29 15:21:04.435: Marked as failing Path c4t50060E80164DA510d0s2 belonging to Dmpnode XP10K-12K0_2
 
Can anyone help me?

13 REPLIES

joseph_dangelo
Level 6
Employee Accredited

Typically, the vxdisk list output you provided would indicate that the LUNs still need to be labelled in Solaris format, followed by an initialization (i.e. creating the public and private regions of the disk). The disks will change to "online invalid" after you format them, and to "online" after they are initialized.
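
As a rough sketch, the sequence for one of these LUNs would then look something like the following (device and disk names are taken from the outputs in this thread; the vxdisksetup path and the format=cdsdisk option are assumptions for this 5.0MP3 setup, so adjust to your environment):

#> format c4t50060E80164DA510d0             (choose "label" at the format> prompt)
#> vxdisk scandisks                         (the disk should now show as "online invalid")
#> /etc/vx/bin/vxdisksetup -i XP10K-12K0_0 format=cdsdisk
#> vxdisk list                              (the disk should now show auto:cdsdisk ... online)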

That being said, can you please provide the output from the following command:

#> cfgadm -al

You may need to configure the controllers from a Solaris perspective first.

#> cfgadm -c configure c4  (...c5, c6)

Thanks,

Joe D

giacomopane
Level 4
Partner Accredited Certified

hi

I have already run cfgadm to rescan the HBAs, and format does see the disks.

However, tomorrow (it is now 10.30pm Italian time) I will post the format output.

BR

jako

Gaurav_S
Moderator
VIP Certified

Hi,

As Joe said, an "error" state typically indicates that the disk is not labelled; however, looking at dmpevents.log I think there is some other issue. An "error" state can also mean that the DMP paths are not stable or online, in which case DMP is unable to send I/O down the path, hence the error state.

Now, the DDL layer is clearly marking the controllers as DISABLED, which itself indicates an issue. I would suggest rechecking the connectivity (cables / switches) along the path to resolve this.

If the cables & switches are physically OK, also have a look at the FC switch configuration, and check that the array is set up as recommended in the hardware technote.

I believe the issue is with connectivity.
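
To double-check the path states from the host side, something like the following standard DMP queries could be run (controller and enclosure names are taken from the outputs above):

# vxdmpadm listctlr all
# vxdmpadm getsubpaths ctlr=c4
# vxdmpadm getsubpaths ctlr=c6
# vxdmpadm getsubpaths dmpnodename=XP10K-12K0_0

These show whether the individual paths to the XP10K-12K0 LUNs are ENABLED, DISABLED or marked as failing.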

 

G

giacomopane
Level 4
Partner Accredited Certified

hi,

excuse me, but I'm perplexed.
On the same fibre there are other SAN disks, just from a different storage array (IBM).
If this were a connectivity problem, I should see the same problems on those disks, but I don't. The storage administrators have informed me that the new LUNs have just been presented to the host, and the SAN administrators
have done their checks and everything is OK. However, I am posting the cfgadm and format output below.

 

Greetings to all

 

Ap_Id                          Type         Receptacle   Occupant     Condition
IO14                           HPCI+        connected    configured   ok
IO14::pci0                     io           connected    configured   ok
IO14::pci1                     io           connected    configured   ok
IO14::pci2                     io           connected    configured   ok
IO14::pci3                     io           connected    configured   ok
IO15                           HPCI+        connected    configured   ok
IO15::pci0                     io           connected    configured   ok
IO15::pci1                     io           connected    configured   ok
IO15::pci2                     io           connected    configured   ok
IO15::pci3                     io           connected    configured   ok
SB14                           V3CPU        connected    configured   ok
SB14::cpu0                     cpu          connected    configured   ok
SB14::cpu1                     cpu          connected    configured   ok
SB14::cpu2                     cpu          connected    configured   ok
SB14::cpu3                     cpu          connected    configured   ok
SB14::memory                   memory       connected    configured   ok
SB15                           V3CPU        connected    configured   ok
SB15::cpu0                     cpu          connected    configured   ok
SB15::cpu1                     cpu          connected    configured   ok
SB15::cpu2                     cpu          connected    configured   ok
SB15::cpu3                     cpu          connected    configured   ok
SB15::memory                   memory       connected    configured   ok
c0                             scsi-bus     connected    configured   unknown
c0::dsk/c0t10d0                disk         connected    configured   unknown
c0::dsk/c0t11d0                disk         connected    configured   unknown
c0::es/ses0                    processor    connected    configured   unknown
c1                             scsi-bus     connected    configured   unknown
c1::dsk/c1t10d0                disk         connected    configured   unknown
c1::dsk/c1t11d0                disk         connected    configured   unknown
c1::es/ses1                    processor    connected    configured   unknown
c2                             scsi-bus     connected    unconfigured unknown
c3                             scsi-bus     connected    unconfigured unknown
c4                             fc-fabric    connected    configured   unknown
c4::500507630713428f           disk         connected    configured   unknown
c4::500507630718428f           disk         connected    configured   unknown
c4::500507630a23033b           disk         connected    configured   unknown
c4::50060e80164da510           disk         connected    configured   unknown
c5                             fc           connected    unconfigured unknown
c6                             fc-fabric    connected    configured   unknown
c6::500507630733428f           disk         connected    configured   unknown
c6::500507630738428f           disk         connected    configured   unknown
c6::500507630a08433b           disk         connected    configured   unknown
c6::50060e80164da500           disk         connected    configured   unknown
c7                             fc           connected    unconfigured unknown

Searching for disks...done

c4t50060E80164DA510d0: configured with capacity of 50.03GB
c4t50060E80164DA510d1: configured with capacity of 50.03GB
c4t50060E80164DA510d2: configured with capacity of 50.03GB
c4t50060E80164DA510d3: configured with capacity of 50.03GB
c4t50060E80164DA510d4: configured with capacity of 50.03GB
c6t50060E80164DA500d0: configured with capacity of 50.03GB
c6t50060E80164DA500d1: configured with capacity of 50.03GB
c6t50060E80164DA500d2: configured with capacity of 50.03GB
c6t50060E80164DA500d3: configured with capacity of 50.03GB
c6t50060E80164DA500d4: configured with capacity of 50.03GB


AVAILABLE DISK SELECTIONS:
       0. c0t10d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@1dc,700000/pci@1/pci@1/scsi@2/sd@a,0
       1. c0t11d0 <SEAGATE-ST314655LSUN146G-0491 cyl 14087 alt 2 hd 24 sec 848>
          /pci@1dc,700000/pci@1/pci@1/scsi@2/sd@b,0
       2. c1t10d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
          /pci@1fc,700000/pci@1/pci@1/scsi@2/sd@a,0
       3. c1t11d0 <HITACHI-HUS15143BSUN146G-PA02 cyl 14087 alt 2 hd 24 sec 848>
          /pci@1fc,700000/pci@1/pci@1/scsi@2/sd@b,0
       4. c4t50060E80164DA510d0 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,0
       5. c4t50060E80164DA510d1 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,1
       6. c4t50060E80164DA510d2 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,2
       7. c4t50060E80164DA510d3 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,3
       8. c4t50060E80164DA510d4 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,4
       9. c4t500507630A23033Bd0 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,0
      10. c4t500507630A23033Bd1 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,1
      11. c4t500507630A23033Bd2 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,2
      12. c4t500507630A23033Bd3 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,3
      13. c4t500507630A23033Bd4 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,4
      14. c4t500507630A23033Bd5 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,5
      15. c4t500507630A23033Bd6 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,6
      16. c4t500507630718428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,0
      17. c4t500507630713428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,0
      18. c4t500507630718428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,1
      19. c4t500507630713428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,1
      20. c4t500507630713428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,2
      21. c4t500507630718428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,2
      22. c4t500507630713428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,3
      23. c4t500507630718428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,3
      24. c4t500507630718428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,4
      25. c4t500507630713428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,4
      26. c6t50060E80164DA500d0 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,0
      27. c6t50060E80164DA500d1 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,1
      28. c6t50060E80164DA500d2 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,2
      29. c6t50060E80164DA500d3 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,3
      30. c6t50060E80164DA500d4 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,4
      31. c6t500507630A08433Bd0 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,0
      32. c6t500507630A08433Bd1 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,1
      33. c6t500507630A08433Bd2 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,2
      34. c6t500507630A08433Bd3 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,3
      35. c6t500507630A08433Bd4 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,4
      36. c6t500507630A08433Bd5 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,5
      37. c6t500507630A08433Bd6 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,6
      38. c6t500507630733428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,0
      39. c6t500507630738428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,0
      40. c6t500507630733428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,1
      41. c6t500507630738428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,1
      42. c6t500507630733428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,2
      43. c6t500507630738428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,2
      44. c6t500507630733428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,3
      45. c6t500507630738428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,3
      46. c6t500507630738428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,4
      47. c6t500507630733428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
          /pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,4

TonyGriffiths
Level 6
Employee Accredited Certified

Hi,

Could you post the VTOC of one of these problem devices, i.e.

prtvtoc /dev/rdsk/cXtXdXsX

 

cheers

tony

giacomopane
Level 4
Partner Accredited Certified

hi

prtvtoc /dev/rdsk/c4t50060E80164DA510d4s2

prtvtoc: /dev/rdsk/c4t50060E80164DA510d4s2: Unable to read Disk geometry errno = 0x5

 

br

mohsinmansoor
Level 4
Partner Accredited

Try rescanning the disks. Since you are running 5.0, you can use VEA to rescan them.

giacomopane
Level 4
Partner Accredited Certified

No, I don't use VEA, only the CLI.

I use vxdctl enable or vxdisk scandisks.

I have rescanned the disks, but the result does not change.

 

g_lee
Level 6

The prtvtoc error:

prtvtoc: /dev/rdsk/c4t50060E80164DA510d4s2: Unable to read Disk geometry errno = 0x5

indicates the disk has no label - this is why the disks are showing as error in vxdisk list.

As Joe D mentioned in the first reply, you need to label the disks in format:

# format c#t#d#
> label
> y
> q

(repeat for remaining new disks)

Check the disks have a label with prtvtoc (should show a partition table)

then run vxdisk scandisks and the disks should be listed as online invalid (again, as correctly stated by Joe)

vxdmpadm / vxdisk will show the paths as disabled until the disks have a valid label, so there's no point rerunning the vx scans until you label the disks properly at the OS level

giacomopane
Level 4
Partner Accredited Certified

this is a good idea,

but I have talked to the storage administrator. Looking at the Solaris partition table, I saw that there were slices, and I don't like that, because this is new storage.

The storage administrator has confirmed to me that there is no partition on the disk.

I have requested a reboot of the server  :)

 

br

jako

Gaurav_S
Moderator
VIP Certified

I agree that the paths will show as disabled if the label is not there; however, I was concerned to see the paths marked as "failing", which is why I wanted to make sure the connectivity is perfect.

Hope the reboot helps :)

 

G

g_lee
Level 6
(Accepted Solution)

Jako,

You need to see a partition table on the disk - at the very least it needs to have a slice 2. Even if there are other slices, as long as slice 2 is there and is the correct length, this is OK (if you are going to initialise the disks in vxvm later it doesn't matter as it will re-partition the disk anyway) - however you need that initial label / partition table for VxVM to see the disk correctly.

If you aren't seeing a partition table/label in prtvtoc, rebooting will not help / will be a waste of time, as VxVM needs the label to see the disks correctly before it will enable the paths.

(If you have more recent versions of VxVM eg: 5.1 or 5.0MP3 rolling patches, vxdisk will show the disks as nolabel rather than error - so it's a bit more descriptive - the point is you need to label the disks before you can use them)

Label the disks as suggested, check they are labelled with prtvtoc, then run vxdisk scandisks to rescan the disks.
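
For the five new XP LUNs in this thread, the check-and-rescan step could look roughly like this (device names are taken from the format output above; just a sketch, run as root):

# for d in 0 1 2 3 4; do prtvtoc /dev/rdsk/c4t50060E80164DA510d${d}s2; done
# vxdisk scandisks
# vxdisk list | grep XP10K

Each prtvtoc should now print a partition map containing at least slice 2, and after the rescan the XP10K-12K0_* disks should move from error to online invalid, ready to be initialised.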

If you choose to ignore everyone else's advice so far and want to go ahead and waste your time with a reboot, that's up to you, but it's not going to help if you don't ensure the disks are labelled properly as suggested in the first place.

giacomopane
Level 4
Partner Accredited Certified

hi 

@ g_lee:

I agree with you, but as far as I know there should be no partitions (apart from slice 2). I just want to be sure there are no problems on the SAN, as the environment I work in is very critical.

nice day to all