11-29-2011 06:26 AM
hi all,
11-29-2011 09:01 AM
Typically, the vxdisk list output you provided would indicate that the LUNs still need to be labeled in Solaris format, followed by an initialization (i.e. creating the public and private regions on the disk). The disks will change to "online invalid" after you label them, and then to "online" after they are initialized.
That being said, can you please provide the output from the following command:
#> cfgadm -al
You may need to configure the controllers from a Solaris perspective first:
#> cfgadm -c configure c4 (...c5, c6)
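Once the controllers are configured, the disks that still need attention can be picked out of the vxdisk list output with a quick filter. This is only a sketch: the sample output piped in below is illustrative (the device names and column layout are assumptions, not captured from this system):

```shell
# Filter a captured `vxdisk list` to show only disks in the "error"
# state (last column). The sample input is illustrative, not real.
cat <<'EOF' | awk 'NR > 1 && $NF == "error" { print $1 }'
DEVICE                  TYPE         DISK      GROUP     STATUS
c0t10d0s2               auto:sliced  rootdisk  rootdg    online
c4t50060E80164DA510d0s2 auto         -         -         error
c4t50060E80164DA510d1s2 auto         -         -         error
EOF
```

On a live system you would pipe the real `vxdisk list` into the same awk filter instead of the here-document.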
Thanks,
Joe D
11-29-2011 01:32 PM
hi
I have already run cfgadm to scan the HBAs,
and format sees the disks.
However, I will post the format output tomorrow (it is now 10:30 pm Italian time).
BR
jako
11-29-2011 08:19 PM
Hi,
As Joe said, an "error" state would typically indicate that the disk is not labelled; however, looking at dmpevents.log, I understand there is some other issue. An "error" state can also mean that the DMP paths are not stable or online, in which case DMP is unable to send I/O down the path, hence the error state.
Now, clearly the DDL layer is marking the controllers as DISABLED, which itself indicates an issue. I would suggest rechecking the connectivity (cables / switches) along the path to resolve this.
If the cables and switches are physically OK, also have a look at the FC switch configuration to ensure all is well, and check that the array is set up as recommended in the hardware compatibility technote.
I believe the issue is with connectivity.
G
11-30-2011 01:05 AM
hi,
Excuse me, but I'm perplexed.
On the same fibre there are other disks in the SAN, from a different storage array (IBM).
If this were a connectivity problem, I should have the same problems on those disks, but I do not. The storage administrators have informed me that the LUNs have just been presented to the host, and the SAN administrators have done their checks and everything is OK. In any case, here are the cfgadm and format outputs.
Greetings to all
Ap_Id Type Receptacle Occupant Condition
IO14 HPCI+ connected configured ok
IO14::pci0 io connected configured ok
IO14::pci1 io connected configured ok
IO14::pci2 io connected configured ok
IO14::pci3 io connected configured ok
IO15 HPCI+ connected configured ok
IO15::pci0 io connected configured ok
IO15::pci1 io connected configured ok
IO15::pci2 io connected configured ok
IO15::pci3 io connected configured ok
SB14 V3CPU connected configured ok
SB14::cpu0 cpu connected configured ok
SB14::cpu1 cpu connected configured ok
SB14::cpu2 cpu connected configured ok
SB14::cpu3 cpu connected configured ok
SB14::memory memory connected configured ok
SB15 V3CPU connected configured ok
SB15::cpu0 cpu connected configured ok
SB15::cpu1 cpu connected configured ok
SB15::cpu2 cpu connected configured ok
SB15::cpu3 cpu connected configured ok
SB15::memory memory connected configured ok
c0 scsi-bus connected configured unknown
c0::dsk/c0t10d0 disk connected configured unknown
c0::dsk/c0t11d0 disk connected configured unknown
c0::es/ses0 processor connected configured unknown
c1 scsi-bus connected configured unknown
c1::dsk/c1t10d0 disk connected configured unknown
c1::dsk/c1t11d0 disk connected configured unknown
c1::es/ses1 processor connected configured unknown
c2 scsi-bus connected unconfigured unknown
c3 scsi-bus connected unconfigured unknown
c4 fc-fabric connected configured unknown
c4::500507630713428f disk connected configured unknown
c4::500507630718428f disk connected configured unknown
c4::500507630a23033b disk connected configured unknown
c4::50060e80164da510 disk connected configured unknown
c5 fc connected unconfigured unknown
c6 fc-fabric connected configured unknown
c6::500507630733428f disk connected configured unknown
c6::500507630738428f disk connected configured unknown
c6::500507630a08433b disk connected configured unknown
c6::50060e80164da500 disk connected configured unknown
c7 fc connected unconfigured unknown
Searching for disks...done
c4t50060E80164DA510d0: configured with capacity of 50.03GB
c4t50060E80164DA510d1: configured with capacity of 50.03GB
c4t50060E80164DA510d2: configured with capacity of 50.03GB
c4t50060E80164DA510d3: configured with capacity of 50.03GB
c4t50060E80164DA510d4: configured with capacity of 50.03GB
c6t50060E80164DA500d0: configured with capacity of 50.03GB
c6t50060E80164DA500d1: configured with capacity of 50.03GB
c6t50060E80164DA500d2: configured with capacity of 50.03GB
c6t50060E80164DA500d3: configured with capacity of 50.03GB
c6t50060E80164DA500d4: configured with capacity of 50.03GB
AVAILABLE DISK SELECTIONS:
0. c0t10d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@1dc,700000/pci@1/pci@1/scsi@2/sd@a,0
1. c0t11d0 <SEAGATE-ST314655LSUN146G-0491 cyl 14087 alt 2 hd 24 sec 848>
/pci@1dc,700000/pci@1/pci@1/scsi@2/sd@b,0
2. c1t10d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
/pci@1fc,700000/pci@1/pci@1/scsi@2/sd@a,0
3. c1t11d0 <HITACHI-HUS15143BSUN146G-PA02 cyl 14087 alt 2 hd 24 sec 848>
/pci@1fc,700000/pci@1/pci@1/scsi@2/sd@b,0
4. c4t50060E80164DA510d0 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,0
5. c4t50060E80164DA510d1 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,1
6. c4t50060E80164DA510d2 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,2
7. c4t50060E80164DA510d3 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,3
8. c4t50060E80164DA510d4 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da510,4
9. c4t500507630A23033Bd0 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,0
10. c4t500507630A23033Bd1 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,1
11. c4t500507630A23033Bd2 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,2
12. c4t500507630A23033Bd3 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,3
13. c4t500507630A23033Bd4 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,4
14. c4t500507630A23033Bd5 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,5
15. c4t500507630A23033Bd6 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a23033b,6
16. c4t500507630718428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,0
17. c4t500507630713428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,0
18. c4t500507630718428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,1
19. c4t500507630713428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,1
20. c4t500507630713428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,2
21. c4t500507630718428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,2
22. c4t500507630713428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,3
23. c4t500507630718428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,3
24. c4t500507630718428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630718428f,4
25. c4t500507630713428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
/pci@1dc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630713428f,4
26. c6t50060E80164DA500d0 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,0
27. c6t50060E80164DA500d1 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,1
28. c6t50060E80164DA500d2 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,2
29. c6t50060E80164DA500d3 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,3
30. c6t50060E80164DA500d4 <HP-OPEN-V-SUN-7002 cyl 13662 alt 2 hd 15 sec 512>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w50060e80164da500,4
31. c6t500507630A08433Bd0 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,0
32. c6t500507630A08433Bd1 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,1
33. c6t500507630A08433Bd2 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,2
34. c6t500507630A08433Bd3 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,3
35. c6t500507630A08433Bd4 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,4
36. c6t500507630A08433Bd5 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,5
37. c6t500507630A08433Bd6 <IBM-2107900-5.63 cyl 3838 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630a08433b,6
38. c6t500507630733428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,0
39. c6t500507630738428Fd0 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,0
40. c6t500507630733428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,1
41. c6t500507630738428Fd1 <IBM-2107900-.104 cyl 10920 alt 2 hd 30 sec 64>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,1
42. c6t500507630733428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,2
43. c6t500507630738428Fd2 <IBM-2107900-.104 cyl 21843 alt 2 hd 30 sec 64>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,2
44. c6t500507630733428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,3
45. c6t500507630738428Fd3 <IBM-2107900-.104 cyl 6398 alt 2 hd 64 sec 256>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,3
46. c6t500507630738428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630738428f,4
47. c6t500507630733428Fd4 <IBM-2107900-.183 cyl 21843 alt 2 hd 30 sec 64>
/pci@1fc,600000/SUNW,qlc@1/fp@0,0/ssd@w500507630733428f,4
11-30-2011 01:09 AM
Hi,
Could you post the VTOC of one of these problem devices, i.e.:
prtvtoc /dev/rdsk/cXtXdXsX
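For comparison, on a correctly labelled disk prtvtoc prints a partition map along these lines. This sample is illustrative only, not captured from the system in question; the sector counts are derived from the OPEN-V geometry in the format output (13662 cylinders x 15 heads x 512 sectors = 104924160 sectors), and the key thing to look for is a slice 2 covering the whole disk:

```
* /dev/rdsk/c4t50060E80164DA510d4s2 partition map
*
* Dimensions:
*     512 bytes/sector
*    7680 sectors/cylinder
*   13662 accessible cylinders
*
*                          First     Sector    Last
* Partition  Tag  Flags    Sector     Count    Sector  Mount Directory
       2      5    01          0  104924160 104924159
```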
cheers
tony
11-30-2011 02:13 AM
hi
prtvtoc /dev/rdsk/c4t50060E80164DA510d4s2
prtvtoc: /dev/rdsk/c4t50060E80164DA510d4s2: Unable to read Disk geometry errno = 0x5
br
11-30-2011 02:21 AM
Try rescanning the disks. Since you are running 5.0, you can use VEA to do the rescan.
11-30-2011 02:33 AM
No, I don't use VEA, only the CLI.
I use vxdctl enable or vxdisk scandisks.
I have rescanned the disks, but the result does not change.
11-30-2011 03:39 AM
The prtvtoc error:
prtvtoc: /dev/rdsk/c4t50060E80164DA510d4s2: Unable to read Disk geometry errno = 0x5
indicates the disk has no label - this is why the disks are showing as error in vxdisk list.
As Joe D mentioned in the first reply, you need to label the disks in format
# format c#t#d#
> label
> y
> q
(repeat for remaining new disks)
Check that the disks have a label with prtvtoc (it should show a partition table),
then run vxdisk scandisks, and the disks should be listed as online invalid (again, as correctly stated by Joe).
vxdmpadm / vxdisk will show the paths as disabled until the disks have a valid label, so there's no point rerunning the VxVM scans until you label the disks properly at the OS level.
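The check above can be sketched as a small script. This is a sketch only, not tested against a live array: the device names are taken from the format output earlier in the thread, and the labelling / VxVM steps are left as comments since format should be run interactively:

```shell
#!/bin/sh
# Sketch: report which of the new LUNs still lack a VTOC label.
# prtvtoc exits non-zero when it cannot read a label, so any disk
# flagged here needs `format <disk>` -> label before VxVM will
# enable its paths.
for disk in c4t50060E80164DA510d0 c4t50060E80164DA510d1 \
            c4t50060E80164DA510d2 c4t50060E80164DA510d3 \
            c4t50060E80164DA510d4; do
    if ! prtvtoc "/dev/rdsk/${disk}s2" >/dev/null 2>&1; then
        echo "needs label: ${disk}"
    fi
done
# Once every disk is labelled:
#   vxdisk scandisks          # disks should move to "online invalid"
#   vxdisksetup -i <disk>     # initialise for VxVM -> "online"
```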
11-30-2011 07:52 AM
This is a good idea,
but I have talked to the storage administrator. When I looked at the Solaris partition table, I saw that there were slices, and I do not like that, because this is new storage.
The storage administrator has confirmed to me that there are no partitions on the disk.
I have requested a reboot of the server :)
br
jako
11-30-2011 08:28 AM
I agree that the paths will show as disabled if the label is not there; however, I was concerned to see the status of the paths as "failing", which is why I wanted to ensure the connectivity is perfect.
Hope reboot helps :)
G
11-30-2011 08:39 AM
Jako,
You need to see a partition table on the disk - at the very least it needs to have a slice 2. Even if there are other slices, as long as slice 2 is there and is the correct length, this is OK (if you are going to initialise the disks in VxVM later it doesn't matter, as that will re-partition the disk anyway) - however, you need that initial label / partition table for VxVM to see the disk correctly.
If you aren't seeing a partition table / label in prtvtoc, rebooting will not help and will be a waste of time, as VxVM needs the label to see the disks correctly before it will enable the paths.
(If you have a more recent version of VxVM, e.g. 5.1 or the 5.0MP3 rolling patches, vxdisk will show the disks as nolabel rather than error - a bit more descriptive - but the point is the same: you need to label the disks before you can use them.)
Label the disks as suggested, check that they are labelled with prtvtoc, then run vxdisk scandisks to rescan the disks.
If you choose to ignore everyone else's advice so far and want to go ahead and waste your time with a reboot, that's up to you, but it's not going to help if you don't ensure the disks are labelled properly as suggested in the first place.
11-30-2011 11:56 PM
hi
@g_lee:
I agree with you, but as I understand it there should be no partitions (apart from slice 2). I would not want there to be problems on the SAN, as the environment I work in is very critical.
nice day to all