10-04-2010 10:56 AM
I have 2 new SAN disks attached to a host. One looks normal, and Veritas can see it and initialize it. The other shows an "error" status in vxdisk list.
vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c2t0d0s2 auto:none - - online invalid
c2t6d0s2 auto:none - - online invalid
fabric_50 auto:cdsdisk fabric_11 ocsrawdg online shared
. . .
fabric_78 auto:cdsdisk fabric_28 ocsrawdg online shared
fabric_79 auto - - error
fabric_80 auto:cdsdisk - - online
I left out a bunch of other disks between fabric_50 and fabric_78 as they are not relevant. Note that fabric_79 and fabric_80 are the 2 new disks. They both appear normal in the Solaris format utility, and the NetApp host tools show them both as good.
#format
Searching for disks...done
10-04-2010 11:08 AM
Interesting....
Can you paste the current prtvtoc output from the disk? Also paste these:
# vxdisk -e list |egrep 'fabric_79|fabric_80'
# prtvtoc /dev/vx/rdmp/fabric_79
# prtvtoc /dev/vx/rdmp/fabric_80
# vxdisk list fabric_79
# vxdisk list fabric_80
# vxdmpadm listenclosure all
# vxdmpadm listctlr all
Gaurav
10-04-2010 11:37 AM
Here are the requested outputs:
#vxdisk -e list |egrep 'fabric_79|fabric_80'
fabric_79 auto - - error c8t60A98000486E5A7153345A4471373748d0s2
fabric_80 auto - - online c8t60A98000486E5A71675A5A447168634Bd0s2
#prtvtoc /dev/vx/rdmp/fabric_79
* /dev/vx/rdmp/fabric_79 partition map
*
* Dimensions:
* 512 bytes/sector
* 189 sectors/track
* 255 tracks/cylinder
* 48195 sectors/cylinder
* 48822 cylinders
* 48820 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 01 0 2352879900 2352879899
6 4 00 0 2352879900 2352879899
#prtvtoc /dev/vx/rdmp/fabric_80
* /dev/vx/rdmp/fabric_80 partition map
*
* Dimensions:
* 512 bytes/sector
* 2048 sectors/track
* 16 tracks/cylinder
* 32768 sectors/cylinder
* 6528 cylinders
* 6526 accessible cylinders
*
* Flags:
* 1: unmountable
* 10: read-only
*
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
2 5 01 0 213843968 213843967
7 15 01 0 213843968 213843967
#vxdmpadm listenclosure all
ENCLR_NAME ENCLR_TYPE ENCLR_SNO STATUS ARRAY_TYPE
============================================================================
Disk Disk DISKS DISCONNECTED -
OTHER_DISKS OTHER_DISKS OTHER_DISKS CONNECTED OTHER_DISKS
#vxdmpadm listctlr all
CTLR-NAME ENCLR-TYPE STATE ENCLR-NAME
=====================================================
c8 OTHER_DISKS ENABLED OTHER_DISKS
c2 OTHER_DISKS ENABLED OTHER_DISKS
c0 OTHER_DISKS ENABLED OTHER_DISKS
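As an aside, the prtvtoc dimensions above are enough to size the two LUNs. A quick plain-shell sketch of the arithmetic (sector counts copied from the output above; nothing VxVM-specific assumed):

```shell
# Sector counts from the prtvtoc output above; both disks use 512-byte sectors.
f79_sectors=2352879900
f80_sectors=213843968

f79_bytes=$((f79_sectors * 512))
f80_bytes=$((f80_sectors * 512))

echo "fabric_79: $f79_bytes bytes ($((f79_bytes / 1073741824)) GiB)"
echo "fabric_80: $f80_bytes bytes ($((f80_bytes / 1073741824)) GiB)"

# Solaris requires an EFI label for disks over 1 TB; an SMI/VTOC label
# cannot describe fabric_79's full capacity.
one_tib=1099511627776
[ "$f79_bytes" -gt "$one_tib" ] && echo "fabric_79 is over 1 TiB"
```

fabric_79 works out to roughly 1.1 TB and fabric_80 to roughly 102 GB, which turns out to be the crux of this thread.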
10-04-2010 12:06 PM
Please paste the below as well:
# vxdisk list fabric_79
# vxdisk list fabric_80
Need to know if the paths to the disk are enabled or disabled...
Gaurav
10-04-2010 12:20 PM
Oops, sorry I missed those. Here they are:
#vxdisk list fabric_79
Device: fabric_79
devicetag: fabric_79
type: auto
flags: online error private autoconfig
errno: Device path not valid
Multipathing information:
numpaths: 1
c8t60A98000486E5A7153345A4471373748d0s2 state=enabled
#vxdisk list fabric_80
Device: fabric_80
devicetag: fabric_80
type: auto
hostid:
disk: name= id=1285941906.39.
group: name= id=
info: format=cdsdisk,privoffset=256,pubslice=2,privslice=2
flags: online ready private autoconfig autoimport
pubpaths: block=/dev/vx/dmp/fabric_80s2 char=/dev/vx/rdmp/fabric_80s2
version: 3.1
iosize: min=512 (bytes) max=2048 (blocks)
public: slice=2 offset=2304 len=213841664 disk_offset=0
private: slice=2 offset=256 len=2048 disk_offset=0
update: time=1285941906 seqno=0.1
ssb: actual_seqno=0.0
headers: 0 240
configs: count=1 len=1280
logs: count=1 len=192
Defined regions:
config priv 000048-000239[000192]: copy=01 offset=000000 disabled
config priv 000256-001343[001088]: copy=01 offset=000192 disabled
log priv 001344-001535[000192]: copy=01 offset=000000 disabled
lockrgn priv 001536-001679[000144]: part=00 offset=000000
Multipathing information:
numpaths: 1
c8t60A98000486E5A71675A5A447168634Bd0s2 state=enabled
10-04-2010 12:24 PM
Hmm, nothing looks wrong there... can you also let me know the SF version you are using?
# modinfo |grep -i vx
Also give a try on these steps:
-- Since fabric_79 is a fresh disk, I hope it doesn't have any data on it... so delete slice 6 using the format utility
-- Label the disk using the format utility after deleting slice 6
-- Rescan the OS device tree using "devfsadm -Cv" (note any errors)
-- Rescan the Veritas tree using "vxdctl enable"
-- Now see if the disk still shows in error state or not.
Gaurav
10-04-2010 12:36 PM
#modinfo |grep -i vx
32 1347890 29a00 259 1 vxdmp (VxVM 4.1_p3.1: DMP Driver)
33 7be00000 216778 260 1 vxio (VxVM 4.1_p3.1 I/O driver)
35 136ee28 13f8 261 1 vxspec (VxVM 4.1_p3.1 control/status dr)
I followed the procedure above, used format to remove slice6, relabeled, did the devfsadm (no errors), vxdctl enable, and still the same result. It has an error flag set.
Here's a bunch of output from the procedure:
partition> pr
Current partition table (original):
Total disk cylinders available: 48820 + 2 (reserved cylinders)
10-04-2010 12:38 PM
Just to be sure, I should mention that this is a large LUN: 1.1 TB.
Is that a problem?
10-04-2010 01:41 PM
Are fabric_79 & fabric_80 the same size? If only fabric_79 is greater than 1TB, try changing the type of the disk using the format utility.
If it's a 1.1TB LUN, you might face issues initializing it in Veritas, but at least it should be visible to VxVM...
If this is a standalone server, then try the below procedure (if this is a cluster server, don't use it):
# mv /etc/vx/disk.info /etc/vx/disk.info.old (move disk.info to old name)
# rm /dev/vx/rdmp/fabric_79
# rm /dev/vx/dmp/fabric_79
# rm /dev/rdsk/c8t60A98000486E5A7153345A4471373748d0*
# rm /dev/dsk/c8t60A98000486E5A7153345A4471373748d0*
# devfsadm -Cv
# vxconfigd -k (this is important step, it will restart vxconfigd daemon & will regenerate disk.info file)
If you are using Veritas cluster CFS/CVM, restarting vxconfigd may cause a failover, so if you use VCS, don't do the above steps... if it is a standalone server, you can try the above procedure.
Gaurav
10-05-2010 04:11 AM
Bill,
The disk is >1TB so it needs an EFI label; it currently has an SMI label, which is why you're having issues. Remove the disk from VxVM, relabel it as EFI, and rescan:
1. Ensure the event source daemon is not running so it won't recreate disk while you're trying to cleanup
# ps -ef |grep vxesd
If it is running, stop it
# vxddladm stop eventsource
# ps -ef |grep vxesd
2. Remove disks from vxdisk list and the underlying dmp devices
# vxdisk rm fabric_79
# ls -la /dev/vx/*dmp/fabric_79* ### ensure this just lists the fabric_79 devices/slices
# rm /dev/vx/*dmp/fabric_79*
# ls -la /dev/vx/*dmp/fabric_79* ### should not find anything
3. Relabel as EFI
# format -e c8t60A98000486E5A7153345A4471373748d0
format> label
[will give you option for SMI or EFI label. select EFI label, follow prompts]
format> q
# prtvtoc /dev/rdsk/c8t60A98000486E5A7153345A4471373748d0s0
### should now see slice 8 for EFI label
4. Now this disk has correct label, rebuild dmp dev tree, rescan into vxvm
# vxdctl initdmp
# vxdctl enable
# vxdisk list
Will add one additional caveat - you're on 4.1MP1 (ie: fairly old). I *think* this version does still have support for disks >1TB, but I can't find a specific mention when I skim the release notes - if you run into issues it might be best to log a support case.
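Incidentally, the slice-8 check in step 3 can be scripted. A small sketch (the heuristic is the one from the steps above: EFI labels expose a reserved slice 8, SMI labels never do):

```shell
# Sketch: classify a disk label from its prtvtoc output.
# EFI-labeled disks carry a reserved slice 8; SMI (VTOC) labels do not.
check_label() {
    # reads prtvtoc output on stdin, prints EFI or SMI
    if awk '$1 !~ /^\*/ && $1 == 8 { found = 1 } END { exit !found }'; then
        echo EFI
    else
        echo SMI
    fi
}

# usage: prtvtoc /dev/rdsk/c8t60A98000486E5A7153345A4471373748d0s0 | check_label
```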
10-05-2010 04:58 AM
In addition to G Lee's post - I think we need to get back to basics and check compatibility... If my memory serves me right, the EFI label is only supported on Solaris 10. I don't see the Solaris level mentioned anywhere...
To use the latest disk technology, always ensure that the O/S and SF versions support it.
10-05-2010 05:25 AM
Marianne,
You are right, I got a bit ahead of myself :\
EFI label is mainly for Solaris 10, however it also appears to be available from Solaris 9 4/03 from the following document:
http://docs.sun.com/app/docs/doc/817-6960/6mmah946u?a=view
Additional details/restrictions for EFI labels here:
http://docs.sun.com/app/docs/doc/817-5093/disksconcepts-17?l=en&a=view
10-05-2010 05:46 AM
Almost there! The procedure G Lee suggested seems to have worked, but there is a problem at the very end. Here is output from all the steps.
Nox#ps -ef |grep vxesd
root 345 1 0 Jul 31 ? 0:27 /sbin/vxesd
root 18947 17791 0 08:28:18 pts/1 0:00 grep vxesd
Nox#vxddladm stop eventsource
Nox#ps -ef |grep vxesd
root 19127 17791 0 08:28:36 pts/1 0:00 grep vxesd
Nox#vxdisk rm fabric_79
Nox#ls -la /dev/vx/*dmp/fabric_79*
brw------- 1 root root 259, 258 Oct 4 15:32 /dev/vx/dmp/fabric_79
brw------- 1 root root 259, 256 Oct 4 15:32 /dev/vx/dmp/fabric_79s0
brw------- 1 root root 259, 257 Oct 4 15:32 /dev/vx/dmp/fabric_79s1
brw------- 1 root root 259, 258 Oct 4 15:32 /dev/vx/dmp/fabric_79s2
brw------- 1 root root 259, 259 Oct 4 15:32 /dev/vx/dmp/fabric_79s3
brw------- 1 root root 259, 260 Oct 4 15:32 /dev/vx/dmp/fabric_79s4
brw------- 1 root root 259, 261 Oct 4 15:32 /dev/vx/dmp/fabric_79s5
brw------- 1 root root 259, 262 Oct 4 15:32 /dev/vx/dmp/fabric_79s6
brw------- 1 root root 259, 263 Oct 4 15:32 /dev/vx/dmp/fabric_79s7
crw------- 1 root root 259, 258 Oct 4 15:32 /dev/vx/rdmp/fabric_79
crw------- 1 root root 259, 256 Oct 4 15:32 /dev/vx/rdmp/fabric_79s0
crw------- 1 root root 259, 257 Oct 4 15:32 /dev/vx/rdmp/fabric_79s1
crw------- 1 root root 259, 258 Oct 4 15:32 /dev/vx/rdmp/fabric_79s2
crw------- 1 root root 259, 259 Oct 4 15:32 /dev/vx/rdmp/fabric_79s3
crw------- 1 root root 259, 260 Oct 4 15:32 /dev/vx/rdmp/fabric_79s4
crw------- 1 root root 259, 261 Oct 4 15:32 /dev/vx/rdmp/fabric_79s5
crw------- 1 root root 259, 262 Oct 4 15:32 /dev/vx/rdmp/fabric_79s6
crw------- 1 root root 259, 263 Oct 4 15:32 /dev/vx/rdmp/fabric_79s7
Nox#rm /dev/vx/*dmp/fabric_79*
Nox#ls -la /dev/vx/*dmp/fabric_79*
/dev/vx/*dmp/fabric_79*: No such file or directory
Nox#sanlun lun show
controller: lun-pathname device filename adapter protocol lun size lun state
filer2: /vol/acqbiz_vis_prod_nona_nox_cluster_ebsfsdg/lun1 /dev/rdsk/c8t60A98000486E5A71675A5A447168634Bd0s2 qlc1 FCP 102g (109521666048) GOOD
filer1: /vol/acqbiz_vis_prod_nona_nox_cluster_ocsrawdg/lun1 /dev/rdsk/c8t60A98000486E5A7153345A4471373748d0s2 qlc1 FCP 1.1t (1204738326528) GOOD
Nox#format -e c8t60A98000486E5A7153345A4471373748d0
selecting c8t60A98000486E5A7153345A4471373748d0
[disk formatted]
10-05-2010 05:47 AM
It's Solaris 10.
SunOS xxxxxxxxxx 5.10 Generic_142900-15 sun4u sparc SUNW,Sun-Fire
10-05-2010 06:08 AM
Bill,
Are you trying to initialise the disk?
Can you please try:
# vxdisksetup -i fabric_79
see this technote for some background (ie: vxdisksetup may be setting additional attributes): http://www.symantec.com/business/support/index?page=content&id=TECH59768
... however, this still may not work as you're using an old version - eg: the following technote references PowerPath, but also mentions that 4.1MP2 is required for full EFI support ("Note that the full support of the EFI disks requires VxVM 4.1 MP2 or VxVM 5.0 MP1 RP1. A reconfiguration reboot is necessary if either PowerPath or VxVM was upgraded.")
http://www.symantec.com/docs/TECH54064
Try the vxdisksetup; if this still does not work, please post the output and I will see if I can find anything about EFI on 4.1MP1 in the meantime.
10-05-2010 06:59 AM
The SAN storage manager took the big LUN back and gave me a new one that he said was "set up for EFI." This made no difference.
It seems I don't have vxdisksetup, so that probably means my version is too old to handle >1TB LUNs. So, I have asked the SAN storage manager to split the space up into 4 pieces (may as well stripe it while I'm at it).
Thanks for all the help. I'm learning a lot here.
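For the record, the split keeps each piece well under the limit. A quick sketch of the arithmetic (LUN size in bytes taken from the sanlun output earlier; the vxassist line is purely hypothetical, with made-up names):

```shell
# 1.1 TB LUN size in bytes (from the sanlun output above), split 4 ways.
total=1204738326528
per_lun=$((total / 4))
echo "per-LUN size: $per_lun bytes ($((per_lun / 1073741824)) GiB)"

# Each piece stays below the 1 TiB point where EFI labels become mandatory.
[ "$per_lun" -lt 1099511627776 ] && echo "under 1 TiB: OK"

# A 4-column stripe over the new disks might then look roughly like this
# (hypothetical volume name; vxassist stripe syntax as in the 4.1 docs):
#   vxassist -g ocsrawdg make bigvol 1100g layout=stripe ncol=4
```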
10-05-2010 07:02 AM
This is a Sun Cluster setup. So, I went with G Lee's suggestions.
10-05-2010 07:10 AM
Bill,
Sorry to hear you couldn't get the >1TB LUN to work. Then again, given your version is quite old, it might be better to use a smaller LUN for now (ie: until you can upgrade) to avoid running into issues that would be fixed in more recent updates.
For reference, VxVM 4.1 does have vxdisksetup, but it's not under /usr/sbin, so it doesn't come up by default unless you've set up your PATH variable (which many people don't do).
The location is /etc/vx/bin/vxdisksetup (or /usr/lib/vxvm/bin/vxdisksetup, as /etc/vx/bin -> /usr/lib/vxvm/bin).
Hope that helps,
Grace
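Grace's note can be applied like this quick sketch (the directory is the one named in her post):

```shell
# vxdisksetup is not on the default PATH on 4.1; either call it by
# its full path:
#   /etc/vx/bin/vxdisksetup -i fabric_79
# or add the VxVM bin directory to PATH for the session:
PATH="$PATH:/etc/vx/bin"
echo "$PATH" | grep -q '/etc/vx/bin' && echo "vx bin dir on PATH"
```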
10-05-2010 07:27 AM
Just one suggestion - have you tried initializing the disk as sliced?
# vxdisk init fabric_79 format=sliced
Somehow I remember EFI labels working with 4.1 (but I'm not very sure)...
Gaurav
10-05-2010 09:02 AM
Same error with format=sliced. Apparently I have too old a version of 4.1.