VxVM vxdg ERROR V-5-1-585 Unable to create shared disk group
Hi,

To set up SFRAC, I need to configure I/O fencing on a 2-node VCS configuration. I am unable to create the shared disk group after configuring I/O fencing; without I/O fencing, I am able to create the shared disk group. The error is:

VxVM vxdg ERROR V-5-1-585 Disk group oradg: cannot create: Record not in disk group

The two SPARC v400 servers "node1" and "node2" are connected to a Sun StorageTek 6140 array, and the details are as follows.

On "node1":

node1 * / # vxdisk -e list
DEVICE       TYPE           DISK     GROUP         STATUS          OS_NATIVE_NAME   ATTR
disk_0       auto:none      -        -             online invalid  c1t3d0s2         -
disk_1       auto:none      -        -             online invalid  c1t2d0s2         -
disk_2       auto:cdsdisk   disk_2   vxfencoorddg  online          c2t6d0s2         -
disk_3       auto:cdsdisk   disk_3   vxfencoorddg  online          c2t7d0s2         -
disk_4       auto:cdsdisk   -        -             online          c2t0d0s2         -
disk_5       auto:cdsdisk   disk_5   vxfencoorddg  online          c2t5d0s2         -
disk_6       auto:cdsdisk   -        -             online          c2t3d0s2         -
disk_7       auto:none      -        -             online invalid  c2t1d0s2         -
disk_8       auto:cdsdisk   -        -             online          c2t2d0s2         -
disk_9       auto:none      -        -             online invalid  c2t4d0s2         -
disk_10      auto:none      -        -             online invalid  c1t1d0s2         -
disk_11      auto:ZFS       -        -             ZFS             c1t0d0s2         -

On "node2":

node2 * / # vxdisk -e list
DEVICE       TYPE           DISK     GROUP         STATUS          OS_NATIVE_NAME   ATTR
disk_0       auto:none      -        -             online invalid  c1t3d0s2         -
disk_1       auto:none      -        -             online invalid  c1t1d0s2         -
disk_2       auto:ZFS       -        -             ZFS             c1t0d0s2         -
disk_3       auto:cdsdisk   -        -             online          c3t6d0s2         -
disk_4       auto:cdsdisk   -        -             online          c3t7d0s2         -
disk_5       auto:cdsdisk   -        -             error           c3t0d0s2         -
disk_6       auto:cdsdisk   -        -             online          c3t5d0s2         -
disk_7       auto:cdsdisk   -        -             online          c3t3d0s2         -
disk_8       auto:none      -        -             online invalid  c3t1d0s2         -
disk_9       auto:cdsdisk   -        -             online          c3t2d0s2         -
disk_10      auto:none      -        -             online invalid  c3t4d0s2         -
disk_11      auto:none      -        -             online invalid  c1t2d0s2         -

On "node1":

node1 * / # vxdg -s init oradg oradg01=disk_4
VxVM vxdg ERROR V-5-1-585 Disk group oradg: cannot create: Record not in disk group
node1 * / # vxdg -s init oradg oradg01=disk_8
VxVM vxdg ERROR V-5-1-585 Disk group oradg: cannot create: Record not in disk group

I have installed the latest ASL libraries for the StorageTek 6140 array. Can anyone please help?

Best Regards,
Bruce
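
One line of investigation worth sketching here (a hedged checklist, not a confirmed fix from the thread): when the error appears only after fencing is enabled, confirm the shared disk group is being created on the CVM master, that fencing is actually up, and that the data LUNs support SCSI-3 persistent reservations. The vxfentsthdw path below is the usual one but may differ by release, and the utility overwrites data on the disks it tests, so only point it at an unused LUN.

# Shared disk groups must be created from the CVM master node.
vxdctl -c mode

# Confirm fencing is running and the coordinator disks are registered.
vxfenadm -d

# Check the candidate data disk is seen online (not "error") on both nodes.
vxdisk list disk_4

# Optionally verify SCSI-3 PR support on a spare data LUN.
# WARNING: vxfentsthdw destroys data on the disks it tests; it prompts
# for node and disk names (see vxfentsthdw(1M)).
/opt/VRTSvcs/vxfen/bin/vxfentsthdw

If the data LUNs fail the SCSI-3 test while the coordinator disks pass, that would explain why the same vxdg -s init works with fencing disabled.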
New disk "ERROR" in vxdisk list

I have 2 new SAN disks attached to a host. One looks normal and Veritas can see it and initialize it. The other shows an "ERROR" in vxdisk list:

vxdisk list
DEVICE       TYPE           DISK        GROUP      STATUS
c0t0d0s2     auto:none      -           -          online invalid
c2t0d0s2     auto:none      -           -          online invalid
c2t6d0s2     auto:none      -           -          online invalid
fabric_50    auto:cdsdisk   fabric_11   ocsrawdg   online shared
.
.
.
fabric_78    auto:cdsdisk   fabric_28   ocsrawdg   online shared
fabric_79    auto           -           -          error
fabric_80    auto:cdsdisk   -           -          online

I left out a bunch of other disks between fabric_50 and fabric_78 as they are not relevant. Note fabric_79 and fabric_80 are the 2 new disks. They both appear normal in Solaris format, and the NetApp host tools show them as both good.

#format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
0. c0t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
   /ssm@0,0/pci@18,600000/scsi@2/sd@0,0
1. c2t0d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
   /ssm@0,0/pci@1c,600000/scsi@2/sd@0,0
2. c2t6d0 <SUN146G cyl 14087 alt 2 hd 24 sec 848>
   /ssm@0,0/pci@1c,600000/scsi@2/sd@6,0
3. c8t60A98000486E5A71675A5A447168634Bd0 <NETAPP-LUN-7320 cyl 6526 alt 2 hd 16 sec 2048>
   /scsi_vhci/ssd@g60a98000486e5a71675a5a447168634b
4. c8t60A98000486E5A7153345A4471373748d0 <NETAPP-LUN-7320 cyl 48820 alt 2 hd 255 sec 189>
   /scsi_vhci/ssd@g60a98000486e5a7153345a4471373748

#sanlun lun show
controller: lun-pathname  device filename  adapter  protocol  lun size  lun state
filer2: /vol/acqbiz_vis_prod_nona_nox_cluster_ebsfsdg/lun1  /dev/rdsk/c8t60A98000486E5A71675A5A447168634Bd0s2  qlc1  FCP  102g (109521666048)   GOOD
filer1: /vol/acqbiz_vis_prod_nona_nox_cluster_ocsrawdg/lun1 /dev/rdsk/c8t60A98000486E5A7153345A4471373748d0s2  qlc1  FCP  1.1t (1204738326528)  GOOD

I've left out a lot of excess output, but the interesting stuff should be here. Finally, issuing a vxdisk init gives an error:

#vxdisk init fabric_79
VxVM vxdisk ERROR V-5-1-5433 Device fabric_79: init failed: Device path not valid

I even tried dd'ing /dev/zero onto the first 4 blocks of the device and relabeling the disk with format. Still no joy. Does anyone have any idea what the problem might be? I'm going to have a hard time convincing the SAN folks it's a problem, since it looks fine with format and the NetApp tool, but there must be something I've missed. One thing I probably should mention is that the LUN is 1.1T in size.
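
A hedged sketch of one common approach when a device sits in "error" and init reports "Device path not valid": treat it as a stale DMP record, remove it, rescan, and retry the initialization. Device name fabric_79 is taken from the listing above; the vxdisksetup path assumes the usual Solaris location.

# Remove the stale VxVM/DMP record for the device (metadata only,
# this does not touch data on the LUN).
vxdisk rm fabric_79

# Ask VxVM/DMP to rediscover devices and rebuild its view.
vxdisk scandisks
vxdctl enable

# Check which OS paths DMP now maps to the device.
vxdmpadm getsubpaths dmpnodename=fabric_79

# If the device now shows "online invalid", try initializing it again.
/etc/vx/bin/vxdisksetup -i fabric_79

If the subpaths output still points at a device path the OS no longer presents, that mismatch, rather than the LUN itself, is what the SAN/OS teams would need to look at.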
disk in error state in vxdisk list

A disk replacement was performed for c1t2d0s2, and after that we are not able to add this disk back into the disk group. I can see a duplicate entry for c1t2d0s2 in vxdisk list:

abcmas3{root} #: vxdisk list
DEVICE       TYPE          DISK        GROUP    STATUS
c1t1d0s2     auto:none     -           -        online invalid
c1t2d0s2     auto          -           -        error
c1t2d0s2     auto          -           -        error
c1t3d0s2     auto:none     -           -        online invalid
c1t4d0s2     auto:none     -           -        online invalid
c1t5d0s2     auto:none     -           -        online invalid
c1t8d0s2     auto:none     -           -        online invalid
c1t9d0s2     auto:sliced   disk2mirr   datadg   online
c1t10d0s2    auto:sliced   disk3mirr   datadg   online
c1t11d0s2    auto:sliced   disk4mirr   datadg   online
c1t13d0s2    auto:sliced   disk5mirr   datadg   online
c2t0d0s2     auto:none     -           -        online invalid
c2t12d0s2    auto:sliced   disk6mirr   datadg   online

I tried to label c1t2d0 and then ran vxdctl enable, but it did not come back as "online invalid". What should be the way forward here?
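
A hedged sketch of the usual clean-up after a physical disk replacement leaves duplicate "error" entries behind. The disk-media name disk1mirr below is hypothetical (check vxprint -g datadg for the real removed/failed record); device and group names are from the listing above.

# Drop both stale records for the replaced device, then rescan.
vxdisk rm c1t2d0s2
vxdisk rm c1t2d0s2        # repeat if the duplicate entry is still listed
devfsadm -Cv              # rebuild the Solaris device links
vxdctl enable

# The device should now show "online invalid"; put a VxVM label on it
# (sliced format to match the rest of datadg).
/etc/vx/bin/vxdisksetup -i c1t2d0 format=sliced

# If the old disk-media record (e.g. disk1mirr -- hypothetical, check
# vxprint -g datadg) still exists as removed/failed, re-associate it
# and resynchronise; otherwise add the disk to datadg as a new disk.
vxdg -g datadg -k adddisk disk1mirr=c1t2d0s2
vxrecover -g datadg -sb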
Unable to add a new disk to an existing disk group after upgrade to 5.1SP1RP4

Hi All,

We recently upgraded our cluster nodes to 5.1SP1RP4. After the upgrade I am not able to add a new disk to the DG and get the error below. I noticed all the disks now have the udid_mismatch flag, which was not the case before the upgrade.

root # vxdg -g IPREDODG adddisk emc8_0d71=emc8_0d71
VxVM vxdg ERROR V-5-1-0 Disk Group IPREDODG has only cloned disks and trying to add standard disk to diskgroup. Mix of standard and cloned disks in a diskgroup is not allowed. Please follow the vxdg (1M) man page.

I found a doc which talks about a similar issue in 5.1SP1RP3: http://www.symantec.com/business/support/index?page=content&id=TECH204069

My question is that in my setup the DGs are clustered and we don't want to deport them. Can we update the UDID online without an outage? Or can anyone suggest any other solution which doesn't require downtime?
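
A hedged sketch of the usual inspection steps (not a confirmed no-outage procedure for 5.1SP1RP4). The name emc8_0d70 is a hypothetical example of an existing group member; emc8_0d71 is the disk from the command above.

# Inspect the flags on an existing group disk and on the new disk;
# udid_mismatch / clone_disk is what triggers the "cloned disks" error.
vxdisk list emc8_0d70 | grep -i flags    # emc8_0d70: hypothetical existing DG disk
vxdisk list emc8_0d71 | grep -i flags

# The new disk is not in any imported group, so if it is the one out
# of step its stored UDID can be refreshed online:
vxdisk updateudid emc8_0d71

# Clearing the clone flag on disks already in the imported group is
# the part that may or may not be allowed without a deport -- check
# TECH204069 / vxdg(1M) for your exact release before running:
# vxdisk set <disk_access_name> clone=off

# Then retry the add:
vxdg -g IPREDODG adddisk emc8_0d71=emc8_0d71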
Veritas Volume Manager problem

We have 2 Sun 3510 arrays, each containing three disks. VxVM, on Solaris 10 for SPARC, has been set up to mirror each disk, i.e. Array 1 Disk 1 is mirrored to Array 2 Disk 1, and so on. We had a problem with the power to the controller of array 2, and the disks in that array ended up in a disabled and removed state. The power has now been restored and the disks are visible to the operating system again. However, whatever we try, we cannot get VxVM to accept the disks. It can see them but will not re-add them. We have followed the replace-disk procedure, which fails with the message 'no device available to replace'. Any suggestions would be most gratefully received.
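
A hedged sketch of the usual recovery when an array returns after a power loss and its disk-media records show removed/failed: rescan, let VxVM match the returning devices back to their records with vxreattach, then resynchronise the mirrors. The disk group name is not given in the post, so <dg>, <dm_name> and <da_name> below are placeholders.

# Rediscover the devices now that the array is back.
vxdctl enable
vxdisk -o alldgs list        # the array-2 disks should show "online" again

# Try to reattach returning devices to their old disk-media records;
# -r also starts recovery of the affected volumes.
/etc/vx/bin/vxreattach -r

# If vxreattach cannot match a device, re-associate it manually
# (placeholders -- take the names from "vxprint -g <dg> -d" and "vxdisk list"):
# vxdg -g <dg> -k adddisk <dm_name>=<da_name>

# Resynchronise the mirrors in the background.
vxrecover -g <dg> -sb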
libadm.so.1: version `SUNW_1.2' not found

I'm new to Veritas Volume Manager. For practice, I have installed VxVM on a 64-bit Solaris OS in VirtualBox. I created a disk group and a volume, and when I try to create a VxFS file system on the volume I get the error below. Can you please help me fix the issue? If you need any more information, please let me know.

bash-3.00# mkfs -F vxfs /dev/vx/rdsk/dg1/vol1
ld.so.1: mkfs: fatal: libadm.so.1: version `SUNW_1.2' not found (required by file /usr/lib/fs/vxfs/mkfs)
ld.so.1: mkfs: fatal: libadm.so.1: open failed: No such file or directory
Killed

bash-3.00# cat /etc/release
Solaris 10 10/08 s10x_u6wos_07b X86
Copyright 2008 Sun Microsystems, Inc. All Rights Reserved.
Use is subject to license terms.
Assembled 27 October 2008

bash-3.00# pkginfo -l VRTSvxvm
   PKGINST:  VRTSvxvm
      NAME:  Binaries for VERITAS Volume Manager by Symantec
  CATEGORY:  system
      ARCH:  i386
   VERSION:  6.0.100.000,REV=08.01.2012.11.29
   BASEDIR:  /
    VENDOR:  Symantec Corporation
      DESC:  Virtual Disk Subsystem
    PSTAMP:  6.0.100.000-GA-2012-08-01
  INSTDATE:  Apr 18 2013 21:09
   HOTLINE:  http://www.symantec.com/business/support/assistance_care.jsp
    STATUS:  completely installed
     FILES:  841 installed pathnames
             41 shared pathnames
             116 directories
             346 executables
             425545 blocks used (approx)

bash-3.00# !find
find / -name libadm.so.1 -print 2>/dev/null
/usr/lib/amd64/libadm.so.1
/usr/lib/libadm.so.1
/lib/amd64/libadm.so.1
/lib/libadm.so.1

bash-3.00# vxdisk list
DEVICE       TYPE           DISK   GROUP   STATUS
c0d0s2       auto:none      -      -       online invalid
c0d1s2       auto:none      -      -       online invalid
disk_0       auto:cdsdisk   d1     dg1     online
disk_1       auto:cdsdisk   d2     dg1     online
disk_2       auto:none      -      -       online invalid
disk_3       auto:none      -      -       online invalid

bash-3.00# vxprint -vht
Disk group: dg1

V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
EX NAME         ASSOC        VC       PERMS    MODE     STATE

v  vol1         -            ENABLED  ACTIVE   102400   SELECT    -        fsgen
pl vol1-01      vol1         ENABLED  ACTIVE   102400   CONCAT    -        RW
sd d1-01        vol1-01      d1       0        102400   0         disk_0   ENA

bash-3.00#
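
A hedged reading of the error is that the SF 6.0 VxFS binaries are linked against a libadm version definition (SUNW_1.2) that this Solaris 10 10/08 (Update 6) image does not provide, i.e. the OS update is likely older than SF 6.0 supports. A minimal diagnostic sketch, assuming standard Solaris tools:

# What Solaris update and kernel architecture is this?
cat /etc/release
isainfo -kv

# Which library versions does the VxFS mkfs binary require?
ldd /usr/lib/fs/vxfs/mkfs

# Which SUNW_* version definitions does the installed libadm provide?
pvs -d /usr/lib/libadm.so.1

# If SUNW_1.2 is absent from the pvs output, the likely fix is to patch
# or upgrade the OS to a Solaris 10 update listed as supported for
# SF 6.0, rather than to change anything in VxVM/VxFS itself.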
CVM won't start on remote node with an FSS diskgroup

I am testing FSS (Flexible Shared Storage) on SF 6.1 on RH 5.5 in a VirtualBox VM, and when I try to start CVM on the remote node I get:

VCS ERROR V-16-20006-1005 (r55v61b) CVMCluster:cvm_clus:monitor:node - state: out of cluster reason: Disk for disk group not found: retry to add a node failed

Here is my setup. Node A is the master, with a local disk (sdd) and a remote disk (B_sdd):

[root@r55v61a ~]# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: r55v61a
[root@r55v61a ~]# vxdisk list
DEVICE   TYPE           DISK   GROUP   STATUS
B_sdd    auto:cdsdisk   -      -       online remote
sda      auto:none      -      -       online invalid
sdb      auto:none      -      -       online invalid
sdc      auto:cdsdisk   -      -       online
sdd      auto:cdsdisk   -      -       online exported

Node B is the slave, and sees a local disk (sdd) and a remote disk (A_sdd):

[root@r55v61b ~]# vxdisk list
DEVICE   TYPE           DISK   GROUP   STATUS
A_sdd    auto:cdsdisk   -      -       online remote
sda      auto:none      -      -       online invalid
sdb      auto:none      -      -       online invalid
sdc      auto:cdsdisk   -      -       online
sdd      auto:cdsdisk   -      -       online exported

On node A, I add an FSS diskgroup, so on node A the disk is local:

[root@r55v61a ~]# vxdg -s -o fss=on init fss-dg fd1_La=sdd
[root@r55v61a ~]# vxdisk list
DEVICE   TYPE           DISK     GROUP    STATUS
B_sdd    auto:cdsdisk   -        -        online remote
sda      auto:none      -        -        online invalid
sdb      auto:none      -        -        online invalid
sdc      auto:cdsdisk   -        -        online
sdd      auto:cdsdisk   fd1_La   fss-dg   online exported shared

And on node B the disk in fss-dg is remote:

[root@r55v61b ~]# vxdisk list
DEVICE   TYPE           DISK     GROUP    STATUS
A_sdd    auto:cdsdisk   fd1_La   fss-dg   online shared remote
sda      auto:none      -        -        online invalid
sdb      auto:none      -        -        online invalid
sdc      auto:cdsdisk   -        -        online
sdd      auto:cdsdisk   -        -        online exported

I then stop and start VCS on node B, which is when I see the issue:

2014/05/13 12:05:23 VCS INFO V-16-2-13716 (r55v61b) Resource(cvm_clus): Output of the completed operation (online)
==============================================
ERROR:
==============================================
2014/05/13 12:05:24 VCS ERROR V-16-20006-1005 (r55v61b) CVMCluster:cvm_clus:monitor:node - state: out of cluster reason: Disk for disk group not found: retry to add a node failed

If I destroy the fss-dg diskgroup on node A, then CVM will start on node B, so the issue is the FSS diskgroup, where it seems CVM cannot find the remote disk in the diskgroup.

I can also get round the issue by stopping VCS on node A, after which CVM will start on node B:

[root@r55v61b ~]# hagrp -online cvm -sys r55v61b
[root@r55v61b ~]# vxdisk -o alldgs list
DEVICE   TYPE           DISK   GROUP   STATUS
sda      auto:none      -      -       online invalid
sdb      auto:none      -      -       online invalid
sdc      auto:cdsdisk   -      -       online
sdd      auto:cdsdisk   -      -       online exported

If I then start VCS on node A, node B is able to see the FSS diskgroup:

[root@r55v61b ~]# vxdisk list
DEVICE   TYPE           DISK     GROUP    STATUS
A_sdd    auto:cdsdisk   fd1_La   fss-dg   online shared remote
sda      auto:none      -        -        online invalid
sdb      auto:none      -        -        online invalid
sdc      auto:cdsdisk   -        -        online
sdd      auto:cdsdisk   -        -        online exported

I can stop and start VCS on each node when the disks are just exported, and VCS is able to see the disk from the other node, but when I create the FSS diskgroup, CVM won't start on the system that has the remote disk. Does anybody have any ideas as to why?

Mike
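
A hedged diagnostic sketch for the failing join, run on the node that will not start CVM (r55v61b in the listings above). The assumption is that, since FSS ships remote-disk I/O over the cluster interconnect, the join depends on GAB/LLT and the CVM protocol state at the moment the node rejoins, so that is what is worth capturing.

# CVM / cluster protocol state as VxVM sees it.
vxclustadm -v nodestate
vxclustadm nidmap

# GAB membership and LLT link state at the time of the failed join.
gabconfig -a
lltstat -nvv | head -20

# What VCS and the kernel logged when the join was retried.
tail -50 /var/VRTSvcs/log/engine_A.log
dmesg | grep -i vxvm | tail -20

Comparing this output between the failing case (node A up with fss-dg imported) and the working case (node A down) should show whether node B is stuck before the CVM join or during the remote-disk discovery.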
VxVM VVR vradmin ERROR V-5-52-82 Cannot communicate with vradmind server

Hi, I have one simple question. My VCS setup is the no-disk-array (VVR replication) case. I changed the two heartbeat IP addresses to new ones; VCS works properly afterwards, but I can no longer check the replication status. It shows:

#vradmin -g ligdg repstatus ligrvg
VxVM VVR vradmin ERROR V-5-52-82 Cannot communicate with vradmind server

When I change the two heartbeat IP addresses back, it works again, so there must be a relation between the heartbeat IPs and the vradmin program. My question is how to change the two heartbeat IPs and at the same time keep vradmin working.

Thanks.
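
A hedged sketch of the usual checks: vradmind talks to its peer over the addresses/host names configured for the RVG and RLINKs, so after a network change it is worth confirming those names still resolve and restarting vradmind on both hosts. The init-script path below is the common Solaris/Linux one and may differ by platform and SF release; <remote_host> and the changeip line are placeholders to adapt.

# Do the RLINKs still point at reachable addresses?
vxprint -g ligdg -Pl          # check the local_host / remote_host attributes
ping <remote_host>            # placeholder -- use the name shown above

# Restart vradmind on BOTH hosts after the network change
# (path/method varies by platform and release).
/etc/init.d/vras-vradmind.sh stop
/etc/init.d/vras-vradmind.sh start

# If the replication links were built on the addresses that changed,
# the documented way to move them is vradmin changeip -- check
# vradmin(1M) for the exact syntax on your release, roughly:
# vradmin -g ligdg changeip ligrvg newpri=<new_ip> newsec=<new_ip>

# Then retry:
vradmin -g ligdg repstatus ligrvg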
SF 6.0 does not recognize SF 4.1 Version 120 simple disk?

I am currently preparing to replace two old SLES 9 systems with new SLES 11 machines. The new ones have SF 6.0 Basic and are able to see and read (dd) the disks currently in production on the SLES 9 systems (SAN FC ones). The disks are Version 120, originally created and in use by SF 4.1:

# vxdisk list isar1_sas_2
Device:    isar1_sas_2
devicetag: isar1_sas_2
type:      simple
hostid:    example4
disk:      name=isar1_sas_2 id=1341261625.7.riser5
group:     name=varemadg id=1339445883.17.riser5
flags:     online ready private foreign autoimport imported
pubpaths:  block=/dev/disk/by-name/isar1_sas_2 char=/dev/disk/by-name/isar1_sas_2
version:   2.1
iosize:    min=512 (bytes) max=1024 (blocks)
public:    slice=0 offset=2049 len=33552383 disk_offset=0
private:   slice=0 offset=1 len=2048 disk_offset=0
update:    time=1372290815 seqno=0.83
ssb:       actual_seqno=0.0
headers:   0 248
configs:   count=1 len=1481
logs:      count=1 len=224
Defined regions:
config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
config   priv 000249-001498[001250]: copy=01 offset=000231 enabled
log      priv 001499-001722[000224]: copy=01 offset=000000 enabled

# vxdg list varemadg | grep version
version: 120

But when I look on the new systems, SF 6.0 does not recognize the disk group at all:

# vxdisk list isar1_sas_2
Device:    isar1_sas_2
devicetag: isar1_sas_2
type:      auto
info:      format=none
flags:     online ready private autoconfig invalid
pubpaths:  block=/dev/vx/dmp/isar1_sas_2 char=/dev/vx/rdmp/isar1_sas_2
guid:      -
udid:      Promise%5FVTrak%20E610f%5F49534520000000000000%5F22C90001557951EC
site:      -

When I do a hexdump of the first few sectors, it looks pretty much the same on both machines. According to articles like TECH174882, SF 6.0 should be more than happy to recognize any disk layout between Version 20 and 170. Any hints on what I might be doing wrong?
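
One hedged observation and a small diagnostic sketch: on the old host the disk is type "simple" with the "foreign" flag and a /dev/disk/by-name pubpath, which usually means it was defined by hand on that host rather than auto-discovered, and such definitions live on the host, not on the disk. The commands below are a sketch, not a fix; the darecs file and vxprivutil path are the usual locations but may vary by release, and vxprivutil may need the exact private-region device to work on a simple-format disk.

# On the old SLES 9 host: were these disks defined manually as
# simple/foreign devices?  Such records are host-local.
cat /etc/vx/darecs             # persistent "vxdisk define" records, if any

# On the new SLES 11 host: dump what VxVM can read from the device's
# private region (DMP path as shown in the listing above).
/etc/vx/diag.d/vxprivutil dumpconfig /dev/vx/rdmp/isar1_sas_2

# If dumpconfig shows the varemadg configuration, the metadata is
# intact and the problem is only discovery of the "simple" layout on
# the new host; if it shows nothing, the auto-discovery scan is not
# finding the private region where the old host-local definition said
# it was.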
Disk Group Import Failure

SAN disks which were originally in a Volume Manager 4.1 disk group disappeared during a power failure. We subsequently had to rebuild the server, and we installed a newer version of Volume Manager in the process (without the SAN disks present on the server at the time). After installing the latest version of Volume Manager, we tried to reattach the disks by importing them with the newer Volume Manager. We can see all of the devices (disks), and a vxdisk list of the individual disks shows that their headers still contain the previously created disk group information, but Volume Manager fails to import the group with an error saying that there are 'No valid disks found containing disk group'. Could a Volume Manager version mismatch between the latest software version currently installed on the server and the old version used to create the disk headers cause a disk group import to fail? Has anyone seen/experienced/heard of this before?
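
A hedged sketch of the usual checks: newer VxVM releases generally import older disk group versions, so a pure version mismatch is not the first suspect; more common causes of "No valid disks found containing disk group" after a rebuild are importing by a stale name, host-ID locks left by the old server, or clone/UDID flags. <dgid> and <dgname> below are placeholders taken from your own output.

# Which disk groups do the disk headers actually advertise?
vxdisk -o alldgs list
vxdisk -s list                # shows dgname and dgid per disk

# Import by disk group ID rather than by name, clearing the old
# host's import locks (-C).  Take <dgid> from the output above.
vxdg -C import <dgid>

# If the disks were presented to the new host as array clones/copies,
# they may carry the clone flag and need an import along the lines of:
# vxdg -o useclonedev=on -o updateid -C import <dgname>

If the headers show the expected dgname/dgid but the -C import by dgid still fails, that would be the point to involve support, since the metadata itself would then be in question.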