Failed to create a Veritas DG
Hi, I'm creating a DG with 6 LUNs of 1 TB each. The disks are as below:

emc_clariion0_264 auto - - nolabel
emc_clariion0_265 auto - - nolabel
emc_clariion0_266 auto - - nolabel
emc_clariion0_267 auto - - nolabel
emc_clariion0_268 auto - - nolabel
emc_clariion0_269 auto - - nolabel

When I try to configure a disk, this error message appears:

root@ # vxdisksetup -i emc_clariion0_264 format=cdsdisk c5t500601663EE00243d228s2
VxVM vxdisksetup ERROR V-5-2-5241 Cannot label as disk geometry cannot be obtained.
root@ #
root@ # vxdisksetup -i emc_clariion0_265 format=cdsdisk c5t5006016F3EE00243d241s2
VxVM vxdisksetup ERROR V-5-2-5241 Cannot label as disk geometry cannot be obtained.
root@ #
...

root@ # vxdisk list emc_clariion0_264
Device:     emc_clariion0_264
devicetag:  emc_clariion0_264
type:       auto
flags:      nolabel private autoconfig
pubpaths:   block=/dev/vx/dmp/emc_clariion0_264 char=/dev/vx/rdmp/emc_clariion0_264
guid:       -
udid:       DGC%5FVRAID%5FCKM00123600618%5F6006016025303100932DF87904C4E411
site:       -
errno:      Device path not valid
Multipathing information:
numpaths:   8
c7t5006016E3EE00243d228s2 state=enabled type=secondary
c7t500601673EE00243d228s2 state=enabled type=primary
c5t5006016F3EE00243d228s2 state=enabled type=secondary
c5t500601663EE00243d228s2 state=enabled type=primary
c6t500601663EE00243d228s2 state=enabled type=primary
c6t5006016F3EE00243d228s2 state=enabled type=secondary
c8t5006016E3EE00243d228s2 state=enabled type=secondary
c8t500601673EE00243d228s2 state=enabled type=primary
root@BMG01 #

Can someone help me solve the problem? Thanks.
Marconi
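The V-5-2-5241 error often indicates that Solaris cannot read or write a label on the LUN yet. A common first step, offered only as a hedged sketch (the device names are taken from the post above; verify everything against your own configuration and array documentation), is to label one path with format(1M), rescan, and retry:

# Write an OS label on one of the underlying paths (interactive: select the
# disk, then run the "label" command; -e allows an EFI label on large LUNs)
format -e

# Ask VxVM/DMP to re-read the device list after labelling
vxdctl enable

# Retry the VxVM initialization of the DMP device
vxdisksetup -i emc_clariion0_264 format=cdsdisk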
Disk names do not match within two-node cluster

Hi, I have a two-node cluster, and today I ran a Virtual Fire Drill to check that everything is OK. I noticed I'm getting errors such as:

* udid.vfd: Failed to get disk information for disk san_vc0_13.
* udid.vfd: UDIDS for device <san_vc_8 do not match on cluster nodes.
[...]

I've connected to both servers; san_vc0_13 doesn't exist on the second node, and for the other errors the disk names do not match. Reviewing the case, I found that the first node has two extra temporary disks that have nothing to do with the cluster (that part is fine). The problem is that these two disks took the numbers 2 and 3 (san_vc0_2/3), so the disk names are now mismatched between the nodes and it looks as if a disk is missing on the second node. The cluster disk groups are seen by both nodes and are OK (even within the cluster); the only problem is that the names don't match and the fire drill keeps warning about it. Is there any way to change the disk names and make them permanent (by UDID or so)? Thanks,
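If the goal is just to make the device names line up across the nodes, one thing sometimes worth checking (a hedged sketch, not a guaranteed fix; the device names come from the post, and the exact options vary by Storage Foundation release) is the UDIDs and the device-naming scheme on each node:

# Compare the UDID of a suspect device on both nodes; matching UDIDs mean it
# is the same LUN even if the san_vc0_N number differs
vxdisk list san_vc0_2 | grep udid

# Check the naming scheme and, if desired, make enclosure-based names
# persistent so they survive reboots and rescans
vxddladm get namingscheme
vxddladm set namingscheme=ebn persistence=yes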
How to map drive name from vxdisk list to actual drive name

Hi, This question is for SF on Solaris, I'd like to know how to map the drive name as given by vxdisk list to the actual drive in cxtxdx format? For instance, my output of vxdisk list is:

bash-3.2# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0s2     auto:ZFS        -            -            ZFS
disk_14      auto:cdsdisk    -            -            online
disk_15      auto:cdsdisk    -            -            online
disk_16      auto:cdsdisk    -            -            online
disk_17      auto:cdsdisk    -            -            online
disk_18      auto:cdsdisk    -            -            online

And my output of format is:

bash-3.2# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c0t0d0 <DEFAULT cyl 19146 alt 2 hd 16 sec 255>
          /pci@1e,600000/ide@d/dad@0,0
       1. c1t52d0 <SEAGATE-ST336605FSUN36G-0638 cyl 24620 alt 2 hd 27 sec 107>
          /pci@1f,700000/SUNW,qlc@3/fp@0,0/ssd@w21000004cf99e36a,0
       2. c1t53d0 <SEAGATE-ST39102FCSUN9.0G-0D29 cyl 4924 alt 2 hd 27 sec 133>
          /pci@1f,700000/SUNW,qlc@3/fp@0,0/ssd@w2100002037224d78,0
       3. c1t54d0 <SEAGATE-ST39102FCSUN9.0G-1129 cyl 4924 alt 2 hd 27 sec 133>
          /pci@1f,700000/SUNW,qlc@3/fp@0,0/ssd@w21000020371b9af1,0
       4. c1t55d0 <SEAGATE-ST39102FCSUN9.0G-0D29 cyl 4924 alt 2 hd 27 sec 133>
          /pci@1f,700000/SUNW,qlc@3/fp@0,0/ssd@w21000020372251db,0
       5. c1t57d0 <SEAGATE-ST336605FSUN36G-0538 cyl 24620 alt 2 hd 27 sec 107>
          /pci@1f,700000/SUNW,qlc@3/fp@0,0/ssd@w21000004cf62cb72,0

How do I establish a mapping between drive names? Also, what exactly does the command "vxdctl enable" do? Thanks in advance.
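A few read-only commands show this mapping directly; a sketch based on standard VxVM/DMP tooling (the disk name disk_14 is taken from the listing above):

# Extended listing: the last column shows the OS native device (c#t#d#)
# behind each VxVM device name
vxdisk -e list

# Full detail for one device, including its paths under "Multipathing information"
vxdisk list disk_14

# All subpaths that DMP has bound to this device node
vxdmpadm getsubpaths dmpnodename=disk_14

As for vxdctl enable: it asks vxconfigd to rescan the disks and rebuild its view of the device list, which is why it is typically run after new LUNs are presented or paths change.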
VCS Switch Over problem

Hi, I have some problems when testing a cluster switchover (two-node cluster, VCS 6.2, Solaris 10). We test with init 6. The active node always hangs with:

2014/02/09 08:07:36 VCS ERROR V-16-10001-11511 (node1) Volume:vol_v1:offline:Cannot find Volume v1, either the Volume is a Volume Set or Veritas Volume Manager configuration daemon (vxconfigd) is not in enabled mode
2014/02/09 08:07:36 VCS INFO V-16-6-15002 (node1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resstatechange node1 mnt_1 ONLINE OFFLINE successfully
2014/02/09 08:07:37 VCS INFO V-16-2-13716 (node1) Resource(vol_v1): Output of the completed operation (offline)
==============================================
VxVM vxprint ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
==============================================
2014/02/09 08:07:37 VCS ERROR V-16-2-13064 (node1) Agent is calling clean for resource(vol_v1) because the resource is up even after offline completed.
2014/02/09 08:07:37 VCS ERROR V-16-10001-11511 (node1) Volume:vol_v1:clean:Cannot find Volume v1, either the Volume is a Volume Set or Veritas Volume Manager configuration daemon (vxconfigd) is not in enabled mode
2014/02/09 08:07:38 VCS INFO V-16-2-13716 (node1) Resource(vol_v1): Output of the completed operation (clean)
==============================================
VxVM vxprint ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
==============================================

I noticed that the init script /etc/rc0.d/K99vxvm-shutdown stops vxconfigd and also runs "/usr/sbin/vxdctl -k stop". My question is: do I still need any VxVM init scripts now that the upgrade from 5 to 6.1 is done and we have an SMF service in place, or should I increase the timeouts of the VCS stop procedures? Thanks a lot in advance! Cheers
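The log suggests vxconfigd is already stopped (by the rc0 script) before VCS has finished taking the Volume resources offline. A few things worth checking, as a hedged sketch only; verify with Veritas support before removing or renaming anything on a production node:

# See which legacy VxVM rc scripts are still present alongside the SMF services
ls -l /etc/rc*.d/*vxvm* 2>/dev/null
svcs -a | grep -i vx

# One workaround sometimes used: rename the legacy shutdown script so rc0
# no longer stops vxconfigd underneath VCS during "init 6"
mv /etc/rc0.d/K99vxvm-shutdown /etc/rc0.d/_K99vxvm-shutdown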
vxconfigd not starting

When I try to start vxconfigd, I am getting this error:

15:39:15 /etc/init.d : # vxconfigd
vxvm:vxconfigd: NOTICE: ddl_make_dll_info: Invalid library - libvxhpeva.so
vxvm:vxconfigd: NOTICE: ddl_search_and_place: Can't make dll info
vxvm:vxconfigd: NOTICE: ddl_search_and_place: Library libvxhpeva.so.3 validation fails
vxvm:vxconfigd: ERROR: Segmentation violation - core dumped

First, we had an array for which no ASL was installed on the server. When I saw that, I detached the array and wanted to start clean, but I get the above error. This is a Solaris 8 box with VxVM 3.5, and the attached array is an HP EVA. Old legacy stuff..~:), but we are moving away from it soon. Thanks in advance
Dan
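The NOTICE lines point at the HP EVA ASL (libvxhpeva.so) being rejected just before vxconfigd dies, so one avenue is to locate that library and the package that delivered it and take it out of the picture. This is only a sketch under the assumption that the ASL file/package is the culprit; paths and package names vary by release (and VxVM 3.5 is old), so confirm before removing anything:

# Find where the HP EVA ASL lives on this host (directory differs by VxVM release)
find /usr/lib/vxvm /etc/vx -name 'libvxhpeva*' 2>/dev/null

# Identify the Solaris package that owns the file found above
# (<path_found_above> is a placeholder for the path printed by find)
pkgchk -l -p <path_found_above>

# After the bad ASL/package has been removed or moved aside, restart the daemon
vxconfigd -k -m enable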
How to Change the DiskGroup ID.

Hello, we have SFHA 6.0 installed on Solaris 10. We have changed the hostname of the Solaris server, and hence I need to change the disk group ID of a particular disk group. What is the command or procedure for this? I have enabled the new hostname using vxdctl hostid, but the disk group ID does not change after this.
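If a new disk group ID really is required, one documented route is to deport the group and re-import it with the updateid option. Sketch only: "mydg" is a placeholder name, the applications on the group must be stopped first, and the vxdg(1M) man page for SFHA 6.0 should be checked before running this:

# Stop applications and unmount file systems on the disk group first, then:
vxdg deport mydg

# Re-import, asking VxVM to write a new disk group ID
vxdg -o updateid import mydg

# Confirm the new dgid
vxdg list mydg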
Vxdisk shows disk error on secondary cluster

Hello, I need some quick help to understand the situation below, as I am a bit new to the VCS world. We have a global cluster with two nodes on each site: two on the primary site and two on the secondary site. The secondary site, however, is completely idle, doing nothing except receiving data replicated via the storage disk groups. There are usually two disk groups on each site's cluster (ossdg and sybasedg) when active. For a few days I have been observing VxVM disk errors on the secondary-site nodes (admin1 and admin2), and it is hard for me to figure out whether this is normal because the cluster is idle. The disks look fine on the storage side with no failures, but VxVM reports errors and the OS doesn't see the disks either. I also don't see any output from vxprint. Is this normal for a global cluster? Please help me figure out whether I need to take any action. The vxdisk list output is below.

Secondary cluster, server 1:

[23:18:11 root@xxx-xyz-admin1 /]>> vxdisk list
DEVICE                   TYPE         DISK                     GROUP     STATUS
c0t0d0s2                 auto:SVM     -                        -         SVM
c0t1d0s2                 auto:SVM     -                        -         SVM
c1t500601623DE03785d0s2  auto:sliced  c1t500601623DE03785d0s2  vxfendg1  online
c1t500601623DE03785d1s2  auto:sliced  c1t500601623DE03785d1s2  vxfendg1  online
c1t500601623DE03785d2s2  auto:sliced  c1t500601623DE03785d2s2  vxfendg1  online
c1t500601623DE03785d3s2  auto:sliced  -                        -         online
c1t500601623DE03785d4s2  auto:sliced  -                        -         online
c1t500601623DE03785d5s2  auto:sliced  -                        -         online
c1t500601623DE03785d6s2  auto         -                        -         error
c1t500601623DE03785d7s2  auto         -                        -         error
c1t500601623DE03785d8s2  auto         -                        -         error

Secondary cluster, server 2:

[23:19:40 root@xxx-xyz-admin2 /]>> vxdisk list
DEVICE                   TYPE         DISK  GROUP  STATUS
c0t0d0s2                 auto:SVM     -     -      SVM
c0t1d0s2                 auto:SVM     -     -      SVM
c1t500601623DE03785d0s2  auto:sliced  -     -      online
c1t500601623DE03785d1s2  auto:sliced  -     -      online
c1t500601623DE03785d2s2  auto:sliced  -     -      online
c1t500601623DE03785d3s2  auto:sliced  -     -      online
c1t500601623DE03785d4s2  auto:sliced  -     -      online
c1t500601623DE03785d5s2  auto:sliced  -     -      online
c1t500601623DE03785d6s2  auto         -     -      error
c1t500601623DE03785d7s2  auto         -     -      error
c1t500601623DE03785d8s2  auto         -     -      error
c1t500601623DE03785d9s2  auto         -     -      error

The server cluster runs in active/active mode. Let me know if any extra info is required to analyse this.
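On a replication-only secondary site the data disk groups are normally deported, so vxprint showing nothing is expected, and devices that the array keeps write-disabled on the secondary can sometimes appear in the error state; it is still worth confirming with a few read-only checks. Sketch only, using a device name from the listing above; nothing here changes the configuration:

# Show disk group ownership even for deported (not imported) disk groups
vxdisk -o alldgs list

# Detailed state and paths of one device currently reported as "error"
vxdisk list c1t500601623DE03785d6s2

# Rescan the devices and look again if anything changed on the SAN side
vxdctl enable
vxdisk list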
How to delete a Veritas filesystem

Hi, I have just created a Veritas file system, /dblog.

# df -Ig
Filesystem                     GB blocks  Used  Free   %Used  Mounted on
/dev/hd4                       0.25       0.19  0.06   75%    /
/dev/hd2                       3.00       2.01  0.99   68%    /usr
/dev/hd9var                    3.00       0.31  2.69   11%    /var
/dev/hd3                       0.12       0.01  0.12   5%     /tmp
/dev/hd1                       0.12       0.00  0.12   1%     /home
/dev/hd11admin                 0.12       0.00  0.12   1%     /admin
/proc                          -          -     -      -      /proc
/dev/hd10opt                   10.00      4.79  5.21   48%    /opt
/dev/livedump                  0.25       0.00  0.25   1%     /var/adm/ras/livedump
/dev/odm                       0.00       0.00  0.00   -1%    /dev/odm
/dev/vx/dsk/dbs-dg/oraclevol   14.65      0.94  13.71  7%     /oracle
/dev/vx/dsk/dbs-dg/exportvol   48.83      3.13  45.70  7%     /export
/dev/vx/dsk/dbs-dg/dbweb       4.88       0.32  4.56   7%     /dbweb
/dev/vx/dsk/dbs-dg/dblogvol    48.83      3.13  45.70  7%     /dblog

I want to delete this file system because its size is much bigger than the requirement. Can anyone help, please? Thanks in advance.
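Since the only complaint is that /dblog is too big, there are two usual routes: shrink it, or remove it and recreate it at the right size. Sketch only; the target size below is a placeholder, and it is assumed that any /etc/filesystems stanza AIX created for /dblog would also need cleaning up:

# Option 1: shrink the volume and its VxFS file system online in one step
vxresize -g dbs-dg dblogvol 10g

# Option 2: remove the file system's volume entirely and recreate it later
umount /dblog
vxassist -g dbs-dg remove volume dblogvol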
NODEVICE status for plex when I am using vxprint -ht -g oracledg

Hi team, when I use the command vxprint -ht -g oracledg, it shows the plex status of the VxVM disks as "NODEVICE". If the status were RECOVER, STALE, CLEAN, or INACTIVE, we could recover using the vxmend command, but when it shows "NODEVICE", how can I recover it? One of our storage arrays failed and we removed that hardware; we have one more as a backup instead of it.

[5:14pm] root@dws01: # vxprint -ht -g oracledg
v  c3t1d12s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg014vol-01  c3t1d12s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg01-01      oracledg014vol-01  oracledg01 0 1950220288 0 - NDEV

v  c3t1d13s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg024vol-01  c3t1d13s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg02-01      oracledg024vol-01  oracledg02 0 1950220288 0 - NDEV

v  c3t1d14s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg034vol-01  c3t1d14s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg03-01      oracledg034vol-01  oracledg03 0 1950220288 0 - NDEV

v  c3t1d15s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg044vol-01  c3t1d15s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg04-01      oracledg044vol-01  oracledg04 0 1950220288 0 - NDEV

v  c3t1d16s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg054vol-01  c3t1d16s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg05-01      oracledg054vol-01  oracledg05 0 1950220288 0 - NDEV

v  c3t1d17s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg064vol-01  c3t1d17s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg06-01      oracledg064vol-01  oracledg06 0 1950220288 0 - NDEV

v  c3t1d18s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg074vol-01  c3t1d18s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg07-01      oracledg074vol-01  oracledg07 0 1950220288 0 - NDEV

v  c3t1d19s4          -                  DISABLED ACTIVE   1950220288 ROUND  - gen
pl oracledg094vol-01  c3t1d19s4          DISABLED NODEVICE 1950220288 CONCAT - RW
sd oracledg09-01      oracledg094vol-01  oracledg09 0 1950220288 0 - NDEV

Or, once I have fixed the removed hardware, will the plex status come back to RECOVER, STALE, or CLEAN? I have not received that hardware yet, but I don't think that should be the problem behind this "NODEVICE" plex status.
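NODEVICE means the plex's underlying disk access record has disappeared, so vxmend alone will not clear it; the usual path is to get the storage visible to the host again and then reattach. A sketch of the typical sequence once the disks are presented back (verify each state with vxdisk list and vxprint before and after, and take a configuration backup first):

# Rescan so VxVM sees the returned LUNs
vxdctl enable          # or: vxdisk scandisks

# Check whether the disks can be reattached to their disk media records,
# then reattach them (-c only checks, without making changes)
vxreattach -c
vxreattach

# Recover and start the volumes in the disk group
vxrecover -g oracledg -s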