How to check VCS version and updates?
Dear all, I've been asked to manage a few asymmetric-failover Windows 2003 systems with Veritas Cluster Server installed: how can I check the current version, and whether any hotfixes or service packs have been installed? Thanks in advance and best regards, /fabrizio
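A quick way to read the engine version from the command line on each node (a sketch, assuming the VCS command-line utilities are on the PATH; on Windows they normally sit under the product's bin directory):

haclus -value EngineVersion
haclus -display

haclus -value EngineVersion prints just the running engine version, while haclus -display lists all cluster attributes. Hotfixes and service packs are not tracked by these commands; on Windows 2003 they usually show up in Add/Remove Programs alongside the Veritas Storage Foundation HA entry.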
VCS Cluster not starting.
Hello all, I am having difficulties trying to get VCS started on this system. I have attached what I have so far and appreciate any comments or suggestions on where to go from here. Thank you. The hostnames in the main.cf correspond to those of the servers.

hastatus -sum
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available

hasys -state
VCS ERROR V-16-1-10600 Cannot connect to VCS engine

hastop -all -force
VCS ERROR V-16-1-10600 Cannot connect to VCS engine

hastart / hastart -onenode
dmesg: Exiting: Another copy of VCS may be running

engine_A.log
2013/10/22 15:16:43 VCS NOTICE V-16-1-11051 VCS engine join version=4.1000
2013/10/22 15:16:43 VCS NOTICE V-16-1-11052 VCS engine pstamp=4.1 03/03/05-14:58:00
2013/10/22 15:16:43 VCS NOTICE V-16-1-10114 Opening GAB library
2013/10/22 15:16:43 VCS NOTICE V-16-1-10619 'HAD' starting on: db1
2013/10/22 15:16:45 VCS INFO V-16-1-10125 GAB timeout set to 15000 ms
2013/10/22 15:17:00 VCS CRITICAL V-16-1-11306 Did not receive cluster membership, manual intervention may be needed for seeding

# gabconfig -a
GAB Port Memberships
===============================================================

# lltstat -nvv
LLT node information:
Node State Link Status Address
* 0 db1 OPEN
        bge1 UP 00:03:BA:15
        bge2 UP 00:03:BA:15
  1 db2 CONNWAIT
        bge1 DOWN
        bge2 DOWN

bash-2.05$ lltconfig
LLT is running

ps -ef | grep had
root 826 1 0 15:16:43 ? 0:00 /opt/VRTSvcs/bin/had
root 836 1 0 15:16:45 ? 0:00 /opt/VRTSvcs/bin/hashadow
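The empty gabconfig -a output is the key symptom here: GAB has not seeded because the peer db2 is still in CONNWAIT, so HAD on db1 waits indefinitely for cluster membership. If db2 is genuinely down and db1 is meant to come up on its own, a manual seed is the usual workaround (a sketch; only seed manually after confirming the other node is really not running, otherwise you risk a split-brain):

# gabconfig -x
# gabconfig -a
# hastart

After the forced seed, gabconfig -a should show a port a membership and hastart should let HAD finish starting.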
VCS Cluster not starting.
Hi, I am facing a problem while trying to start VCS. From the log:
==============================================================
tail /var/VRTSvcs/log/engine_A.log
2014/01/13 21:39:14 VCS NOTICE V-16-1-11050 VCS engine version=5.1
2014/01/13 21:39:14 VCS NOTICE V-16-1-11051 VCS engine join version=5.1.00.0
2014/01/13 21:39:14 VCS NOTICE V-16-1-11052 VCS engine pstamp=Veritas-5.1-10/06/09-14:37:00
2014/01/13 21:39:14 VCS INFO V-16-1-10196 Cluster logger started
2014/01/13 21:39:14 VCS NOTICE V-16-1-10114 Opening GAB library
2014/01/13 21:39:14 VCS NOTICE V-16-1-10619 'HAD' starting on: nsscls01
2014/01/13 21:39:16 VCS INFO V-16-1-10125 GAB timeout set to 30000 ms
2014/01/13 21:39:16 VCS NOTICE V-16-1-11057 GAB registration monitoring timeout set to 200000 ms
2014/01/13 21:39:16 VCS NOTICE V-16-1-11059 GAB registration monitoring action set to log system message
2014/01/13 21:39:31 VCS CRITICAL V-16-1-11306 Did not receive cluster membership, manual intervention may be needed for seeding
=============================================================================================
root@nsscls01# hastatus -sum
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available

Please advise how I can start VCS.
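This is the same symptom as the post above: GAB waits for the number of nodes listed in /etc/gabtab before it seeds automatically, and HAD cannot run until a membership exists. A couple of checks worth running first (a sketch; the gabtab contents shown are a typical two-node example, not taken from this system):

# cat /etc/gabtab
/sbin/gabconfig -c -n2
# lltstat -nvv | more

If lltstat shows the second node down and it genuinely is down, # gabconfig -x seeds the surviving node manually; otherwise bring LLT and GAB up on the second node so the automatic seed can complete.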
Fencing and Reservation Conflict
Hi to all, I have Red Hat Linux 5.9 64-bit with SFHA 5.1 SP1 RP4 with fencing enabled (our storage device is an IBM Storwize V3700 SFF, SCSI-3 compliant).

[root@mitoora1 ~]# vxfenadm -d
I/O Fencing Cluster Information:
================================
Fencing Protocol Version: 201
Fencing Mode: SCSI3
Fencing SCSI3 Disk Policy: dmp
Cluster Members:
* 0 (mitoora1)
  1 (mitoora2)
RFSM State Information:
node 0 in state 8 (running)
node 1 in state 8 (running)
********************************************
In /etc/vxfenmode: scsi3_disk_policy=dmp and vxfen_mode=scsi3

vxdctl scsi3pr
scsi3pr: on

[root@mitoora1 etc]# more /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/storwizev70000_000007
/dev/vx/rdmp/storwizev70000_000008
/dev/vx/rdmp/storwizev70000_000009
******************************************
[root@mitoora1 etc]# vxdmpadm listctlr all
CTLR-NAME ENCLR-TYPE STATE ENCLR-NAME
=====================================================
c0 Disk ENABLED disk
c10 StorwizeV7000 ENABLED storwizev70000
c7 StorwizeV7000 ENABLED storwizev70000
c8 StorwizeV7000 ENABLED storwizev70000
c9 StorwizeV7000 ENABLED storwizev70000

main.cf
cluster drdbonesales (
    UserNames = { admin = hlmElgLimHmmKumGlj }
    ClusterAddress = "10.90.15.30"
    Administrators = { admin }
    UseFence = SCSI3
    )
**********************************************
I configured coordinator fencing, so I have 3 LUNs in a Veritas disk group (dmp coordinator). Everything seems to work fine, but I noticed a lot of reservation conflicts in the messages log of both nodes. On the server log I constantly see these messages:

/var/log/messages
Nov 26 15:14:09 mitoora2 kernel: sd 7:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 9:0:1:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 9:0:0:1: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 7:0:1:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:0:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 8:0:1:3: reservation conflict
Nov 26 15:14:09 mitoora2 kernel: sd 10:0:1:3: reservation conflict

Do you have any idea? Best regards, Vincenzo
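To see which disks actually carry SCSI-3 registrations, the keys can be dumped non-destructively (a sketch; -s reads the registration keys from the listed devices):

# vxfenadm -s all -f /etc/vxfentab

Reservation conflict messages in /var/log/messages are frequently logged by initiators or utilities that probe LUNs holding SCSI-3 reservations (for example OS device scans touching data disks reserved by the cluster), so comparing the sd numbers in the messages with the coordinator and data LUNs helps confirm whether the conflicts are harmless probe noise or a real access problem.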
Unable to start VCS on one node. It's a two-node cluster
I have recently installed VCS on two Blade 1500s. I am unable to start VCS on both machines at the same time. If I start systemA first, VCS starts there but will not start on systemB; it depends on which system I start first.

2010/10/22 22:16:40 VCS INFO V-16-1-10196 Cluster logger started
2010/10/22 22:16:40 VCS NOTICE V-16-1-11022 VCS engine (had) started
2010/10/22 22:16:40 VCS NOTICE V-16-1-11050 VCS engine version=5.1
2010/10/22 22:16:40 VCS NOTICE V-16-1-11051 VCS engine join version=5.1.00.0
2010/10/22 22:16:40 VCS NOTICE V-16-1-11052 VCS engine pstamp=Veritas-5.1-10/06/10-14:37:00
2010/10/22 22:16:41 VCS NOTICE V-16-1-10114 Opening GAB library
2010/10/22 22:16:50 VCS NOTICE V-16-1-10619 'HAD' starting on: systemA
2010/10/22 22:17:10 VCS INFO V-16-1-10125 GAB timeout set to 30000 ms
2010/10/22 22:17:10 VCS NOTICE V-16-1-11057 GAB registration monitoring timeout set to 200000 ms
2010/10/22 22:17:10 VCS NOTICE V-16-1-11059 GAB registration monitoring action set to log system message
2010/10/22 22:17:17 VCS INFO V-16-1-10077 Received new cluster membership
2010/10/22 22:17:18 VCS NOTICE V-16-1-10112 System (systemA) - Membership: 0x3, DDNA: 0x0
2010/10/22 22:17:18 VCS NOTICE V-16-1-10322 System (Node '0') changed state from UNKNOWN to INITING
2010/10/22 22:17:18 VCS NOTICE V-16-1-10086 System (Node '0') is in Regular Membership - Membership: 0x3
2010/10/22 22:17:18 VCS NOTICE V-16-1-10086 System systemA (Node '1') is in Regular Membership - Membership: 0x3
2010/10/22 22:17:18 VCS WARNING V-16-1-50129 Operation 'haclus -modify' rejected as the node is in CURRENT_DISCOVER_WAIT state
2010/10/22 22:17:18 VCS WARNING V-16-1-50129 Operation 'haclus -modify' rejected as the node is in CURRENT_DISCOVER_WAIT state
2010/10/22 22:17:18 VCS NOTICE V-16-1-10453 Node: 0 changed name from: '' to: 'systemB'
2010/10/22 22:17:18 VCS NOTICE V-16-1-10322 System systemB (Node '0') changed state from INITING to RUNNING
2010/10/22 22:17:18 VCS NOTICE V-16-1-10322 System systemA (Node '1') changed state from CURRENT_DISCOVER_WAIT to REMOTE_BUILD
2010/10/22 22:17:19 VCS NOTICE V-16-1-10464 Requesting snapshot from node: 0
2010/10/22 22:17:19 VCS NOTICE V-16-1-10465 Getting snapshot. snapped_membership: 0x3 current_membership: 0x3 current_jeopardy_membership: 0x0
2010/10/22 22:17:27 VCS NOTICE V-16-1-10181 Group VCShmg AutoRestart set to 1
2010/10/22 22:17:27 VCS INFO V-16-1-10466 End of snapshot received from node: 0. snapped_membership: 0x3 current_membership: 0x3 current_jeopardy_membership: 0x0
2010/10/22 22:17:27 VCS WARNING V-16-1-10030 UseFence=NONE. Hence do not need fencing
2010/10/22 22:17:27 VCS ERROR V-16-1-10651 Cluster UUID received from snapshot is not matching with one existing on systemA. VCS Stopping. Manually restart VCS after configuring correct CID in the cluster.

Please help me.
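The last log line points at the cause: the cluster UUID stored on systemA (typically /etc/vx/.uuids/clusuuid) does not match the one systemB sent in its snapshot, so HAD on systemA exits. A sketch of the usual fix with the uuidconfig.pl utility shipped with VCS 5.1 (here the UUID is copied from the running node systemB to systemA, then HAD is restarted on systemA):

# /opt/VRTSvcs/bin/uuidconfig.pl -clus -display systemA systemB
# /opt/VRTSvcs/bin/uuidconfig.pl -clus -copy -from_sys systemB -to_sys systemA
# hastart

The -display step just confirms the mismatch before anything is changed; after the copy, both nodes should report the same UUID and the joining node should complete its REMOTE_BUILD.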
cannot mount vxfs file system
Hi Friends, after adding a kernel patch I am not able to mount the file systems below.

root@wwdcssofa02$ vxprint -htg wintshared_dg
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg wintshared_dg default default 1000 1268867476.33.wwdcssofa02
dm wintshared_dg01 c3t5006048AD52E6588d113s2 auto 2048 142761216 -
v oravol - ENABLED ACTIVE 142761216 SELECT - fsgen
pl oravol-01 oravol ENABLED ACTIVE 142761216 CONCAT - RW
sd wprd_dg01-01 oravol-01 wintshared_dg01 0 142761216 0 c3t5006048AD52E6588d113 ENA

root@wwdcssofa02$ vxprint -htg oraoemdg
DG NAME NCONFIG NLOG MINORS GROUP-ID
ST NAME STATE DM_CNT SPARE_CNT APPVOL_CNT
DM NAME DEVICE TYPE PRIVLEN PUBLEN STATE
RV NAME RLINK_CNT KSTATE STATE PRIMARY DATAVOLS SRL
RL NAME RVG KSTATE STATE REM_HOST REM_DG REM_RLNK
CO NAME CACHEVOL KSTATE STATE
VT NAME RVG KSTATE STATE NVOLUME
V NAME RVG/VSET/CO KSTATE STATE LENGTH READPOL PREFPLEX UTYPE
PL NAME VOLUME KSTATE STATE LENGTH LAYOUT NCOL/WID MODE
SD NAME PLEX DISK DISKOFFS LENGTH [COL/]OFF DEVICE MODE
SV NAME PLEX VOLNAME NVOLLAYR LENGTH [COL/]OFF AM/NM MODE
SC NAME PLEX CACHE DISKOFFS LENGTH [COL/]OFF DEVICE MODE
DC NAME PARENTVOL LOGVOL
SP NAME SNAPVOL DCO
EX NAME ASSOC VC PERMS MODE STATE
SR NAME KSTATE

dg oraoemdg default default 12000 1235085883.14.wwdcssofa02
dm oraoemdg01 c3t5006048AD52E6588d48s2 auto 65536 8854528 -
v oemvol - ENABLED ACTIVE 8854528 SELECT - fsgen
pl oemvol-01 oemvol ENABLED ACTIVE 8854528 CONCAT - RW
sd oraoemdg01-01 oemvol-01 oraoemdg01 0 8854528 0 c3t5006048AD52E6588d48 ENA

root@wwdcssofa02$ more /etc/vfstab
#device device mount FS fsck mount mount
#to mount to fsck point type pass at boot options
#
fd - /dev/fd fd - no -
/proc - /proc proc - no -
/dev/dsk/c0t0d0s1 - - swap - no -
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
/dev/dsk/c0t0d0s3 /dev/rdsk/c0t0d0s3 /var ufs 1 no -
/dev/dsk/c0t0d0s7 /dev/rdsk/c0t0d0s7 /export/home ufs 2 yes -
#/dev/dsk/c0t0d0s4 /dev/rdsk/c0t0d0s4 /globaldevices ufs 2 yes -
/devices - /devices devfs - no -
sharefs - /etc/dfs/sharetab sharefs - no -
ctfs - /system/contract ctfs - no -
objfs - /system/object objfs - no -
swap - /tmp tmpfs - yes -
/dev/vx/dsk/wuatp_dg/oravol /dev/vx/rdsk/wuatp_dg/oravol /wuatp/ora/applmgr vxfs - yes suid
/dev/vx/dsk/oraoemdg/oemvol /dev/vx/rdsk/oraoemdg/oemvol /oraoem vxfs 0 yes suid
/dev/vx/dsk/wintshared_dg/oravol /dev/vx/rdsk/wintshared_dg/oravol /wint/ora/applmgr vxfs - no global,suid
10.230.135.25:/operations - /ops nfs - yes bg,rw,soft
10.230.135.25:/applications - /app nfs - yes bg,rw,soft
10.230.135.25:/wwwreports - /wwwreports nfs - yes bg,rw,soft
10.230.135.25:/otissrc - /otissrc nfs - yes bg,rw,soft
/dev/did/dsk/d2s4 /dev/did/rdsk/d2s4 /global/.devices/node@1 ufs 2 no global

# df -h
/dev/vx/dsk/wuatp_dg/oravol 69G 59G 9.0G 87% /wuatp/ora/applmgr
df: `/wint/ora/applmgr': I/O error

root@wwdcssofa02$ mountall
mount: /tmp is already mounted or swap is busy
UX:vxfs mount: ERROR: V-3-20002: Cannot access /dev/vx/dsk/oraoemdg/oemvol: No such file or directory
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version
nfs mount: mount: /ops: Device busy
nfs mount: mount: /otissrc: Device busy
nfs mount: mount: /wwwreports: Device busy
nfs mount: mount: /app: Device busy
mount: /dev/dsk/c0t0d0s7 is already mounted or /export/home is busy
UX:vxfs mount: ERROR: V-3-21264: /dev/vx/dsk/wuatp_dg/oravol is already mounted, /wuatp/ora/applmgr is busy, allowable number of mount points exceeded

root@wwdcssofa02$ vxdisk list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c0t1d0s2 auto:none - - online invalid
c3t5006048AD52E6588d47s2 auto:cdsdisk wuatp_dg01 wuatp_dg online
c3t5006048AD52E6588d48s2 auto:cdsdisk oraoemdg01 oraoemdg online
c3t5006048AD52E6588d51s2 auto:cdsdisk wintshared_dg01 bigwintshared_dg online
c3t5006048AD52E6588d53s2 auto:cdsdisk - - online
c3t5006048AD52E6588d113s2 auto:cdsdisk wintshared_dg01 wintshared_dg online

root@wwdcssofa02$ vxdisk -o alldgs list
DEVICE TYPE DISK GROUP STATUS
c0t0d0s2 auto:none - - online invalid
c0t1d0s2 auto:none - - online invalid
c3t5006048AD52E6588d47s2 auto:cdsdisk wuatp_dg01 wuatp_dg online
c3t5006048AD52E6588d48s2 auto:cdsdisk oraoemdg01 oraoemdg online
c3t5006048AD52E6588d51s2 auto:cdsdisk wintshared_dg01 bigwintshared_dg online
c3t5006048AD52E6588d53s2 auto:cdsdisk - (scquorum_dg) online
c3t5006048AD52E6588d113s2 auto:cdsdisk wintshared_dg01 wintshared_dg online
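The first vxfs error says the oemvol device node cannot be found even though vxprint shows the volume ENABLED/ACTIVE, which usually means the volume device entry under /dev/vx was not recreated after the patching reboot, or the volume is simply not started on this node yet. A sketch of the usual checks, using the group and volume names from the output above:

# vxvol -g oraoemdg startall
# vxrecover -g oraoemdg -sb
# ls -l /dev/vx/dsk/oraoemdg/oemvol /dev/vx/rdsk/oraoemdg/oemvol
# mount -F vxfs /dev/vx/dsk/oraoemdg/oemvol /oraoem

If the device nodes are still missing, deporting and re-importing oraoemdg (or restarting vxconfigd) normally recreates them. The /wint/ora/applmgr entry is a global (cluster) mount, so the df I/O error there should also be checked from the other cluster node.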
VCS Warning for Unknown State
Hi, I am just curious why I received a warning notification for the Netlsnr resource group when the error is not logged in engine_A.log. I have read the VCS documentation, but the only hint I have is this:

Resource state is unknown. Warning. VCS cannot identify the state of the resource.

Can anyone provide a better explanation of what could have caused VCS to send the warning email?
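Two non-destructive checks usually narrow this down: the agent's own log records monitor problems that never reach engine_A.log, and a manual probe forces the agent to re-evaluate the resource (a sketch; the resource and system names are placeholders for your Netlsnr resource and node):

# hares -state <netlsnr_resource> -sys <node>
# hares -probe <netlsnr_resource> -sys <node>
# tail -50 /var/VRTSvcs/log/Netlsnr_A.log

A transient UNKNOWN state is typically a monitor timeout or the agent briefly failing to read the listener configuration, and the timestamps in the agent log normally line up with the notification email.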
Cannot unload GAB and LLT on RHEL 6.0
Hi all, I have the following problem:

# lltstat -n
LLT node information:
Node State Links
  0 srv-n1 OPEN 2
* 1 srv-n2 OPEN 2

# gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 7b4d01 membership 01
Port b gen 7b4d05 membership 01
Port d gen 7b4d04 membership 01
Port h gen 7b4d11 membership 01

# /opt/VRTSvcs/bin/haconf -dump -makero
VCS WARNING V-16-1-10369 Cluster not writable.
# /opt/VRTSvcs/bin/hastop -all -force
# /etc/init.d/vxfen stop
Stopping vxfen..
Stopping vxfen.. Done
# /etc/init.d/gab stop
Stopping GAB:
ERROR! Cannot unload GAB module. Clients still exist
Kill/Stop clients corresponding to following ports.
GAB Port Memberships
===============================================================
Port d gen 7b4d04 membership 01
# /etc/init.d/llt stop
Stopping LLT:
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [1]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [2]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [3]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [4]
LLT lltconfig ERROR V-14-2-15121 LLT unconfigure aborted, unregister 3 port(s)
LLT:Warning: lltconfig failed. Retrying [5]
LLT:Error: lltconfig failed

OK, I looked at lsmod and modinfo for details...

# lsmod
Module Size Used by
vxodm 206291 1
vxgms 284352 0
vxglm 289848 0
gab 283317 4
llt 180985 5 gab
autofs4 27683 3
sunrpc 241630 1
dmpCLARiiON 11771 1
dmpap 9390 1
vxspec 3174 6
vxio 3261814 1 vxspec
vxdmp 377776 20 vxspec,vxio
cpufreq_ondemand 10382 1
acpi_cpufreq 8593 3
freq_table 4847 2 cpufreq_ondemand,acpi_cpufreq
ipv6 321209 60
vxportal 5940 0
fdd 53457 1 vxodm
vxfs 2957815 2 vxportal,fdd
exportfs 4202 0
serio_raw 4816 0
i2c_i801 11190 0
iTCO_wdt 11708 0
iTCO_vendor_support 3022 1 iTCO_wdt
ioatdma 57872 9
dca 7099 1 ioatdma
i5k_amb 5039 0
hwmon 2464 1 i5k_amb
i5000_edac 8833 0
edac_core 46055 3 i5000_edac
sg 30186 0
shpchp 33448 0
e1000e 140051 0
ext4 353979 3
mbcache 7918 1 ext4
jbd2 89033 1 ext4
dm_mirror 14003 1
dm_region_hash 12200 1 dm_mirror
dm_log 10088 3 dm_mirror,dm_region_hash
sr_mod 16162 0
cdrom 39769 1 sr_mod
sd_mod 37221 18
crc_t10dif 1507 1 sd_mod
pata_acpi 3667 0
ata_generic 3611 0
ata_piix 22588 0
ahci 39105 4
qla2xxx 280129 24
scsi_transport_fc 50893 1 qla2xxx
scsi_tgt 12107 1 scsi_transport_fc
radeon 797054 1
ttm 46942 1 radeon
drm_kms_helper 32113 1 radeon
drm 200778 3 radeon,ttm,drm_kms_helper
i2c_algo_bit 5664 1 radeon
i2c_core 31274 5 i2c_i801,radeon,drm_kms_helper,drm,i2c_algo_bit
dm_mod 76856 20 dm_mirror,dm_log

# rmmod gab
ERROR: Module gab is in use
[root@srv-vrts-n2 ~]# modinfo gab
filename: /lib/modules/2.6.32-71.el6.x86_64/veritas/vcs/gab.ko
license: Proprietary. Send bug reports to support@veritas.com
description: Group Membership and Atomic Broadcast 5.1.120.000-SP1PR2
author: VERITAS Software Corp.
srcversion: F43C75576C05662FB0ED8C8
depends: llt
vermagic: 2.6.32-71.el6.x86_64 SMP mod_unload modversions
parm: gab_logflag:int
parm: gab_numnids:maximum nodes in the cluster (1-128) (int)
parm: gab_numports:maximum gab ports allowed (1-32) (int)
parm: gab_flowctrl:queue depth that causes flow-control (1-128) (int)
parm: gab_logbufsize:internal log buffer size in bytes (8100-65400) (int)
parm: gab_msglogsize:maximum messages in internal message log (128-4096) (int)
parm: gab_isolate_time:maximum time to wait for isolated client (16000-240000) (int)
parm: gab_kill_ntries:number of times to attempt to kill client (3-10) (int)
parm: gab_kstat_size:Number of system statistics to maintain in GAB 60-240 (int)
parm: gab_conn_wait:maximum number of wait for CONNECTS message (1-256) (int)
parm: gab_ibuf_count:maximum number of intermediate buffers (0-32) (int)

# modinfo llt
filename: /lib/modules/2.6.32-71.el6.x86_64/veritas/vcs/llt.ko
license: Proprietary. Send bug reports to support@veritas.com
author: VERITAS Software Corp.
description: Low Latency Transport 5.1.120.000-SP1PR2
srcversion: AF11D9C04A71073E1ADCFC8
depends:
vermagic: 2.6.32-71.el6.x86_64 SMP mod_unload modversions
parm: llt_maxnids:maximum nodes in the cluster (1-128) (int)
parm: llt_maxports:maximum llt ports allowed (1-32) (int)
parm: llt_nqthread:number of kernel threads to use (2-5) (int)
parm: llt_basetimer:frequency of base timer ((10 * 1000)-(500 * 1000)) (int)

Hm... OK, I ran /etc/init.d/gab stop with debug:
+ echo 'Stopping GAB: '
Stopping GAB:
+ mod_isloaded
++ lsmod
++ grep '^gab\ '
+ return
+ mod_isconfigured
++ LANG=C
++ LC_ALL=C
++ /sbin/gabconfig -l
++ grep 'Driver state'
++ grep -q Configured
+ return
+ /sbin/gabconfig -U
+ ret=1
+ '[' '!' 1 -eq 0 ']'
+ echo 'ERROR! Cannot unload GAB module. Clients still exist'
ERROR! Cannot unload GAB module. Clients still exist
+ echo 'Kill/Stop clients corresponding to following ports.'
Kill/Stop clients corresponding to following ports.
+ LANG=C
+ LC_ALL=C
+ /sbin/gabconfig -a
+ grep -v 'Port a gen'
GAB Port Memberships
...

OK, then I used /sbin/gabconfig -l and -U:

# /sbin/gabconfig -l
GAB Driver Configuration
Driver state : Configured
Partition arbitration: Disabled
Control port seed : Enabled
Halt on process death: Disabled
Missed heartbeat halt: Disabled
Halt on rejoin : Disabled
Keep on killing : Disabled
Quorum flag : Disabled
Restart : Enabled
Node count : 2
Send queue limit : 128
Recv queue limit : 128
IOFENCE timeout (ms) : 15000
Stable timeout (ms) : 5000

# /sbin/gabconfig -U
GAB /sbin/gabconfig ERROR V-15-2-25014 clients still registered

but it did not help... I found this topic https://www-secure.symantec.com/connect/forums/unable-stop-gab-llt-vcs51solaris-10 and stopped ODM:

# /etc/init.d/vxodm stop
Stopping ODM

and ran /etc/init.d/gab stop, but:

# /etc/init.d/gab stop
Stopping GAB:
GAB has usage count greater than zero. Cannot unload

I checked /sbin/gabconfig -l again:

# /sbin/gabconfig -l
GAB Driver Configuration
Driver state : Unconfigured
Partition arbitration: Disabled
Control port seed : Disabled
Halt on process death: Disabled
Missed heartbeat halt: Disabled
Halt on rejoin : Disabled
Keep on killing : Disabled
Quorum flag : Disabled
Restart : Disabled
Node count : 0
Send queue limit : 128
Recv queue limit : 128
IOFENCE timeout (ms) : 15000
Stable timeout (ms) : 5000

[root@srv-vrts-n2 ~]# /sbin/gabconfig -a
GAB Port Memberships
===============================================================

and ran /etc/init.d/gab stop with debug again:
+ echo 'Stopping GAB: '
Stopping GAB:
+ mod_isloaded
++ lsmod
++ grep '^gab\ '
+ return
+ mod_isconfigured
++ LANG=C
++ LC_ALL=C
++ /sbin/gabconfig -l
++ grep 'Driver state'
++ grep -q Configured
+ return
+ mod_unload
++ lsmod
++ grep '^gab '
++ awk '{print $3}'
+ USECNT=1
+ '[' -z 1 ']'
+ '[' 5 '!=' 0 ']'
+ ps -e
+ grep gablogd
+ '[' 1 -ne 0 ']'
+ GAB_UNLOAD_RETRIES=0
+ '[' 0 '!=' 0 ']'
++ lsmod
++ grep '^gab '
++ awk '{print $3}'
+ USECNT=1
+ '[' 1 -gt 0 ']'
+ echo 'GAB has usage count greater than zero. Cannot unload'
GAB has usage count greater than zero. Cannot unload
+ return 1
...

and ran lsmod again:

lsmod | grep gab
gab 283317 1
llt 180985 1 gab
[root@srv-vrts-n2 ~]# rmmod gab
ERROR: Module gab is in use

What can be done in such a situation?
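GAB port d belongs to the ODM client, and even after ODM is unconfigured the gab module can stay pinned by the GMS/GLM kernel modules that are still loaded (vxgms and vxglm both show up in the lsmod output). A sketch of a stop sequence that usually lets gab and llt unload cleanly; the init script names assume a standard SFCFS HA 5.1 SP1 layout, and if a script is not present the corresponding module can be removed with rmmod instead:

# /opt/VRTSvcs/bin/hastop -all -force
# /etc/init.d/vxfen stop
# /etc/init.d/vxodm stop
# /etc/init.d/vxgms stop
# /etc/init.d/vxglm stop
# /etc/init.d/gab stop
# /etc/init.d/llt stop

Once the remaining kernel clients are gone, lsmod should show a gab use count of 0, and the gab and llt scripts (or rmmod) can then unload both modules.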
solution needed for vxfen issue
There is a two-node cluster and we split the cluster for an upgrade. The isolated node is not coming up because vxfen is not starting.

2013/02/01 11:52:05 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:52:20 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:52:35 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:52:50 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:53:05 VCS CRITICAL V-16-1-10037 VxFEN driver not configured. Retrying...
2013/02/01 11:53:20 VCS CRITICAL V-16-1-10031 VxFEN driver not configured. VCS Stopping. Manually restart VCS after configuring fencing
^C

The I/O fencing configuration seems okay on node 2, as /etc/vxfentab has the entries for the coordinator disks, /etc/vxfendg has the disk group entry vxfendg2, and these disks and the disk group are visible too.

DEVICE TYPE DISK GROUP STATUS
c0t5006016047201339d0s2 auto:sliced - (ossdg) online
c0t5006016047201339d1s2 auto:sliced - (sybasedg) online
c0t5006016047201339d2s2 auto:sliced - (vxfendg1) online
c0t5006016047201339d3s2 auto:sliced - (vxfendg1) online
c0t5006016047201339d4s2 auto:sliced - (vxfendg1) online
c0t5006016047201339d5s2 auto:sliced - (vxfendg2) online
c0t5006016047201339d6s2 auto:sliced - (vxfendg2) online
c0t5006016047201339d7s2 auto:sliced - - online
c2t0d0s2 auto:SVM - - SVM
c2t1d0s2 auto:SVM - - SVM

On checking vxfen.log:
Invoked S97vxfen. Starting
Fri Feb 1 11:50:37 CET 2013 starting vxfen..
Fri Feb 1 11:50:37 CET 2013 calling start_fun
Fri Feb 1 11:50:38 CET 2013 found vxfenmode file
Fri Feb 1 11:50:38 CET 2013 calling generate_vxfentab
Fri Feb 1 11:50:38 CET 2013 checking for /etc/vxfendg
Fri Feb 1 11:50:38 CET 2013 found /etc/vxfendg.
Fri Feb 1 11:50:38 CET 2013 calling generate_disklist
Fri Feb 1 11:50:38 CET 2013 Starting vxfen.. Done.
Fri Feb 1 11:50:38 CET 2013 starting in vxfen-startup
Fri Feb 1 11:50:38 CET 2013 calling regular vxfenconfig
Fri Feb 1 11:50:38 CET 2013 return value from above operation is 1
Fri Feb 1 11:50:38 CET 2013 output was VXFEN vxfenconfig ERROR V-11-2-1003 At least three coordinator disks must be defined
Log Buffer: 0xfffffffff4041090

refadm2-oss1{root} # cat /etc/vxfendg
vxfendg2

and there are the following two disks in vxfendg2:
c0t5006016047201339d5s2 auto:sliced - (vxfendg2) online
c0t5006016047201339d6s2 auto:sliced - (vxfendg2) online

Is it due to there being only two disks in the coordinator disk group? Is this a known issue?
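Yes, the vxfenconfig error is literal: I/O fencing needs an odd number of coordinator disks, three at minimum, and vxfendg2 contains only two. First compare /etc/vxfendg with the node that is still running, since both nodes must point at the same coordinator disk group (vxfendg1 in the listing already has three disks, so a simple mismatch in /etc/vxfendg is worth ruling out). If vxfendg2 really is the coordinator group, a third SCSI-3 capable LUN has to be added; a rough sketch, in which the media name vxfendg2_3 and the candidate disk c0t5006016047201339d7s2 are only examples, and where the exact import and coordinator-flag steps can vary by release:

# vxdg -tfC import vxfendg2
# vxdg -g vxfendg2 set coordinator=off
# vxdg -g vxfendg2 adddisk vxfendg2_3=c0t5006016047201339d7s2
# vxdg -g vxfendg2 set coordinator=on
# vxdg deport vxfendg2
# /etc/init.d/vxfen start

The vxfen startup script regenerates /etc/vxfentab from /etc/vxfendg, so once three coordinator disks are in the group the driver should configure and VCS can be restarted. A new coordinator disk should ideally be verified with vxfentsthdw before it is put into service.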