VCS Cluster not starting.
Hello All, I am having difficulty getting VCS started on this system. I have attached what I have so far and would appreciate any comments or suggestions on where to go from here. Thank you. The hostnames in main.cf correspond to those of the servers.

# hastatus -sum
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available

# hasys -state
VCS ERROR V-16-1-10600 Cannot connect to VCS engine

# hastop -all -force
VCS ERROR V-16-1-10600 Cannot connect to VCS engine

# hastart  (and hastart -onenode)
dmesg: Exiting: Another copy of VCS may be running

engine_A.log:
2013/10/22 15:16:43 VCS NOTICE V-16-1-11051 VCS engine join version=4.1000
2013/10/22 15:16:43 VCS NOTICE V-16-1-11052 VCS engine pstamp=4.1 03/03/05-14:58:00
2013/10/22 15:16:43 VCS NOTICE V-16-1-10114 Opening GAB library
2013/10/22 15:16:43 VCS NOTICE V-16-1-10619 'HAD' starting on: db1
2013/10/22 15:16:45 VCS INFO V-16-1-10125 GAB timeout set to 15000 ms
2013/10/22 15:17:00 VCS CRITICAL V-16-1-11306 Did not receive cluster membership, manual intervention may be needed for seeding

# gabconfig -a
GAB Port Memberships
===============================================================

# lltstat -nvv
LLT node information:
    Node        State      Link   Status   Address
  * 0 db1       OPEN       bge1   UP       00:03:BA:15
                           bge2   UP       00:03:BA:15
    1 db2       CONNWAIT   bge1   DOWN
                           bge2   DOWN

bash-2.05$ lltconfig
LLT is running

# ps -ef | grep had
root 826 1 0 15:16:43 ? 0:00 /opt/VRTSvcs/bin/had
root 836 1 0 15:16:45 ? 0:00 /opt/VRTSvcs/bin/hashadow
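A note for anyone hitting the same symptoms: the empty "GAB Port Memberships" output together with the V-16-1-11306 message means GAB never seeded, and lltstat shows node 1 (db2) in CONNWAIT with both bge1 and bge2 DOWN, so this node cannot see its peer over the private links at all. Below is a rough sketch of the usual recovery for the case where db2 is known to be down and you deliberately want to run db1 on its own; the PIDs are the ones from the ps output above, and manual seeding is only safe when you are certain the other node is not running with shared storage imported.

# confirm both heartbeat links to the peer really are down
lltstat -nvv | more
# stop the half-started had/hashadow processes left over from the failed start
kill 826 836
# seed GAB manually on this node only
gabconfig -x
# "Port a" (and, once HAD is up, "Port h") should now show a membership
gabconfig -a
# start the engine again
hastart

If db2 is supposed to be up, fix the bge1/bge2 interconnect (cables, switch ports, NIC config) first; once lltstat shows both links UP on both nodes, GAB seeds automatically according to /etc/gabtab and hastart needs no manual seeding.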
VCS Cluster not starting.
Hi, I am facing a problem while trying to start VCS. From the log:
==============================================================
tail /var/VRTSvcs/log/engine_A.log
2014/01/13 21:39:14 VCS NOTICE V-16-1-11050 VCS engine version=5.1
2014/01/13 21:39:14 VCS NOTICE V-16-1-11051 VCS engine join version=5.1.00.0
2014/01/13 21:39:14 VCS NOTICE V-16-1-11052 VCS engine pstamp=Veritas-5.1-10/06/09-14:37:00
2014/01/13 21:39:14 VCS INFO V-16-1-10196 Cluster logger started
2014/01/13 21:39:14 VCS NOTICE V-16-1-10114 Opening GAB library
2014/01/13 21:39:14 VCS NOTICE V-16-1-10619 'HAD' starting on: nsscls01
2014/01/13 21:39:16 VCS INFO V-16-1-10125 GAB timeout set to 30000 ms
2014/01/13 21:39:16 VCS NOTICE V-16-1-11057 GAB registration monitoring timeout set to 200000 ms
2014/01/13 21:39:16 VCS NOTICE V-16-1-11059 GAB registration monitoring action set to log system message
2014/01/13 21:39:31 VCS CRITICAL V-16-1-11306 Did not receive cluster membership, manual intervention may be needed for seeding
==============================================================

root@nsscls01# hastatus -sum
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available

Please advise how I can start VCS.
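The V-16-1-11306 line here is the same seeding problem as in the thread above: HAD started and registered with GAB, but GAB never formed a cluster membership. A minimal checklist, assuming a standard two-link LLT/GAB setup (adjust node names to your cluster):

# how many nodes does GAB expect before it seeds automatically?
cat /etc/gabtab
# has anything seeded yet? (empty port memberships = no)
gabconfig -a
# can this node see the other cluster node(s) over both private links?
lltstat -nvv | more
# if every expected node is up and connected but GAB still shows no membership,
# seed manually on ONE node only - never on both sides of a possible split brain
gabconfig -x
hastart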
MULTINICB resource faulty and not getting cleared
Hi Team, I am seeing a MultiNICB resource fault, as shown below:

D  Ossfs    Proxy      ossfs_p1  et-coreg-admin2
D  PubLan   MultiNICB  pub_mnic  et-coreg-admin2
D  Sybase1  Proxy      syb1_p1   et-coreg-admin2

pub_mnic is faulted, and in turn so are the Proxy resources that mirror the status of the MultiNICB resource. The errors below were seen on 3rd June:

Jun 3 10:39:17 et-coreg-admin2 in.mpathd[6604]: [ID 168056 daemon.error] All Interfaces in group pub_mnic have failed
Jun 3 10:39:18 et-coreg-admin2 Had[6102]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource pub_mnic (Owner: Unspecified, Group: PubLan) is FAULTED (timed out) on sys et-coreg-admin2

As of now the interfaces seem OK and the network is OK. I want to clear this resource, but being a persistent resource it should recover by itself once the network issue is resolved.

# ifconfig -a
lo0: flags=1001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8131 index 1
        inet 117.0.0.1 netmask ff000000
bnxe0: flags=19040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED> mtu 1500 index 1
        inet 10.106.111.66 netmask ffffff80 broadcast 10.106.111.117
        groupname pub_mnic
        ether 14:58:d0:54:18:18
bnxe0:1: flags=11000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FAILED> mtu 1500 index 1
        inet 10.106.111.70 netmask ffffff80 broadcast 10.106.111.117
bnxe1: flags=19040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED> mtu 1500 index 3
        inet 10.106.111.68 netmask ffffff80 broadcast 10.106.111.117
        groupname pub_mnic
        ether 14:58:d0:54:18:1c

# hares -display pub_mnic
#Resource  Attribute               System   Value
pub_mnic   Group                   global   PubLan
pub_mnic   Type                    global   MultiNICB
pub_mnic   AutoStart               global   1
pub_mnic   Critical                global   1
pub_mnic   Enabled                 global   1
pub_mnic   LastOnline              global   admin1
pub_mnic   MonitorOnly             global   0
pub_mnic   ResourceOwner           global
pub_mnic   TriggerEvent            global   0
pub_mnic   ArgListValues           admin1   UseMpathd 1 1 MpathdCommand 1 /usr/lib/inet/in.mpathd ConfigCheck 1 1 MpathdRestart 1 1 Device 4 bnxe0 0 bnxe1 1 NetworkHosts 1 10.106.111.51 LinkTestRatio 1 1 IgnoreLinkStatus 1 1 NetworkTimeout 1 100 OnlineTestRepeatCount 1 3 OfflineTestRepeatCount 1 3 NoBroadcast 1 0 DefaultRouter 1 0.0.0.0 Failback 1 0 GroupName 1 "" Protocol 1 IPv4
pub_mnic   ArgListValues           admin1   UseMpathd 1 1 MpathdCommand 1 /usr/lib/inet/in.mpathd ConfigCheck 1 1 MpathdRestart 1 1 Device 4 bnxe0 0 bnxe1 1 NetworkHosts 1 10.106.111.51 LinkTestRatio 1 1 IgnoreLinkStatus 1 1 NetworkTimeout 1 100 OnlineTestRepeatCount 1 3 OfflineTestRepeatCount 1 3 NoBroadcast 1 0 DefaultRouter 1 0.0.0.0 Failback 1 0 GroupName 1 "" Protocol 1 IPv4
pub_mnic   ConfidenceLevel         admin1   0
pub_mnic   ConfidenceLevel         admin1   0
pub_mnic   ConfidenceMsg           admin1
pub_mnic   ConfidenceMsg           admin1
pub_mnic   Flags                   admin1
pub_mnic   Flags                   admin1
pub_mnic   IState                  admin1   not waiting
pub_mnic   IState                  admin1   not waiting
pub_mnic   MonitorMethod           admin1   Traditional
pub_mnic   MonitorMethod           admin1   Traditional
pub_mnic   Probed                  admin1   1
pub_mnic   Probed                  admin1   1
pub_mnic   Start                   admin1   0
pub_mnic   Start                   admin1   0
pub_mnic   State                   admin1   ONLINE
pub_mnic   State                   admin1   FAULTED
pub_mnic   ComputeStats            global   0
pub_mnic   ConfigCheck             global   1
pub_mnic   DefaultRouter           global   0.0.0.0
pub_mnic   Failback                global   0
pub_mnic   GroupName               global
pub_mnic   IgnoreLinkStatus        global   1
pub_mnic   LinkTestRatio           global   1
pub_mnic   MpathdCommand           global   /usr/lib/inet/in.mpathd
pub_mnic   MpathdRestart           global   1
pub_mnic   NetworkHosts            global   10.106.111.51
pub_mnic   NetworkTimeout          global   100
pub_mnic   NoBroadcast             global   0
pub_mnic   OfflineTestRepeatCount  global   3
pub_mnic   OnlineTestRepeatCount   global   3
pub_mnic   Protocol                global   IPv4
pub_mnic   TriggerResStateChange   global   0
pub_mnic   UseMpathd               global   1
pub_mnic   ContainerInfo           admin1   Type Name Enabled
pub_mnic   ContainerInfo           admin1   Type Name Enabled
pub_mnic   Device                  admin1   bnxe0 0 bnxe1 1
pub_mnic   Device                  admin1   bnxe0 0 bnxe1 1
pub_mnic   MonitorTimeStats        admin1   Avg 0 TS
pub_mnic   MonitorTimeStats        admin1   Avg 0 TS
pub_mnic   ResourceInfo            admin1   State Valid Msg TS
pub_mnic   ResourceInfo            admin1   State Stale Msg TS

Please help to solve this as soon as possible.
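One thing worth pointing out in the output above: even though the network looks fine now, bnxe0, bnxe0:1 and bnxe1 all still carry the FAILED flag, which means in.mpathd still considers the IPMP group pub_mnic failed - most likely its probes towards the NetworkHosts target 10.106.111.51 are still not succeeding. A MultiNICB resource is persistent, so VCS will report it ONLINE again on its own once the monitor passes; the trick is to get IPMP healthy first and then force an immediate re-probe rather than waiting for the next monitor cycle. A rough sketch, using the names from the post:

# is the probe target reachable from this host?
ping 10.106.111.51
# has in.mpathd cleared the FAILED flag yet?
ifconfig bnxe0
ifconfig bnxe1
# once FAILED is gone, ask VCS to re-run the monitor for the persistent resource
hares -probe pub_mnic -sys et-coreg-admin2
# the Proxy resources mirror pub_mnic and should clear on their next monitor cycle
hares -state ossfs_p1
hares -state syb1_p1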
Errors in dmpevent.log file
Hi, I am seeing the error messages below in /var/adm/vx/dmpevents.log:

Mon Mar 28 11:23:27.786: SCSI error occured on Path sdv: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.788: SCSI error occured on Path sdv: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.790: SCSI error occured on Path sdu: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.792: SCSI error occured on Path sdu: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.793: SCSI error occured on Path sdaa: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)

Please confirm the cause and impact of these messages. Regards, S.
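For context: opcode 0x5f is the SCSI-3 PERSISTENT RESERVE OUT command, and a "reservation conflict" on it normally means another host already holds a registration or reservation on that LUN - typically SCSI-3 fencing keys belonging to another node or another cluster zoned to the same disks. The immediate impact is only that this host's registration attempt is rejected; whether it matters depends on which disks the paths sdu/sdv/sdaa belong to. A rough way to map the paths and check the fencing state (note: vxdmpadm getsubpaths may need a ctlr= or dmpnodename= argument on older releases, and the key-listing option of vxfenadm is -g rather than -s on some versions):

# which DMP disks do the complaining OS paths belong to?
vxdmpadm getsubpaths | egrep 'sdu|sdv|sdaa'
# is I/O fencing configured and running on this node?
vxfenadm -d
# list the SCSI-3 keys registered on the coordinator disks
vxfenadm -s all -f /etc/vxfentab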
Query regarding one vcs node down
Hi Team, please advise: if I have 4 nodes and, due to a network issue, all 4 nodes suddenly go down, and then 3 nodes come back up but 1 node does not, what happens to the VCS services? Do they start automatically, or do I need to start them manually? Please share the step-by-step activity. Thanks.
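In broad terms: the three nodes that came back form the cluster again on their own, and the fourth node normally rejoins automatically as soon as it boots, because LLT, GAB, fencing and HAD are all started from their init scripts and the running nodes already hold the seeded membership. Service groups then follow their normal AutoStart/failover policy, so nothing has to be reseeded by hand. If the fourth node stays out, a first-pass checklist on that node (a sketch, assuming a standard install):

# are the private links up towards the other three nodes?
lltconfig
lltstat -nvv | more
# has this node joined the GAB membership (port a), and is HAD registered (port h)?
gabconfig -a
# if LLT and GAB look fine but HAD is not running, start it manually
hastart
# then confirm the cluster-wide state from any node
hasys -state
hastatus -sum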
After network fluctuation disk group is imported on both nodes
Hi, after a network fluctuation the disk group was imported on both nodes. We tried to deport the DG from the passive node, but it showed an "offlining" status only. I thought that to resolve this we would have to stop VCS forcefully on the passive node, but now I can see that the service group is showing offline and the DG has been deported successfully. I have attached the engine_A.log file... Can you help me find out how the disk group got deported on the passive node?
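What most likely happened: when the interconnect came back, both nodes reported the failover group online, VCS logged a concurrency violation, and its violation handling took the group offline - and with it deported the disk group - on the node where the group was not supposed to be online. That is worth confirming in the attached engine_A.log. If you ever need to clean up a dual import by hand, a rough sketch (run on the passive node; <dgname> is a placeholder for your disk group name):

# which disk groups are imported where? the alldgs view also shows groups imported on other hosts
vxdisk -o alldgs list
vxdg list
# stop the volumes and deport the group from this node
vxvol -g <dgname> stopall
vxdg deport <dgname>
# then check what VCS thinks of the service group
hagrp -state
hastatus -sum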
Jeopardy state Query
Hi Team, I have a query. If I have a 2-node cluster (S1 and S2) and system S2 goes into a jeopardy state, what would be the impact on the service group if:
a) the SG has no critical resources set up
b) the SG has critical resources set up
What would be the steps to rectify this jeopardy state? Please guide. Thanks.
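As background: jeopardy means a node is down to its last functioning heartbeat path (or HAD has died while the node is still visible over LLT). Jeopardy by itself does not take any service group offline, and it behaves the same whether or not the group's resources are marked Critical - the Critical attribute only decides what happens when a resource inside the group faults. The practical effect is that if the jeopardized node then fails or its last link also drops, its service groups are autodisabled on the other node instead of being failed over automatically, to avoid a possible split brain. To find and clear the condition, a sketch assuming the usual two LLT links:

# jeopardy shows up as an extra "jeopardy" line in the GAB port memberships
gabconfig -a
# identify which private link is down, and on which system
lltstat -nvv | more
# repair that NIC / cable / switch port, then verify - once both links are UP again,
# the jeopardy membership clears by itself and no VCS restart is needed
lltstat -nvv | more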
Unable to boot into the OS
When trying to boot a Solaris 10 server with Veritas Volume Manager 4.0, I get the following errors:

NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS, datype = Disk
Oct 21 01:16:57 svc.startd[7]: svc:/system/power:default: Could not interpret user property.
[ system/power:default failed (see 'svcs -x' for details) ]
LLT INFO V-14-1-10009 LLT Protocol available
GAB INFO V-15-1-20021 GAB available
Oct 21 01:17:01 vxvm:vxconfigd: V-5-1-7601 vxvm:vxconfigd: ERROR: Could not open file /etc/vx/array.info for writing
Oct 21 01:17:01 vxvm:vxconfigd: V-5-1-11219 vxvm:vxconfigd: ERROR: Failed to open file /etc/vx/disk.info.new
Oct 21 01:17:01 vxvm:vxconfigd: V-5-1-7601 vxvm:vxconfigd: ERROR: Could not open file /etc/vx/array.info for writing
[ system/sysevent:default failed repeatedly (see 'svcs -x' for details) ]
Requesting System Maintenance Mode
(See /lib/svc/share/README for more information.)
Console login service(s) cannot run

Can anyone provide any insight into these errors?
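Those vxconfigd errors are usually a symptom rather than the cause: it cannot write /etc/vx/array.info or /etc/vx/disk.info.new, which typically means the root filesystem was still read-only, full, or damaged when vxconfigd started - and the failing system/sysevent and system/power services dropping the box into maintenance mode point the same way. A rough first pass from the maintenance-mode shell, assuming a UFS root on Solaris 10 (the disk device below is only an example - use your actual root slice):

# which services failed, and what do their logs say?
svcs -xv
# is / mounted read-write, and does it have free space?
mount | grep '^/ '
df -k / /var
# if / is read-only or was not cleanly mounted, check it from single-user or CD boot
fsck -y /dev/rdsk/c0t0d0s0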