maxuprc not getting updated even after reboot
Hi,

We got a request to change the "maxuproc for wt82369 by 1.5 times". To implement this, we made the necessary modification in the global zone (wt81958). Normally there is a relation between the max_nprocs value and the maxuprc value:

maxuprc = max_nprocs - reserved_procs (default is 5)

In this case we modified max_nprocs from 30000 to 50000:

[root@wt81958 GLOBAL] /etc # cat /etc/system | grep max_nprocs
set max_nprocs=50000

After the global zone reboot the value is not updated when we run sysdef:

[root@wt81958 GLOBAL] /root # sysdef | grep processes
30000 maximum number of processes (v.v_proc)
29995 maximum processes per user id (v.v_maxup)

Can anyone please tell us if we missed anything needed to make the change take effect? Awaiting your valuable suggestions.

Thanks,
senthilsam
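One thing worth ruling out (a sketch, not from the thread): whether the running kernel ever accepted the tunable, and whether /etc/system carries a duplicate or commented-out line. Assuming root access and the stock Solaris mdb:

# Ask the live kernel directly for the current values
echo "max_nprocs/D" | mdb -k
echo "maxuprc/D" | mdb -k
# Look for duplicate or commented-out entries that could override the setting
egrep -n "max_nprocs|maxuprc" /etc/system
# Bad /etc/system entries are usually reported at boot time
grep -i sorry /var/adm/messages | tail

If mdb still reports 30000, the kernel never applied the line, which usually points at a syntax problem in /etc/system or at the wrong file being edited.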
VCS Cluster not starting.

Hello All,

I am having difficulty getting VCS started on this system. I have attached what I have so far and would appreciate any comments or suggestions on where to go from here. Thank you. The hostnames in the main.cf correspond to those of the servers.

hastatus -sum
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available

hasys -state
VCS ERROR V-16-1-10600 Cannot connect to VCS engine

hastop -all -force
VCS ERROR V-16-1-10600 Cannot connect to VCS engine

hastart / hastart -onenode
dmesg: Exiting: Another copy of VCS may be running

engine_A.log
2013/10/22 15:16:43 VCS NOTICE V-16-1-11051 VCS engine join version=4.1000
2013/10/22 15:16:43 VCS NOTICE V-16-1-11052 VCS engine pstamp=4.1 03/03/05-14:58:00
2013/10/22 15:16:43 VCS NOTICE V-16-1-10114 Opening GAB library
2013/10/22 15:16:43 VCS NOTICE V-16-1-10619 'HAD' starting on: db1
2013/10/22 15:16:45 VCS INFO V-16-1-10125 GAB timeout set to 15000 ms
2013/10/22 15:17:00 VCS CRITICAL V-16-1-11306 Did not receive cluster membership, manual intervention may be needed for seeding

# gabconfig -a
GAB Port Memberships
===============================================================

# lltstat -nvv
LLT node information:
Node    State     Link   Status   Address
* 0 db1 OPEN
                  bge1   UP       00:03:BA:15
                  bge2   UP       00:03:BA:15
  1 db2 CONNWAIT
                  bge1   DOWN
                  bge2   DOWN

bash-2.05$ lltconfig
LLT is running

ps -ef | grep had
root 826 1 0 15:16:43 ? 0:00 /opt/VRTSvcs/bin/had
root 836 1 0 15:16:45 ? 0:00 /opt/VRTSvcs/bin/hashadow
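The empty GAB port membership plus V-16-1-11306 means the cluster never seeded: db2's LLT links are DOWN (CONNWAIT), so GAB cannot reach the seed count configured in /etc/gabtab. A possible way forward, as a sketch, assuming db2 is verified down (forcing a seed while the peer is alive but unreachable risks split brain):

# Seed count GAB is waiting for (typically /sbin/gabconfig -c -n 2)
cat /etc/gabtab
# With the peer verified down, seed this node manually
gabconfig -x
# Port a membership should now form and had can finish starting
gabconfig -a
hastatus -sum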
VCS Cluster not starting.

Hi,

I am facing a problem while trying to start VCS.

From the log:
==============================================================
tail /var/VRTSvcs/log/engine_A.log
2014/01/13 21:39:14 VCS NOTICE V-16-1-11050 VCS engine version=5.1
2014/01/13 21:39:14 VCS NOTICE V-16-1-11051 VCS engine join version=5.1.00.0
2014/01/13 21:39:14 VCS NOTICE V-16-1-11052 VCS engine pstamp=Veritas-5.1-10/06/09-14:37:00
2014/01/13 21:39:14 VCS INFO V-16-1-10196 Cluster logger started
2014/01/13 21:39:14 VCS NOTICE V-16-1-10114 Opening GAB library
2014/01/13 21:39:14 VCS NOTICE V-16-1-10619 'HAD' starting on: nsscls01
2014/01/13 21:39:16 VCS INFO V-16-1-10125 GAB timeout set to 30000 ms
2014/01/13 21:39:16 VCS NOTICE V-16-1-11057 GAB registration monitoring timeout set to 200000 ms
2014/01/13 21:39:16 VCS NOTICE V-16-1-11059 GAB registration monitoring action set to log system message
2014/01/13 21:39:31 VCS CRITICAL V-16-1-11306 Did not receive cluster membership, manual intervention may be needed for seeding
=============================================================================================

root@nsscls01# hastatus -sum
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available

Please advise how I can start VCS.
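This looks like the same unseeded-cluster condition as the previous thread. Before seeding manually, it may be worth confirming the heartbeat stack itself; a short checklist sketch using the standard file locations:

# Are the private links up and is the peer node visible?
lltstat -nvv
# Is GAB configured, and what seed count does it expect?
cat /etc/gabtab
gabconfig -a
# If all cluster nodes are actually up, fix LLT connectivity first;
# if the peers are genuinely down, seed manually with: gabconfig -x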
Solaris 11.1 VxVM 6.0.1: 'df' causes a panic

Environment:
System Configuration: HP ProLiant BL480c G1
Oracle Solaris 11.1 x86

Panic string:
BAD TRAP: type=e (#pf Page fault) rp=fffffffc816fdb90 addr=0 occurred in module "unix" due to a NULL pointer dereference

Veritas package info:
PKGINST: VRTSvxvm
NAME: Binaries for VERITAS Volume Manager by Symantec
CATEGORY: system
ARCH: i386
VERSION: 6.0.100.000,REV=08.01.2012.08.52

Stack:
genunix: [ID 655072 kern.notice] fffffffc816fdab0 unix:die+105 ()
genunix: [ID 655072 kern.notice] fffffffc816fdb80 unix:trap+153e ()
genunix: [ID 655072 kern.notice] fffffffc816fdb90 unix:cmntrap+e6 ()
genunix: [ID 655072 kern.notice] fffffffc816fdca0 unix:strncpy+1c ()
genunix: [ID 655072 kern.notice] fffffffc816fdcd0 odm:odmstatvfs+90 ()
genunix: [ID 655072 kern.notice] fffffffc816fdcf0 genunix:fsop_statfs+1a ()
genunix: [ID 655072 kern.notice] fffffffc816fde70 genunix:cstatvfs64_32+42 ()
genunix: [ID 655072 kern.notice] fffffffc816fdec0 genunix:statvfs64_32+69 ()
genunix: [ID 655072 kern.notice] fffffffc816fdf10 unix:brand_sys_sysenter+1dc ()

Messages:
unix: [ID 839527 kern.notice] df:
unix: [ID 753105 kern.notice] #pf Page fault
unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x0
unix: [ID 243837 kern.notice] pid=3965, pc=0xfffffffffb893ff8, sp=0xfffffffc816fdc88, eflags=0x10206
unix: [ID 211416 kern.notice] cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
unix: [ID 624947 kern.notice] cr2: 0
unix: [ID 625075 kern.notice] cr3: 59f0a2000
unix: [ID 625715 kern.notice] cr8: c
unix: [ID 100000 kern.notice]
unix: [ID 592667 kern.notice] rdi: fffffffc816fdd48 rsi: 0 rdx: f
unix: [ID 592667 kern.notice] rcx: 1 r8: e80 r9: 0
unix: [ID 592667 kern.notice] rax: fffffffc816fdd48 rbx: fefa3430 rbp: fffffffc816fdca0
unix: [ID 592667 kern.notice] r10: fffffffffb856d00 r11: 0 r12: fffffffc816fdd00
unix: [ID 592667 kern.notice] r13: ffffc10012176880 r14: 0 r15: ffffc1002bb09480
unix: [ID 592667 kern.notice] fsb: 0 gsb: ffffc1000eac8000 ds: 4b
unix: [ID 592667 kern.notice] es: 4b fs: 0 gs: 1c3
unix: [ID 592667 kern.notice] trp: e err: 0 rip: fffffffffb893ff8
unix: [ID 592667 kern.notice] cs: 30 rfl: 10206 rsp: fffffffc816fdc88
unix: [ID 266532 kern.notice] ss: 38

In the preceding panic log I see "odm:odmstatvfs+90". I think this is the root of the panic, but lacking scat and mdb knowledge, I cannot investigate this module. When I remove VxVM, there is no panic when I issue 'df'. If I can provide more information about this case, please let me know; for now I don't know what additional info to provide. The core dump is about 400 MB, which is more than I can attach to this message.
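The odm:odmstatvfs frame points at the ODM pseudo-filesystem shipped with Storage Foundation rather than VxVM proper: df stats every mounted filesystem, and the statvfs handler in the odm module appears to dereference a NULL pointer (note rsi: 0 going into strncpy). A hedged isolation test, assuming the usual /dev/odm mount point and the vxodm SMF service name used by Storage Foundation on Solaris:

# Is the ODM pseudo-filesystem mounted?
mount -v | grep odm
# Unmount it and retry; if df now succeeds, the odm module is the culprit
umount /dev/odm
df -k
# Keep it out of the picture across reboots while pursuing a patch
svcadm disable vxodm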
MultiNICB resource faulted and not getting cleared

Hi Team,

I am seeing a MultiNICB resource fault, as shown below:

D Ossfs   Proxy     ossfs_p1 et-coreg-admin2
D PubLan  MultiNICB pub_mnic et-coreg-admin2
D Sybase1 Proxy     syb1_p1  et-coreg-admin2

pub_mnic is faulted, and with it the Proxy resources that mirror the status of the MultiNICB resource. The error below was seen on 3rd June:

Jun 3 10:39:17 et-coreg-admin2 in.mpathd[6604]: [ID 168056 daemon.error] All Interfaces in group pub_mnic have failed
Jun 3 10:39:18 et-coreg-admin2 Had[6102]: [ID 702911 daemon.notice] VCS ERROR V-16-1-10303 Resource pub_mnic (Owner: Unspecified, Group: PubLan) is FAULTED (timed out) on sys et-coreg-admin2

As of now the interfaces seem OK and the network is OK. I want to clear this resource, but being a persistent resource it should recover by itself once the network issue is resolved.

# ifconfig -a
lo0: flags=1001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8131 index 1
        inet 127.0.0.1 netmask ff000000
bnxe0: flags=19040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED> mtu 1500 index 1
        inet 10.106.111.66 netmask ffffff80 broadcast 10.106.111.127
        groupname pub_mnic
        ether 14:58:d0:54:18:18
bnxe0:1: flags=11000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4,FAILED> mtu 1500 index 1
        inet 10.106.111.70 netmask ffffff80 broadcast 10.106.111.127
bnxe1: flags=19040843<UP,BROADCAST,RUNNING,MULTICAST,DEPRECATED,IPv4,NOFAILOVER,FAILED> mtu 1500 index 3
        inet 10.106.111.68 netmask ffffff80 broadcast 10.106.111.127
        groupname pub_mnic
        ether 14:58:d0:54:18:1c

hares -display pub_mnic
#Resource  Attribute               System  Value
pub_mnic   Group                   global  PubLan
pub_mnic   Type                    global  MultiNICB
pub_mnic   AutoStart               global  1
pub_mnic   Critical                global  1
pub_mnic   Enabled                 global  1
pub_mnic   LastOnline              global  admin1
pub_mnic   MonitorOnly             global  0
pub_mnic   ResourceOwner           global
pub_mnic   TriggerEvent            global  0
pub_mnic   ArgListValues           admin1  UseMpathd 1 1 MpathdCommand 1 /usr/lib/inet/in.mpathd ConfigCheck 1 1 MpathdRestart 1 1 Device 4 bnxe0 0 bnxe1 1 NetworkHosts 1 10.106.111.51 LinkTestRatio 1 1 IgnoreLinkStatus 1 1 NetworkTimeout 1 100 OnlineTestRepeatCount 1 3 OfflineTestRepeatCount 1 3 NoBroadcast 1 0 DefaultRouter 1 0.0.0.0 Failback 1 0 GroupName 1 "" Protocol 1 IPv4
pub_mnic   ArgListValues           admin1  UseMpathd 1 1 MpathdCommand 1 /usr/lib/inet/in.mpathd ConfigCheck 1 1 MpathdRestart 1 1 Device 4 bnxe0 0 bnxe1 1 NetworkHosts 1 10.106.111.51 LinkTestRatio 1 1 IgnoreLinkStatus 1 1 NetworkTimeout 1 100 OnlineTestRepeatCount 1 3 OfflineTestRepeatCount 1 3 NoBroadcast 1 0 DefaultRouter 1 0.0.0.0 Failback 1 0 GroupName 1 "" Protocol 1 IPv4
pub_mnic   ConfidenceLevel         admin1  0
pub_mnic   ConfidenceLevel         admin1  0
pub_mnic   ConfidenceMsg           admin1
pub_mnic   ConfidenceMsg           admin1
pub_mnic   Flags                   admin1
pub_mnic   Flags                   admin1
pub_mnic   IState                  admin1  not waiting
pub_mnic   IState                  adm1  not waiting
pub_mnic   MonitorMethod           admin1  Traditional
pub_mnic   MonitorMethod           admin1  Traditional
pub_mnic   Probed                  admin1  1
pub_mnic   Probed                  admin1  1
pub_mnic   Start                   admin1  0
pub_mnic   Start                   admin1  0
pub_mnic   State                   admin1  ONLINE
pub_mnic   State                   admin1  FAULTED
pub_mnic   ComputeStats            global  0
pub_mnic   ConfigCheck             global  1
pub_mnic   DefaultRouter           global  0.0.0.0
pub_mnic   Failback                global  0
pub_mnic   GroupName               global
pub_mnic   IgnoreLinkStatus        global  1
pub_mnic   LinkTestRatio           global  1
pub_mnic   MpathdCommand           global  /usr/lib/inet/in.mpathd
pub_mnic   MpathdRestart           global  1
pub_mnic   NetworkHosts            global  10.106.111.51
pub_mnic   NetworkTimeout          global  100
pub_mnic   NoBroadcast             global  0
pub_mnic   OfflineTestRepeatCount  global  3
pub_mnic   OnlineTestRepeatCount   global  3
pub_mnic   Protocol                global  IPv4
pub_mnic   TriggerResStateChange   global  0
pub_mnic   UseMpathd               global  1
pub_mnic   ContainerInfo           admin1  Type Name Enabled
pub_mnic   ContainerInfo           admin1  Type Name Enabled
pub_mnic   Device                  admin1  bnxe0 0 bnxe1 1
pub_mnic   Device                  admin1  bnxe0 0 bnxe1 1
pub_mnic   MonitorTimeStats        admin1  Avg 0 TS
pub_mnic   MonitorTimeStats        admin1  Avg 0 TS
pub_mnic   ResourceInfo            admin1  State Valid Msg TS
pub_mnic   ResourceInfo            admin1  State Stale Msg TS

Please help to solve this ASAP.
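Two observations, offered as a sketch rather than a confirmed fix. First, the ifconfig output above still shows the FAILED flag on bnxe0 and bnxe1, so in.mpathd has not yet declared the group repaired; a persistent resource like MultiNICB stays FAULTED until a monitor cycle sees the interfaces healthy, regardless of hares -clear. Second, once the flags drop, the agent can be nudged with a probe. NetworkHosts is 10.106.111.51 in this configuration, so that is the probe target to verify:

# Can the group actually reach its configured probe target?
ping 10.106.111.51
# Has in.mpathd logged a repair since the failure?
grep mpathd /var/adm/messages | tail
# Once FAILED is gone from ifconfig output, force a fresh monitor cycle
hares -probe pub_mnic -sys et-coreg-admin2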
VxVM vxdg ERROR V-5-1-10978 Disk group nbu_dg: import failed: No valid disk found containing disk group

Hi,

I have a 2-node NetBackup cluster (VCS). Earlier today I migrated a volume from an old storage array to a new storage array. Here is how I did it:

1. Presented the new disk to the hosts
2. Scanned for the new disk at the OS level
3. Scanned for the new disk in Veritas
4. Used the vxdiskadm utility to initialize the new disk
5. Added the new disk to the disk group
6. Mirrored the volume to the new disk
7. After synchronization had completed, removed the old plex from the disk group

All of the above steps were done on the active node (NODE1). Now when I try to fail over the cluster resources to the inactive node (NODE2), I get the error below:

VxVM vxdg ERROR V-5-1-10978 Disk group nbu_dg: import failed: No valid disk found containing disk group

The cluster then fails back to the original node (NODE1).

bash-3.2# vxdisk -o alldgs list    (active node)
DEVICE           TYPE          DISK        GROUP   STATUS
disk_0           auto:ZFS      -           -       ZFS
emc_clariion0_0  auto:cdsdisk  -           -       online   (old disk)
emc_clariion0_1  auto:cdsdisk  nbu_dg02    nbu_dg  online   (new disk)
=========================================================
bash-3.2# vxdisk -o alldgs list    (inactive node)
DEVICE           TYPE          DISK        GROUP   STATUS
disk_0           auto:ZFS      -           -       ZFS
emc_clariion0_0  auto:cdsdisk  -           -       online   (old disk)

From the above output, I can't see the new disk, which is supposed to show up on the inactive node with the disk group in a deported state. Please assist.

Regards,
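The symptom fits a NODE2 that was never rescanned after the new LUN was presented: with no device node for emc_clariion0_1, VxVM there has no disk carrying the nbu_dg records (the old disk was removed from the group, so it no longer counts). A rescan sketch for NODE2, assuming Solaris-style discovery as in the output above:

# On NODE2: rediscover devices at the OS level
devfsadm
# Ask VxVM/DMP to rescan and rebuild its device view
vxdctl enable
vxdisk scandisks new
# nbu_dg should now appear against the new disk, in a deported state
vxdisk -o alldgs list

If the disk still does not appear, it is worth confirming on the array side that the new LUN is actually zoned and masked to NODE2 and not only to NODE1.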
Errors in dmpevents.log file

Hi,

I am seeing the error messages below in /var/adm/vx/dmpevents.log:

Mon Mar 28 11:23:27.786: SCSI error occured on Path sdv: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.788: SCSI error occured on Path sdv: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.790: SCSI error occured on Path sdu: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.792: SCSI error occured on Path sdu: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)
Mon Mar 28 11:23:27.793: SCSI error occured on Path sdaa: opcode=0x5f reported reservation conflict (status=0xc, key=0x0, asc=0x0, ascq=0x0)

Please confirm the cause and impact of these messages.

Regards,
S.
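For what it's worth: opcode 0x5f is the SCSI PERSISTENT RESERVE OUT command, so these entries mean this host tried to place or update a SCSI-3 registration and the device rejected it with a reservation conflict — typically because another node (an I/O-fencing cluster, or stale keys from a previous owner) holds registrations on the LUN. A way to inspect the keys, hedged because the option letter varies by release (newer vxfenadm uses -s; older releases used -g):

# Read the SCSI-3 registration keys on one of the affected devices
vxfenadm -s /dev/sdv
# Keys belonging to another cluster node would confirm the conflict source;
# stale keys from a decommissioned node can be removed with vxfenclearpre

If the LUNs are legitimately fenced by another cluster, the messages are expected noise on the conflicting paths rather than a data-integrity problem.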
Import disk group failure

Hello everyone!

When I finished the disk group configuration, I could not find the disk group until I imported it manually, but after rebooting the server I couldn't find the disk group unless I imported it again. My operations are shown below. Also, the RVG was DISABLED until I started it, and it is DISABLED again after rebooting. Any help and suggestions would be appreciated.

[root@u31_host Desktop]# vxdisk list
DEVICE  TYPE          DISK  GROUP  STATUS
sda     auto:none     -     -      online invalid
sdb     auto:cdsdisk  -     -      online
sdc     auto:cdsdisk  -     -      online
sdd     auto:cdsdisk  -     -      online
[root@u31_host Desktop]# vxdg list
NAME  STATE  ID
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE  TYPE          DISK  GROUP         STATUS
sda     auto:none     -     -             online invalid
sdb     auto:cdsdisk  -     (netnumendg)  online
sdc     auto:cdsdisk  -     (netnumendg)  online
sdd     auto:cdsdisk  -     (netnumendg)  online
[root@u31_host dsk]# vxdg import netnumendg
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE  TYPE          DISK          GROUP       STATUS
sda     auto:none     -             -           online invalid
sdb     auto:cdsdisk  netnumendg01  netnumendg  online
sdc     auto:cdsdisk  netnumendg02  netnumendg  online
sdd     auto:cdsdisk  netnumendg03  netnumendg  online
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg 1 DISABLED CLEAN primary 2 srl_vol
[root@u31_host dsk]# vxrvg -g netnumendg start netnumenrvg
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg 1 ENABLED ACTIVE primary 2 srl_vol

After rebooting the server:

[root@u31_host Desktop]# vxprint
[root@u31_host Desktop]# vxdg list
NAME  STATE  ID
[root@u31_host Desktop]# vxdisk list
DEVICE  TYPE          DISK  GROUP  STATUS
sda     auto:none     -     -      online invalid
sdb     auto:cdsdisk  -     -      online
sdc     auto:cdsdisk  -     -      online
sdd     auto:cdsdisk  -     -      online
[root@u31_host Desktop]# vxprint -rt | grep ^rv
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE  TYPE          DISK  GROUP         STATUS
sda     auto:none     -     -             online invalid
sdb     auto:cdsdisk  -     (netnumendg)  online
sdc     auto:cdsdisk  -     (netnumendg)  online
sdd     auto:cdsdisk  -     (netnumendg)  online
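At boot, VxVM only auto-imports a disk group when the hostid stamped on the group's disks matches the hostid recorded in /etc/vx/volboot and the group was imported when the host went down; otherwise it stays deported, exactly as shown. The RVG likewise needs an explicit vxrvg start after an import unless something like VCS/VVR agents manage it. A sketch of the checks:

# Hostid VxVM believes it has (read from /etc/vx/volboot)
vxdctl list | grep -i hostid
# Hostid stamped on one of the disk group's disks
vxdisk list sdb | grep -i hostid
# If they differ, set the volboot hostid (vxdctl hostid <name>) or
# re-import once so the disks are re-stamped with the current hostid
vxdg import netnumendg
vxrvg -g netnumendg start netnumenrvg

If the hostids already match, the next thing to check is whether a shutdown script or cluster agent is deporting the group before the reboot.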