Maxuproc not getting updated even after reboot
Hi,

We got an update request to increase maxuproc for wt82369 by 1.5 times. While verifying, we made the necessary modification on the global zone (wt81958). Normally there is a relation between the max_nprocs value and the maxuprc value:

  maxuprc = max_nprocs - reserved_procs (where reserved_procs defaults to 5)

In this case we modified max_nprocs from 30000 to 50000:

  [root@wt81958 GLOBAL] /etc # cat /etc/system | grep max_nprocs
  set max_nprocs=50000

After the global zone reboot, the value is still not updated when we run sysdef:

  [root@wt81958 GLOBAL] /root # sysdef | grep processes
  30000 maximum number of processes (v.v_proc)
  29995 maximum processes per user id (v.v_maxup)

Can anyone please tell us if we missed anything needed to make the change take effect? Awaiting your valuable suggestions.

Thanks,
senthilsam
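A quick sanity check (a diagnostic sketch using standard Solaris tools; none of this is from the original post) is to confirm that the kernel actually parsed the /etc/system entry and that nothing overrides it. A duplicate set line later in the file, or an explicit maxuprc setting, would leave sysdef reporting the old values:

  # grep -n 'max_nprocs\|maxuprc' /etc/system   # look for duplicate or conflicting entries
  # grep -i 'sorry' /var/adm/messages           # boot-time warnings about rejected /etc/system lines
  # echo "max_nprocs/D" | mdb -k                # read the live kernel value directly
  # echo "maxuprc/D" | mdb -k

If mdb shows 50000 but sysdef does not, the problem is elsewhere; if mdb also shows 30000, the entry was never applied at boot.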
vxlicrep ERROR V-21-3-1015 Failed to prepare report for key

Dear all,

We got an INFOSCALE FOUNDATION LNX 1 CORE ONPREMISE STANDARD PERPETUAL LICENSE CORPORATE. I have installed the key using the vxlicinst -k <key> command, but when I check it using vxlicrep I get this error for the given key:

  vxlicrep ERROR V-21-3-1015 Failed to prepare report for key = <key>

We have Veritas Volume Manager 5.1 (VRTSvxvm-5.1.100.000-SP1_RHEL5 and VRTSvlic-3.02.51.010-0) running on 64-bit RHEL 5.7.

I've read that the next step is to run vxkeyless set NONE, but I'm afraid to run this until I can see the license reported correctly by vxlicrep. What can I do to fix it?

Thank you in advance.

Kind regards,
Laszlo
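Before disabling keyless licensing, it may help to see what the licensing layer can parse at all. A cautious order of operations (a sketch built from standard VxVM licensing commands, not advice from this thread) would be:

  # vxkeyless display    # which keyless product levels are currently enabled
  # vxlicrep             # report every installed key, not just the new one
  # rpm -q VRTSvlic      # a 3.02-era VRTSvlic may simply predate the InfoScale key format
  # vxkeyless set NONE   # run only once vxlicrep reports the new key as valid

The rpm check is the assumption worth testing first: an InfoScale 7.x key handed to a 5.1-era licensing stack that cannot parse it would match the V-21-3-1015 symptom.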
Unable to initialize disk using vxdisksetup

I'm in the middle of setting up Storage Foundation for Oracle RAC 5.0 on RHEL 4 Update 3, with a Sun StorEdge 6920 as the storage array. Here's what happened:

- The SF for Oracle RAC documentation asked me to create the minimum possible LUN on the array for the coordinator disks
- I created the smallest LUN (16 MB) on the array
- When trying to initialize it (using vxdisksetup -i Disk_0 format=cdsdisk) I got an error about the disk being too small
- I extended the LUN on the array to 50 MB
- The servers still saw the coordinator disks (/dev/sda to /dev/sdc) at 16 MB. Since I'm not well versed in Linux, I rebooted both servers so they could see the new LUN size

fdisk is able to see the new LUN size and manipulate it, but vxdisksetup still will not let me initialize the disks. I've tried the following:

  # fdisk /dev/sda
  (chose o and then w)

  # vxdisksetup -i Disk_0 format=cdsdisk
  VxVM vxdisk ERROR V-5-1-535 Device Disk_0: Invalid attributes

  # vxdisksetup -i Disk_0 format=cdsdisk
  VxVM vxdisksetup ERROR V-5-2-0 Disk is too small for supplied parameters

Then I zero-filled the LUN:

  # dd if=/dev/zero of=/dev/sda bs=1M

and repeated the steps above, with the same errors. I've also tried format=simple, but that doesn't work either. Have I missed something, or is it just that VxVM does not like volumes expanded by the array? Please help.
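One thing worth ruling out (a sketch that assumes the OS itself now reports the 50 MB size) is a stale VxVM device record: VxVM caches device attributes, so after an array-side resize the device usually has to be dropped from its view and rescanned rather than just rebooted around:

  # cat /proc/partitions | grep sda   # size in 1K blocks; confirm ~50 MB, not 16 MB
  # vxdisk rm Disk_0                  # drop the stale VxVM device record
  # vxdctl enable                     # rescan and rebuild the device list
  # vxdisk list Disk_0                # check the reported size before retrying
  # vxdisksetup -i Disk_0 format=cdsdisk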
Doubts on VxVM, VCS upgrade & root disk encapsulation

Hi All,

I have the queries below, please.

1) In order to stop VxVM loading at system boot time, we need to modify the /etc/system file. Which entries are to be commented out? Is it only

  rootdev:/pseudo/vxio@0:0
  set vxio:vol_rootdev_is_volume=1

or are the entries below also to be commented out?

  forceload: drv/vxdmp
  forceload: drv/vxio
  forceload: drv/vxspec

2) My current version of SFHA is 4.1. Once the vxfen, gab & llt modules are unloaded to upgrade to 4.1MP2, should I again unload these modules to further upgrade to 5.1SP1, and again for 5.1SP1RP4 (or) 6.0? After each upgrade should I stop the services in /etc/init.d and unload the modules, or is stopping the services and unloading the modules only once enough for the remaining upgrades? My plan is to upgrade 4.1 ---> 4.1MP2 ---> 5.1SP1 ---> 5.1SP1RP4 (or) 6.0. See the sketch after this list for what one stop/unload pass looks like.

3) Before upgrading, should I also stop & unload the modules listed below?

  24 12800a8 26920 268 1 vxdmp (VxVM 4.1z: DMP Driver)
  25 7be00000 2115c8 269 1 vxio (VxVM 4.1z I/O driver)
  27 12a4698 13f0 270 1 vxspec (VxVM 4.1z control/status driver)
  213 7b2d7528 c40 272 1 vxportal (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
  214 7ae00000 1706a8 20 1 vxfs (VxFS 4.1_REV-4.1B18_sol_GA_s10b)

If yes, should I stop & unload after each upgrade, or is doing it once enough?

4) Once the OS comes up with native disks (c#t#d#s#), in order to bring it under VxVM control we need to encapsulate using vxdiskadm. My doubt is: will rootdg, rootvol, plexes & subdisks be created automatically? I need a little clarification regarding this, please.

A response is highly appreciated as always. Thank you very much.

Regards,
Danish.
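On questions 2 and 3, a single stop/unload pass on Solaris typically looks like the sketch below (standard commands, not the documented procedure for any particular release; each upgrade hop that replaces the packages generally repeats it):

  # hastop -local                     # stop VCS on this node
  # /etc/init.d/vxfen stop            # stop I/O fencing
  # /etc/init.d/gab stop              # stop GAB
  # /etc/init.d/llt stop              # stop LLT
  # modinfo | egrep 'vxfen|gab|llt'   # list anything still loaded
  # modunload -i <module-id>          # unload each remaining module by its id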
Cannot remove last disk group configuration copy

Hi,

I have a disk group with 6 EMC SAN disks in it. I got 6 new SAN disks, added them to the same disk group, and mirrored the volume onto them. The host is running CentOS 4.8. After mirroring I removed the old plex. When I try to remove the last disk from the old SAN using "vxdg -g dg rmdisk <disk_name>", it throws the error below:

  # vxdg -g dg01 rmdisk disk06
  VxVM vxdg ERROR V-5-1-10127 disassociating disk-media disk06: Cannot remove last disk group configuration copy

I would like to remove this last disk from the disk group, because the volume is now running on the new disks. How can I remove this disk from the disk group?

Thanks for the help in advance.
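The error usually means disk06 holds the only enabled copy of the disk group's configuration database, so removing it would leave the DG with no metadata at all. A first step (a diagnostic sketch; the names are from the post) is to confirm where the active config copies sit:

  # vxdg list dg01        # the 'config disk ...' lines show which disks hold online copies
  # vxdisk list disk06    # per-disk view; check the config/private region fields

If only disk06 shows an online configuration copy, copies have to come up on the new disks before disk06 can be released.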
InfoScale 7.1 and large disks (8Tb) with FSS

Hi everyone,

I had been successfully running FSS with (thin) 8 Tb disk drives on SFCFSHA 6.1 and 6.2.1 (see: http://vcojot.blogspot.ca/2015/01/storage-foundation-ha-61-and-flexible.html). I am trying to reproduce the same kind of setup with InfoScale 7.1 and it seems to have issues with 8 Tb drives.

Here's the full setup description: 2 * RHEL 6.8 hosts with 16 GB RAM, and 4 LSI virtual adapters, each with 15 drives. c0* and c1* have 2 Tb drives; c2* and c3* have 8 Tb drives. Both the 2 Tb and 8 Tb drives are 'exported' and the cluster is stable.

Here's what I noticed. Creating an FSS DG works on the 2 Tb drives but not on the 8 Tb drives (it used to on 6.1 and 6.2.1):

  [root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_2T_00
  [root@vcs18 ~]# vxdg list FSS00dg
  Group: FSS00dg
  dgid: 1466522672.427.vcs18
  import-id: 33792.426
  flags: shared cds
  version: 220
  alignment: 8192 (bytes)
  local-activation: shared-write
  cluster-actv-modes: vcs18=sw vcs19=sw
  ssb: on
  autotagging: on
  detach-policy: local
  dg-fail-policy: obsolete
  ioship: on
  fss: on
  storage-sources: vcs18
  copies: nconfig=default nlog=default
  config: seqno=0.1027 permlen=51360 free=51357 templen=2 loglen=4096
  config disk ssd_2T_00 copy 1 len=51360 state=clean online
  log disk ssd_2T_00 copy 1 len=4096

On the 8 Tb drives, it fails with:

  [root@vcs18 ~]# vxdg destroy FSS00dg
  [root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_8T_00
  VxVM vxdg ERROR V-5-1-585 Disk group FSS00dg: cannot create: Record not in disk group

One thing that I noticed is that the 8 Tb drives, even though exported, do -not- show up on the remote machine:

  [root@vcs18 ~]# vxdisk list | grep _00
  ssd_2T_00    auto:cdsdisk  -  -  online exported
  ssd_2T_00_1  auto:cdsdisk  -  -  online remote
  ssd_8T_00    auto:cdsdisk  -  -  online exported

One other thing to note is that the 'connectivity' seems a bit messed up on the 8 Tb drives:

  [root@vcs18 ~]# vxdisk list ssd_2T_00 | grep conn
  connectivity: vcs18
  [root@vcs18 ~]# vxdisk list ssd_2T_00_1 | grep conn
  connectivity: vcs19
  [root@vcs18 ~]# vxdisk list ssd_8T_00 | grep conn
  connectivity: vcs18 vcs19

That's (IMHO) an error, since those 'virtual' drives are local to each of the nodes and the SCSI buses aren't shared: vcs18 and vcs19 are two fully independent VMware machines. This looks like a bug to me, but since I don't work for a company with a vx software support contract anymore, I cannot report the issue.

Thanks for reading,
Vincent
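A couple of checks that might narrow this down (a diagnostic sketch; device and host names are from the post, and the duplicate-identifier theory is only an assumption): if the two nodes' local 8 Tb virtual drives present identical device identifiers, VxVM could mistake them for a single shared disk, which would fit both the bogus two-node connectivity and the missing 'remote' entries:

  [root@vcs18 ~]# vxdisk list ssd_8T_00 | egrep -i 'udid|guid|flags'   # compare identifiers across nodes
  [root@vcs19 ~]# vxdisk list ssd_8T_00 | egrep -i 'udid|guid|flags'
  [root@vcs18 ~]# vxdctl -c mode                                       # confirm both nodes are joined and which is master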
Solaris 11.1 VxVM 6.0.1: 'df' causes a panic

Environment:

  System Configuration: HP ProLiant BL480c G1
  Oracle Solaris 11.1 x86
  panic string: BAD TRAP: type=e (#pf Page fault) rp=fffffffc816fdb90 addr=0 occurred in module "unix" due to a NULL pointer dereference

Veritas info:

  PKGINST: VRTSvxvm
  NAME: Binaries for VERITAS Volume Manager by Symantec
  CATEGORY: system
  ARCH: i386
  VERSION: 6.0.100.000,REV=08.01.2012.08.52

Stack:

  genunix: [ID 655072 kern.notice] fffffffc816fdab0 unix:die+105 ()
  genunix: [ID 655072 kern.notice] fffffffc816fdb80 unix:trap+153e ()
  genunix: [ID 655072 kern.notice] fffffffc816fdb90 unix:cmntrap+e6 ()
  genunix: [ID 655072 kern.notice] fffffffc816fdca0 unix:strncpy+1c ()
  genunix: [ID 655072 kern.notice] fffffffc816fdcd0 odm:odmstatvfs+90 ()
  genunix: [ID 655072 kern.notice] fffffffc816fdcf0 genunix:fsop_statfs+1a ()
  genunix: [ID 655072 kern.notice] fffffffc816fde70 genunix:cstatvfs64_32+42 ()
  genunix: [ID 655072 kern.notice] fffffffc816fdec0 genunix:statvfs64_32+69 ()
  genunix: [ID 655072 kern.notice] fffffffc816fdf10 unix:brand_sys_sysenter+1dc ()

Messages:

  unix: [ID 839527 kern.notice] df:
  unix: [ID 753105 kern.notice] #pf Page fault
  unix: [ID 532287 kern.notice] Bad kernel fault at addr=0x0
  unix: [ID 243837 kern.notice] pid=3965, pc=0xfffffffffb893ff8, sp=0xfffffffc816fdc88, eflags=0x10206
  unix: [ID 211416 kern.notice] cr0: 80050033<pg,wp,ne,et,mp,pe> cr4: 6f8<xmme,fxsr,pge,mce,pae,pse,de>
  unix: [ID 624947 kern.notice] cr2: 0
  unix: [ID 625075 kern.notice] cr3: 59f0a2000
  unix: [ID 625715 kern.notice] cr8: c
  unix: [ID 100000 kern.notice]
  unix: [ID 592667 kern.notice] rdi: fffffffc816fdd48 rsi: 0 rdx: f
  unix: [ID 592667 kern.notice] rcx: 1 r8: e80 r9: 0
  unix: [ID 592667 kern.notice] rax: fffffffc816fdd48 rbx: fefa3430 rbp: fffffffc816fdca0
  unix: [ID 592667 kern.notice] r10: fffffffffb856d00 r11: 0 r12: fffffffc816fdd00
  unix: [ID 592667 kern.notice] r13: ffffc10012176880 r14: 0 r15: ffffc1002bb09480
  unix: [ID 592667 kern.notice] fsb: 0 gsb: ffffc1000eac8000 ds: 4b
  unix: [ID 592667 kern.notice] es: 4b fs: 0 gs: 1c3
  unix: [ID 592667 kern.notice] trp: e err: 0 rip: fffffffffb893ff8
  unix: [ID 592667 kern.notice] cs: 30 rfl: 10206 rsp: fffffffc816fdc88
  unix: [ID 266532 kern.notice] ss: 38

In the preceding panic log I see "odm:odmstatvfs+90". I think this is the root cause of the panic, but due to my lack of scat and mdb knowledge I cannot investigate this module. When I remove VxVM, there is no panic when I issue 'df'. If I can provide more information about this case, please let me know; for now I don't know what additional info to provide. The core dump is about 400 MB, which is more than I can attach to this message.
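The faulting frame is in the ODM (Oracle Disk Manager) module rather than in VxVM proper, so one line of investigation (a sketch of standard Solaris checks, not a confirmed fix) is whether the installed VRTSodm matches the VxFS level, since odmstatvfs walking a mismatched structure would be consistent with a NULL pointer dereference on a statvfs call from df:

  # modinfo | egrep -i 'odm|vxfs'    # versions of the loaded kernel modules
  # pkginfo -l VRTSodm VRTSvxfs      # do the installed package revisions match?
  # svcs -a | grep -i odm            # locate the ODM service, then test df with it disabled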
Having issues attempting to reduce the size of a disk group

I am attempting to reduce the size of my disk group as an exercise and am unable to do so. I believe the issue is that my plex is in a DISABLED REMOVED state and my volume is in a DISABLED ACTIVE state. From my research, I should have run vxresize before doing vxdg rmdisk; now, trying to get it back, I keep seeing different suggestions but nothing specifically on a DISABLED REMOVED state. Any assistance would be appreciated...

  bash-3.2# vxprint -g dg_acpdev
  TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
  dg dg_acpdev dg_acpdev - - - - - -
  dm dg_acpdev_d01 emc1_3ac1 - 35281408 - - - -
  dm dg_acpdev_d02 - - - - REMOVED - -
  dm dg_acpdev_d03 emc1_3a7c - 70640128 - - - -
  dm dg_acpdev_d04 - - - - REMOVED - -
  dm dg_acpdev_d05 emc1_3860 - 35281408 - - - -
  pl dgacpdev-01 - DISABLED 141201408 - REMOVED - -
  sd dg_acpdev_d01-01 dgacpdev-01 ENABLED 35281408 0 - - -
  sd dg_acpdev_d03-01 dgacpdev-01 ENABLED 70640128 35281408 - - -
  sd dg_acpdev_d04-01 dgacpdev-01 DISABLED 35279872 105921536 REMOVED - -
  v dgacpdev fsgen DISABLED 141201408 - ACTIVE - -
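Given that part of the only plex sits on REMOVED disks, the volume's contents are effectively gone, and since this was an exercise, one option (a destructive sketch; only appropriate when the data is expendable, and <size> is a placeholder) is to delete the dead volume tree and the stale disk-media records, then recreate at the target size:

  bash-3.2# vxedit -g dg_acpdev -rf rm dgacpdev                    # recursively remove the volume, its REMOVED plex, and subdisks
  bash-3.2# vxdg -g dg_acpdev rmdisk dg_acpdev_d02 dg_acpdev_d04   # clear the stale dm records once nothing references them
  bash-3.2# vxassist -g dg_acpdev make dgacpdev <size>             # recreate the volume at the intended smaller size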
Detect more space on existing LUN

Hi,

I am using Veritas Storage Foundation 6.1 on SLES. We have a LUN already in a disk group, with a volume on it. The SAN team has increased the LUN from 200 GB to 300 GB. How can we get Veritas to see the additional 100 GB?

Tried "vxdg free", but this just shows the existing size, not the new size.
Tried rescanning the SCSI paths, but no changes.
Tried "vxdisk list LUN_NAME", but that gives the old size.

Thanks
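The step that is usually missing here is vxdisk resize, which updates the VxVM private region metadata to cover the grown LUN. A sketch (sdX and the angle-bracket names are placeholders; on a multipathed host every underlying path needs the rescan):

  # echo 1 > /sys/block/sdX/device/rescan   # make SLES re-read the LUN size; repeat per path
  # vxdctl enable                           # refresh VxVM's view of the devices
  # vxdisk -g <dg> resize <disk_name>       # grow the VxVM disk into the new space
  # vxdg -g <dg> free                       # the extra 100 GB should now be visible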
Testing the process to expand a disk group, need assistance

Hello,

I am fairly new to this Veritas storage world, so please be patient with me... I was tasked with expanding VxVM disk groups from 1 TB to 2 TB on Prod, Test, and Dev servers before a new client comes on board. I created a new disk group to test the process, in order to get it clean before I attack the real deal. The new disk group is dg_acpdev, which has 2 disks in it, but I can't get the maxsize, nor can I get these 2 to look like one disk... I have searched online and have seen several resize commands, but none of them work.

  bash-3.2# vxprint -g dg_acpdev
  TY NAME ASSOC KSTATE LENGTH PLOFFS STATE TUTIL0 PUTIL0
  dg dg_acpdev dg_acpdev - - - - - -
  dm dg_acpdev_d01 emc1_3ac1 - 35281408 - - - -
  dm dg_acpdev_d02 emc1_3a7c - 70640128 - - - -
  v dgacpdev fsgen ENABLED 35280896 - ACTIVE - -
  pl dgacpdev-01 dgacpdev ENABLED 35280896 - ACTIVE - -
  sd dg_acpdev_d01-01 dgacpdev-01 ENABLED 35280896 0 - - -
  v volspec fsgen ENABLED 62914560 - ACTIVE - -
  pl volspec-01 volspec ENABLED 62914560 - ACTIVE - -
  sd dg_acpdev_d02-01 volspec-01 ENABLED 62914560 0 - - -
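For the maxsize and growth questions, vxassist and vxresize are the usual tools. A sketch using the names from the vxprint output above (the +10g growth amount is illustrative):

  bash-3.2# vxassist -g dg_acpdev maxsize                     # largest new volume the DG's free space allows
  bash-3.2# vxassist -g dg_acpdev maxgrow dgacpdev            # largest size the existing volume can grow to
  bash-3.2# /etc/vx/bin/vxresize -g dg_acpdev dgacpdev +10g   # grow the volume and its filesystem together

A volume can span both disks (that is how it "looks at one disk" to the filesystem); the disk group itself has no single size to grow, only free space across its disks.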