Maxuproc does not get updated even after reboot
Hi,

We got an update request to increase maxuproc for wt82369 by 1.5 times. While verifying, we made the necessary modification on the global zone (wt81958). Normally there is a relation between the max_nprocs and maxuprc values:

maxuprc = max_nprocs - reserved_procs (reserved_procs defaults to 5)

In this case we modified max_nprocs from 30000 to 50000:

[root@wt81958 GLOBAL] /etc # cat /etc/system | grep max_nprocs
set max_nprocs=50000

After the global zone reboot, the value is not updated when we run sysdef:

[root@wt81958 GLOBAL] /root # sysdef | grep processes
30000  maximum number of processes (v.v_proc)
29995  maximum processes per user id (v.v_maxup)

Can anyone please tell us if we missed anything needed to make the change take effect?

Awaiting your valuable suggestions.

Thanks,
senthilsam
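A quick sanity check of the relation quoted in the post. The values are the ones from the thread; the heredoc file stands in for the real /etc/system (run checks like this against a copy, never the live file):

```shell
# Expected relation from the post: maxuprc = max_nprocs - reserved_procs,
# where reserved_procs defaults to 5.
max_nprocs=50000
reserved_procs=5
maxuprc=$((max_nprocs - reserved_procs))
echo "expected v.v_maxup after reboot: $maxuprc"

# Simulate the /etc/system check (heredoc stands in for the real file)
cat > /tmp/system.sample <<'EOF'
set max_nprocs=50000
EOF
grep '^set max_nprocs' /tmp/system.sample
```

If sysdef still reports 30000 after this entry is confirmed, the usual suspects are a typo or stray whitespace in the `set` line, the edit having been made in the wrong (non-global) zone, or the system having booted from a different boot environment.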
Doubts on VxVM, VCS Upgrade & root disk encapsulation

Hi All,

I have the below queries, please:

1) In order to stop VxVM from loading at system boot time, we need to modify the /etc/system file. Which entries are to be commented out? Is it only these:

rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1

or also the entries below?

forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec

2) My current version of SFHA is 4.1. Once the vxfen, gab & llt modules are unloaded to upgrade to 4.1MP2, should I again unload these modules to further upgrade to 5.1SP1, and again to 5.1SP1RP4 or 6.0? After each upgrade, should I stop the services in /etc/init.d and unload the modules, or is stopping the services and unloading the modules only once enough to carry on to the other versions? My plan is to upgrade 4.1 -> 4.1MP2 -> 5.1SP1 -> 5.1SP1RP4 (or 6.0).

3) Before upgrading, should I also stop & unload the modules listed below?

 24  12800a8   26920  268  1  vxdmp (VxVM 4.1z: DMP Driver)
 25  7be00000  2115c8 269  1  vxio (VxVM 4.1z I/O driver)
 27  12a4698   13f0   270  1  vxspec (VxVM 4.1z control/status driver)
213  7b2d7528  c40    272  1  vxportal (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
214  7ae00000  1706a8  20  1  vxfs (VxFS 4.1_REV-4.1B18_sol_GA_s10b)

If yes, should I stop & unload after each upgrade, or is doing it once enough?

4) Once the OS comes up with native disks (c#t#d#s#), in order to bring the root disk under VxVM control we need to encapsulate it using vxdiskadm. My doubt is: will rootdg, rootvol, the plexes & subdisks be created automatically? I need a little clarification regarding this.

A response is highly appreciated, as always. Thank you very much.

Regards,
Danish.
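Whatever the final answer on exactly which entries to disable, the mechanics of commenting them out can be sketched as below. Solaris /etc/system comments begin with `*`. The sketch edits a copy built from the entries listed in question 1, not the live file, and uses GNU sed's `-i`; on Solaris, redirect sed output to a new file instead:

```shell
# Work on a COPY of /etc/system; review it before moving it into place.
cp_file=/tmp/system.work
cat > "$cp_file" <<'EOF'
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
forceload: drv/vxdmp
forceload: drv/vxio
forceload: drv/vxspec
EOF

# Comment out the two rootdev-related entries ('*' is the comment char).
# The forceload lines are deliberately left untouched here, since the
# question of whether they must also go is exactly what is being asked.
sed -i \
  -e 's|^rootdev:/pseudo/vxio|* &|' \
  -e 's|^set vxio:vol_rootdev_is_volume|* &|' \
  "$cp_file"
cat "$cp_file"
```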
Upgrade of server

Hi all!

I have some outdated RHEL5 installations with older VxFS 5.xx on them. Some weeks back I popped out one of the mirrored OS disks and installed both RHEL7 and VxFS 6.2.1. I found no way to mount the old filesystems after the installation was done: no devices under /dev/vx/dsk or /dev/vx/rdsk. I could see the disks with 'vxdisk list' and the older diskgroups with vxdg (I don't remember the exact syntax I used). The disk layout was already version 7 on all filesystems, which seems to be supported in 6.2.1.

At that point I felt lost and wasn't going to risk losing any of the data by experimenting, so I reverted the changes and booted up in RHEL5 again. I'm quite sure that the task above can be solved. Could someone please point me in the right direction?

/Sverre.
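When 'vxdisk list' shows the disks but nothing appears under /dev/vx/dsk, a common cause is simply that the diskgroup was never imported on the new install and its volumes were never started. A dry-run sketch of the usual sequence follows; the names (olddg, datavol, /data) are placeholders for this environment, and `run` only prints the commands so nothing is executed:

```shell
# Dry-run helper: prints each command into a plan file instead of running it.
# On the real host, change run() to: run() { "$@"; }
plan=/tmp/import.plan
: > "$plan"
run() { echo "+ $*" | tee -a "$plan"; }

run vxdg -C import olddg          # -C clears a stale import lock from the old host
run vxvol -g olddg startall       # starting volumes creates /dev/vx/dsk/olddg/*
run mount -t vxfs /dev/vx/dsk/olddg/datavol /data
```

Reviewing the generated plan file before executing anything is a cheap way to avoid experimenting on data you cannot afford to lose.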
How do you tell if Veritas Storage Foundation is running SP1 RP3?

We have a request to upgrade all of our SF installations to SP1RP3. Some may already be at this level. How do you verify? Running swlist on the HP-UX 11.31 server shows:

VRTSdbed  5.1.100.000  Veritas Storage Foundation for Oracle by Symantec
VRTSsfmh  3.1.429.0    Veritas Storage Foundation Managed Host by Symantec
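One way to answer this across many hosts is to compare the dotted revision from swlist against a target revision. The comparison helper below is portable sh/awk; the target string is a placeholder, since the exact fileset revision that corresponds to SP1 RP3 should be confirmed with swlist on a known-good reference host:

```shell
# ver_ge INSTALLED TARGET -> exit 0 if INSTALLED >= TARGET (dotted versions)
ver_ge() {
  awk -v a="$1" -v b="$2" 'BEGIN {
    n = split(a, x, "."); m = split(b, y, ".")
    for (i = 1; i <= (n > m ? n : m); i++) {
      d = (x[i] + 0) - (y[i] + 0)   # missing fields compare as 0
      if (d > 0) exit 0
      if (d < 0) exit 1
    }
    exit 0
  }'
}

installed="5.1.100.000"     # e.g. parsed from: swlist -l product VRTSdbed
target="5.1.100.003"        # placeholder: confirm the real SP1 RP3 revision
if ver_ge "$installed" "$target"; then
  echo "already at or above $target"
else
  echo "upgrade needed (installed $installed < $target)"
fi
```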
Migrating to a new SAN

We're in the process of moving to new datacenters. All servers will be moved over, but the SAN won't. The SAN will be replicated to a new SAN in the new datacenters by our SAN admins. That means the LUNs in the new SAN will be identical to the old ones, including the LUN numbers. Example output from 'vxdisk list' on the current host shows:

c1t50050768014052E2d129s2 auto:cdsdisk somedisk01 somedg online

This disk will get the same LUN number, but the target name will probably differ as it's new hardware in the new DC. Will this work with Veritas SFHA? If not, is there any way to do this the way the datacenter project wants to?

If I could choose to do this my way, I would present the LUNs on the new SAN to my servers so that I could do a normal host-based migration: add the new LUN to the diskgroup and mirror the data, then remove the old LUN. However, I'm told that the hosts in the current datacenter won't be able to see the new SAN.
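For reference, the host-based migration described in the last paragraph can be sketched as the dry-run below. The diskgroup, disk, and volume names are illustrative (taken from or modeled on the vxdisk output above), "newdevice" is a placeholder for the new LUN's OS device name, and `run` only prints the commands:

```shell
# Dry-run of a host-based LUN migration: add, mirror, remove.
plan=/tmp/san-migration.plan
: > "$plan"
run() { echo "+ $*" | tee -a "$plan"; }

DG=somedg
OLD=somedisk01
NEW=somedisk02

run vxdisksetup -i newdevice              # initialize the new LUN for VxVM
run vxdg -g $DG adddisk $NEW=newdevice    # add it to the diskgroup
run vxassist -g $DG mirror datavol $NEW   # mirror each volume onto it
run vxplex -g $DG -o rm dis datavol-01    # detach and remove the old plex
run vxdg -g $DG rmdisk $OLD               # finally pull the old LUN
```

Whether the replicated-SAN approach works without this depends on how VxVM identifies the disks; since cdsdisk private regions travel with the replicated data, the diskgroup metadata itself should survive replication, but that is worth verifying with a test LUN before committing.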
Solaris OS compatibility for SFHA Version 4.1

Hi All,

Arch: SPARC
OS: Solaris 10, Release 3/05 (initial release)
SFHA: VCS, VxVM, VxFS, all version 4.1

I'm planning to upgrade the Solaris host from the initial release to the latest release (Solaris 10 1/13, Update 11). The host is running VxVM, VxFS & VCS, all version 4.1. After the Solaris upgrade to the latest release, will SFHA 4.1 work fine? Will the VCS service groups come online and the VxFS file systems get mounted while running 4.1? Kindly note that I want to know this for testing purposes only. Once Solaris is upgraded, I will also upgrade SFHA to version 5.1SP1RP4 or to version 6.

A reply is highly appreciated.

Danish.
5.1 SP1 new install on Solaris 9 SPARC VxVM 4.0 MP1 host having SFDB2

We are upgrading a Solaris 9 SPARC 64-bit host using SFDB2 from VxVM 4.0 MP1 to 5.1SP1 using DMP. From what I understand this is a new install, not an upgrade. EMC has remediated that VxVM is not at the expected level. We are moving from CX to VMAX; I will be editing sd.conf and lpfc.conf accordingly. It will be an array-based migration (FTS). I was wondering: can we just upgrade VxVM from the CD, or do we have to uninstall/install everything (all of SFDB2)? If we do, then can you help me with these steps for expediency?

Here are my steps:

1) Save the VxVM config: vxprint -th (readable format) and vxprint -m (detailed attributes) to preserve the old attributes.
2) Unmount all the mounted VxFS filesystems.
3) Stop all volumes and deport db2dg.
4) vxdctl stop (to stop vxconfigd).
5) Mount the CD with the install packages; copy the VRTS* packages to /tmp/veritas_5.1install.
6) pkgrm VRTSfsman VRTSvmman VRTSvxfs VRTSvxvm VRTSvlic (old version).
7) pkgadd VRTSvlic VRTSvxvm VRTSvxfs VRTSvmman VRTSfsman (new version).
8) Reboot the server.
9) Do the SF Veritas patching: showrev -p | grep 114477-04 (and 122300-10 for SF & SFDB2 only).
10) Final reboot.
11) Mount the filesystems.
12) Import db2dg.
13) Use vxupgrade to upgrade the filesystem disk layout version from 4 or 5 to 6.
14) vxddladm listsupport all (libvxemc.so, EMC Symmetrix support, should be listed because it was embedded in the install; confirm).
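A note on the disk layout step above: vxupgrade raises the VxFS layout version one step per invocation, so going from 4 to 6 takes two passes. The dry-run below sketches this; the mount point /db2/data is a placeholder and `run` only prints the commands:

```shell
# Dry-run of the vxupgrade step; change run() to execute on the real host.
run() { echo "+ $*"; }

run vxupgrade -n 5 /db2/data   # layout 4 -> 5
run vxupgrade -n 6 /db2/data   # layout 5 -> 6
run vxupgrade /db2/data        # with no -n, reports the current layout version
```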
Veritas Cluster addition

Hi All,

In our environment we have an 8-node Veritas cluster sharing a filesystem across all servers, and now we have a request to add 8 more nodes to the existing cluster. Please provide the steps, as well as the best way to complete the setup.

Thanks,
Rajini Nagarajan
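Part of growing the cluster is extending the LLT node map: /etc/llthosts (one "nodeid hostname" line per node, identical on every node) must gain the eight new entries. The generator below sketches what a 16-node file would look like; the node names are placeholders for your real hostnames:

```shell
# Generate a sample /etc/llthosts for 16 nodes (ids 0-15).
# Hostnames node01..node16 are placeholders.
awk 'BEGIN { for (i = 0; i < 16; i++) printf "%d node%02d\n", i, i + 1 }' \
  > /tmp/llthosts.sample

head -3 /tmp/llthosts.sample
wc -l < /tmp/llthosts.sample
```

Alongside llthosts, each new node needs its own /etc/llttab, and the GAB seed count in /etc/gabtab on all nodes should be reviewed so it still reflects the intended membership; the exact values belong to your environment.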
Versions of Veritas Volume manager

I am running Veritas Volume Manager on HP-UX; the version is B.05.10.01. There is embedded Java at version 1.6.0.06. Is there a more recent VxVM version that would bump my Java up to a higher version? Alternatively, removing the embedded Java would also work for us.

Thanks