Doubts on VxVM, VCS upgrade & root disk encapsulation
Hi All, I have the queries below, please.

1) In order to stop VxVM from loading at system boot time, we need to modify the /etc/system file. Which entries are to be commented out? Is it only:

    rootdev:/pseudo/vxio@0:0
    set vxio:vol_rootdev_is_volume=1

or should the entries below also be commented out?

    forceload: drv/vxdmp
    forceload: drv/vxio
    forceload: drv/vxspec

2) My current version of SFHA is 4.1. Once the vxfen, gab and llt modules are unloaded to upgrade to 4.1MP2, should I unload these modules again to further upgrade to 5.1SP1, and again for 5.1SP1RP4 or 6.0? After each upgrade, should I stop the services in /etc/init.d and unload the modules, or is stopping the services and unloading the modules once enough for the subsequent upgrades? My plan is to upgrade 4.1 -> 4.1MP2 -> 5.1SP1 -> 5.1SP1RP4 (or 6.0).

3) Before upgrading, should I also stop and unload the modules listed below?

     24 12800a8   26920 268  1  vxdmp (VxVM 4.1z: DMP Driver)
     25 7be00000 2115c8 269  1  vxio (VxVM 4.1z I/O driver)
     27 12a4698    13f0 270  1  vxspec (VxVM 4.1z control/status driver)
    213 7b2d7528    c40 272  1  vxportal (VxFS 4.1_REV-4.1B18_sol_GA_s10b)
    214 7ae00000 1706a8  20  1  vxfs (VxFS 4.1_REV-4.1B18_sol_GA_s10b)

If yes, should I stop and unload them after each upgrade, or is doing it once enough?

4) Once the OS comes up with native disks (c#t#d#s#), we need to encapsulate the root disk using vxdiskadm to bring it under VxVM control. My doubt is: will rootdg, rootvol, the plexes and the subdisks be created automatically? I need a little clarification regarding this, please.

Response is highly appreciated as always. Thank you very much. Regards, Danish.
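For question 1, a sketch of what the commented-out entries might look like in /etc/system (Solaris uses `*` as the comment character in this file; verify the exact entries against your own copy before editing, and keep a backup):

```
* Commented out to stop VxVM from claiming the root device at boot:
* rootdev:/pseudo/vxio@0:0
* set vxio:vol_rootdev_is_volume=1
* Comment these out as well only if the VxVM drivers should not be
* force-loaded at all:
* forceload: drv/vxdmp
* forceload: drv/vxio
* forceload: drv/vxspec
```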
How much space is required in a VxFS file system for the recovery file used by the fscdsconv utility?

Hi all, I'm going to migrate a VxVM/VxFS (version 5.1SP1RP4) volume/file system on the Solaris SPARC platform to VxVM/VxFS 6.0.3 on a Linux 6.5 platform. The file system size is 2 TB, of which about 1 TB is actually used. How much space is required by the fscdsconv utility during the conversion? Is there a formula for this job? Thanks a lot, Cecilia
Veritas Storage Foundation - Volume Disabled After 'rmdisk'

Dear All, I added a LUN to a specific volume and realised that I had added it to the wrong volume. To remove the LUN, the following command was run:

    vxdg -g dg rmdisk vsp-xx-xx

I was then prompted to re-run with the "-k" option to remove the disk. However, after re-running the command with the "-k" option:

    vxdg -g dg -k rmdisk vsp-xx-xx

...the volume went into a disabled state. Fortunately no data was lost once the "vxmend" was completed on the volume. I would just like to know whether this was to be expected when running the above with the "-k" option? Regards
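For reference, a hedged sketch of an order of operations that avoids disabling the volume in this situation (the disk, group and volume names `vsp-xx-xx` and `dg` are placeholders from the post; this is a sketch, not a definitive procedure — check the state of the disk group before each step):

```sh
# First move the subdisks that live on the disk to other disks in the
# group, so the disk no longer backs any plex:
vxevac -g dg vsp-xx-xx

# Only once the disk backs no subdisks, remove it from the disk group.
# Using -k while subdisks still reference the disk keeps the records but
# detaches the storage, which is what disables the volume:
vxdg -g dg rmdisk vsp-xx-xx
```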
Migrating to a new SAN

We're in the process of moving to new datacenters. All servers will be moved over, but the SAN won't. The SAN will be replicated to a new SAN in the new datacenters by our SAN admins. That means the LUNs in the new SAN will be identical to the old ones, including the LUN numbers. Example output from 'vxdisk list' on the current host shows:

    c1t50050768014052E2d129s2 auto:cdsdisk somedisk01 somedg online

This disk will get the same LUN number, but the target name will probably differ, as it's new hardware in the new DC. Will this work with Veritas SFHA? If not, is there any way to do this the way the datacenter project wants to?

If I could choose to do this my way, I would present the LUNs on the new SAN to my servers so that I could do a normal host-based migration: add the new LUN to the disk group and mirror the data, then remove the old LUN. However, I'm told that the hosts in the current datacenter won't be able to see the new SAN.
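The host-based migration described in the last paragraph could look roughly like the sketch below (the names `somedg`, `somevol`, `somedisk02` and the plex name are placeholders; it assumes the host can see both SANs at once, which is exactly the constraint in question here):

```sh
# Bring the new LUN into the existing disk group under a new disk name:
vxdg -g somedg adddisk somedisk02=<new_lun_device>

# Mirror the volume onto the new disk, wait for the sync to finish:
vxassist -g somedg mirror somevol somedisk02

# Then dissociate and remove the plex on the old LUN and drop the disk:
vxplex -g somedg -o rm dis somevol-01    # old plex name is a placeholder
vxdg -g somedg rmdisk somedisk01
```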
How to expand a dg "vgbackup" and afterwards the filesystem "/prod01p"

I have a dg with 44 LUNs (100 GB each) and I'm allocating 13 more LUNs for the expansion, ending up with 57 LUNs. Currently the dg has a 44-column striped volume:

    v  vol_prod01p    -           ENABLED ACTIVE 9219080192 SELECT vol_prod01p-01 fsgen
    pl vol_prod01p-01 vol_prod01p ENABLED ACTIVE 9219082752 STRIPE 44/128 RW

Now I insert the new disks, as follows:

    vxdg -g vgbackup adddisk vgbackup250=emc_clariion0_250
    vxdg -g vgbackup adddisk vgbackup253=emc_clariion0_253
    vxdg -g vgbackup adddisk vgbackup255=emc_clariion0_255
    vxdg -g vgbackup adddisk vgbackup257=emc_clariion0_257
    vxdg -g vgbackup adddisk vgbackup264=emc_clariion0_264
    vxdg -g vgbackup adddisk vgbackup265=emc_clariion0_265
    vxdg -g vgbackup adddisk vgbackup266=emc_clariion0_266
    vxdg -g vgbackup adddisk vgbackup267=emc_clariion0_267
    vxdg -g vgbackup adddisk vgbackup268=emc_clariion0_268
    vxdg -g vgbackup adddisk vgbackup269=emc_clariion0_269
    vxdg -g vgbackup adddisk vgbackup271=emc_clariion0_271
    vxdg -g vgbackup adddisk vgbackup272=emc_clariion0_272
    vxdg -g vgbackup adddisk vgbackup273=emc_clariion0_273

From this step, which commands should I use to expand the volume and subsequently grow the filesystem onto these new disks? Information:

    /dev/vx/dsk/vgbackup/vol_prod01p 4.3T 4.0T 307G 94% /prod01p

    root@BMG01 # vxassist -g vgbackup maxsize
    Maximum volume size: 2730465280 (1333235Mb)

Thanks.
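As a sanity check on the numbers above: VxVM length fields (the vxprint and `vxassist maxsize` figures) are in 512-byte sectors, so they can be converted to TiB like this (a quick arithmetic sketch, not VxVM output):

```python
SECTOR = 512  # VxVM reports volume lengths in 512-byte sectors

def sectors_to_tib(sectors):
    """Convert a VxVM sector count to TiB."""
    return sectors * SECTOR / 2**40

# Current volume length from the vxprint output above -> matches the 4.3T in df
print(round(sectors_to_tib(9219080192), 2))  # 4.29

# Growth headroom reported by 'vxassist maxsize' after adding the 13 LUNs
print(round(sectors_to_tib(2730465280), 2))  # 1.27
```

The grow itself is typically a single `vxresize` step, which expands both the volume and the mounted VxFS file system, e.g. something like `vxresize -g vgbackup vol_prod01p +1333235m` — hedged: with a 44-column stripe, check first whether the layout allows growing across a different number of columns, or relayout may be needed.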
Manual addition of a subdisk in VxVM

Hi, we have to add some space to the disk, and we found that in our environment one subdisk is in use by two volumes which are under the same disk group (Vaultdg). We have also found one subdisk which is free and can be added manually. Shall we proceed to add it, and if so, how do I proceed? Please give the command to add it forcefully, as normally it is not accepted. A quick reply is appreciated, as this has to be done as soon as possible.
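For context, associating a free subdisk with a volume's plex is done with vxsd; a rough sketch follows (all names, offsets and lengths here are placeholders, not taken from the post — and note that one subdisk backing two volumes is not a normal configuration, so the disk group state should be checked with vxprint before forcing anything):

```sh
# Create a subdisk record over the free space on a disk in the group:
vxmake -g Vaultdg sd sd01 disk=vaultdisk05 offset=0 len=2048000

# Associate the subdisk with the target volume's plex:
vxsd -g Vaultdg assoc vol01-01 sd01
```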
How to install Storage Foundation 6.0.1 on HP-UX with only VxFS

Hi all, I have an HP-UX installation with SF 5.0.1; root is controlled by LVM and VxFS, set up when HP-UX 11i v3 was installed. I want to install SF 6.0.1, but I get an error that a VxFS file system cannot be mounted. Please help me install SF 6.0.1.

    # swlist | grep -i vx
      Base-VXFS    B.11.31     Base VxFS File System 4.1 Bundle for HP-UX
      VRTSvxfs     5.0.31.5    VERITAS File System

When I try to install SF 6.0.1:

    # ./installer

    Storage Foundation and High Availability Solutions 6.0.1 Install Program

    Copyright (c) 2012 Symantec Corporation. All rights reserved. Symantec,
    the Symantec Logo are trademarks or registered trademarks of Symantec
    Corporation or its affiliates in the U.S. and other countries. Other
    names may be trademarks of their respective owners.

    The Licensed Software and Documentation are deemed to be "commercial
    computer software" and "commercial computer software documentation" as
    defined in FAR Sections 12.212 and DFARS Section 227.7202.

    Logs are being written to /var/tmp/installer-201407141057RFR while
    installer is in progress.

    Symantec Product            Version Installed    Licensed
    ================================================================
    Symantec Licensing Utilities (VRTSvlic) are not installed due to which
    products and licenses are not discovered. Use the menu below to continue.

    Task Menu:
        P) Perform a Pre-Installation Check
        I) Install a Product
        C) Configure an Installed Product
        G) Upgrade a Product
        O) Perform a Post-Installation Check
        U) Uninstall a Product
        L) License a Product
        S) Start a Product
        D) View Product Descriptions
        X) Stop a Product
        R) View Product Requirements
        ?) Help

    Enter a Task: [P,I,C,G,O,U,L,S,D,X,R,?] I

        1) Veritas Dynamic Multi-Pathing (DMP)
        2) Veritas Cluster Server (VCS)
        3) Veritas Storage Foundation (SF)
        4) Veritas Storage Foundation and High Availability (SFHA)
        5) Veritas Storage Foundation Cluster File System HA (SFCFSHA)
        6) Veritas Storage Foundation for Oracle RAC (SF Oracle RAC)
        b) Back to previous menu

    Select a product to install: [1-6,b,q] 3

    Do you agree with the terms of the End User License Agreement as
    specified in the storage_foundation/EULA/en/EULA_SF_Ux_6.0.1.pdf file
    present on media? [y,n,q,?] y

        1) Install minimal required depots - 1262 MB required
        2) Install recommended depots - 1840 MB required
        3) Install all depots - 1843 MB required
        4) Display depots to be installed for each option

    Select the depots to be installed on all systems? [1-4,q,?] (2)

    Enter the system names separated by spaces: [q,?] (ictprd)

    Verifying systems: 100%  Estimated time remaining: (mm:ss) 0:00  8 of 8
        Checking system communication ................... Done
        Checking release compatibility .................. Done
        Checking installed product ...................... Done
        Checking prerequisite patches and depots ........ Done
        Checking platform version ....................... Done
        Checking file system free space ................. Done
        Checking product licensing ...................... Done
        Performing product prechecks .................... Done

    System verification checks completed

    The following errors were discovered on the systems:

        CPI ERROR V-9-20-1127 Failed to mount all mount points of
        /etc/fstab on ictprd. Check the validation of all the entries
        in /etc/fstab.

    The following warnings were discovered on the systems:

        CPI WARNING V-9-40-3853 FS 5.0.31.5 is installed on ictprd. To
        proceed with installation will install SF 6.0.1 directly on ictprd.

        CPI WARNING V-9-40-3861 NetBackup 7.6.0.1 was installed on ictprd.
        The VRTSpbx depots on ictprd will not be uninstalled.

    installer log files and summary file are saved at:
    /opt/VRTS/install/logs/installer-201407141057RFR
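The CPI ERROR V-9-20-1127 above is the installer's precheck failing because something in /etc/fstab does not mount. A hedged way to reproduce the failure outside the installer before re-running it (standard HP-UX commands, but run on a quiet system and review the output by hand):

```sh
# Attempt to mount every file system listed in /etc/fstab; whatever
# fails here is what the installer precheck is tripping over:
/usr/sbin/mount -a

# List what is actually mounted, to compare against the /etc/fstab
# entries and spot the one that did not come up:
/usr/sbin/mount -v
```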