Breaking a mirror on a VxVM-controlled disk
Hello forum members.

Scenario: Solaris OS upgrade from version 9 to 10. Platform: SPARC.

I need to break the OS disk mirror as a fallback plan in case the OS upgrade fails. I am unsure how to proceed since I have not done this before. Do I first need to break the mirror and then unencapsulate the disks? So far I have read a few documents and came up with this for the unmirroring process:

bash-2.05# vxprint -htqg rootdg
dg rootdg         default        default   80000     1086905353.6.treassun40

dm rootdg01       Disk_0         auto      20351     143328960 -
dm rootdg02       Disk_8         auto      20351     143328960 -

v  home           -              ENABLED   ACTIVE    4100928   ROUND   -       gen
pl home-01        home           ENABLED   ACTIVE    4100928   CONCAT  -       RW
sd rootdg01-04    home-01        rootdg01  28686143  4100928   0       Disk_0  ENA
pl home-02        home           ENABLED   ACTIVE    4100928   CONCAT  -       RW
sd rootdg02-04    home-02        rootdg02  28686144  4100928   0       Disk_8  ENA

v  logicdat1_b    -              ENABLED   ACTIVE    4194304   SELECT  -       fsgen
pl logicdat1_b-01 logicdat1_b    ENABLED   ACTIVE    4202688   CONCAT  -       RW
sd rootdg01-06    logicdat1_b-01 rootdg01  53271359  4202688   0       Disk_0  ENA
pl logicdat1_b-02 logicdat1_b    ENABLED   ACTIVE    4202688   CONCAT  -       RW
sd rootdg02-06    logicdat1_b-02 rootdg02  53271360  4202688   0       Disk_8  ENA

v  opt            -              ENABLED   ACTIVE    20484288  ROUND   -       gen
pl opt-01         opt            ENABLED   ACTIVE    20484288  CONCAT  -       RW
sd rootdg01-03    opt-01         rootdg01  32787071  20484288  0       Disk_0  ENA
pl opt-02         opt            ENABLED   ACTIVE    20484288  CONCAT  -       RW
sd rootdg02-05    opt-02         rootdg02  32787072  20484288  0       Disk_8  ENA

v  rootvol        -              ENABLED   ACTIVE    6146304   ROUND   -       root
pl rootvol-01     rootvol        ENABLED   ACTIVE    6146304   CONCAT  -       RW
sd rootdg01-B0    rootvol-01     rootdg01  143328959 1         0       Disk_0  ENA
sd rootdg01-02    rootvol-01     rootdg01  0         6146303   1       Disk_0  ENA
pl rootvol-02     rootvol        ENABLED   ACTIVE    6146304   CONCAT  -       RW
sd rootdg02-01    rootvol-02     rootdg02  0         6146304   0       Disk_8  ENA

v  swapvol        -              ENABLED   ACTIVE    16393536  ROUND   -       swap
pl swapvol-01     swapvol        ENABLED   ACTIVE    16393536  CONCAT  -       RW
sd rootdg01-01    swapvol-01     rootdg01  6146303   16393536  0       Disk_0  ENA
pl swapvol-02     swapvol        ENABLED   ACTIVE    16393536  CONCAT  -       RW
sd rootdg02-02    swapvol-02     rootdg02  6146304   16393536  0       Disk_8  ENA

v  var            -              ENABLED   ACTIVE    6146304   ROUND   -       gen
pl var-01         var            ENABLED   ACTIVE    6146304   CONCAT  -       RW
sd rootdg01-05    var-01         rootdg01  22539839  6146304   0       Disk_0  ENA
pl var-02         var            ENABLED   ACTIVE    6146304   CONCAT  -       RW
sd rootdg02-03    var-02         rootdg02  22539840  6146304   0       Disk_8  ENA

Disassociate all the plexes reported for the root mirror disk:

vxplex -g rootdg -o rm dis home-02
vxplex -g rootdg -o rm dis logicdat1_b-02
vxplex -g rootdg -o rm dis opt-02
vxplex -g rootdg -o rm dis rootvol-02
vxplex -g rootdg -o rm dis swapvol-02
vxplex -g rootdg -o rm dis var-02

After removing all the plexes, remove the disk from Veritas control:

vxdg -g rootdg rmdisk rootdg02

Is it now safe to run vxunroot to unencapsulate the root disk, and how do I run this command? If there is any other, easier way of accomplishing these unmirroring and unencapsulation tasks, please let me know.

Regards.
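For reference, a minimal sketch of the full unmirror-and-unencapsulate sequence, assuming the plex names shown in the vxprint output above and a standard VxVM install path; verify every name against your own configuration before running anything:

    # Sketch only -- check plex names against your own vxprint output first.
    # 1. Disassociate and remove the mirror plexes on the second disk (rootdg02):
    for p in home-02 logicdat1_b-02 opt-02 rootvol-02 swapvol-02 var-02; do
        vxplex -g rootdg -o rm dis $p
    done

    # 2. Remove the now-empty mirror disk from the disk group:
    vxdg -g rootdg rmdisk rootdg02

    # 3. Unencapsulate the boot disk. vxunroot converts rootvol, swapvol and the
    #    other root volumes back to plain slices, updates /etc/vfstab and
    #    /etc/system, and prompts before rebooting. It requires that each root
    #    volume has exactly one plex left, which is why step 1 comes first.
    /etc/vx/bin/vxunroot

vxunroot is interactive, so it can simply be run as shown and answered at the prompts; the mirror removal above is the prerequisite it checks for.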
VxVM Shrink a volume online

Greetings. I have a 2.4 TB concat volume whose data has been moved/restructured and is now using only about 200 GB of the allocated space. We've had very poor luck in the past using the vxresize command to shrink the volume. Does anyone know of another way to consolidate the data onto a single disk and shrink the volume so that I can reclaim the unused disks? The volume has a single plex with 8 subdisks. Your help is appreciated. Thanks!
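Although the post mentions poor luck with vxresize, for reference here is a sketch of the two common approaches, assuming a VxFS file system mounted at /data in a disk group called datadg (all names here are hypothetical), followed by evacuating and removing the freed disks:

    # Shrink file system and volume together (VxFS can be shrunk online):
    vxresize -g datadg datavol 250g

    # Or in two explicit steps: shrink the VxFS file system first, then the volume.
    # (Older VxFS releases may want the new size in sectors rather than "250g".)
    /opt/VRTS/bin/fsadm -b 250g /data
    vxassist -g datadg shrinkto datavol 250g

    # Move any remaining subdisks off a disk you want to reclaim,
    # then remove it from the disk group:
    vxevac -g datadg datadg02
    vxdg -g datadg rmdisk datadg02

The order matters: the file system must be shrunk before (or together with) the volume, otherwise the volume shrink would cut off in-use file system blocks.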
How to break root disk Mirror in VxVM

Hi All,

bash-3.00# vxprint -g rootdg -vh rootvol
TY NAME          ASSOC        KSTATE   LENGTH    PLOFFS  STATE   TUTIL0  PUTIL0
v  rootvol       root         ENABLED  60821952  -       ACTIVE  -       -
pl rootvol-01    rootvol      ENABLED  60821952  -       ACTIVE  -       -
sd rootdg01-B0   rootvol-01   ENABLED  1         0       -       -       Block0
sd rootdg01-02   rootvol-01   ENABLED  60821951  1       -       -       -
pl rootvol-02    rootvol      ENABLED  60821952  -       ACTIVE  -       -
sd rootdg02-01   rootvol-02   ENABLED  60821952  0       -       -       -

bash-3.00# df -h /
Filesystem                  size  used  avail  capacity  Mounted on
/dev/vx/dsk/bootdg/rootvol   29G   19G   9.4G       67%  /

1) From the above configuration we see that the root file system is configured on volume rootvol, which is a mirror. I'd like to break the mirror and keep the mirror copy for backout purposes, as I will be upgrading on the actual root disk. I do not want to delete the plexes or the mirror copy. In SVM, if d0 is a mirror and d10 and d20 are its submirrors, we issue "metadetach d0 d20" to detach a submirror. How do we accomplish the same in the above VxVM configuration?

2) Plex rootvol-02 has only one subdisk, rootdg02-01, whereas plex rootvol-01 has two subdisks, rootdg01-B0 and rootdg01-02. What is the significance of having two subdisks for plex rootvol-01? If plex rootvol-01 is a mirrored copy of plex rootvol-02, shouldn't the size and number of subdisks in each plex be the same?

=====================================================================================================
use-nvramrc?=true
nvramrc=devalias vx-rootdg01 /pci@1f,700000/scsi@2/disk@0,0:a
        devalias vx-rootdg02 /pci@1f,700000/scsi@2/disk@1,0:a

3) Once the root volume plex has been disassociated, can we still use both of the above device aliases to boot the OS from the ok prompt?

ok> boot vx-rootdg01
ok> boot vx-rootdg02

Thank you everybody for your responses, as always. Any response is highly appreciated.

Regards,
Danish.
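A sketch of the closest VxVM equivalent of metadetach, assuming the plex names shown above: dis (without -o rm) disassociates the plex from the volume but keeps the plex and its subdisks on disk, so it remains available for backout.

    # Detach-but-keep: disassociate the second plex of the root volume.
    # The plex and its data are preserved; it is just no longer part of rootvol.
    vxplex -g rootdg dis rootvol-02

    # To go back to a mirror later, reattach it; it will resynchronise
    # from the surviving plex rootvol-01:
    vxplex -g rootdg att rootvol rootvol-02

The same dis/att pattern would apply to the other root-disk volumes (swapvol, var, etc.) if they are mirrored the same way.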
Enclosure Based consistent naming, or OS Native?

Using VxVM 5.0 on Solaris; we intend to upgrade soon. Currently all our servers are using OS Native disk naming:

#> vxddladm get namingscheme
NAMING_SCHEME    PERSISTENCE    LOWERCASE    USE_AVID
============================================================
OS Native        No             Yes          Yes

Despite this, the disk names we get are something like "emc0_1234" or "disk_12", depending on which array the server is attached to on the SAN. I assume this is because VxVM will not use the very long WWN disk names that the OS uses. The problem is that servers with the generic "disk_XX" Veritas disk names sometimes get all their disks renumbered after disks are removed from VxVM and from the server. When this happens, the disk groups get all mixed up, and we have to rebuild them from backups. As much fun as it is to rebuild the disk groups, I should probably prevent this from happening again if I can. I think that using persistent enclosure-based names will resolve this.

Any reason I should not run 'vxddladm set namingscheme=ebn' on all our servers? If I remove a disk from a disk group, then remove it from VxVM and finally from the server, will the /etc/vx/disk.info file update?

Thanks.
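For reference, a minimal sketch of the switch being considered, assuming you want persistence and array-volume-ID naming turned on explicitly rather than relying on defaults:

    # Switch to persistent, enclosure-based names using the array volume ID:
    vxddladm set namingscheme=ebn persistence=yes use_avid=yes lowercase=yes

    # Confirm the change and review the resulting disk names:
    vxddladm get namingscheme
    vxdisk -o alldgs list

The persistence=yes setting is what pins a given name to a given device so that removing other disks does not renumber the survivors.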
VCS Switch Over problem

Hi, I have some problems when testing a cluster switchover (2-node cluster, VCS 6.2, Solaris 10). We test with init 6. The active node always hangs with:

2014/02/09 08:07:36 VCS ERROR V-16-10001-11511 (node1) Volume:vol_v1:offline:Cannot find Volume v1, either the Volume is a Volume Set or Veritas Volume Manager configuration daemon (vxconfigd) is not in enabled mode
2014/02/09 08:07:36 VCS INFO V-16-6-15002 (node1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/resstatechange node1 mnt_1 ONLINE OFFLINE successfully
2014/02/09 08:07:37 VCS INFO V-16-2-13716 (node1) Resource(vol_v1): Output of the completed operation (offline)
==============================================
VxVM vxprint ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
==============================================
2014/02/09 08:07:37 VCS ERROR V-16-2-13064 (node1) Agent is calling clean for resource(vol_v1) because the resource is up even after offline completed.
2014/02/09 08:07:37 VCS ERROR V-16-10001-11511 (node1) Volume:vol_v1:clean:Cannot find Volume v1, either the Volume is a Volume Set or Veritas Volume Manager configuration daemon (vxconfigd) is not in enabled mode
2014/02/09 08:07:38 VCS INFO V-16-2-13716 (node1) Resource(vol_v1): Output of the completed operation (clean)
==============================================
VxVM vxprint ERROR V-5-1-684 IPC failure: Configuration daemon is not accessible
==============================================

I noticed that the init script /etc/rc0.d/K99vxvm-shutdown stops vxconfigd and also runs "/usr/sbin/vxdctl -k stop".

My question is: do I still need any VxVM init script now that the upgrade from 5 to 6.1 is done and we have the SMF service in place, or should I increase the timeouts of the VCS stop procedures?

Thanks a lot in advance! Cheers
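A diagnostic sketch for checking whether a legacy rc script is still stopping vxconfigd before VCS has finished offlining its Volume resources (script, service and type names are assumptions and may differ between releases):

    # Legacy shutdown scripts still present in rc0.d, and their K-number ordering:
    ls -l /etc/rc0.d/ | egrep -i 'vxvm|vcs'

    # SMF services delivered by the 6.x packages, for comparison:
    svcs -a | egrep -i 'vxvm|vxconfigd|vcs'

    # Current offline timeout for the Volume resource type (seconds):
    hatype -display Volume | grep -i OfflineTimeout

If the K99vxvm-shutdown script runs before VCS's own shutdown script during init 6, the errors above would be expected regardless of the timeout values.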
Flag status unknown

Hi All, I am trying to configure NFS under VCS and am getting a "Flag status unknown" error for the NFSRestart service. Can you please suggest a solution to this problem? Attached is a screenshot of the VCS console and the main.cf configuration file.

Thanks!
Ravinder
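When a resource reports an unknown flag or state, a common first diagnostic pass looks like the sketch below; the resource name nfsrestart_res and node name node1 are placeholders for whatever appears in your main.cf and console:

    # Current state of all resources in the cluster:
    hares -state

    # Force the agent to re-probe the resource on the node reporting UNKNOWN:
    hares -probe nfsrestart_res -sys node1

    # Review the resource's configured attributes for anything missing or invalid:
    hares -display nfsrestart_res

    # Check the engine log around the time of the error:
    tail -100 /var/VRTSvcs/log/engine_A.log

An UNKNOWN flag frequently means the agent's monitor could not evaluate the resource because a required attribute is unset, which the -display output should reveal.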
Need to understand vxprint concat output for multiple plexes

Hi, I am having difficulty understanding this vxprint output:

-bash-4.1# vxprint
Disk group: testdg

TY NAME         ASSOC      KSTATE    LENGTH    PLOFFS    STATE  TUTIL0  PUTIL0
dg testdg       testdg     -         -         -         -      -       -

dm testdg01     disk_1     -         16710624  -         -      -       -
dm testdg02     disk_2     -         16710624  -         -      -       -

v  datavol1     fsgen      DISABLED  16710624  -         EMPTY  -       -
pl dataplex1    datavol1   DISABLED  16710624  -         EMPTY  -       -
sd testdg01s0   dataplex1  ENABLED   8355312   0         -      -       -
sd testdg02s0   dataplex1  ENABLED   8355312   8355312   -      -       -
pl dataplex2    datavol1   DISABLED  16710624  -         EMPTY  -       -
sd testdg01s1   dataplex2  ENABLED   8355312   0         -      -       -
sd testdg02s1   dataplex2  ENABLED   8355312   8355312   -      -       -

The volume datavol1 was created with:

vxmake -g testdg -U fsgen vol datavol1 plex=dataplex1,dataplex2

My size calculation would be:

testdg01s0 = 4G, testdg02s0 = 4G  --- dataplex1
testdg01s1 = 4G, testdg02s1 = 4G  --- dataplex2

How come datavol1 shows a total size of only 8G? Shouldn't it be 16G?
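Assuming VxVM's usual 512-byte sectors, the lengths in that output convert as sketched below; note that because datavol1 has two plexes, each plex is a complete copy of the volume (a mirror), so the volume LENGTH matches one plex (about 8 GB) rather than the sum of all four subdisks:

    # Each subdisk: 8355312 sectors * 512 bytes, roughly 4 GB
    echo "scale=2; 8355312*512/1024/1024/1024" | bc      # -> 3.98

    # Each plex (two concatenated subdisks): 16710624 sectors, roughly 8 GB
    echo "scale=2; 16710624*512/1024/1024/1024" | bc     # -> 7.96

So the usable capacity is one plex's worth (8 GB); the second plex consumes the other 8 GB of disk space as the mirror copy.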
How to install Storage Foundation 6.0.1 on HP-UX with only VxFS

Hi all,

I have HP-UX installed with SF 5.0.1, and root is controlled by LVM with VxFS; this was set up when HP-UX 11i v3 was installed. I want to install SF 6.0.1, and I get an error that it cannot mount the VxFS file systems. Please help me install SF 6.0.1.

# swlist | grep -i vx
  Base-VXFS    B.11.31    Base VxFS File System 4.1 Bundle for HP-UX
  VRTSvxfs     5.0.31.5   VERITAS File System

When I try to install SF 6.0.1:

# ./installer

Storage Foundation and High Availability Solutions 6.0.1 Install Program

Copyright (c) 2012 Symantec Corporation. All rights reserved. Symantec, the Symantec Logo are trademarks or registered trademarks of Symantec Corporation or its affiliates in the U.S. and other countries. Other names may be trademarks of their respective owners.

The Licensed Software and Documentation are deemed to be "commercial computer software" and "commercial computer software documentation" as defined in FAR Sections 12.212 and DFARS Section 227.7202.

Logs are being written to /var/tmp/installer-201407141057RFR while installer is in progress.

Symantec Product                    Version Installed    Licensed
================================================================================
Symantec Licensing Utilities (VRTSvlic) are not installed due to which products and licenses are not discovered.
Use the menu below to continue.

Task Menu:

    P) Perform a Pre-Installation Check    I) Install a Product
    C) Configure an Installed Product      G) Upgrade a Product
    O) Perform a Post-Installation Check   U) Uninstall a Product
    L) License a Product                   S) Start a Product
    D) View Product Descriptions           X) Stop a Product
    R) View Product Requirements           ?) Help

Enter a Task: [P,I,C,G,O,U,L,S,D,X,R,?] I

    1) Veritas Dynamic Multi-Pathing (DMP)
    2) Veritas Cluster Server (VCS)
    3) Veritas Storage Foundation (SF)
    4) Veritas Storage Foundation and High Availability (SFHA)
    5) Veritas Storage Foundation Cluster File System HA (SFCFSHA)
    6) Veritas Storage Foundation for Oracle RAC (SF Oracle RAC)
    b) Back to previous menu

Select a product to install: [1-6,b,q] 3

Do you agree with the terms of the End User License Agreement as specified in the storage_foundation/EULA/en/EULA_SF_Ux_6.0.1.pdf file present on media? [y,n,q,?] y

Veritas Storage Foundation 6.0.1 Install Program

    1) Install minimal required depots - 1262 MB required
    2) Install recommended depots - 1840 MB required
    3) Install all depots - 1843 MB required
    4) Display depots to be installed for each option

Select the depots to be installed on all systems? [1-4,q,?] (2)

Enter the system names separated by spaces: [q,?] (ictprd)

Logs are being written to /var/tmp/installer-201407141057RFR while installer is in progress

    Verifying systems: 100%    Estimated time remaining: (mm:ss) 0:00    8 of 8

    Checking system communication ...................... Done
    Checking release compatibility ..................... Done
    Checking installed product ......................... Done
    Checking prerequisite patches and depots ........... Done
    Checking platform version .......................... Done
    Checking file system free space .................... Done
    Checking product licensing ......................... Done
    Performing product prechecks ....................... Done

System verification checks completed

The following errors were discovered on the systems:

CPI ERROR V-9-20-1127 Failed to mount all mount points of /etc/fstab on ictprd. Check the validation of all the entries in /etc/fstab.

The following warnings were discovered on the systems:

CPI WARNING V-9-40-3853 FS 5.0.31.5 is installed on ictprd. To proceed with installation will install SF 6.0.1 directly on ictprd.

CPI WARNING V-9-40-3861 NetBackup 7.6.0.1 was installed on ictprd. The VRTSpbx depots on ictprd will not be uninstalled.

installer log files and summary file are saved at: /opt/VRTS/install/logs/installer-201407141057RFR
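The CPI V-9-20-1127 precheck error indicates that some /etc/fstab entries could not be mounted. Before rerunning the installer it may help to verify them by hand; this is a generic HP-UX checking sketch, not a documented installer step:

    # Try to mount everything listed in /etc/fstab and note any failures:
    /usr/sbin/mount -a

    # Compare what is actually mounted against the fstab entries:
    /usr/bin/bdf
    cat /etc/fstab

    # Fix or comment out any stale/unmountable entries, then rerun ./installer

Once every fstab entry mounts cleanly, the precheck should pass and leave only the two warnings, which are informational.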
Collection from VRTSexplorer

Dear All,

I have several environments with SFHA and VCS installed on Linux and Solaris servers, and I have VRTSexplorer output for every system. Since VRTSexplorer collects almost everything, I wanted to know whether it is possible to get the following from its output:

1. Which license, version and sub-version is installed on a system? For example, if version 5.0 is installed, which patch version is installed? I want something other than "vxlicrep".
2. In the case of replication, which file will show the bandwidth limit, if it is configured?
3. In the case of replication, which file will show the packet size, if it is configured?

Any suggestion will be appreciated. Assume I only have the VRTSexplorer output to investigate and need to determine the above.

Regards
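Since the exact file layout inside a VRTSexplorer bundle varies by product version and platform, a generic search over the extracted archive is one way to locate this information; the directory name below is hypothetical, and bandwidth_limit and packet_size are the VVR RLINK attribute names being searched for:

    # Assume the explorer tarball has already been extracted somewhere:
    cd /tmp/VRTSexplorer_hostname

    # Package/patch versions: look through the captured pkginfo/rpm/swlist output
    grep -ri "VRTSvxvm\|VRTSvcs" . | grep -i version | less

    # Replication settings: RLINK attributes captured by vxprint/vradmin output
    grep -ril "bandwidth_limit" .
    grep -ril "packet_size" .

The -l flag lists only the matching file names, which answers "which file" once you know where in the bundle those attributes were captured.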