Adding new volumes to a DG that has an RVG under a VCS cluster
Hi, I have a VCS cluster with GCO and VVR. On each node of the cluster I have a DG with an associated RVG; this RVG contains 11 data volumes for an Oracle database. These volumes are getting full, so I am going to add new disks to the DG and create new volumes and mount points to be used by the Oracle database. My question: can I add the disks to the DG and the volumes to the RVG while the database is up and replication is on? If the answer is no, please let me know what should be performed on the RVG and rlink to add these volumes, and also what to perform on the database resource group so it does not fail over. Thanks in advance.
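For reference, a minimal sketch of how this is usually done online with VVR; the disk group, RVG, volume, and disk names (oradg, ora_rvg, oradata12, oradg11) are hypothetical, and the VVR Administrator's Guide for your version should be checked before running anything:

# vxdg -g oradg adddisk oradg11=emc0_1234
(add the new LUN to the disk group on the Primary)
# vxassist -g oradg make oradata12 50g
(create the new data volume; a volume with the same name and size must also exist on the Secondary)
# vradmin -g oradg addvol ora_rvg oradata12
(associate the volume with the RDS; vradmin addvol is designed to work while replication is active)

The corresponding mount point and VCS Mount resource can then be added to the service group with the group still online.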

Change the Host IP address in the Veritas cluster
Hello All, We have a Veritas Cluster Server setup (VCS-HA, VCS-CFS & VERITAS-RAC) where on a few setups we need to change the data IP address of some hosts (nodes). I referred to a few notes, but I am not sure whether any file other than /etc/hosts needs to be updated/edited. Please help me if you have a process/technote to make those changes and make the changed IP persistent. I would also like to know the impact of this activity. The systems are Linux 6.5 & 6.6 and the cluster versions are VCS 6.2 & 6.1.
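Not an authoritative procedure, just a sketch of the VCS side, assuming the address is managed by an IP resource named ip_data (a hypothetical name); the attribute changes below apply only if the IP is under VCS control, and /etc/hosts plus the OS network configuration files still need to be updated separately:

# haconf -makerw
# hares -modify ip_data Address 10.20.30.40
# hares -modify ip_data NetMask 255.255.255.0
# haconf -dump -makero
(dump the change to main.cf so it persists across restarts)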

‘vxdmppr’ utility information
Hello, With VxVM we get the ‘vxdmppr’ utility, which performs SCSI-3 PR operations on disks, similar to sg_persist on Linux. But we don’t find much documentation around this utility. In one of the blogs we saw that it is an unsupported utility. Can someone shed light on it? Has someone used it in the past? Or does anyone know how this utility is used in VxVM? How do we know whether it is supported or not? Rafiq
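As a hedged aside: since vxdmppr is generally described as an internal, unsupported utility, one supported way to inspect SCSI-3 registrations and reservations is vxfenadm, shipped with VCS I/O fencing. The device name below is hypothetical and option syntax varies between releases, so check vxfenadm's usage output on your version:

# vxfenadm -s /dev/vx/rdmp/emc0_1234
(read the registration keys on the disk)
# vxfenadm -r /dev/vx/rdmp/emc0_1234
(read the reservations on the disk)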

mnt_app resource failover
Hi, Below are the resource dependencies in my environment. What happened is that somebody unmounted the /app filesystem, so the resource went into a faulted state. When I checked the resource criticality, it shows mnt_app as a non-critical resource (Critical = 0), while vol_app and app_dg are set as critical (Critical = 1). The dependencies below show the parent-child relationship: mnt_app is the parent and vol_app is the child. If the parent is set as critical and the child is non-critical (0), does it fail over to another node? Or does it fail over if the child is set as critical and the parent as non-critical (0)? Please assist as soon as possible.

root@lyle# hares -dep | grep app
PDM_PRD_MG  APP_aphelion         mnt_app
PDM_PRD_MG  APP_tibjmsd          mnt_app
PDM_PRD_MG  Blind_check_stopDB   mnt_app
PDM_PRD_MG  IPMultB_pdmprdappdb  MNicB_DB
PDM_PRD_MG  appdg                SRDF_app
PDM_PRD_MG  mnt_app              vol_app
PDM_PRD_MG  vol_activelogs       appdg
PDM_PRD_MG  vol_app              appdg
PDM_PRD_MG  vol_archivelogs      appdg
PDM_PRD_MG  vol_index            appdg
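As a rough guide to the behavior: a service group fails over when a critical resource faults, or when a faulted resource takes a critical resource offline. A fault of a non-critical parent such as mnt_app does not offline its children, so on its own it should not trigger failover. The attribute can be checked and changed like this (resource name taken from the output above):

# hares -value mnt_app Critical
(prints 0 or 1)
# haconf -makerw
# hares -modify mnt_app Critical 1
# haconf -dump -makero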

VxVM 4.1 SAN migration at host level: mirrored volumes are associated with DRL
Hi, I have done SAN migrations before, without DRL & subvolume setups, mirrored between 2 arrays at the host level using VxVM. But now I am in a position where I need to migrate 100+ volumes between different new arrays. Some volumes have DRLs (log-only plexes) and a few have subvolumes. Unfortunately there is no documentation on how these DRLs (log-only plexes) and subvolumes can be migrated, so I am looking for some help. The cluster is:
- Solaris 10
- Solaris Cluster 3.1 two-node campus cluster
- VxVM 4.1 with CVM
- 2 * NetApp storages mirrored at host level using VxVM
Migration plan questions:
- 2 * new Fujitsu storages have been installed; LUNs were created and mapped to both hosts, matching the existing NetApp storages
- All new LUNs are detected correctly under the OS & VxVM
- I have 4-way mirroring at host level using VxVM for some of the volumes. How can I proceed for the DRL (log-only plex) and subvolume volumes?
Sample output:

v  lunx3               -         ENABLED  ACTIVE   2097152  SELECT   -  fsgen
pl lunx3-04            lunx3     ENABLED  ACTIVE   2097152  CONCAT   -  RW
sv lunx3-S01           lunx3-04  lunx3-L01  1  2097152  0  3/3  ENA            => subvolume
v  lunx3-L01           -         ENABLED  ACTIVE   2097152  SELECT   -  fsgen
pl lunx3-P01           lunx3-L01 ENABLED  ACTIVE   LOGONLY  CONCAT   -  RW     => LOGONLY plex
sd netapp2_datavol-64  lunx3-P01 netapp2_datavol  157523968  528      LOG  FAS60400_0  ENA
pl lunx3-P02           lunx3-L01 ENABLED  ACTIVE   2097152  CONCAT   -  RW
sd netapp2_datavol-65  lunx3-P02 netapp2_datavol  157524496  2097152  0    FAS60400_0  ENA
pl lunx3-P03           lunx3-L01 ENABLED  ACTIVE   2097152  CONCAT   -  RW
sd netapp1_datavol-63  lunx3-P03 netapp1_datavol  157523968  2097152  0    FAS60401_5  ENA

v  lunx4               -         ENABLED  ACTIVE   2097152  SELECT   -  fsgen
pl lunx4-04            lunx4     ENABLED  ACTIVE   2097152  CONCAT   -  RW
sv lunx4-S01           lunx4-04  lunx4-L01  1  2097152  0  3/3  ENA            => subvolume
v  lunx4-L01           -         ENABLED  ACTIVE   2097152  SELECT   -  fsgen
pl lunx4-P01           lunx4-L01 ENABLED  ACTIVE   LOGONLY  CONCAT   -  RW     => LOGONLY plex
sd netapp1_datavol-64  lunx4-P01 netapp1_datavol  161718272  528      LOG  FAS60401_5  ENA
pl lunx4-P02           lunx4-L01 ENABLED  ACTIVE   2097152  CONCAT   -  RW
sd netapp1_datavol-65  lunx4-P02 netapp1_datavol  159621120  2097152  0    FAS60401_5  ENA
pl lunx4-P03           lunx4-L01 ENABLED  ACTIVE   2097152  CONCAT   -  RW
sd netapp2_datavol-66  lunx4-P03 netapp2_datavol  159622176  2097152  0    FAS60400_0  ENA

Thanks in advance, Ramesh Sundaram
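One possible approach, sketched only; the Fujitsu disk media name fujitsu_disk01 is hypothetical, the object names come from the sample output above, and with layered volumes you should verify via vxprint which volume actually owns the plexes (here the data and log plexes sit under lunx3-L01, not lunx3) before removing anything:

# vxassist -g <dg> mirror lunx3-L01 fujitsu_disk01
(attach a new data plex on the new array and let it synchronize)
# vxassist -g <dg> addlog lunx3-L01 logtype=drl fujitsu_disk01
(add a new DRL log plex on the new array)
# vxplex -g <dg> -o rm dis lunx3-P03
(once the new plexes are synced, dissociate and remove an old plex on the NetApp side; repeat for the old DRL plex)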

NFS agent for CFS share
Dear All, I have set up a CFS share. I can see it under cfsshare display, but if I run "cfsshare share /emm2 rw" it is NOT sharing, and in the Java Console of the cluster the NFS resource has a question mark. So I am really not sure what to do now.

root@node_1# cfsmount /emm5 rw
Mounting...
[/dev/vx/dsk/mobile_dg/mobile_vol_4] already mounted at /emm5 on node_0
[/dev/vx/dsk/mobile_dg/mobile_vol_4] already mounted at /emm5 on node_1
root@node_1# cfsshare share /emm5
Warning: V-35-465: Resource [share3] is not online on system [node_0].
Warning: V-35-465: Resource [share3] is not online on system [node_1].
root@node_1# cfsshare display
CNFS metadata filesystem : /locks
Protocols Configured : NFS
#RESOURCE  MOUNTPOINT  PROTOCOL  OPTIONS
share1     /emm2       NFS       rw
share2     /emm3       NFS       rw
share3     /emm5       NFS       rw
root@node_1#

Regards, Thokozani
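A few hedged debugging steps, assuming the CNFS share resource is named share3 as in the output above; the V-35-465 warnings suggest the underlying VCS share resource never came online, so its state and the engine log are usually the next things to look at:

# hares -state share3
# hares -clear share3
(clear any FAULTED state before retrying)
# hares -online share3 -sys node_0
# tail -f /var/VRTSvcs/log/engine_A.log
(watch for the real error while the resource tries to come online)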

Oracle Database Replication with VVR under SFCFSHA/DR
Hi All, We are looking into whether VVR can be used for Oracle database replication instead of an Oracle Data Guard solution. If it is used, do you know whether Veritas gives support for any problems faced? Even though VVR keeps write-order fidelity, it is not certain that database integrity will be preserved at the disaster site. Do you have any best practices, white papers, experience, or anything else you would suggest for this deployment?
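No definitive answer here, but two commands commonly used to check that the Secondary is up to date before relying on it for DR (the disk group, RVG, and RLINK names are hypothetical):

# vradmin -g oradg repstatus ora_rvg
(overall replication status for the RDS)
# vxrlink -g oradg status ora_rlink
(how far the Secondary is behind the Primary)

Because VVR preserves write-order fidelity, the Secondary is expected to be crash-consistent, i.e. in a state Oracle can crash-recover from, but that is a property to validate in DR testing rather than to assume.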

How To Manage Multiple VCS Clusters from a Central Location
What is the preferred tool/process used to manage multiple VCS clusters? We have several clusters in our environment, and to date the administrators RDP to one of the nodes in the cluster and open the Veritas Cluster Manager - Java Console. What is the solution to see all clusters in a single window or dashboard and also get notification of cluster events? Where is the documentation on how to set up the solution? Are there any videos that demonstrate how to set it up or how the solution works?

Upgrade Storage Foundation and VCS 4.1 to SFHA 6.x
Hi All, I'm going to upgrade my production environment with VxVM, VxFS and VCS 4.1. My questions are:
1) After upgrading to Solaris 10 update 11, is it possible to remove all Veritas packages and reinstall everything at version SFHA 6.1 (and recover the VCS configuration)? Is this supported?
2) I see this extract from the Admin Guide:
"Currently, only the Version 7, 8, 9, and 10 disk layouts can be created and mounted. The Version 6 disk layout can be mounted, but only for upgrading to a supported version."
What does it mean?
Thanks in advance, Rick
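On question 2: the quoted passage means that a Version 6 VxFS disk layout (typical of 4.1-era file systems) can still be mounted on 6.x, but only so that you can upgrade it in place. A hedged sketch, with /mountpoint as a placeholder:

# vxupgrade /mountpoint
(reports the current disk layout version)
# vxupgrade -n 7 /mountpoint
(upgrades the layout online; on older releases this goes one version at a time, so 6 -> 7 -> 8 and so on as needed)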

SFHA Solutions 6.0.1: Troubleshooting unprobed resources in Veritas Cluster Server
Veritas Cluster Server (VCS) monitors resources when they are online and offline to ensure that they are not started on systems where they are not supposed to run. When you configure VCS, you convey to the VCS engine the definitions of the cluster, service groups, resources, and dependencies among service groups and resources. VCS uses the following two configuration files in a default configuration:
main.cf - Defines the cluster, including service groups and resources
types.cf - Defines the resource types
For more information about configuring VCS using VCS configuration files, see:
About configuring VCS
About the main.cf file
About the types.cf file
VCS mainly fails to probe resources or service groups in the following scenarios:
- When a new types.cf file is not copied into the /etc/VRTSvcs/conf/config/ directory during a VCS upgrade. If VCS fails to probe the resources, the service group does not come online and also gets auto-disabled in the cluster. This happens due to old types.cf files in the /etc/VRTSvcs/conf/config/ directory.
- When the definitions of the cluster, service groups, resources, dependencies, and attributes remain undefined or incorrect in the main.cf file. This causes configuration errors, and a STALE_ADMIN_WAIT message is displayed.
- When the installation of an agent for a specific node has failed.
- When a resource returns the resource state as "UNKNOWN", which means the agent or resource is unable to monitor the configured resource.
- When the resource is disabled.
For more information on probing resources or service groups, or troubleshooting service groups, see:
Probing a resource
Service group not fully probed
Service group autodisabled
Some of the probing issues can be resolved by copying the latest types.cf file from the /etc/VRTSvcs/conf/ directory to the /etc/VRTSvcs/conf/config/ directory as follows:
1. Stop the cluster on all nodes:
# hastop -all -force
Applications continue to run, but do not fail over.
2. Back up the original types.cf file:
# mv /etc/VRTSvcs/conf/config/types.cf /etc/VRTSvcs/conf/config/types.cf.date
3. Copy the types.cf file:
# cp /etc/VRTSvcs/conf/types.cf /etc/VRTSvcs/conf/config/types.cf
4. Verify that both types.cf files are the same size:
# ls -l /etc/VRTSvcs/conf/types.cf
# ls -l /etc/VRTSvcs/conf/config/types.cf
5. Start the cluster on the node:
# hastart
You must execute the hastart command on all nodes in the cluster, and also verify that the types.cf file did not revert to the original version. If it did, repeat the procedure, and shut down Low Latency Transport (LLT) and Group Membership Services/Atomic Broadcast (GAB) after you execute the hastop command.
You can also find information about probing resources and troubleshooting service groups in the PDF versions of the following guides:
Veritas Cluster Server Administrator's Guide
Veritas Storage Foundation and High Availability Solutions Troubleshooting Guide
VCS documentation for other platforms and releases can be found on the SORT website.
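If resources still show as not probed after the types.cf refresh, a manual probe per node is a reasonable next step (the resource and node names below are placeholders):

# hares -probe my_resource -sys node1
(ask the agent to monitor the resource immediately)
# hares -value my_resource Probed node1
(should report 1 once the probe succeeds)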