Adding new volumes to a DG that has an RVG under a VCS cluster
Hi, I have a VCS cluster with GCO and VVR. On each node of the cluster I have a DG with an associated RVG; this RVG contains 11 data volumes for an Oracle database. These volumes are getting full, so I am going to add new disks to the DG and create new volumes and mount points to be used by the Oracle database. My question: can I add the disks to the DG and the volumes to the RVG while the database is up and replication is on? If the answer is no, please let me know what should be performed on the RVG and rlink to add these volumes, and also what to perform on the database resource group so that it does not fail over. Thanks in advance.
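A minimal sketch of the usual online flow, assuming hypothetical object names (oradg, ora_rvg, oravol12, oradb_grp) and that a volume of the same name and size already exists on the secondary; check the VVR administrator's guide for your release before running anything in production:

# Freeze the service group so VCS does not react while you reconfigure
hagrp -freeze oradb_grp

# Add the new disk to the disk group and create the data volume
# (device name emc0_0012 is just an example)
vxdg -g oradg adddisk oradg12=emc0_0012
vxassist -g oradg make oravol12 100g oradg12

# Add the volume to the RVG/RDS while replication stays up; vradmin
# expects a volume of the same name and size on the secondary
vradmin -g oradg addvol ora_rvg oravol12

# Create the file system (-t vxfs on Linux), then add the matching
# Volume/Mount resources to the database service group
mkfs -F vxfs /dev/vx/rdsk/oradg/oravol12

hagrp -unfreeze oradb_grp

It is also worth confirming that the SRL has enough headroom for the extra write load before adding the volumes.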
Detaching VMDK files on a VMware VM

When an application failover happens in a VMware guest environment, VCS is responsible for failing the application over to another VM/VCS node on a different ESX host. In a scenario where the ESX/ESXi host itself faults, the VCS agents begin to fail over the application to the failover target system that resides on another host. The VMwareDisks agent communicates with the new ESX/ESXi host and initiates a disk detach operation on the faulted virtual machine. The agent then attaches the disk to the new failover target virtual machine. In this scenario, how is stale I/O from the failing guest/ESX host avoided? Are we at the mercy of VMware to take care of it? With SCSI-3 PR this was the main problem that was solved. Moreover, in such scenarios even a graceful online detach would not have gone through. I didn't find any references on the VMware discussion forums either. My customer wants to know about this before he can deploy the application. Thanks, Raf
Concurrency violation

Hi Team, this alert came up in my environment on one of the AIX 6.1 servers: a concurrency violation.

Subject: VCS Severe Error for Service Group sapgtsprd, Service group concurrency violation
Event Time: Wed Dec 3 06:46:54 EST 2014
Entity Name: sapgtsprd
Entity Type: Service Group
Entity Subtype: Failover
Entity State: Service group concurrency violation
Traps Origin: Veritas_Cluster_Server
System Name: mapibm625
Entities Container Name: GTS_Prod
Entities Container Type: VCS

The relevant entries from engine_A.log are:

2014/12/03 06:46:54 VCS INFO V-16-1-10299 Resource App_saposcol (Owner: Unspecified, Group: sapgtsprd) is online on mapibm625 (Not initiated by VCS)
2014/12/03 06:46:54 VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sapgtsprd
2014/12/03 06:46:54 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group sapgtsprd on all nodes
2014/12/03 06:46:55 VCS WARNING V-16-6-15034 (mapibm625) violation:-Offlining group sapgtsprd on system mapibm625
2014/12/03 06:46:55 VCS INFO V-16-1-50135 User root fired command: hagrp -offline sapgtsprd mapibm625 from localhost
2014/12/03 06:46:55 VCS NOTICE V-16-1-10167 Initiating manual offline of group sapgtsprd on system mapibm625

What is a concurrency violation in VCS? What steps should we take to resolve it? Kindly explain in detail. Thanks, Allaboutunix
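A concurrency violation means a failover group was detected online on more than one system at the same time; here it happened because App_saposcol was started outside VCS ("Not initiated by VCS"). The engine log already shows VCS offlining the extra instance through its violation trigger. A sketch of the usual manual checks, using the group and node names from the alert:

# See where the group is online or faulted
hastatus -sum | grep sapgtsprd
hagrp -state sapgtsprd

# If the group still shows online on more than one node, offline the
# unintended instance (the node where it was started outside VCS)
hagrp -offline sapgtsprd -sys mapibm625

# Clear any faults left behind, then find what started the application
# outside VCS (manual start, startup script, cron job) and stop doing that
hagrp -clear sapgtsprd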
'vxdmppr' utility information

Hello, with VxVM we get the 'vxdmppr' utility, which performs SCSI-3 PR operations on disks, similar to sg_persist on Linux. But we don't find much documentation around this utility. In one of the blogs we saw that it is an unsupported utility. Can someone throw light on it? Has anyone used it in the past? Or does anyone know how this utility is used within VxVM? How do we know whether it is supported or not? Rafiq
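Since vxdmppr itself is undocumented, a safer way to inspect SCSI-3 PR state for comparison is the sg_persist utility mentioned in the post (Linux, sg3_utils package); the device name below is just an example:

# List the registered SCSI-3 PR keys on a device
sg_persist --in --read-keys --device=/dev/sdc

# Show the current reservation holder and reservation type
sg_persist --in --read-reservation --device=/dev/sdc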
VxVM 4.1 SAN migration at host level: mirrored volumes are associated with DRL

Hi, I have done a SAN migration before, but without DRL or subvolume setups; it was mirrored between two arrays at host level using VxVM. Now I am in a position where I need to migrate 100+ volumes between different new arrays. Some volumes have DRL (log-only plexes) and a few have subvolumes. Unfortunately there is no documentation on how these DRL (log-only plexes) and subvolume layouts can be migrated, so I'm looking for some help.

The cluster is:
- Solaris 10
- Solaris Cluster 3.1 two-node campus cluster
- VxVM 4.1 with CVM
- 2 x NetApp storage arrays mirrored at host level using VxVM

Migration plan questions:
- 2 x new Fujitsu storage arrays have been installed, with LUNs created and mapped to both hosts as per the existing NetApp storage
- All new LUNs are detected correctly under the OS and VxVM
- I have 4-way mirroring at host level using VxVM for some of the volumes. How can I proceed for the DRL (log-only plexes) and the subvolume volumes?

Sample output:

v  lunx3      -          ENABLED  ACTIVE   2097152  SELECT  -  fsgen
pl lunx3-04   lunx3      ENABLED  ACTIVE   2097152  CONCAT  -  RW
sv lunx3-S01  lunx3-04   lunx3-L01  1  2097152  0  3/3  ENA   => subvolume
v  lunx3-L01  -          ENABLED  ACTIVE   2097152  SELECT  -  fsgen
pl lunx3-P01  lunx3-L01  ENABLED  ACTIVE   LOGONLY  CONCAT  -  RW   => LOGONLY plex
sd netapp2_datavol-64  lunx3-P01  netapp2_datavol  157523968  528  LOG  FAS60400_0  ENA
pl lunx3-P02  lunx3-L01  ENABLED  ACTIVE   2097152  CONCAT  -  RW
sd netapp2_datavol-65  lunx3-P02  netapp2_datavol  157524496  2097152  0  FAS60400_0  ENA
pl lunx3-P03  lunx3-L01  ENABLED  ACTIVE   2097152  CONCAT  -  RW
sd netapp1_datavol-63  lunx3-P03  netapp1_datavol  157523968  2097152  0  FAS60401_5  ENA

v  lunx4      -          ENABLED  ACTIVE   2097152  SELECT  -  fsgen
pl lunx4-04   lunx4      ENABLED  ACTIVE   2097152  CONCAT  -  RW
sv lunx4-S01  lunx4-04   lunx4-L01  1  2097152  0  3/3  ENA   => subvolume
v  lunx4-L01  -          ENABLED  ACTIVE   2097152  SELECT  -  fsgen
pl lunx4-P01  lunx4-L01  ENABLED  ACTIVE   LOGONLY  CONCAT  -  RW   => LOGONLY plex
sd netapp1_datavol-64  lunx4-P01  netapp1_datavol  161718272  528  LOG  FAS60401_5  ENA
pl lunx4-P02  lunx4-L01  ENABLED  ACTIVE   2097152  CONCAT  -  RW
sd netapp1_datavol-65  lunx4-P02  netapp1_datavol  159621120  2097152  0  FAS60401_5  ENA
pl lunx4-P03  lunx4-L01  ENABLED  ACTIVE   2097152  CONCAT  -  RW
sd netapp2_datavol-66  lunx4-P03  netapp2_datavol  159622176  2097152  0  FAS60400_0  ENA

Thanks in advance,
Ramesh Sundaram
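A minimal sketch of the usual host-level approach for a mirrored volume with DRL, using a hypothetical disk group name (datadg) and Fujitsu disk media names; with the layered subvolume layout shown above the data plexes and LOGONLY plex hang off the underlying lunx3-L01 volume, so the same operations would be applied there. Test on a single non-critical volume first:

# Add the new Fujitsu LUNs to the disk group (device name is a placeholder)
vxdg -g datadg adddisk fujitsu0_01=<new_device>

# Attach an extra mirror of the volume on the new storage and wait for
# the resynchronisation to finish; repeat on the second new array if you
# want to keep host-level mirroring across arrays
vxassist -g datadg mirror lunx3-L01 fujitsu0_01
vxtask list

# Add a new DRL log plex on the new storage
vxassist -g datadg addlog lunx3-L01 logtype=drl fujitsu0_01

# Once the new plexes are in sync, dissociate and remove the old data
# plexes and the old LOGONLY plex on the NetApp disks (names from vxprint)
vxplex -g datadg -o rm dis lunx3-P02 lunx3-P03 lunx3-P01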
Regarding resource grp faults

Hi, one way of clearing a faulted resource is the documented "clearing faulted resources in a service group" procedure: clear a resource to remove a fault and make the resource available to go online. To clear faulted, non-persistent resources in a service group, type the following command:

hagrp -clear service_group [-sys system]

The other way is to clear the faulted resource directly:

server14# hares -display | grep FAULT
DBAppOramon State server14 FAULTED

OK, now you have the resource name. In order to bring it online again you must clear the resource first, using the resource name you just found:

server14# hares -clear DBAppOramon -sys server14

Check whether it is clear now:

server14# hares -display | grep FAULT

Now you can bring it online again, but before that make sure where it is supposed to be online; you can check that in main.cf.

server14# hares -online DBAppOramon -sys server14

Now check whether the resource goes online; it will take some time to come back online.

server14# hares -display | grep DBAppOramon | grep ONLINE
DBAppOramon State server14 ONLINE

Is there any difference between these two ways? Both are used to clear faults, right? Correct me if I am wrong, please.
NFS agent for CFS share

Dear All, I have set up a CFS share. I can see it under cfsshare display, but running "cfsshare share /emm2 rw" does not actually share it, and in the cluster's Java Console the NFS resource shows a question mark. So I am really not sure what to do now.

root@node_1# cfsmount /emm5 rw
Mounting...
[/dev/vx/dsk/mobile_dg/mobile_vol_4] already mounted at /emm5 on node_0
[/dev/vx/dsk/mobile_dg/mobile_vol_4] already mounted at /emm5 on node_1
root@node_1# cfsshare share /emm5
Warning: V-35-465: Resource [share3] is not online on system [node_0].
Warning: V-35-465: Resource [share3] is not online on system [node_1].
root@node_1# cfsshare display
CNFS metadata filesystem : /locks
Protocols Configured : NFS
#RESOURCE  MOUNTPOINT  PROTOCOL  OPTIONS
share1     /emm2       NFS       rw
share2     /emm3       NFS       rw
share3     /emm5       NFS       rw
root@node_1#

Regards,
Thokozani
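The warnings suggest the share resources in the CNFS service group never came online. A sketch of the usual checks, assuming the default CNFS group name created by cfsshare config (commonly cfsnfssg, but verify the name on your cluster):

# Check the CNFS group and the share resource that cfsshare complains about
hagrp -state cfsnfssg
hares -state share3
hares -display share3 | grep -iE 'state|fault'

# If the share resource is faulted, clear it and try to bring it online
hares -clear share3
hares -online share3 -sys node_1

# The underlying exportfs/share error usually shows up in the engine log
tail -50 /var/VRTSvcs/log/engine_A.log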
Symantec High Availability 6.1 Solution: Configuration in a VMware Site Recovery Manager (SRM) environment

The Symantec High Availability 6.1 Solution provides scripts to perform some of the configuration tasks required for application monitoring continuity in a VMware SRM environment. In the event of a failure, when the SRM recovery plan is executed, the Symantec High Availability recovery command retrieves the application status, and the Symantec Cluster Server (VCS) network and application-specific agents bring the network and application components online. This ensures application monitoring continuity after the failover.

For information about configuring the Symantec High Availability Solution in a VMware SRM environment, see: Configuring Symantec High Availability Solution in VMware SRM environment

This quick reference guide provides:
- A complete workflow of the configuration tasks involved
- A quick start for each high-level task
- A list of reference documents and their download location

Storage Foundation and High Availability and ApplicationHA documentation for other releases and platforms can be found on the SORT website.
Veritas InfoScale 7.0: Configuring I/O fencing

I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split-brain condition. In a partitioned cluster, a split-brain condition occurs when one side of the partition thinks the other side is down and takes over its resources.

When you install Veritas InfoScale Enterprise, the installer installs the I/O fencing driver, which is part of the VRTSvxfen package. After you install and configure the product, you must configure I/O fencing so that it can protect your data on shared disks. You can configure disk-based I/O fencing, server-based I/O fencing, or majority-based I/O fencing.

Before you configure I/O fencing, make sure that you meet the I/O fencing requirements. After you meet the requirements, refer to About planning to configure I/O fencing to perform the preparatory tasks and then configure I/O fencing.

For more details about I/O fencing configuration, see: Cluster Server Configuration and Upgrade Guide

Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
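For reference, a minimal sketch of what a disk-based (SCSI-3) fencing setup ends up with on each node; the coordinator disk group name is only an example, and the installer or product configuration script normally creates these files for you:

# /etc/vxfendg -- contains the name of the coordinator disk group
vxfencoorddg

# /etc/vxfenmode -- fencing mode and disk policy for disk-based fencing
vxfen_mode=scsi3
scsi3_disk_policy=dmp

# After starting the vxfen driver with your platform's service mechanism,
# verify the fencing mode and cluster membership
vxfenadm -d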