File permission change for directory in VCS
Hi All, We have a directory, say /test, which is under VCS control on AIX. The filesystem is JFS2, and the owner and group of the directory are root:root. The filesystem is currently mounted by VCS. Can we change the ownership with the chown command?
NOTE: I don't see any ownership attribute assigned for the mount point when I check via hares -display <Mountpoint>.
Regards, Dilip
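The VCS Mount resource does not track or enforce file ownership, which is why hares -display shows no ownership attribute; ownership lives in the JFS2 filesystem itself, so a chown on the mounted path persists with the filesystem rather than with VCS. A minimal sketch for confirming which path a Mount resource manages before changing it — the resource name test_mnt and the sample output layout are assumptions, not taken from this thread:

```shell
# mount_point_of: extract the MountPoint value from "hares -display" output.
# Column layout is assumed from typical VCS releases; adjust if yours differs.
mount_point_of() {
  awk '$2 == "MountPoint" { print $4 }'
}

# On a live node you would pipe the real command:
#   hares -display test_mnt -attribute MountPoint | mount_point_of
# Here we feed captured sample output instead:
mount_point_of <<'EOF'
#Resource   Attribute    System   Value
test_mnt    MountPoint   global   /test
EOF
# -> /test
```

Once the path is confirmed, an ordinary `chown user:group /test` against the mounted directory should be all that is needed; VCS does not reset it.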
VCS verification and automation
Hello Guys, We configure many VCS clusters and would like to verify the entire configuration before we hand it over to the customer, and we would like to automate that verification. Here is my plan for the basic checks:
- Verify the LLT config
- Verify the GAB config
- Check main.cf and confirm that no volume group or VIP is configured to come up at boot.
Anything else I need to check? Feel free to share any ideas.
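One way to start automating checks like these is to script assertions against the cluster's config files. The sketch below counts "link" directives in an llttab file (VCS best practice is at least two independent private links); the sample file contents, node name, and device names are illustrative, not from a real cluster:

```shell
# llt_link_count: count "link" directives in an llttab file.  Anything below
# two is worth flagging during handover verification.
llt_link_count() {
  grep -c '^link' "$1"
}

# Sample llttab (a real file lives at /etc/llttab; these lines are made up):
cat > /tmp/llttab.sample <<'EOF'
set-node node1
set-cluster 100
link en1 /dev/dlpi/en:1 - ether - -
link en2 /dev/dlpi/en:2 - ether - -
EOF

llt_link_count /tmp/llttab.sample
# -> 2
```

The same pattern extends naturally to GAB (grep /etc/gabtab for the expected gabconfig line) and to main.cf (grep for volume groups or IP resources that should not autostart).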
ApplicationHA on LPAR with VCS and SRDF replication agent
Hello, I am evaluating ApplicationHA in my environment, which is composed of LPARs in two frames installed at two separate sites. The LPARs access two EMC VMAX storage arrays, and there is a replication link using SRDF. We are not able to use Live Partition Mobility between the two PowerSystem servers because there is no storage shared between them. For this reason we would like to use a VCS management cluster installed on the frames (one node on each PowerSystem server), with this VCS cluster using the SRDF replication agent. My question is: if we configure ApplicationHA in this VCS management cluster to protect an LPAR, can that LPAR use VCS with the SRDF agent? Doing so would solve our limitation and allow us to relocate the LPAR to the other frame automatically.
Thanks and best regards, Osvaldo
Concurrency violation
Hi Team, This alert came from one of the AIX 6.1 servers in my environment: a concurrency violation.

Subject: VCS SevereError for Service Group sapgtsprd, Service group concurrency violation
Event Time: Wed Dec 3 06:46:54 EST 2014
Entity Name: sapgtsprd
Entity Type: Service Group
Entity Subtype: Failover
Entity State: Service group concurrency violation
Traps Origin: Veritas_Cluster_Server
System Name: mapibm625
Entities Container Name: GTS_Prod
Entities Container Type: VCS

The engineA.log entries are:
2014/12/03 06:46:54 VCS INFO V-16-1-10299 Resource App_saposcol (Owner: Unspecified, Group: sapgtsprd) is online on mapibm625 (Not initiated by VCS)
2014/12/03 06:46:54 VCS ERROR V-16-1-10214 Concurrency Violation: CurrentCount increased above 1 for failover group sapgtsprd
2014/12/03 06:46:54 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group sapgtsprd on all nodes
2014/12/03 06:46:55 VCS WARNING V-16-6-15034 (mapibm625) violation:-Offlining group sapgtsprd on system mapibm625
2014/12/03 06:46:55 VCS INFO V-16-1-50135 User root fired command: hagrp -offline sapgtsprd mapibm625 from localhost
2014/12/03 06:46:55 VCS NOTICE V-16-1-10167 Initiating manual offline of group sapgtsprd on system mapibm625

What is a concurrency violation in VCS? What steps should we take to resolve it? Kindly explain in detail.
Thanks, Allaboutunix
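In brief, a concurrency violation means a failover service group is ONLINE on more than one system at once. Here the trigger is visible in the log: saposcol came online "Not initiated by VCS" (someone started it outside the cluster), pushing CurrentCount above 1, and VCS responded by offlining the extra instance. A small sketch for spotting the condition in hastatus -sum output — the sample lines and the second host name are illustrative:

```shell
# online_count: number of systems on which a given group reports ONLINE in
# "hastatus -sum" output.  For a failover group, any count above 1 is a
# concurrency violation.
online_count() {
  awk -v g="$1" '$1 == "B" && $2 == g && $NF == "ONLINE" { n++ } END { print n + 0 }'
}

# Captured sample output (mapibm626 is a made-up second node):
online_count sapgtsprd <<'EOF'
B  sapgtsprd   mapibm625   Y   N   ONLINE
B  sapgtsprd   mapibm626   Y   N   ONLINE
EOF
# -> 2
```

The usual prevention is procedural rather than technical: always start and stop the application through hagrp/hares, never directly, so VCS's view of where the group is online stays accurate.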
SFRAC in AIX 7.1
Dear experts, I am trying to install SFRAC on an AIX machine. I believe most of the configuration procedure is already done; kindly correct me if any other step is needed to make it 100% complete.

Tasks completed:
1) Installed SFRAC & Clusterware (local file system on both nodes)
2) Installed binaries (local file system on both nodes)
3) Configured the cssd agent
4) Relinked the Oracle binaries

For datafiles I separately created a shared VG, volume, and mount point using cfsmntadm. Kindly clarify my doubts below.

Online service groups:
ClusterService ======> holds the web IP resource and is online on one node (what is this IP used for, and should it be online on both hosts?)
cvm =======> online on both nodes (holds voting disks & OCR)
oradata_sg =======> online on both nodes (holds datafiles)
vrts_vea_cfs_int_cfsmount1 ===> online on both nodes (was this created by cfsmntadm?)
vxfen ==> online on both nodes (I/O fencing)

My questions are:
1) Do I need to link my datafiles SG to any other SG?
2) Do I need to configure any other resource or SG for the network? (I guess Clusterware takes care of the VIP.)
3) In main.cf, both oradata_sg and vrts_vea_cfs_int_cfsmount1 point to the datafiles mount point. Can I remove the vrts_vea_cfs_int_cfsmount1 SG?

root@flanker:/> hastatus -sum

-- SYSTEM STATE
-- System                      State     Frozen
A  flanker                     RUNNING   0
A  soren                       RUNNING   0

-- GROUP STATE
-- Group                       System    Probed   AutoDisabled   State
B  ClusterService              flanker   Y        N              ONLINE
B  ClusterService              soren     Y        N              OFFLINE
B  cvm                         flanker   Y        N              ONLINE
B  cvm                         soren     Y        N              ONLINE
B  oradata_sg                  flanker   Y        N              ONLINE
B  oradata_sg                  soren     Y        N              ONLINE
B  vrts_vea_cfs_int_cfsmount1  flanker   Y        N              ONLINE
B  vrts_vea_cfs_int_cfsmount1  soren     Y        N              ONLINE
B  vxfen                       flanker   Y        N              ONLINE
B  vxfen                       soren     Y        N              ONLINE
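Regarding question 1: service groups created by cfsmntadm normally already carry a dependency on the cvm group, so the shared disk group is imported before the mounts come online, and you can confirm this with hagrp -dep rather than adding links by hand. A sketch of parsing that output — the column layout (Parent, Child, Relationship) is assumed from typical VCS releases:

```shell
# requires: list the child groups a given parent group depends on, taken from
# "hagrp -dep" output.
requires() {
  awk -v g="$1" '$1 == g { print $2 }'
}

# On a live node: hagrp -dep oradata_sg | requires oradata_sg
# Here we feed captured sample output instead:
requires oradata_sg <<'EOF'
#Parent       Child   Relationship
oradata_sg    cvm     online local firm
EOF
# -> cvm
```

If oradata_sg shows no parent/child relationship with cvm at all, that is worth investigating before handover, since the mounts would then race the disk group import at startup.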
Basic question on vxdisk list
Hi All, This is my first post in the community and I am very new to Veritas, so please excuse me if the questions are of the layman category. I have an AIX server with Volume Manager 5.1, and I am trying to understand the "vxdisk list" output. I visited the SORT site to see the documents and found the Volume Manager admin guide: https://sort.symantec.com/documents/doc_details/sfha/5.1/AIX/ProductGuides/

From the output below:

# vxdisk list
DEVICE    TYPE          DISK    GROUP   STATUS
hdisk1    auto:sliced   disk1   mydg    online
hdisk2    auto:sliced   disk2   mydg    online
hdisk3    auto:sliced   disk3   mydg    online
hdisk4    auto:sliced   disk4   mydg    online

What is the disk access name and what is the disk media name? What is the purpose of each?
Thanks, Nee
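In short, the DEVICE column is the disk access name — the operating-system device path through which VxVM reaches the disk — while the DISK column is the disk media name, the logical name the disk carries inside its disk group, which stays stable even if the access path changes (for example after recabling). A small sketch that pairs the two from captured vxdisk list output:

```shell
# da_to_dm: print "disk-access-name -> disk-media-name" pairs from
# "vxdisk list" output, skipping the header line.
da_to_dm() {
  awk 'NR > 1 && NF >= 4 { print $1 " -> " $3 }'
}

da_to_dm <<'EOF'
DEVICE       TYPE            DISK         GROUP        STATUS
hdisk1       auto:sliced     disk1        mydg         online
hdisk2       auto:sliced     disk2        mydg         online
EOF
# -> hdisk1 -> disk1
#    hdisk2 -> disk2
```

This split is what lets VxVM keep volumes intact when a device path changes: plexes and subdisks reference the media name, not the access name.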
SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products
Group Membership and Atomic Broadcast (GAB) is a kernel component of Veritas Cluster Server (VCS) that provides globally ordered messages to keep nodes synchronized. GAB maintains the cluster state information and the correct membership of the cluster. However, GAB needs another kernel component, Low Latency Transport (LLT), to send messages to the nodes and keep the cluster nodes connected.

How do GAB and LLT function together in a VCS cluster?
VCS uses GAB and LLT to share data among nodes over private networks. LLT is the transport protocol responsible for fast kernel-to-kernel communications. GAB carries the state of the cluster and the cluster configuration to all the nodes in the cluster. Together, these components provide the performance and reliability that VCS requires. In a cluster, nodes must share groups, resources, and resource states; LLT and GAB are what allow the nodes to communicate.

For information on LLT, GAB, and private networks, see:
About LLT and GAB
About network channels for heartbeating

GAB seeding
The GAB seeding function ensures that a new cluster starts with an accurate count of the nodes in the cluster. It protects your cluster from a preexisting network partition at initial start-up. A preexisting network partition refers to a failure in the communication channels that occurs while the nodes are down and VCS cannot respond. When the nodes start, GAB seeding reduces the vulnerability to network partitioning, regardless of the cause of the failure. GAB services are used by all Veritas Storage Foundation and High Availability (SFHA) products.

For information about preexisting network partitions, and how seeding functions in VCS, see:
About preexisting network partitions
About VCS seeding

Enabling automatic seeding of GAB
If I/O fencing is configured in the enabled mode, you can edit the /etc/vxfenmode file to enable automatic seeding of GAB.
If the cluster is stuck in a preexisting split-brain condition, I/O fencing allows automatic seeding of GAB. You can set the minimum number of nodes required to form a cluster before GAB seeds by configuring the control port seed and quorum flag parameters in the /etc/gabtab file. The quorum is the number of nodes that need to join a cluster for GAB to complete seeding.

For information on configuring the autoseed_gab_timeout parameter in the /etc/vxfenmode file, see:
About I/O fencing configuration files

For information on configuring the control port seed and the quorum flag parameters in GAB, see:
About GAB run-time or dynamic tunable parameters

For information on split-brain conditions, see:
About the Steward process: Split-brain in two-cluster global clusters
How I/O fencing works in different event scenarios
Example of a preexisting network partition (split-brain)

Role of GAB seeding in cluster membership
For information on how nodes gain cluster membership, seeding a cluster, and manual seeding of a cluster, see:
About cluster membership
Initial joining of systems to cluster membership
Seeding a new cluster
Seeding a cluster using the GAB auto-seed parameter through I/O fencing
Manual seeding of a cluster

Troubleshooting issues related to GAB seeding and preexisting network partitions
For information on the issues that you may encounter when GAB seeds a cluster, and on preexisting network partitions, see:
Examining GAB seed membership
Manual GAB membership seeding
Waiting for cluster membership after VCS start-up
Summary of best practices for cluster communications
System panics to prevent potential data corruption
Fencing startup reports preexisting split-brain
Clearing preexisting split-brain condition
Recovering from a preexisting network partition (split-brain)
Example Scenario I – Recovering from a preexisting network partition
Example Scenario II – Recovering from a preexisting network partition
Example Scenario III – Recovering from a preexisting network partition

gabconfig (1M) 6.0.1 manual pages: AIX, Solaris

For more information on seeding clusters to prevent preexisting network partitions, see:
Veritas Cluster Server Administrator's Guide
Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.
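As a concrete illustration of the control port seed parameter described above, /etc/gabtab typically contains a single gabconfig invocation; the four-node count below is an example, not taken from the article:

```shell
# /etc/gabtab -- illustrative entry for a four-node cluster:
#   -c   configure the GAB driver
#   -n4  seed automatically only once four nodes are present
/sbin/gabconfig -c -n4

# Manual seeding, e.g. after confirming no other partition of the cluster is
# already running (-x bypasses the preexisting-network-partition safeguard,
# so use it with care):
#   gabconfig -c -x
```

Setting -n to the full node count is the conservative choice; lowering it lets a partial cluster seed sooner at the cost of a larger window for preexisting network partitions.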
SFHA Solutions 6.2: Symantec Storage plug-in for OEM 12c
The Symantec Storage plug-in for OEM 12c enables you to view and manage Storage Foundation and VCS objects through the graphical interface of Oracle Enterprise Manager 12c (OEM). It works with the Symantec Storage Foundation and High Availability 6.2 product suite. The Symantec Storage plug-in allows you to:
SmartIO: manage Oracle database objects using SmartIO.
Snapshot: create point-in-time copies (Storage Checkpoint, Database FlashSnap, Space-Optimized Snapshot, and FileSnap) of Oracle databases using SFDB features.
Cluster: view cluster-specific information.
You can get the plug-in by downloading the attached file. For more information about installing and using the plug-in, download the attached Application Note. Terms of use for this information are found in Legal Notices.
ENABLE READ ONLY FOR AN ENCLOSURE WITH DMP
Hello, I have a media server where the disks belonging to a specific enclosure are misbehaving. Their response times are extremely high according to vxdmpadm iostat, and the OS logs also show them as faulty. How can I disable just writes to those disks? I don't want to completely disable the enclosure, because it holds data that may be needed for restore purposes, so we must keep read access to it.