SFHA Solutions 6.0.1: Using the hastatus command in Veritas Cluster Server (VCS)
You can use the hastatus command to display changes to cluster objects such as resources, groups, and systems, to monitor transitions on a cluster, and to verify the status of the cluster. The hastatus command functionality also applies to prior releases.

For information on using the hastatus command, see:
- Querying status of service groups
- Querying status of remote and local clusters
- How the VCS engine (HAD) affects performance
- VCS command line reference
- Verifying the cluster
- Verifying the status of nodes and service groups

For troubleshooting scenarios and solutions for using the hastatus command, see:
- Service group is waiting for the resource to be brought online/taken offline
- Agent not running

hastatus (1M) 6.0.1 manual pages: AIX, HP-UX, Linux, Solaris

For more information on using the hastatus command, see:
- Veritas Cluster Server Administrator's Guide
- Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.

SFHA Solutions 6.0.1: About the GAB logging daemon (gablogd) in Veritas Cluster Server and other Veritas Storage Foundation and High Availability products
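A common use of hastatus is the -sum summary, which you can filter with standard text tools. The sketch below runs awk over a hypothetical capture of `hastatus -sum` output; the sample text and its column layout are assumptions for illustration (the real format can vary by release), so treat the awk field positions as such.

```shell
# Hypothetical `hastatus -sum` capture; layout is an assumption, not a
# guaranteed format. "A" rows are system states, "B" rows are group states.
sample='-- SYSTEM STATE
-- System               State                Frozen
A  node1                RUNNING              0
A  node2                RUNNING              0

-- GROUP STATE
-- Group           System  Probed  AutoDisabled  State
B  GroupA          node1   Y       N             ONLINE
B  GroupA          node2   Y       N             OFFLINE'

# Report each group/system pair whose state is not ONLINE:
offline=$(printf '%s\n' "$sample" | awk '$1 == "B" && $6 != "ONLINE" {print $2, "on", $3, "is", $6}')
printf '%s\n' "$offline"
```

Against a live cluster you would pipe the real command instead: `hastatus -sum | awk ...`.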
The Group Membership and Atomic Broadcast (GAB) logging daemon (gablogd) collects GAB-related logs during I/O fencing events or when the GAB message sequence fails, and stores the data in a compact binary form. By default, the GAB logging daemon is enabled. You can tune the gab_ibuf_count parameter (one of the GAB load-time parameters) to enable or disable the daemon and to set the buffer count. The default value of the parameter is 8.

Note: GAB services are used by all Veritas Storage Foundation and High Availability (SFHA) products.

For information on the GAB tunable parameters and tuning the gab_ibuf_count parameter, see:
- About GAB tunable parameters
- About GAB load-time or static tunable parameters

Using the gabread_ffdc utility to read the GAB binary log files

If GAB encounters a problem, First Failure Data Capture (FFDC) logs are generated and dumped by the gablogd daemon. For information on using the gabread_ffdc utility to read the GAB binary log files, see:
- GAB message logging

Overriding the gab_ibuf_count parameter using the gabconfig -k option

The gab_ibuf_count parameter controls whether the GAB logging daemon is enabled or disabled. If you want to override the gab_ibuf_count control parameter, use the gabconfig -k option to disable logging to the GAB daemon while the cluster is up and running. However, to re-enable the parameter you must restart GAB and all its client processes.

For information on using the gabconfig -k option, see:
- gabconfig (1M) 6.0.1 manual pages: AIX, Solaris

For more information on the GAB logging daemon, see:
- Veritas Cluster Server Administrator's Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.

For more information on GAB seeding, see:
- SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products

SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products
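The runtime override described above can be sketched as a command fragment. This is not runnable outside a live VCS cluster, and it only restates what the article says about the -k option; consult the gabconfig (1M) manual page for your platform before using it.

```shell
# Command fragment (requires a running cluster with GAB configured; not
# runnable as-is). Per the article, -k disables logging to the GAB logging
# daemon while the cluster is up, overriding the gab_ibuf_count setting:
gabconfig -k

# To re-enable logging after this override, GAB and all of its client
# processes must be restarted (see the article text above).
```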
Group Membership and Atomic Broadcast (GAB) is a kernel component of Veritas Cluster Server (VCS) that provides globally-ordered messages to keep nodes synchronized. GAB maintains the cluster state information and the correct membership of the cluster. However, GAB needs another kernel component, Low Latency Transport (LLT), to send messages to the nodes and keep the cluster nodes connected.

How GAB and LLT function together in a VCS cluster

VCS uses GAB and LLT to share data among nodes over private networks. LLT is the transport protocol responsible for fast kernel-to-kernel communications. GAB carries the state of the cluster and the cluster configuration to all the nodes in the cluster. Together, these components provide the performance and reliability that VCS requires. In a cluster, nodes must share the groups, resources, and resource states; LLT and GAB enable this communication.

For information on LLT, GAB, and private networks, see:
- About LLT and GAB
- About network channels for heartbeating

GAB seeding

The GAB seeding function ensures that a new cluster starts with an accurate count of the number of nodes in the cluster. It protects your cluster from a preexisting network partition upon initial start-up. A preexisting network partition refers to a failure in the communication channels that occurs while the nodes are down and VCS cannot respond. When the nodes start, GAB seeding reduces the vulnerability to network partitioning, regardless of the cause of the failure. GAB services are used by all Veritas Storage Foundation and High Availability (SFHA) products.

For information about preexisting network partitions, and how seeding functions in VCS, see:
- About preexisting network partitions
- About VCS seeding

Enabling automatic seeding of GAB

If I/O fencing is configured in the enabled mode, you can edit the /etc/vxfenmode file to enable automatic seeding of GAB.
If the cluster is stuck with a preexisting split-brain condition, I/O fencing allows automatic seeding of GAB. You can set the minimum number of nodes required to form a cluster for GAB to seed by configuring the Control port seed and Quorum flag parameters in the /etc/gabtab file. Quorum is the number of nodes that need to join a cluster for GAB to complete seeding.

For information on configuring the autoseed_gab_timeout parameter in the /etc/vxfenmode file, see:
- About I/O fencing configuration files

For information on configuring the Control port seed and the Quorum flag parameters in GAB, see:
- About GAB run-time or dynamic tunable parameters

For information on split-brain conditions, see:
- About the Steward process: Split-brain in two-cluster global clusters
- How I/O fencing works in different event scenarios
- Example of a preexisting network partition (split-brain)

Role of GAB seeding in cluster membership

For information on how the nodes gain cluster membership, seeding a cluster, and manual seeding of a cluster, see:
- About cluster membership
- Initial joining of systems to cluster membership
- Seeding a new cluster
- Seeding a cluster using the GAB auto-seed parameter through I/O fencing
- Manual seeding of a cluster

Troubleshooting issues that are related to GAB seeding and preexisting network partitions

For information on the issues that you may encounter when GAB seeds a cluster, and on preexisting network partitions, see:
- Examining GAB seed membership
- Manual GAB membership seeding
- Waiting for cluster membership after VCS start-up
- Summary of best practices for cluster communications
- System panics to prevent potential data corruption
- Fencing startup reports preexisting split-brain
- Clearing preexisting split-brain condition
- Recovering from a preexisting network partition (split-brain)
- Example Scenario I – Recovering from a preexisting network partition
- Example Scenario II – Recovering from a preexisting network partition
- Example Scenario III – Recovering from a preexisting network partition

gabconfig (1M) 6.0.1 manual pages: AIX, Solaris

For more information on seeding clusters to prevent preexisting network partitions, see:
- Veritas Cluster Server Administrator's Guide
- Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.

SFHA Solutions 6.0.1 – Using the hastop command to stop the VCS engine and related processes
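The /etc/gabtab file mentioned above typically holds a single gabconfig invocation. The fragment below is a hedged illustration for a hypothetical four-node cluster; the node count and path are assumptions, and you should confirm the options against the gabconfig (1M) manual page for your platform.

```shell
# Sample /etc/gabtab for an assumed four-node cluster: -c configures the
# GAB driver, and -n4 tells GAB to seed only after 4 nodes have joined,
# which is what prevents a preexisting network partition from seeding a
# partial cluster.
/sbin/gabconfig -c -n4

# Manual seeding (run interactively, NOT placed in gabtab): -x seeds the
# cluster regardless of how many nodes are up. Use only after verifying
# that no other partition of the cluster is already running.
# /sbin/gabconfig -c -x
```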
Use the hastop command to stop the Veritas Cluster Server (VCS) engine (the High Availability Daemon, HAD) and related processes on all the nodes, on the local node, or on a specific node of the cluster. You can customize the behavior of the hastop command by configuring the EngineShutdown attribute for the cluster. The value of the EngineShutdown attribute, a cluster attribute, specifies how the engine proceeds when you issue the hastop command. The hastop command is available after the VRTSvcs package is installed on a cluster node.

The following links provide more information on using the hastop command:
- Stopping the VCS engine and related processes
- About controlling the hastop behavior by using the EngineShutdown attribute
- Additional considerations for stopping VCS
- Veritas Cluster Server command line reference

You can use the hastop command with the following user-defined service group attributes:
- Evacuate attribute: allows you to issue hastop -local -evacuate so that VCS automatically fails over a service group to another node in the cluster.
- SysDownPolicy attribute: determines whether a service group is autodisabled when the system is down, and whether the service group is taken offline when the system is rebooted or shut down gracefully.

For more information on the Evacuate and SysDownPolicy attributes, see:
- Service group attributes of Veritas Cluster Server

For more information on using the hastop command to stop the VCS engine and related processes, see:
- Veritas Cluster Server 6.0.1 Administrator's Guide
- hastop (1M) manual pages: AIX, HP-UX, Linux, Solaris

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.

Using third party Multipathing with VCS LVM agents
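The EngineShutdown values and the behavior each one selects can be sketched as a lookup. The value names below follow the VCS Administrator's Guide, but the mapping function itself is a simplified illustration written for this article, not VCS source code.

```shell
# Simplified sketch of how an EngineShutdown attribute value maps to hastop
# behavior. The function is illustrative only; the value names are from the
# VCS documentation, the implementation is not.
engine_shutdown_policy() {
  case "$1" in
    Enable)          echo "process all hastop commands" ;;
    Disable)         echo "reject all hastop commands" ;;
    DisableClusStop) echo "reject hastop -all; process other hastop commands" ;;
    PromptClusStop)  echo "prompt before hastop -all; process others" ;;
    PromptLocal)     echo "prompt before hastop -local; process others" ;;
    PromptAlways)    echo "prompt before any hastop command" ;;
    *)               echo "unknown EngineShutdown value: $1" ;;
  esac
}

engine_shutdown_policy Enable
engine_shutdown_policy DisableClusStop
```

On a live cluster the attribute would be set with the haclus command (for example, `haclus -modify EngineShutdown DisableClusStop`).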
The support for third-party multipathing with VCS LVM agents is mostly unclear in the documentation. With SF you can understand why third-party multipathing needs to be tested to be supported, as SF integrates at a low level with the storage and multipathing affects this. But VCS integrates with LVM at a high level, activating and deactivating the volume groups (equivalent to importing and deporting Veritas diskgroups), so it is at first unclear why any third-party multipathing software should not be supported, except perhaps O/S multipathing, which is tightly integrated with LVM, where the command to activate or deactivate the diskgroup MAY be different if multipathing is involved.

For AIX, the HCL specifically mentions VCS LVM agents with third-party multipathing:

"The VCS LVM agent supports the EMC PowerPath third-party driver on EMC's Symmetrix 8000 and DMX series arrays. The VCS LVM agent supports the HITACHI HDLM third-party driver on Hitachi USP/NSC/USPV/USPVM, 9900V series arrays."

but VCS LVM agents are not mentioned for Linux or HP-UX. They should be, even if only to say "no third-party multipathing is supported with VCS LVM agents".

In the Linux 5.1 bundled agents guide, it IS clear that no third-party multipathing is supported with VCS LVM agents:

"You cannot use the DiskReservation agent to reserve disks that have multiple paths. The LVMVolumeGroup and the LVMLogicalVolume agents can only be used with the DiskReservation agent; Symantec does not support the configuration of logical volumes on disks that have multiple paths."

However, it is not so clear with 6.0, which says:

"No fixed dependencies exist for LVMVolumeGroup Agent." and "When you create a volume group on disks with a single path, Symantec recommends that you use the DiskReservation agent."

So in 6.0 the DiskReservation agent is optional, not mandatory as in 5.1, but the bundled agents guide does not say why the agent was mandatory in 5.1, and it does not elaborate on why it is recommended in 6.0 - i.e. the 6.0 bundled agents guide does not explain the benefits of using the DiskReservation agent, or the issues you may encounter if you don't use it.

The 6.0 bundled agents guide says for the DiskReservation agent:

"In case of Veritas Dynamic Multi-Pathing, the LVMVolumeGroup and the LVMLogicalVolume agents can be used without the DiskReservation agent."

This says you can use Veritas Dynamic Multi-Pathing with LVM, but it doesn't explicitly say you can't use other multipathing software. And for the LVMVolumeGroup agent, the 6.0 bundled agents guide gives examples using multipathing, but it does not say whether this is Veritas Dynamic Multi-Pathing or third party. To me, as the examples say multipathing and NOT specifically Veritas DMP, this implies ALL multipathing is supported, but it is not clear.

So it seems that for 5.1 and before, on Linux and HP-UX (if you go by the HCL), disk multipathing was not supported at all with VCS LVM agents. But in the 10 years I worked as a consultant for Symantec, not one customer did NOT use disk multipathing - this is essential redundancy, and there is no point having LVM agents if they don't support disk multipathing, so I am not sure it can be correct that multipathing is not supported with VCS LVM agents. This is redeemed in part by the recent introduction of Veritas Dynamic Multi-Pathing as a separate product, which doesn't require SF and can be used on non-VxVM disks.

So can the support of third-party multipathing with VCS LVM agents in 5.1 and 6.0 be clarified on this forum, and the documents updated to make what is supported clearer.

Thanks
Mike
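For readers unfamiliar with the agents being discussed, a minimal main.cf fragment for the 6.0-style arrangement (LVMVolumeGroup with a Mount on top, and no DiskReservation resource) might look like the sketch below. All group, resource, volume group, and device names here are invented for illustration; attribute names should be checked against the Bundled Agents Reference Guide for your platform and release.

```
group lvm_sg (
    SystemList = { node1 = 0, node2 = 1 }
    )

    LVMVolumeGroup vg_res (
        VolumeGroup = appvg
        )

    Mount mnt_res (
        MountPoint = "/app"
        BlockDevice = "/dev/appvg/lvol1"
        FSType = ext3
        FsckOpt = "-y"
        )

    mnt_res requires vg_res
```

Whether such a configuration is supported on top of a given third-party multipathing stack is exactly the question this post raises.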