Concurrency violation
Hi Team,

This alert came in my environment on one of our AIX 6.1 servers, reporting a concurrency violation.

Subject: VCS Severe Error for Service Group sapgtsprd, Service group concurrency violation
Event Time: Wed Dec 3 06:46:54 EST 2014
Entity Name: sapgtsprd
Entity Type: Service Group
Entity Subtype: Failover
Entity State: Service group concurrency violation
Traps Origin: Veritas_Cluster_Server
System Name: mapibm625
Entities Container Name: GTS_Prod
Entities Container Type: VCS
==================================================================================
The engineA.log entries are:
2014/12/03 06:46:54 VCS INFO V-16-1-10299 Resource App_saposcol (Owner: Unspecified, Group: sapgtsprd) is online on mapibm625 (Not initiated by VCS)
2014/12/03 06:46:54 VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sapgtsprd
2014/12/03 06:46:54 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group sapgtsprd on all nodes
2014/12/03 06:46:55 VCS WARNING V-16-6-15034 (mapibm625) violation:-Offlining group sapgtsprd on system mapibm625
2014/12/03 06:46:55 VCS INFO V-16-1-50135 User root fired command: hagrp -offline sapgtsprd mapibm625 from localhost
2014/12/03 06:46:55 VCS NOTICE V-16-1-10167 Initiating manual offline of group sapgtsprd on system mapibm625
======================================================================================================
What is a concurrency violation in VCS? What steps should we take to resolve it? Kindly explain in detail.

Thanks,
Allaboutunix
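For reference: a concurrency violation means a failover service group has come fully or partially online on more than one system at the same time - here because a resource (App_saposcol) was started outside of VCS, as the "Not initiated by VCS" log line shows. VCS reacts by offlining the group on the offending node, which your log already records. A minimal recovery sketch, assuming mapibm625 is the node where the group came up unexpectedly:

# Confirm where the group is online or partially online
hastatus -sum
hagrp -state sapgtsprd

# Take the group offline on the system where it appeared unexpectedly
# (VCS usually initiates this itself via the violation trigger)
hagrp -offline sapgtsprd -sys mapibm625

# Clear any faulted resources left behind
hagrp -clear sapgtsprd -sys mapibm625

The lasting fix is procedural: start and stop application components such as saposcol only through VCS (hagrp -online / hagrp -offline), never directly at the OS level, so CurrentCount never rises above 1 for a failover group.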
SFHA Solutions 6.2 (AIX): Configuring IBM PowerVM LPAR guest for disaster recovery

An IBM PowerVM LPAR is configured for disaster recovery by replicating the boot disk with replication methods such as Hitachi TrueCopy or EMC SRDF. IBM also provides its own rootvg replication technologies, such as its duplicating and cloning utilities. The network configuration that the LPAR uses on the primary site may not be effective on the secondary site if the two sites are on different IP subnets. To apply different network configurations on the different sites, you need to make additional configuration changes to the LPAR resource. For more information about how to set up the LPAR guest (managed LPAR) for disaster recovery, refer to the section "Configuring IBM PowerVM LPAR guest for disaster recovery" in the Disaster Recovery Implementation Guide.

Symantec Storage Foundation and High Availability documentation is available on the SORT website.
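One way VCS expresses site-specific network settings is per-system localization of resource attributes in main.cf. The fragment below only illustrates that syntax - the resource name, system names, devices, and addresses are invented, and the exact procedure for the LPAR resource is in the guide referenced above:

IP dr_app_ip (
        Device @primlpar1 = en0
        Device @drlpar1 = en1
        Address @primlpar1 = "10.10.10.20"
        Address @drlpar1 = "10.20.10.20"
        NetMask = "255.255.255.0"
        )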
Root cause for resource fault in vcs 6.0

Hi Team,

We have an AIX server running VCS 6.0 on which a resource faulted and then came back online automatically. I am submitting some details below. We only know that there is something wrong with the database listener configuration, but we are unable to find the root cause of why the resource faulted and then came online on its own. Can you please help us understand this?

Before:
[root@cylibm004 /]# hastatus -sum
-- SYSTEM STATE
-- System State Frozen
A cylibm003 RUNNING 0
A cylibm004 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B ClusterService cylibm003 Y N ONLINE
B ClusterService cylibm004 Y N OFFLINE
B DB_GLSCENR5 cylibm003 Y N OFFLINE
B DB_GLSCENR5 cylibm004 Y N PARTIAL
B DB_GLSCYL cylibm003 Y N ONLINE
B DB_GLSCYL cylibm004 Y N OFFLINE
-- RESOURCES FAILED
-- Group Type Resource System
C DB_GLSCENR5 Netlsnr lsnr_glscyl cylibm004

After:
[root@cylibm004 /]# hastatus -sum
-- SYSTEM STATE
-- System State Frozen
A cylibm003 RUNNING 0
A cylibm004 RUNNING 0
-- GROUP STATE
-- Group System Probed AutoDisabled State
B ClusterService cylibm003 Y N ONLINE
B ClusterService cylibm004 Y N OFFLINE
B DB_GLSCENR5 cylibm003 Y N OFFLINE
B DB_GLSCENR5 cylibm004 Y N ONLINE
B DB_GLSCYL cylibm003 Y N ONLINE
B DB_GLSCYL cylibm004 Y N OFFLINE

-----Original Message-----
From: Notifier
Sent: Friday, November 07, 2014 2:15 PM
Subject: VCS Error for Resource lsnr_glscyl, Resource has faulted
Event Time: Fri Nov 7 14:15:06 CST 2014
Entity Name: lsnr_glscyl
Entity Type: Resource
Entity Subtype: Netlsnr
Entity State: Resource has faulted
Traps Origin: Veritas_Cluster_Server
System Name: cylibm004
Entities Container Name: DB_GLSCENR5
Entities Container Type: Service Group
Entities Owner: unknown
==============================================================================================================================
EngineA.log -
2014/11/07 14:14:54 VCS WARNING V-16-10011-8 (cylibm004) Netlsnr:lsnr_glscyl:LsnrTest.pl: File /oracle/.profile is not a valid text file
2014/11/07 14:14:55 VCS INFO V-16-20002-211 (cylibm004) Netlsnr:lsnr_glscyl:monitor:Monitor procedure /opt/VRTSagents/ha/bin/Netlsnr/LsnrTest.pl returned the output: Cannot get "LOGNAME" variable.
2014/11/07 14:14:55 VCS ERROR V-16-2-13067 (cylibm004) Agent is calling clean for resource(lsnr_glscyl) because the resource became OFFLINE unexpectedly, on its own.
2014/11/07 14:14:55 VCS NOTICE V-16-20002-42 (cylibm004) Netlsnr:lsnr_glscyl:clean:Listener(LISTENER_GLSCENR5) kill TERM 12845176
2014/11/07 14:15:06 VCS INFO V-16-2-13068 (cylibm004) Resource(lsnr_glscyl) - clean completed successfully.
2014/11/07 14:15:06 VCS WARNING V-16-20002-226 (cylibm004) Netlsnr:lsnr_glscyl:monitor:getargs for process tnslsnr failed with return code 0
2014/11/07 14:15:06 VCS INFO V-16-1-10307 Resource lsnr_glscyl (Owner: unknown, Group: DB_GLSCENR5) is offline on cylibm004 (Not initiated by VCS)
2014/11/07 14:15:40 VCS INFO V-16-1-50086 CPU usage on cylibm004 is 65%
2014/11/07 14:18:10 VCS INFO V-16-1-50086 CPU usage on cylibm004 is 64%
2014/11/07 14:21:40 VCS INFO V-16-1-50086 CPU usage on cylibm004 is 66%
2014/11/07 14:25:08 VCS INFO V-16-1-10299 Resource lsnr_glscyl (Owner: unknown, Group: DB_GLSCENR5) is online on cylibm004 (Not initiated by VCS)
2014/11/07 14:25:08 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group DB_GLSCENR5 on all nodes
2014/11/07 14:25:08 VCS NOTICE V-16-1-10447 Group DB_GLSCENR5 is online on system cylibm004
2014/11/07 14:27:10 VCS INFO V-16-1-50086 CPU usage on cylibm004 is 64%
2014/11/07 14:30:40 VCS INFO V-16-1-50086 CPU usage on cylibm004 is 65%
2014/11/07 14:31:10 VCS NOTICE V-16-1-50086 CPU usage on cylibm004 is 70%
===========================================================================================
Main.cf file (excerpt; the fragment begins mid-resource) -

        MountPoint = "/DB_GLSCENR5/oracle"
        BlockDevice = "/dev/GLSCENR5_Oracle"
        FSType = jfs2
        FsckOpt = "-y"
        )

Mount GLSCENR5_data3 (
        Critical = 0
        MountPoint = "/DB_GLSCENR5/data3"
        BlockDevice = "/dev/GLSCENR5_data3"
        FSType = jfs2
        FsckOpt = "-y"
        )

Netlsnr lsnr_glscyl (
        Critical = 0
        Owner = oracle
        Home = "/DB_GLSCENR5/oracle/product/11.2.0.3"
        TnsAdmin = "/var/opt/oracle"
        Listener = LISTENER_GLSCENR5
        EnvFile = "/oracle/.profile"
        )

Oracle ora_glscyl (
        Critical = 0
        Sid = glscenr5
        Owner = oracle
        Home = "/DB_GLSCENR5/oracle/product/11.2.0.3"
        EnvFile = "/oracle/.profile"
        )

Proxy DB_GLSCENR5_Proxy (
        TargetResName = csgnic
        )

DB_GLSCENR5_IP requires DB_GLSCENR5_Proxy
GLSCENR5_ARCH requires DB_GLS_CENR5_LVMVG
GLSCENR5_BACKUP requires DB_GLS_CENR5_LVMVG
GLSCENR5_DATA1 requires DB_GLS_CENR5_LVMVG
GLSCENR5_DATA2 requires DB_GLS_CENR5_LVMVG
GLSCENR5_ORACLE requires DB_GLS_CENR5_LVMVG
GLSCENR5_data3 requires DB_GLS_CENR5_LVMVG
lsnr_glscyl requires DB_GLSCENR5_IP
lsnr_glscyl requires ora_glscyl
ora_glscyl requires GLSCENR5_ARCH
ora_glscyl requires GLSCENR5_BACKUP
ora_glscyl requires GLSCENR5_DATA1
ora_glscyl requires GLSCENR5_DATA2
ora_glscyl requires GLSCENR5_ORACLE
ora_glscyl requires GLSCENR5_data3

// resource dependency tree
//
// group DB_GLSCENR5
// {
// Netlsnr lsnr_glscyl
//     {
//     Oracle ora_glscyl
//         {
//         Mount GLSCENR5_ORACLE
//             {
//             LVMVG DB_GLS_CENR5_LVMVG
//             }
//         Mount GLSCENR5_DATA2
//             {
//             LVMVG DB_GLS_CENR5_LVMVG
//             }
//         Mount GLSCENR5_DATA1
//             {
//             LVMVG DB_GLS_CENR5_LVMVG
//             }
//         Mount GLSCENR5_BACKUP
//             {
//             LVMVG DB_GLS_CENR5_LVMVG
//             }
//         Mount GLSCENR5_ARCH
//             {
//             LVMVG DB_GLS_CENR5_LVMVG
//             }
//         Mount GLSCENR5_data3
//             {
//             LVMVG DB_GLS_CENR5_LVMVG
//             }
//         }
//     IP DB_GLSCENR5_IP
//         {
//         Proxy DB_GLSCENR5_Proxy
//         }
//     }
// }

group DB_GLSCYL (
        SystemList = { cylibm003 = 0, cylibm004 = 1 }
        )

IP DB_GLSCYL_IP (
        Critical = 0
        Device = en2
        Address = "132.189.249.119"
        NetMask = "255.255.255.128"
        )

LVMVG DB_GLSCYL_LVMVG (
        VolumeGroup = DB_GLS_PRD
        MajorNumber @cylibm003 = 40
        MajorNumber @cylibm004 = 40
        )

Mount GLS_ARCH (
        Critical = 0
        MountPoint = "/DB_GLSCYL/arch"
        BlockDevice = "/dev/GLSCYL_Arch"
        FSType = jfs2
        FsckOpt = "-y"
        )

Mount GLS_BACKUP (
        Critical = 0
        MountPoint = "/DB_GLSCYL/backup"
        BlockDevice = "/dev/GLSCYL_Backup"
        FSType = jfs2
        FsckOpt = "-y"
        )
Mount GLS_DATA1 (
        Critical = 0
        MountPoint = "/DB_GLSCYL/data1"
        BlockDevice = "/dev/GLSCYL_Data1"
        FSType = jfs2
        FsckOpt = "-y"
        )

Mount GLS_DATA2 (
        Critical = 0
        MountPoint = "/DB_GLSCYL/data2"
        BlockDevice = "/dev/GLSCYL_Data2"
        FSType = jfs2
        FsckOpt = "-y"
        )

Mount GLS_ORACLE (
        Critical = 0
        MountPoint = "/DB_GLSCYL/oracle"
        BlockDevice = "/dev/GLSCYL_Oracle"
        FSType = jfs2
        FsckOpt = "-y"
        )

Netlsnr lsnr_dbglscyl (
        Critical = 0
        Owner = oracle
        Home = "/DB_GLSCYL/oracle/product/11.2.0.3"
        TnsAdmin = "/var/opt/oracle"
        Listener = LISTENER_GLSCYL
        EnvFile = "/oracle/.profile"
        )

Oracle ora_dbglscyl (
        Critical = 0
        Sid = glscyl
        Owner = oracle
        Home = "/DB_GLSCYL/oracle/product/11.2.0.3"
        EnvFile = "/oracle/.profile"
        )

Proxy GLSCYL_PROXY (
        TargetResName = csgnic
        )

DB_GLSCYL_IP requires GLSCYL_PROXY
GLS_ARCH requires DB_GLSCYL_LVMVG
GLS_BACKUP requires DB_GLSCYL_LVMVG
GLS_DATA1 requires DB_GLSCYL_LVMVG
GLS_DATA2 requires DB_GLSCYL_LVMVG
GLS_ORACLE requires DB_GLSCYL_LVMVG
lsnr_dbglscyl requires DB_GLSCYL_IP
lsnr_dbglscyl requires ora_dbglscyl
ora_dbglscyl requires GLS_ARCH
ora_dbglscyl requires GLS_BACKUP
ora_dbglscyl requires GLS_DATA1
ora_dbglscyl requires GLS_DATA2
ora_dbglscyl requires GLS_ORACLE

// resource dependency tree
//
// group DB_GLSCYL
// {
// Netlsnr lsnr_dbglscyl
//     {
//     Oracle ora_dbglscyl
//         {
//         Mount GLS_BACKUP
//             {
//             LVMVG DB_GLSCYL_LVMVG
//             }
//         Mount GLS_ARCH
//             {
//             LVMVG DB_GLSCYL_LVMVG
//             }
//         Mount GLS_ORACLE
//             {
//             LVMVG DB_GLSCYL_LVMVG
//             }
//         Mount GLS_DATA2
//             {
//             LVMVG DB_GLSCYL_LVMVG
//             }
//         Mount GLS_DATA1
//             {
//             LVMVG DB_GLSCYL_LVMVG
//             }
//         }
//     IP DB_GLSCYL_IP
//         {
//         Proxy GLSCYL_PROXY
//         }
//     }
// }
=========================================================
Thanks,
Allaboutunix
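The engine log above points at the EnvFile rather than the listener itself: the agent reports that /oracle/.profile "is not a valid text file" and that it cannot resolve the LOGNAME variable, so the monitor script misreads the listener state, the agent calls clean, and the listener later restarts. A sketch of generic checks (not from Symantec documentation) to verify that theory:

# Is the EnvFile plain text and shell-readable?
file /oracle/.profile
head /oracle/.profile

# Does sourcing it as the Oracle owner produce a sane environment?
su - oracle -c '. /oracle/.profile; echo LOGNAME=$LOGNAME'

# Check the listener once the environment looks clean
su - oracle -c 'lsnrctl status LISTENER_GLSCENR5'

If /oracle/.profile contains interactive prompts, csh-specific syntax, or non-text content, the Netlsnr monitor can fail intermittently even though the listener process itself is healthy, which would explain a fault that appears to clear on its own.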
vxdisk list shows Simple type disks as "error"

Hi,

I am facing an issue on AIX 7 with VxVM (fileset VRTSvxvm 6.0.300.0, Veritas Volume Manager) while using third-party multipathing software (Dynapath). The problem is that after "vxdisk scandisks", vxdisk list shows the foreign disk in the "error" state. Please suggest a way forward.

Steps taken:
- Created device nodes in /dev/vx/dmp and /dev/vx/rdmp
- Added the disk as a foreign device
- vxdisk scandisks
- vxdctl enable
- vxdisk list

bash-3.2# vxdisk list
DEVICE TYPE DISK GROUP STATUS
hdisk0 auto:LVM - - LVM
hdiskdpd0 simple - - error
hdisk1 auto:LVM - - LVM
hdisk3 auto:LVM - - LVM
hdisk4 auto:none - - online invalid
hdisk5 auto:LVM - - LVM

bash-3.2# vxdisk list hdiskdpd0
Device: hdiskdpd0
devicetag: hdiskdpd0
type: simple
flags: error private foreign
pubpaths: block=/dev/hdiskdpd0 char=/dev/rhdiskdpd0
guid: -
udid: INVALID
site: -
errno: Device path not valid
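For devices that DMP does not claim, VxVM expects the third-party paths to be registered through the Device Discovery Layer rather than by creating nodes under /dev/vx/dmp by hand. A sketch, assuming the Dynapath nodes really are /dev/hdiskdpd0 and /dev/rhdiskdpd0 (an "error" state commonly remains when the block and character paths are swapped or the disk carries no usable VxVM label):

# Register the third-party device with the Device Discovery Layer
vxddladm addforeign blockpath=/dev/hdiskdpd0 charpath=/dev/rhdiskdpd0

# Rebuild the device list and rescan
vxdctl enable
vxdisk scandisks

# Verify the registration and the resulting disk state
vxddladm listforeign
vxdisk list hdiskdpd0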
SFHA Solutions 6.0.1: About virtualization solutions for AIX environments

Virtualization technologies such as IBM PowerVM enable you to create isolated virtual environments, or logical partitions, on the same physical system. The virtualized computing environment can be abstracted from all physical devices, which lets you consolidate and centrally manage your workloads on a system for greater flexibility and efficiency. Veritas Storage Foundation and High Availability Solutions (SFHA Solutions) 6.0.1 supports AIX Virtual I/O Server (VIOS), logical partitions (LPARs), and the VIOS-based virtual environment.

For more information on VIOS, LPARs, and the VIOS-based virtual environment, see:
Introduction to AIX logical partition (LPAR) virtualization technology

SFHA Solutions provide the following functionality for LPARs:
Storage visibility
Storage management
Replication support
High availability
Disaster recovery

For information on supported configurations for SFHA Solutions in VIOS environments, see:
Supported configurations for Virtual I/O servers (VIOS) on AIX

Supported use cases for SFHA Solutions in the VIOS environment include:
Application management and availability
Live partition mobility
About migration from Physical to VIO environment
Boot device management

For implementation details for SFHA Virtualization Solutions, see:
Veritas Storage Foundation and High Availability Solutions Virtualization Guide for AIX

Veritas Storage Foundation and High Availability documentation for other releases can be found on the SORT website.
SFHA Solutions 6.0.1: Using the hastatus command in Veritas Cluster Server (VCS)

You can use the hastatus command to display changes to cluster objects such as resources, groups, and systems, and to monitor transitions on a cluster. You can also use it to verify the status of the cluster. The hastatus command functionality also applies to prior releases.

For information on using the hastatus command, see:
Querying status of service groups
Querying status of remote and local clusters
How the VCS engine (HAD) affects performance
VCS command line reference
Verifying the cluster
Verifying the status of nodes and service groups

For troubleshooting scenarios and solutions for using the hastatus command, see:
Service group is waiting for the resource to be brought online/taken offline
Agent not running

hastatus (1M) 6.0.1 manual pages:
AIX
HP-UX
Linux
Solaris

For more information on using the hastatus command, see:
Veritas Cluster Server Administrator's Guide
Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.
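For quick reference, the two most common invocations:

# One-shot snapshot of system states, group states, and faulted resources
hastatus -sum

# Continuous mode: keep printing cluster transitions as they occur
# (interrupt with Ctrl+C when done)
hastatus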
SFHA Solutions 6.0.1: About the GAB logging daemon (gablogd) in Veritas Cluster Server and other Veritas Storage Foundation and High Availability products

The Group Membership and Atomic Broadcast (GAB) logging daemon collects GAB-related logs during I/O fencing events or when the GAB message sequence fails. The daemon stores the data in a compact binary form. By default, the GAB logging daemon is enabled. You can tune the value of the gab_ibuf_count parameter (one of the GAB load-time parameters) to enable or disable the daemon and to set the buffer count. The default value of the parameter is 8.

Note: GAB services are used by all Veritas Storage Foundation and High Availability (SFHA) products.

For information on the GAB tunable parameters and tuning the gab_ibuf_count parameter, see:
About GAB tunable parameters
About GAB load-time or static tunable parameters

Using the gabread_ffdc utility to read the GAB binary log files
If GAB encounters a problem, First Failure Data Capture (FFDC) logs are generated and dumped by the gablogd daemon. For information on using the gabread_ffdc utility to read the GAB binary log files, see:
GAB message logging

Overriding the gab_ibuf_count parameter using the gabconfig -k option
The gab_ibuf_count parameter controls whether the GAB logging daemon is enabled or disabled. If you want to override the gab_ibuf_count control parameter, use the gabconfig -k option to disable logging to the GAB daemon while the cluster is up and running. However, to re-enable the parameter you must restart GAB and all its client processes. For information on using the gabconfig -k option, see:
gabconfig (1M) 6.0.1 manual pages:
AIX
Solaris

For more information on the GAB logging daemon, see:
Veritas Cluster Server Administrator's Guide
Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.

For more information on GAB seeding, see:
SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products
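To make the two mechanisms above concrete, a short sketch; the FFDC log path and file name are illustrative assumptions, so check your installation for the actual location:

# Disable logging to the GAB logging daemon on a running cluster
# (re-enabling requires restarting GAB and all its client processes)
gabconfig -k

# Read a binary FFDC log produced by gablogd
# (file name and path are examples only; the utility ships with the VRTSgab package)
gabread_ffdc /var/adm/gab_ffdc/gab_ffdc.0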
SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products

Group Membership and Atomic Broadcast (GAB) is a kernel component of Veritas Cluster Server (VCS) that provides globally-ordered messages that keep nodes synchronized. GAB maintains the cluster state information and the correct membership on the cluster. However, GAB needs another kernel component, Low Latency Transport (LLT), to send messages to the nodes and keep the cluster nodes connected.

How do GAB and LLT function together in a VCS cluster?
VCS uses GAB and LLT to share data among nodes over private networks. LLT is the transport protocol responsible for fast kernel-to-kernel communications. GAB carries the state of the cluster and the cluster configuration to all the nodes on the cluster. Together, these components provide the performance and reliability that VCS requires. In a cluster, nodes must share the groups, resources, and resource states; LLT and GAB are what let the nodes communicate.

For information on LLT, GAB, and private networks, see:
About LLT and GAB
About network channels for heartbeating

GAB seeding
The GAB seeding function ensures that a new cluster starts with an accurate membership count of the number of nodes in the cluster. It protects your cluster from a preexisting network partition at initial start-up. A preexisting network partition refers to a failure in the communication channels that occurs while the nodes are down and VCS cannot respond. When the nodes start, GAB seeding reduces the vulnerability to network partitioning, regardless of the cause of the failure. GAB services are used by all Veritas Storage Foundation and High Availability (SFHA) products.

For information about preexisting network partitions, and how seeding functions in VCS, see:
About preexisting network partitions
About VCS seeding

Enabling automatic seeding of GAB
If I/O fencing is configured in the enabled mode, you can edit the /etc/vxfenmode file to enable automatic seeding of GAB. If the cluster is stuck with a preexisting split-brain condition, I/O fencing allows automatic seeding of GAB. You can set the minimum number of nodes required to form a cluster for GAB to seed by configuring the control port seed and Quorum flag parameters in the /etc/gabtab file. Quorum is the number of nodes that need to join a cluster for GAB to complete seeding.
For information on configuring the autoseed_gab_timeout parameter in the /etc/vxfenmode file, see:
About I/O fencing configuration files

For information on configuring the control port seed and the Quorum flag parameters in GAB, see:
About GAB run-time or dynamic tunable parameters

For information on split-brain conditions, see:
About the Steward process: Split-brain in two-cluster global clusters
How I/O fencing works in different event scenarios
Example of a preexisting network partition (split-brain)

Role of GAB seeding in cluster membership
For information on how the nodes gain cluster membership, seeding a cluster, and manual seeding of a cluster, see:
About cluster membership
Initial joining of systems to cluster membership
Seeding a new cluster
Seeding a cluster using the GAB auto-seed parameter through I/O fencing
Manual seeding of a cluster

Troubleshooting issues that are related to GAB seeding and preexisting network partitions
For information on the issues that you may encounter when GAB seeds a cluster and preexisting network partitions, see:
Examining GAB seed membership
Manual GAB membership seeding
Waiting for cluster membership after VCS start-up
Summary of best practices for cluster communications
System panics to prevent potential data corruption
Fencing startup reports preexisting split-brain
Clearing preexisting split-brain condition
Recovering from a preexisting network partition (split-brain)
Example Scenario I – Recovering from a preexisting network partition
Example Scenario II – Recovering from a preexisting network partition
Example Scenario III – Recovering from a preexisting network partition

gabconfig (1M) 6.0.1 manual pages:
AIX
Solaris

For more information on seeding clusters to prevent preexisting network partitions, see:
Veritas Cluster Server Administrator's Guide
Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.
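To make the seeding mechanics concrete, here is what the relevant configuration typically looks like; the node count and timeout are example values, not recommendations:

# /etc/gabtab - start GAB and seed once two nodes are present
/sbin/gabconfig -c -n2

# /etc/vxfenmode - allow I/O fencing to auto-seed GAB after a delay,
# even if fewer than the configured nodes have joined (seconds; example)
autoseed_gab_timeout=15

# Manually seed a node stuck waiting for membership - only after you have
# confirmed there is no preexisting network partition (split-brain)
gabconfig -x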
SFHA Solutions 6.0.1: Using the hastop command to stop the VCS engine and related processes

Use the hastop command to stop the Veritas Cluster Server (VCS) engine (the High Availability Daemon, HAD) and related processes on all the nodes, on the local node, or on a specific node of the cluster. You can customize the behavior of the hastop command by configuring the EngineShutdown attribute for the cluster. The value of the EngineShutdown attribute, a cluster attribute, specifies how the engine is to proceed when you issue the hastop command. The hastop command is available after the VRTSvcs package is installed on a cluster node.

The following links provide more information on using the hastop command:
Stopping the VCS engine and related processes
About controlling the hastop behavior by using the EngineShutdown attribute
Additional considerations for stopping VCS
Veritas Cluster Server command line reference

You can use the hastop command with the following user-defined service group attributes:
Evacuate attribute - allows you to issue hastop -local -evacuate so that VCS automatically fails a service group over to another node on the cluster.
SysDownPolicy attribute - determines whether a service group is autodisabled when the system is down, and whether the service group is taken offline when the system is rebooted or shut down gracefully.

For more information on the Evacuate and SysDownPolicy attributes, see:
Service group attributes of Veritas Cluster Server

For more information on using the hastop command to stop the VCS engine and related processes, see:
Veritas Cluster Server 6.0.1 Administrator's Guide

hastop (1M) manual pages:
AIX
HP-UX
Linux
Solaris

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.
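The common forms of the command, for quick reference (the system name below is a placeholder):

# Stop HAD on every node, taking service groups offline
hastop -all

# Stop HAD cluster-wide but leave the applications running
hastop -all -force

# Stop HAD on the local node only, evacuating its service groups first
hastop -local -evacuate

# Stop HAD on one specific node
hastop -sys node01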
Using third party Multipathing with VCS LVM agents

Using third-party multipathing with the VCS LVM agents is mostly not clear in the documentation. With SF you can understand why third-party multipathing needs to be tested to be supported, as SF integrates with the storage and multipathing at a low level. But VCS integrates with LVM at a high level, activating and deactivating the volume groups (the equivalent of importing and deporting Veritas disk groups), so at first it is unclear why any third-party multipathing software should not be supported - except perhaps O/S multipathing, which is tightly integrated with LVM, where the command to activate or deactivate the disk group MAY be different if multipathing is involved.

For AIX, the HCL specifically mentions the VCS LVM agents with third-party multipathing:

The VCS LVM agent supports the EMC PowerPath third-party driver on EMC's Symmetrix 8000 and DMX series arrays.
The VCS LVM agent supports the HITACHI HDLM third-party driver on Hitachi USP/NSC/USPV/USPVM, 9900V series arrays.

But the VCS LVM agents are not mentioned for Linux or HP-UX. They should be, even if only to say "no third-party multipathing is supported with VCS LVM agents".

In the Linux 5.1 bundled agents guide, it IS clear that no third-party multipathing is supported with the VCS LVM agents:

You cannot use the DiskReservation agent to reserve disks that have multiple paths. The LVMVolumeGroup and the LVMLogicalVolume agents can only be used with the DiskReservation agent; Symantec does not support the configuration of logical volumes on disks that have multiple paths.

However, it is not so clear with 6.0, which says:

No fixed dependencies exist for LVMVolumeGroup Agent. When you create a volume group on disks with a single path, Symantec recommends that you use the DiskReservation agent.

So in 6.0 the DiskReservation agent is optional, not mandatory as in 5.1, but the bundled agents guide does not say why the DiskReservation agent was mandatory in 5.1, and it does not elaborate on why it is recommended in 6.0. That is, the 6.0 bundled agents guide does not explain the benefits of using the DiskReservation agent or the issues you may encounter if you don't use it.

The 6.0 bundled agents guide says for the DiskReservation agent:

In case of Veritas Dynamic Multi-Pathing, the LVMVolumeGroup and the LVMLogicalVolume agents can be used without the DiskReservation agent.

This says you can use Veritas Dynamic Multi-Pathing with LVM, but it doesn't explicitly say you can't use other multipathing software. For the LVMVolumeGroup agent, the 6.0 bundled agents guide gives examples using multipathing, but it does not say whether this is Veritas Dynamic Multi-Pathing or third-party. To me, because the examples say multipathing and NOT specifically Veritas DMP, this implies ALL multipathing is supported, but it is not clear.

So it seems that for 5.1 and before, on Linux and HP-UX (if you go by the HCL), no disk multipathing was supported with the VCS LVM agents. Yet in the 10 years I worked as a consultant at Symantec, not one customer did NOT use disk multipathing - it is essential redundancy, and there is no point having LVM agents if they don't support disk multipathing - so I am not sure it can be correct that multipathing is not supported with the VCS LVM agents. This is redeemed in part by the recent introduction of Veritas Dynamic Multi-Pathing as a separate product, which doesn't require SF and can be used on non-VxVM disks.
So can the support for third-party multipathing with the VCS LVM agents in 5.1 and 6.0 be clarified on this forum, and the documents updated to make what is supported clearer?

Thanks
Mike
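For readers who have not used these agents, a minimal Linux main.cf fragment showing how the resources under discussion are typically wired together; the names and mount point are invented, and nothing here asserts multipathing support one way or the other:

LVMVolumeGroup appvg (
        VolumeGroup = app_vg
        )

LVMLogicalVolume applv (
        LogicalVolume = app_lv
        VolumeGroup = app_vg
        )

Mount app_mnt (
        MountPoint = "/app"
        BlockDevice = "/dev/app_vg/app_lv"
        FSType = ext3
        FsckOpt = "-y"
        )

applv requires appvg
app_mnt requires applv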