vxdisk list showing errors on multiple disks, and I am unable to start the cluster on the slave node
Hello, if anybody has had the same experience and can help me, I would be very thankful. I am using Solaris 10 (x86, 141445-09) + EMC PowerPath (5.5.P01_b002) + VxVM (5.0,REV=04.15.2007.12.15) on a two-node cluster. This is a fileserver cluster. I added a couple of new LUNs, and when I tried to scan for the new disks with "vxdisk scandisks", the command hung; after that I was unable to do any VxVM job on that node - every command hangs. I rebooted the server in a maintenance window (before the reboot I switched all service groups to the 2nd node). After the reboot I am unable to join the cluster, with this reason:

    2014/04/13 01:04:48 VCS WARNING V-16-10001-1002 (filesvr1) CVMCluster:cvm_clus:online:CVMCluster start failed on this node.
    2014/04/13 01:04:49 VCS INFO V-16-2-13001 (filesvr1) Resource(cvm_clus): Output of the completed operation (online) ERROR:
    2014/04/13 01:04:49 VCS ERROR V-16-10001-1005 (filesvr1) CVMCluster:???:monitor:node - state: out of cluster reason: Cannot find disk on slave node: retry to add a node failed
    Apr 13 01:10:09 s_local@filesvr1 vxvm: vxconfigd: [ID 702911 daemon.warning] V-5-1-8222 slave: missing disk 1306358680.76.filesvr1
    Apr 13 01:10:09 s_local@filesvr1 vxvm: vxconfigd: [ID 702911 daemon.warning] V-5-1-7830 cannot find disk 1306358680.76.filesvr1
    Apr 13 01:10:09 s_local@filesvr1 vxvm: vxconfigd: [ID 702911 daemon.error] V-5-1-11092 cleanup_client: (Cannot find disk on slave node) 222

Here is the output from the 2nd node (working fine):

    Disk:       emcpower33s2
    type:       auto
    flags:      online ready private autoconfig shared autoimport imported
    guid:       {665c6838-1dd2-11b2-b1c1-00238b8a7c90}
    udid:       DGC%5FVRAID%5FCKM00111001420%5F6006016066902C00915931414A86E011
    site:       -
    diskid:     1306358680.76.filesvr1
    dgname:     fileimgdg
    dgid:       1254302839.50.filesvr1
    clusterid:  filesvrvcs
    info:       format=cdsdisk,privoffset=256,pubslice=2,privslice=2

And here is the output from the node where I see the problem:

    Device:     emcpower33s2
    devicetag:  emcpower33
    type:       auto
    flags:      error private autoconfig
    pubpaths:   block=/dev/vx/dmp/emcpower33s2 char=/dev/vx/rdmp/emcpower33s2
    guid:       {665c6838-1dd2-11b2-b1c1-00238b8a7c90}
    udid:       DGC%5FVRAID%5FCKM00111001420%5F6006016066902C00915931414A86E011
    site:       -
    errno:      Configuration request too large
    Multipathing information:
    numpaths:   1
    emcpower33c     state=enabled

Can anybody help me? I am not sure what "Configuration request too large" means.
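For anyone following along, the usual first checks in this situation look like the sketch below. The device names are taken from the output above, these are standard PowerPath and VxVM commands, and none of this is guaranteed to clear the "Configuration request too large" error; it only confirms what the failing node can and cannot see.

    # Check that PowerPath still presents the pseudo device and its paths
    powermt display dev=emcpower33

    # Ask vxconfigd to rebuild its view of the devices
    vxdctl enable

    # Re-scan for devices (this is the command that originally hung)
    vxdisk scandisks

    # Compare this node's view of the disk with the healthy node's view
    vxdisk list emcpower33s2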
adding new volumes to a DG that has an RVG under a VCS cluster

Hi, I have a VCS cluster with GCO and VVR. On each node of the cluster I have a DG with an associated RVG; this RVG contains 11 data volumes for an Oracle database. These volumes are getting full, so I am going to add new disks to the DG and create new volumes and mount points to be used by the Oracle database. My question: can I add the disks to the DG and the volumes to the RVG while the database is up and the replication is on? If the answer is no, please let me know what should be performed on the RVG and RLINK to add these volumes, and also what should be done on the database resource group so that it does not fail over. Thanks in advance.
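For reference, the online procedure generally looks roughly like the sketch below. The disk group, disk, volume, RVG, and service group names are hypothetical, the new volume must exist with the same name and size on both the primary and the secondary before it is added to the RVG, and the steps should be verified against the VVR administrator's guide for your version before touching a live replication.

    # Freeze the database service group so VCS takes no action while storage is changed
    hagrp -freeze oradb_grp

    # On both the primary and the secondary: add the new disk and create the new volume
    vxdg -g oradatadg adddisk oradata12=emc_disk12
    vxassist -g oradatadg make oravol12 100g

    # On the primary: associate the new volume with the RVG
    # (vradmin addvol also updates the secondary RVG and the RLINKs)
    vradmin -g oradatadg addvol ora_rvg oravol12

    # Unfreeze the service group once the new mount point resource is configured
    hagrp -unfreeze oradb_grp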
Using third party Multipathing with VCS LVM agents

Using third-party multipathing with the VCS LVM agents is mostly not clear in the documentation. With SF, you can understand why third-party multipathing needs to be tested to be supported, as SF integrates at a low level with the storage and multipathing affects this. But VCS integrates with LVM at a high level, activating and deactivating the volume groups (equivalent to importing and deporting Veritas disk groups), so at first it is unclear why any third-party multipathing software should not be supported - except perhaps O/S multipathing, which is tightly integrated with LVM, where the command to activate or deactivate the volume group MAY be different if multipathing is involved.

For AIX, the HCL specifically mentions the VCS LVM agents with third-party multipathing:

    The VCS LVM agent supports the EMC PowerPath third-party driver on EMC's Symmetrix 8000 and DMX series arrays.
    The VCS LVM agent supports the HITACHI HDLM third-party driver on Hitachi USP/NSC/USPV/USPVM, 9900V series arrays.

but the VCS LVM agents are not mentioned for Linux or HP-UX - they should be, even if only to say "no third-party multipathing is supported with the VCS LVM agents".

In the Linux 5.1 bundled agents guide, it IS clear that no third-party multipathing is supported with the VCS LVM agents:

    You cannot use the DiskReservation agent to reserve disks that have multiple paths. The LVMVolumeGroup and the LVMLogicalVolume agents can only be used with the DiskReservation agent; Symantec does not support the configuration of logical volumes on disks that have multiple paths.

However, it is not so clear with 6.0, which says:

    No fixed dependencies exist for LVMVolumeGroup Agent. When you create a volume group on disks with single path, Symantec recommends that you use the DiskReservation agent.

So in 6.0, DiskReservation is optional, not mandatory as in 5.1, but the bundled agents guide does not say why the DiskReservation agent was mandatory in 5.1, and it does not elaborate on why it is recommended in 6.0 - i.e. the 6.0 bundled agents guide does not explain the benefits of using the DiskReservation agent, or the issues you may encounter if you don't use it.

The 6.0 bundled agents guide says for the DiskReservation agent:

    In case of Veritas Dynamic Multi-Pathing, the LVMVolumeGroup and the LVMLogicalVolume agents can be used without the DiskReservation agent.

This says you can use Veritas Dynamic Multi-Pathing with LVM, but it doesn't explicitly say you can't use other multipathing software. For the LVMVolumeGroup agent, the 6.0 bundled agents guide gives examples using multipathing, but it does not say whether this is Veritas Dynamic Multi-Pathing or third-party. To me, since the examples say "multipathing" and NOT specifically Veritas DMP, this implies ALL multipathing is supported, but it is not clear.

So it seems that for 5.1 and before, for Linux and HP-UX (if you go by the HCL), disk multipathing was not supported at all with the VCS LVM agents. But in the 10 years I worked as a consultant at Symantec, not one customer did NOT use disk multipathing - this is essential redundancy, and there is no point having LVM agents if they don't support disk multipathing - so I am not sure it can be correct that multipathing is not supported with the VCS LVM agents. This is redeemed in part by the recent introduction of Veritas Dynamic Multi-Pathing as a separate product, which doesn't require SF and can be used on non-VxVM disks.
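For context, the kind of configuration in question is a plain LVM service group. On Linux it could be built with commands along the following lines; the group, resource, volume group, logical volume, mount point, and file system type are examples only, and this sketch deliberately leaves out the DiskReservation resource that 5.1 insists on - which is exactly the point at issue when the underlying disks sit behind a multipathing driver.

    haconf -makerw
    hagrp -add lvm_grp
    hagrp -modify lvm_grp SystemList node1 0 node2 1

    hares -add app_vg LVMVolumeGroup lvm_grp
    hares -modify app_vg VolumeGroup appvg

    hares -add app_lv LVMLogicalVolume lvm_grp
    hares -modify app_lv VolumeGroup appvg
    hares -modify app_lv LogicalVolume applv

    hares -add app_mnt Mount lvm_grp
    hares -modify app_mnt MountPoint "/app"
    hares -modify app_mnt BlockDevice "/dev/appvg/applv"
    hares -modify app_mnt FSType ext3
    hares -modify app_mnt FsckOpt "%-y"

    # Mount depends on the logical volume, which depends on the volume group
    hares -link app_lv app_vg
    hares -link app_mnt app_lv

    # Enable the resources so VCS will monitor and online them
    hares -modify app_vg Enabled 1
    hares -modify app_lv Enabled 1
    hares -modify app_mnt Enabled 1

    haconf -dump -makero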
So, can the support for third-party multipathing with the VCS LVM agents be clarified for 5.1 and 6.0 on this forum, and the documents updated to make what is supported clearer?

Thanks, Mike
SFW 6.1: Support for Cluster Volume Manager (CVM)

Cluster Volume Manager (CVM) is a new feature introduced in Symantec Storage Foundation for Windows (SFW) 6.1. CVM is a new way to manage storage in a clustered environment. With CVM, failover capabilities are available at the volume level. Volumes under CVM allow exclusive write access across multiple nodes of a cluster.

In a Microsoft Failover Clustering environment, you can create clustered storage out of shared disks, which lets you share volume configurations and enable fast failover support at the volume level. Each node recognizes the same logical volume layout and, more importantly, the same state of all volume resources. Each node has the same logical view of the disk configuration as well as any changes to this view.

Note: CVM (and related cluster-shared disk groups) is supported only in a Microsoft Hyper-V environment. It is not supported for a physical environment.

CVM is based on a "Master and Slave" architecture pattern. One node of the cluster acts as a Master, while the rest of the nodes are Slaves. The Master node maintains the configuration information. The Master node uses Global Atomic Broadcast (GAB) and Low Latency Transport (LLT) to transport its configuration data. Each time a Master node fails, a new Master node is selected from the surviving nodes.

With CVM, storage services provided on a per virtual machine (VM) basis for Hyper-V virtual machines protect VM data from single LUN/array failures, helping maintain availability of the critical VM data.

CVM helps you achieve the following:

- Live migration of Hyper-V virtual machines, which is supported with the following:
  - Virtual Hard Disks (VHDs) of a virtual machine lying on one or more SFW volumes
  - Coexistence with Cluster Shared Volumes (CSV)
  - Mapping of one cluster-shared volume to one virtual machine only
- Seamless migration between arrays:
  - Migration of volumes (hosting VHDs) from any array to another array
  - Easy administration using the Storage Migration Wizard
  - Moving of the selected virtual machines' storage to new target LUNs
  - Copying of only those NTFS blocks that contain user data, using SmartMove
- Availability of all the volume management functionality

The following are the main features supported in CVM:

- New cluster-shared disk group (CSDG) and cluster-shared volumes
- Disk group accessibility from multiple nodes in a cluster, where volumes remain exclusively accessible from only one node in the cluster
- Failover at a volume level
- All the SFW storage management features, such as:
  - SmartIO
  - Thin provisioning and storage reclamation
  - Symantec Dynamic Multi-Pathing for Windows (DMPW)
  - Site-aware allocation using the site-aware read policy
  - Storage migration
  - Standard features for fault tolerance: mirroring across arrays, hot relocation, dirty region logging (DRL), and dynamic relayout
- Microsoft Failover Clustering integrated I/O fencing
- New Volume Manager Shared Volume resource for Microsoft failover cluster
- New GUI elements in VEA related to the new disk group and volume

CVM does not support:

- Active/Passive (A/P) arrays
- Storage migration on volumes that are offline in the cluster
- Volume replication on CVM volumes using Symantec Storage Foundation Volume Replicator

For information about configuring a CVM cluster, refer to the quick start guide at: www.symantec.com/docs/DOC8119

The Storage Foundation for Windows documentation for other releases and platforms can be found on the SORT website.
SFHA Solutions 6.0.1: Using the hastatus command in Veritas Cluster Server (VCS)

You can use the hastatus command to display the changes to cluster objects such as resource, group, and system and to monitor transitions on a cluster. You can also verify the status of the cluster. The hastatus command functionality is also applicable to prior releases.

For information on using the hastatus command, see:

- Querying status of service groups
- Querying status of remote and local clusters
- How the VCS engine (HAD) affects performance
- VCS command line reference
- Verifying the cluster
- Verifying the status of nodes and service groups

For troubleshooting scenarios and solutions for using the hastatus command, see:

- Service group is waiting for the resource to be brought online/taken offline
- Agent not running

hastatus (1M) 6.0.1 manual pages:

- AIX
- HP-UX
- Linux
- Solaris

For more information on using the hastatus command, see:

- Veritas Cluster Server Administrator's Guide
- Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.
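As a quick illustration of the commands these topics cover, the most common invocations are shown below; the service group name is hypothetical.

    # One-shot summary of systems, service groups, and resources in the cluster
    hastatus -sum

    # Continuously display status changes for a single service group and its resources
    hastatus -group oradb_grp

    # Continuously display all cluster transitions (interrupt with Ctrl+C)
    hastatus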
SFHA Solutions 6.0.1: Understanding single-node VCS clusters and the single-node mode

You can install Veritas Cluster Server (VCS) on a single system to configure a single-node VCS cluster. You can use VCS single-node clusters for basic application administration tasks as well as application high availability-related tasks.

For application administration, you can use Veritas Operations Manager (VOM) as a single console to manage a wide range of applications running across a variety of operating systems in your datacenter. You can perform administrative actions such as gracefully starting and stopping applications. Additionally, these tasks do not require you to undergo any application-specific or operating system-specific training.

For application high availability, you can configure application restart and system reboot as fault management remedies in single-node VCS clusters. Symantec ApplicationHA leverages VCS clustering capabilities in virtualization environments. With Symantec ApplicationHA, you can configure application restart, system reboot, and virtual system failovers as fault management remedies.

The single-node mode is a different concept from the single-node cluster. The term single-node mode or one-node mode refers to the "-onenode" option that you can specify in the hastart VCS command to start VCS on a node without any communication links to other VCS nodes. You can invoke a VCS node in single-node mode irrespective of whether the node is part of a single-node cluster or a multi-node cluster.

In the single-node mode, the VCS policy engine, also known as the High Availability Daemon (HAD), does not communicate with the Global Atomic Broadcast (GAB) module. The node therefore cannot participate in application failover. You cannot form a multi-node cluster by adding a single-node cluster that is in single-node mode to another node. To create a multi-node cluster using single-node clusters, you must first ensure that you unconfigure the single-node mode, and configure the LLT/GAB modules.

Common use cases of single-node clusters include:

- Enabling datacenter administrators to start and stop a large variety of configured applications from the VOM management console, without the need for specialized training in the applications or the operating systems on which the applications are loaded.
- Hosting the Coordination Point or CP Server of the VCS fencing (VxFEN) module on a single-node cluster.
- Hosting an application on a single-node cluster at the disaster recovery site (remote site), to economize on hardware in a Global Cluster Option (GCO) setup. In this case you cannot configure local application failover, but you can fail the application back over to the protected site.
- Creating a single-node cluster as a first step to creating a multi-node cluster. Ensure that you do not configure VCS in single-node mode for such a cluster.

For more information on installing a single-node VCS cluster and adding it to other clusters, see:

- Creating a single-node cluster using the installer program
- Creating a single-node cluster manually
- Adding a node to a single-node cluster
- Setting up a node to join the single-node cluster
- Adding nodes using the VCS installer
- Manually adding a node to a cluster

For more information on configuring VCS in single-node mode, see:

- Configuring VCS in single-node mode

VCS documentation for other releases and platforms, as well as Symantec ApplicationHA, can be found on the SORT website.
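As a brief illustration of the single-node mode described above, the basic commands are shown below; they are standard VCS commands, but check the module start variables for your platform before relying on them at boot time.

    # Start VCS on this node in single-node mode, without LLT/GAB communication links
    hastart -onenode

    # Confirm that HAD is running and the local system shows as RUNNING
    hastatus -sum

    # Stop VCS on the local node
    hastop -local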
SFHA Solutions 6.1: New Virtual Business Services features

The following Virtual Business Services (VBS) features are available in the 6.1 release:

- VBS can remain operational in spite of a tier failure
- Ability to run custom script on service groups
- VBS status tracking

This article provides a brief overview of the new features. You can access the Virtual Business Service-Availability User's Guide for additional information on the features and the steps to implement the new features.

VBS can remain operational in spite of a tier failure

Before the VBS 6.1 release, the VBS start and stop operations did not complete if any of its tiers had failed. This feature allows you to proceed with the operation in spite of the failed tiers in the VBS.

Ability to run custom script on service groups

This feature enables you to run a customized script that performs the required actions on a parent tier when a child tier recovers. This allows the parent tier to run without any interruptions while the child tier recovers and its dependency is reestablished seamlessly. You can configure the custom script when you configure service group dependencies in a VBS. To configure a custom script, see:

- Custom script execution
- Configuring custom script execution for soft dependencies
- Configuring dependencies for a virtual business service

VBS operations status tracking

This feature makes the VBS operations more transparent and easier to track from the command line. You can track the status and details of operations (tasks) performed on virtual business services and the corresponding actions taken on constituent tiers. This is especially useful in virtual business services with a large number of tiers and dependencies. For more information on VBS status tracking, see:

- Tracking VBS operations
- Tracking information about tasks performed on a VBS
- Tracking information about tier-level sub-tasks performed as a part of a VBS task
- Tracking step-by-step progress of a VBS task

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
SFHA Solutions 6.1: Migrating from EMC PowerPath to DMP

You can use Dynamic Multi-Pathing (DMP) instead of third-party drivers for advanced storage management. To migrate from EMC PowerPath to DMP, you must remove devices from EMC PowerPath control and enable DMP on the devices. The migration causes application downtime on the host, as you must stop any applications running on the host that are using EMC PowerPath devices, and any Symantec Cluster Server (VCS) services that are running.

For information on migrating to DMP and supported migration paths, see:

- Setting up DMP to manage native devices (Linux)

For operating system specific commands to migrate from EMC PowerPath to DMP, see:

- Migrating to DMP from EMC PowerPath (AIX)
- Migrating to DMP from EMC PowerPath (Linux)
- Migrating to DMP from EMC PowerPath (Solaris)

To migrate devices from EMC PowerPath to DMP:

1. Stop the applications that use the PowerPath meta-devices. In a Symantec Cluster Server (VCS) environment, stop the VCS service group of the application, which will stop the application.
2. Unmount any file systems that use the volume group on the PowerPath device.
3. On Linux and AIX, stop the logical volume manager (LVM) volume groups that use the PowerPath device. On Solaris, export the ZFS pools that use the PowerPath device.
4. On AIX, if the root volume group (rootvg) is under PowerPath control, migrate the rootvg to DMP. For more information, see: Migrating a SAN root disk from EMC PowerPath to DMP control.
5. Remove the disk access names for the PowerPath devices from VxVM.
6. Take the device out of PowerPath control.
7. Verify that the PowerPath device has been removed from PowerPath control.
8. Run a device scan to bring the devices under DMP control.
9. Import the VxVM disk group(s).
10. Turn on the DMP support for the LVM volume group.
11. Mount the file systems.
12. Restart the applications.

For information on migrating to DMP on Storage Foundation for Windows, see the chapter titled "Migrating from EMC PowerPath to Veritas Dynamic Multi-Pathing for Windows" in the 6.0.1 Veritas Dynamic Multi-Pathing for Windows Installation and Upgrade Guide.

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
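As a sketch only: on a Linux host with a native LVM volume group, the numbered steps above might translate into something like the commands below. The service group, volume group, mount point, and device names are hypothetical; steps 4 and 9 do not apply to this LVM-on-Linux example; and the exact powermt syntax for releasing devices differs between PowerPath versions, so confirm it with powermt help and the migration topics for your platform before running anything.

    # Steps 1-3: stop the application (or its VCS service group), unmount, deactivate the VG
    hagrp -offline app_grp -sys node1
    umount /app
    vgchange -a n appvg

    # Step 5: remove the PowerPath disk access names from VxVM
    vxdisk rm emcpower10s2

    # Steps 6-7: take the device out of PowerPath control and verify
    powermt unmanage dev=emcpower10
    powermt display dev=all

    # Step 8: rescan so DMP claims the devices
    vxdisk scandisks

    # Step 10: turn on DMP support for native (LVM) devices
    vxdmpadm settune dmp_native_support=on

    # Steps 11-12: reactivate the VG, mount, and restart the application
    vgchange -a y appvg
    mount /dev/appvg/applv /app
    hagrp -online app_grp -sys node1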
SFHA Solutions 6.0.1: About the Veritas Cluster Server (VCS) startup sequence

Communication among VCS components

When you install VCS, user-space components and kernel-space components are installed on a system. The VCS engine, also known as the high availability daemon (HAD), exists in the user space. The HA daemon contains the decision logic for the cluster and maintains a view of the cluster. The VCS engine on each system in the cluster maintains a synchronized view of the cluster. For example, when you take a resource offline, or bring a system from the cluster online, VCS on each system updates the view of the cluster.

The kernel-space components consist of the Group Atomic and Broadcast (GAB) and Low Latency Transport (LLT) modules. Each system that has the VCS engine installed on it communicates through GAB and LLT. GAB maintains the cluster membership and cluster communications. LLT maintains the traffic on the network and communicates heartbeat signal information of each system to GAB.

- About Group Membership Services and Atomic Broadcast (GAB)
- About Low Latency Transport (LLT)

About the VCS startup sequence

The start and stop variables for the Asynchronous Monitoring Framework (AMF), LLT, GAB, I/O fencing (VxFEN), and VCS engine modules define the default behavior of these modules during a system restart or a system shutdown. For a clean VCS startup or shutdown, you must either enable or disable the startup and shutdown modes for all these modules.

VCS startup depends on the kernel-space modules and other user-space modules starting in a specific order. The VCS startup sequence is as follows:

1. LLT
2. GAB
3. I/O fencing
4. AMF
5. VCS

For more information on setting the start and stop environment variables, VCS modules, and starting and stopping VCS, see:

- Environment variables to start and stop VCS modules
- About the I/O fencing module
- About the IMF notification module
- About the high availability daemon (HAD)
- Administering the AMF kernel driver
- Starting VCS
- Stopping VCS
- Stopping the VCS engine and related processes

In a single-node cluster, you can disable the start and stop environment variables for LLT, GAB, and VxFEN, if you have not configured these kernel modules. If you disable LLT and GAB, set the ONENODE variable to Yes in the /etc/default/vcs file.

The following topics provide information on troubleshooting startup issues:

- VCS:10622 local configuration missing
- VCS:10623 local configuration invalid
- VCS:11032 registration failed. Exiting
- Waiting for cluster membership
- Enabling debug logs for the VCS engine
- LLT startup script displays errors
- Fencing startup reports preexisting split-brain
- Issues during fencing startup on VCS cluster nodes set up for server-based fencing

VCS documentation for other releases and platforms can be found on the SORT website.
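As a hedged illustration of where those start and stop variables live: on Solaris and AIX they are kept in per-module files under /etc/default, and on Linux the equivalent files are under /etc/sysconfig. The exact file contents vary by release, so check the files installed on your system and the "Environment variables to start and stop VCS modules" topic referenced above.

    # Show the start/stop variables for each module (Solaris/AIX paths shown)
    grep -E 'START|STOP' /etc/default/llt /etc/default/gab /etc/default/vxfen \
        /etc/default/amf /etc/default/vcs

    # A typical "start everything at boot" setting looks like:
    #   LLT_START=1, GAB_START=1, VXFEN_START=1, AMF_START=1, VCS_START=1
    # For a single-node cluster without LLT/GAB configured, the corresponding
    # *_START variables can be set to 0 and ONENODE=yes set in /etc/default/vcs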
SFHA Solutions 6.2 (AIX and Solaris): Share local storage across the network using Flexible Storage Sharing

Cluster File System (CFS) 6.2 brings the Flexible Storage Sharing (FSS) feature to Solaris and AIX environments, enabling you to share Direct Attached Storage (DAS) across nodes in the cluster to run in SAN-free or hybrid modes. FSS takes advantage of high speed interconnects to allow shared access to local storage, enabling you to create logical volumes in both shared and shared-nothing storage configurations, to create a high-performance, highly available shared namespace. With FSS, enterprises can use software to provide data redundancy, high availability, and disaster recovery capabilities, without requiring physically shared storage.

For more information about FSS, see:

- Flexible Storage Sharing use cases
- Limitations of Flexible Storage Sharing
- Optimizing storage with Flexible Storage Sharing

Installing Storage Foundation Cluster File System High Availability (SFCFSHA) or Storage Foundation for Oracle RAC (SF Oracle RAC) automatically enables the FSS feature. No additional installation steps are required. The fencing coordination points can either be SCSI-3 PR capable shared storage or CP servers.

For information on administering FSS, see:

- Administering Flexible Storage Sharing
- About Flexible Storage Sharing disk support
- About the volume layout for Flexible Storage Sharing disk groups
- Setting the host prefix
- Exporting a disk for Flexible Storage Sharing
- Setting the Flexible Storage Sharing attribute on a disk group
- Using the host disk class and allocating storage
- Administering mirrored volumes using vxassist
- Displaying exported disks and network shared disk group

For more information on Flexible Storage Sharing, see the following related Symantec Connect articles:

- Flexible Storage Sharing: DAS Cluster Demo
- Demo: Adding Compute Nodes with Flexible Storage Sharing
- Clustered NFS on DAS Storage
- Remove the Rust: Unlock DAS and go SAN-Free
- High Availability and Performance Oracle Configuration with Flexible Storage Sharing in a SAN-Free Environment using Intel SSDs
- Commoditizing High Availability and Storage using Flexible Storage Sharing
- Growing my Commoditized Storage and HA Environment with an Extra Node
- Building Application and Data Availability without SAN
- Veritas Operations Manager 6.1: Managing Flexible Storage Sharing
- Configure Flexible Storage Sharing using Veritas Operations Manager 6.1

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
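As a rough sketch of the administration topics listed above, the FSS workflow on a two-node SFCFSHA cluster looks something like the commands below. The host, disk, disk group, and volume names are hypothetical, and the exact option syntax (in particular the host prefix, fss attribute, and host disk class forms) should be verified against the 6.2 Administrator's Guide before use.

    # Set a host prefix used to generate unique device names for exported local disks
    vxdctl set hostprefix=node1

    # Export a local (DAS) disk so the other cluster nodes can see it over the network
    vxdisk export disk_1

    # Create a shared disk group, set the FSS attribute, and mirror a volume across hosts
    vxdg -s init fssdg disk_1
    vxdg -g fssdg set fss=on
    vxassist -g fssdg make fssvol 100g nmirror=2 host:node1 host:node2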