Detaching VMDK files on a VMware VM
When an application failover happens in a VMware guest environment, VCS is responsible for failing over the application to another VM/VCS node on a different ESX host. In a scenario where the ESX/ESXi host itself faults, the VCS agents begin to fail over the application to the failover target system that resides on another host. The VMwareDisks agent communicates with the new ESX/ESXi host and initiates a disk detach operation on the faulted virtual machine. The agent then attaches the disk to the new failover target virtual machine.

In this scenario, how is stale I/O from the failing guest/ESX host avoided? Are we at the mercy of VMware to take care of it? With SCSI-3 PR, this was the main problem that was solved. Moreover, in such scenarios even a graceful online detach wouldn't have gone through. I didn't find any references on the VMware discussion forums either. My customer wants to know about this before he can deploy the application.

Thanks,
Raf
‘vxdmppr’ utility information

Hello,

With VxVM, we get the ‘vxdmppr’ utility, which performs SCSI-3 PR operations on disks, similar to sg_persist on Linux. But we don't find much documentation around this utility. In one of the blogs, we saw that it is an unsupported utility. Can someone throw light on it? Has someone used it in the past? Or does anyone know how this utility is used within VxVM? How do we know whether it is supported or not?

Rafiq
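For context on what such SCSI-3 PR operations look like, here is a minimal sketch of the sg_persist commands the post compares against, assuming a Linux host; the device path /dev/sdc is an example, not taken from the post:

    # Read the keys registered on a disk (SCSI-3 PR READ KEYS)
    sg_persist --in --read-keys --device=/dev/sdc

    # Read the current reservation and its type, if one is held
    sg_persist --in --read-reservation --device=/dev/sdc

    # Report the persistent reservation capabilities of the device
    sg_persist --in --report-capabilities --device=/dev/sdc

Whether vxdmppr exposes equivalent operations, and in what form, is exactly the undocumented part the post asks about.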
About the VCS behaviour on faulted resources

Hi Team,

Because ManageFaults is ALL by default, when a resource faults, VCS calls the Clean entry point. What is the task of the Clean entry point for the resource? As far as I know, when VCS declares the resource as faulted, VCS fails the service group over, depending on the Critical attribute of the resource and the AutoFailOver attribute. The service group stays faulted on the first node, so I need to clear the fault manually to be able to bring the service group online on that node again. So what is the effect of the Clean entry point if the resource faults?

If the service group is faulted on both the primary and the secondary node, as far as I know it will not be failed over; it stays faulted on both nodes until the faults are cleared manually. Is there any way to automate the failover while the service group is faulted on both nodes?

In fact, I would like to understand the role of the Clean entry point for a resource when it faults, because I am always clearing the resource's fault manually. Is it run before VCS declares the resource faulted? Please explain.
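For reference, the manual fault clearing and the attributes mentioned above map to standard VCS commands; a minimal sketch, where the group name appsg, resource name app_res, and node name node01 are assumptions:

    # Clear the FAULTED state of a service group on one node
    # so it can be brought online there again
    hagrp -clear appsg -sys node01

    # Clear a single faulted resource instead of the whole group
    hares -clear app_res -sys node01

    # Inspect the attributes the post refers to
    hagrp -display appsg -attribute ManageFaults
    hagrp -display appsg -attribute AutoFailOver
    hares -display app_res -attribute Critical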
Veritas InfoScale 7.0: Introduction to the vxdbd daemon

The Storage Foundation for Databases (SFDB) commands use the vxdbd daemon to run privileged commands. For example, a database administrator is allowed to run commands for creating snapshots using the SFDB scripts. In addition, SFDB uses the daemon to establish communication between the host and the SFDB repository that is deployed on a different host.

To run the vxdbd daemon in the authenticated mode, log in as a superuser (root) and authenticate the daemon. After the authentication process is successful, the SFDB tools communicate with the daemon through the authenticated user. For more information, see:

- Configuring vxdbd for SFDB tools authentication
- Adding nodes to a cluster that is using authentication for SFDB tools
- Authorizing users to run SFDB commands

Generally, the vxdbd daemon is configured to automatically start when the system does. However, if the daemon fails to start or is unresponsive, you can manually start and stop vxdbd using the /opt/VRTS/bin/sfae_config script. This script can also determine the status of the daemon. For more information on using the script, see:

- Starting and stopping vxdbd

The vxdbd daemon uses an insignificant amount of your resources to perform its routine operations. However, you can restrict the resource usage by setting the MAX_CONNECTIONS and MAX_REQUEST_SIZE parameters. Before you set these parameter values, refer to the documentation. Lower values for these parameters may cause the SFDB commands to fail. For more information, see:

- Limiting vxdbd resource usage

For more information on the vxdbd daemon, see:

- Configuring listening port for the vxdbd daemon
- Configuring encryption ciphers for vxdbd

If the vxdbd daemon fails to run, perform the troubleshooting instructions available at:

- Troubleshooting vxdbd

For detailed information, see the Veritas InfoScale™ 7.0 Storage and Availability Management for Oracle Databases guide. Veritas InfoScale documentation can be found on the SORT website.
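As a quick reference for the start, stop, and status operations described above, a minimal sketch; the standard /opt/VRTS/bin install path is taken from the article, while the exact status/start/stop keywords are an assumption based on the SFDB documentation:

    # Check whether vxdbd is running
    /opt/VRTS/bin/sfae_config status

    # Stop and restart the daemon if it is unresponsive
    /opt/VRTS/bin/sfae_config stop
    /opt/VRTS/bin/sfae_config start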
Veritas InfoScale 7.0: Configuring I/O fencing

I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split-brain condition. A split-brain condition occurs in a partitioned cluster when one side of the partition thinks the other side is down and takes over its resources.

When you install Veritas InfoScale Enterprise, the installer installs the I/O fencing driver, which is part of the VRTSvxfen package. After you install and configure the product, you must configure I/O fencing so that it can protect your data on shared disks. You can configure disk-based I/O fencing, server-based I/O fencing, or majority-based I/O fencing.

Before you configure I/O fencing, make sure that you meet the I/O fencing requirements. After you meet the requirements, you can refer to About planning to configure I/O fencing to perform the preparatory tasks and then configure I/O fencing.

For more details about I/O fencing configuration, see: Cluster Server Configuration and Upgrade Guide

Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
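Once fencing is configured, its mode and membership can be verified from any cluster node; a minimal sketch of the standard checks, with the vxfenmode values shown as typical examples for a disk-based (SCSI-3) setup:

    # Show the current fencing mode and cluster membership
    vxfenadm -d

    # A disk-based configuration in /etc/vxfenmode typically contains:
    #   vxfen_mode=scsi3
    #   scsi3_disk_policy=dmp
    cat /etc/vxfenmode

    # For disk-based fencing, the coordinator disk group is named here
    cat /etc/vxfendg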
SFW 6.1: Support for Cluster Volume Manager (CVM)

Cluster Volume Manager (CVM) is a new feature introduced in Symantec Storage Foundation for Windows (SFW) 6.1. CVM is a new way to manage storage in a clustered environment. With CVM, failover capabilities are available at the volume level. Volumes under CVM allow exclusive write access across multiple nodes of a cluster.

In a Microsoft Failover Clustering environment, you can create clustered storage out of shared disks, which lets you share volume configurations and enables fast failover support at the volume level. Each node recognizes the same logical volume layout and, more importantly, the same state of all volume resources. Each node has the same logical view of the disk configuration as well as any changes to this view.

Note: CVM (and related cluster-shared disk groups) is supported only in a Microsoft Hyper-V environment. It is not supported for a physical environment.

CVM is based on a "Master and Slave" architecture pattern. One node of the cluster acts as a Master, while the rest of the nodes are Slaves. The Master node maintains the configuration information. The Master node uses Global Atomic Broadcast (GAB) and Low Latency Transport (LLT) to transport its configuration data. Each time a Master node fails, a new Master node is selected from the surviving nodes.

With CVM, storage services on a per virtual machine (VM) basis for Hyper-V virtual machines protect VM data from single LUN/array failures, helping maintain availability of the critical VM data.

CVM helps you achieve the following:

- Live migration of Hyper-V virtual machines, which is supported with:
  - Virtual Hard Disks (VHDs) of a virtual machine lying on one or more SFW volumes
  - Coexistence with Cluster Shared Volumes (CSV)
  - Mapping of one cluster-shared volume to one virtual machine only
- Seamless migration between arrays:
  - Migration of volumes (hosting VHDs) from any array to another array
  - Easy administration using the Storage Migration Wizard
  - Moving of the selected virtual machines' storage to new target LUNs
  - Copying of only those NTFS blocks that contain user data, using SmartMove
- Availability of all the volume management functionality

The following are the main features supported in CVM:

- New cluster-shared disk group (CSDG) and cluster-shared volumes
- Disk group accessibility from multiple nodes in a cluster, where volumes remain exclusively accessible from only one node in the cluster
- Failover at a volume level
- All the SFW storage management features, such as:
  - SmartIO
  - Thin provisioning and storage reclamation
  - Symantec Dynamic Multi-Pathing for Windows (DMPW)
  - Site-aware allocation using the site-aware read policy
  - Storage migration
  - Standard features for fault tolerance: mirroring across arrays, hot relocation, dirty region logging (DRL), and dynamic relayout
- Microsoft Failover Clustering integrated I/O fencing
- New Volume Manager Shared Volume resource for Microsoft failover cluster
- New GUI elements in VEA related to the new disk group and volume

CVM does not support:

- Active/Passive (A/P) arrays
- Storage migration on volumes that are offline in the cluster
- Volume replication on CVM volumes using Symantec Storage Foundation Volume Replicator

For information about configuring a CVM cluster, refer to the quick start guide at: www.symantec.com/docs/DOC8119

The Storage Foundation for Windows documentation for other releases and platforms can be found on the SORT website.
SFHA Solutions 6.0.1: Using vxcdsconvert to make Veritas Volume Manager (VxVM) disks and disk groups portable between systems for Cross-platform Data Sharing (CDS)
The vxcdsconvert command makes disks and disk groups portable between systems running VxVM with the Cross-platform Data Sharing (CDS) feature. For more information on the CDS feature, see:

- Overview of the CDS feature
- Setting up your system to use CDS

You can resize CDS disks to larger than 1 TB. For more information, see:

- Dynamic LUN expansion

You can use the vxcdsconvert command to:

- Check whether disks and disk groups can be made portable (using the -A option).
- Convert disks and disk groups to be CDS-compatible.

For more information on the conversion procedure, see:

- Converting non-CDS disks to CDS disks
- Converting a non-CDS disk group to a CDS disk group

Note the following points:

- The vxcdsconvert command requires that disk groups be version 110 or greater.
- When a disk group is made portable, all disks within the disk group are also converted.
- Converting a disk group that contains RAID-5 volumes and logs fails if there is insufficient space in the disk group to create an additional temporary RAID-5 log.
- The default private region size increased from 512 KB to 1 MB in SFHA release 3.2, and from 1 MB to 32 MB in release 5.0.

vxcdsconvert(1M) 6.0.1 manual pages: AIX, HP-UX, Linux, Solaris

VxVM documentation for other platforms and releases can be found on the SORT website.
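To make the check-then-convert flow described above concrete, a minimal sketch; the disk group name mydg is an assumption, and the -A usage follows the check option named in the article:

    # Check whether the disk group and its disks can be made portable,
    # without actually converting anything
    vxcdsconvert -g mydg -A group

    # Convert the disk group and all of its disks to be CDS-compatible
    vxcdsconvert -g mydg group

    # Verify the disk group version first; vxcdsconvert requires 110 or greater
    vxdg list mydg | grep version
    # Upgrade the disk group version if needed
    vxdg upgrade mydg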
Veritas InfoScale 7.0: Introducing the Veritas InfoScale product suite

Veritas InfoScale products address enterprise IT service continuity needs. They provide resiliency and software-defined storage for critical services across your datacenter infrastructure. The Veritas InfoScale product suite offers the following products:

- Veritas InfoScale Foundation
- Veritas InfoScale Availability
- Veritas InfoScale Storage
- Veritas InfoScale Enterprise

The following products from Symantec Storage Foundation and High Availability Solutions are rebranded and repackaged under the Veritas InfoScale family:

- Storage Foundation (SF)
- Storage Foundation and High Availability (SFHA)
- Cluster Server (VCS)
- Dynamic Multi-Pathing (DMP)
- Storage Foundation Cluster File System (SFCFS)
- Storage Foundation Cluster File System High Availability (SFCFSHA)
- Storage Foundation for Oracle RAC (SF Oracle RAC)
- Storage Foundation for Sybase ASE CE (SFSYBASECE)
- Dynamic Multi-Pathing for VMware

These changes are intended to simplify the customer buying experience and improve customer lifetime value.

Veritas InfoScale Foundation

Veritas InfoScale Foundation simplifies the management of storage across the data center with an efficient, application-aware storage management solution. This product works across heterogeneous storage and server environments. It includes features like:

- Dynamic Multi-Pathing
- Advanced support for virtual storage

Veritas InfoScale Storage

Veritas InfoScale Storage provides a high-performance storage management solution that maximizes storage efficiency, data availability, operating system agility, and performance. This product works across heterogeneous server and storage environments. It includes features like:

- Replication
- Caching
- Advanced storage management features like compression, deduplication, thin reclamation, SmartMove, and FileSnap
- Clustering features
- Database features like Veritas Extension for ODM, Portable Data Containers, Storage Checkpoints, and SmartTier for Oracle

Veritas InfoScale Availability

Veritas InfoScale Availability is a comprehensive high availability and disaster recovery solution that protects critical business services from planned and unplanned downtime. The critical business services include individual databases, custom applications, and complex multi-tiered applications, which may span physical and virtual environments and any distance. It includes features like:

- Clustering features for high availability (HA)
- Disaster recovery features (HA/DR)

Veritas InfoScale Enterprise

Veritas InfoScale Enterprise provides a powerful combination of comprehensive storage management and application availability. With built-in application acceleration, Veritas InfoScale Enterprise lets you optimize data efficiently across heterogeneous storage or server environments and recover applications instantly from downtime. It includes features like:

- Clustering features, including high availability and disaster recovery (HA/DR)
- Replication
- Caching
- Advanced storage management features like compression, deduplication, thin reclamation, SmartMove, and FileSnap
- Database features like Veritas Extension for ODM, Portable Data Containers, Storage Checkpoints, and SmartTier for Oracle

For more information on Veritas InfoScale products, see:

- About the release
- About the Veritas InfoScale product suite
- Mapping of Storage Foundation High Availability (SFHA) offerings to the new InfoScale family
- Entitlement mapping for upgrades from Storage Foundation High Availability (SFHA) offerings to InfoScale

Veritas InfoScale documentation can be found on the SORT website.
SFHA Solutions 6.1: New Virtual Business Services features

The following Virtual Business Services (VBS) features are available in the 6.1 release:

- VBS can remain operational in spite of a tier failure
- Ability to run a custom script on service groups
- VBS operations status tracking

This article provides a brief overview of the new features. You can access the Virtual Business Service – Availability User's Guide for additional information on the features and the steps to implement them.

VBS can remain operational in spite of a tier failure

Before the VBS 6.1 release, the VBS start and stop operations did not complete if any of the tiers had failed. This feature allows you to proceed with the operation in spite of failed tiers in the VBS.

Ability to run a custom script on service groups

This feature enables you to run a customized script that performs the required actions on a parent tier when a child tier recovers. This allows the parent tier to run without any interruptions while the child tier recovers and its dependency is reestablished seamlessly. You can configure the custom script when you configure service group dependencies in a VBS; a hypothetical sketch of such a script appears at the end of this article. To configure a custom script, see:

- Custom script execution
- Configuring custom script execution for soft dependencies
- Configuring dependencies for a virtual business service

VBS operations status tracking

This feature makes the VBS operations more transparent and easier to track from the command line. You can track the status and details of operations (tasks) performed on virtual business services and the corresponding actions taken on constituent tiers. This is especially useful in virtual business services with a large number of tiers and dependencies. For more information on VBS status tracking, see:

- Tracking VBS operations
- Tracking information about tasks performed on a VBS
- Tracking information about tier-level sub-tasks performed as a part of a VBS task
- Tracking step-by-step progress of a VBS task

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
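As a purely illustrative sketch of what such a custom script might look like: the script name, the argument convention (parent service group and recovered child tier), the log path, and the reconnect command below are all hypothetical, not the documented VBS interface; see the guide sections listed above for the actual contract:

    #!/bin/sh
    # refresh_parent.sh -- hypothetical custom script run when a child tier recovers.
    # Assumed (hypothetical) argument convention:
    #   $1 = parent service group name, $2 = recovered child tier name
    PARENT_GROUP=$1
    CHILD_TIER=$2
    LOG=/var/tmp/vbs_custom_script.log   # hypothetical log location

    echo "$(date): child tier ${CHILD_TIER} recovered; refreshing ${PARENT_GROUP}" >> "$LOG"

    # Example action: ask the application in the parent tier to re-establish
    # its connection to the recovered child tier (command is application-specific).
    # /opt/myapp/bin/reconnect --tier "${CHILD_TIER}" >> "$LOG" 2>&1
    exit 0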