Veritas InfoScale 7.0: Configuring I/O fencing
I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split-brain condition. In a partitioned cluster, a split-brain condition occurs when one side of the partition assumes the other side is down and takes over its resources.

When you install Veritas InfoScale Enterprise, the installer installs the I/O fencing driver, which is part of the VRTSvxfen package. After you install and configure the product, you must configure I/O fencing so that it can protect your data on shared disks. You can configure disk-based I/O fencing, server-based I/O fencing, or majority-based I/O fencing.

Before you configure I/O fencing, make sure that you meet the I/O fencing requirements. After you meet the requirements, refer to About planning to configure I/O fencing to perform the preparatory tasks, and then configure I/O fencing.

For more details about I/O fencing configuration, see: Cluster Server Configuration and Upgrade Guide

Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
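After configuration, you can verify which fencing mode is in effect on a node. The following is a minimal sketch, assuming a Linux or UNIX node with the standard /etc/vxfenmode file written by the installer; the values shown apply to disk-based fencing and will differ for server-based or majority-based modes:

    # Confirm that the fencing driver is running and report the fencing mode
    vxfenadm -d

    # Review the fencing mode settings on each cluster node
    cat /etc/vxfenmode

    # Typical entries for disk-based (SCSI-3) fencing, shown for illustration;
    # the installer normally writes this file for you
    vxfen_mode=scsi3
    scsi3_disk_policy=dmp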
SFHA Solutions 6.0.1: Using vxcdsconvert to make Veritas Volume Manager (VxVM) disks and disk groups portable between systems for Cross-platform Data Sharing (CDS)

The vxcdsconvert command makes disks and disk groups portable between systems running VxVM with the Cross-platform Data Sharing (CDS) feature. For more information on the CDS feature, see:
Overview of the CDS feature
Setting up your system to use CDS

You can resize CDS disks to larger than 1 TB. For more information, see: Dynamic LUN expansion

You can use the vxcdsconvert command to:
- Check whether disks and disk groups can be made portable (using the -A option).
- Convert disks and disk groups to be CDS-compatible.

For more information on the conversion procedure, see:
Converting non-CDS disks to CDS disks
Converting a non-CDS disk group to a CDS disk group

Note the following points:
- The vxcdsconvert command requires that disk groups be version 110 or greater.
- When a disk group is made portable, all disks within the disk group are also converted.
- Converting a disk group that contains RAID-5 volumes and logs fails if there is insufficient space in the disk group to create an additional temporary RAID-5 log.
- The default private region size increased from 512 KB to 1 MB in SFHA release 3.2, and from 1 MB to 32 MB in release 5.0.

vxcdsconvert (1M) 6.0.1 manual pages: AIX | HP-UX | Linux | Solaris

VxVM documentation for other platforms and releases can be found on the SORT website.
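As a hedged illustration of the check-then-convert workflow above, assuming a disk group named datadg (verify the exact options against the vxcdsconvert(1M) manual page for your platform):

    # Confirm the disk group version is 110 or greater; upgrade it first if not
    vxdg list datadg | grep version
    vxdg upgrade datadg

    # Evaluate whether the disk group can be made portable without converting it
    vxcdsconvert -g datadg -A group

    # Convert the disk group, and all disks in it, to CDS-compatible format
    vxcdsconvert -g datadg group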
SFHA Solutions 6.0.1: Maximizing storage utilization using thin provisioning

Thin provisioning helps you save SAN/NAS storage space. With thin provisioning, instead of provisioning the full storage space to a user, the thin provisioning software allocates disk space flexibly, based on the minimum space that each user requires at a given time. Thin provisioning saves considerable storage space compared with conventional storage, where storage is allocated beyond the current requirements of the application. For more information on the advantages of using thin provisioning, refer to:
Why Thin Provisioning and Thin Reclamation?
Thin provisioning - addressing the problems with over allocation

How does Veritas Operations Manager help you reclaim underutilized storage? Veritas Operations Manager offers several solutions for thin provisioning and thin reclamation, listed below.

Thin Provisioning Reclamation Add-on

You can use the Thin Provisioning Reclamation Add-on to optimize storage space by using Veritas File System (VxFS) thin capabilities and the Storage Foundation Thin Reclamation API. You can run thin reclamation on the following objects in Veritas Operations Manager:
- Storage array: Select thin pools or LUNs in the context of an array.
- Business entity: Specify a flexible reclamation boundary based on a business entity. Because a business entity can include many objects and is inherently dynamic, the LUNs that are reclaimed change automatically whenever changes occur within that business entity.
- Hosts: Select multiple file systems in the context of the hosts. The associated LUNs are reclaimed.

You can configure thin reclamation processes to run at regularly scheduled intervals. If you have not scheduled a process run, you can also run it manually. For more information about configuring and running the Thin Provisioning Reclamation Add-on, see:
Veritas Operations Manager Thin Provisioning Reclamation Add-on
Configuring a thin reclamation process for an array
Configuring a thin reclamation process for a business entity
Configuring a thin reclamation process for host
Running reclamation processes manually

Veritas Storage Foundation Add-on for Storage Provisioning

You can use the Storage Foundation Add-on for Storage Provisioning to migrate volumes from thick to thin Logical Unit Numbers (LUNs). Using this add-on, you can select a specific volume to migrate and, optionally, change its layout before migration. For more information on using the Storage Foundation Add-on for Storage Provisioning, see:
Moving volumes from thick to thin LUNs
Impact Analysis report

Veritas Operations Manager policy checks

The Veritas Operations Manager policy check feature uses individual rules to validate whether the datacenter configuration conforms to a pre-defined standard. You can create policy templates to check the performance, availability, and utilization of the storage objects in your datacenter. For more information on policy checks, see: About policy checks

Storage reclamation and thin provisioning reports

Veritas Operations Manager provides extensive reporting capabilities centered on the Storage Foundation and High Availability products. It includes reports related to storage utilization, storage reclamation, and inventory.
For more information on thin provisioning reports, see: Storage reclamation and thin provisioning reports

For more information about Storage Foundation thin reclamation and thin provisioning features, see:
Veritas Storage Foundation Administrator’s Guide
Veritas Storage Foundation and High Availability Solutions 6.0.1 Solutions Guide

Storage Foundation and High Availability and Veritas Operations Manager documentation for other releases and platforms can be found on the SORT website.
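Outside the Veritas Operations Manager console, Storage Foundation also exposes thin reclamation from the command line. A minimal sketch, assuming a VxFS file system mounted at /mnt/data on thin-capable LUNs and a disk group named datadg:

    # Reclaim unused space on a thin-provisioned VxFS file system
    /opt/VRTS/bin/fsadm -R /mnt/data

    # Reclaim free space at the disk group level with Veritas Volume Manager
    vxdisk reclaim datadg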
Veritas InfoScale 7.0: Introduction to the vxdbd daemon

The Storage Foundation for Databases (SFDB) commands use the vxdbd daemon to run privileged commands. For example, a database administrator is allowed to run commands for creating snapshots using the SFDB scripts. In addition, SFDB uses the daemon to establish communication between the host and the SFDB repository that is deployed on a different host.

To run the vxdbd daemon in authenticated mode, log in as a superuser (root) and authenticate the daemon. After the authentication process succeeds, the SFDB tools communicate with the daemon through the authenticated user. For more information, see:
Configuring vxdbd for SFDB tools authentication
Adding nodes to a cluster that is using authentication for SFDB tools
Authorizing users to run SFDB commands

Generally, the vxdbd daemon is configured to start automatically when the system starts. However, if the daemon fails to start or is unresponsive, you can manually start and stop vxdbd using the /opt/VRTS/bin/sfae_config script. This script can also report the status of the daemon. For more information on using the script, see: Starting and stopping vxdbd

The vxdbd daemon uses only a small amount of system resources for its routine operations. However, you can restrict its resource usage by setting the MAX_CONNECTIONS and MAX_REQUEST_SIZE parameters. Before you set these parameter values, refer to the documentation; lower values for these parameters may cause the SFDB commands to fail. For more information, see: Limiting vxdbd resource usage

For more information on the vxdbd daemon, see:
Configuring listening port for the vxdbd daemon
Configuring encryption ciphers for vxdbd

If the vxdbd daemon fails to run, see the troubleshooting instructions at: Troubleshooting vxdbd

For detailed information, see the Veritas InfoScale™ 7.0 Storage and Availability Management for Oracle Databases guide. Veritas InfoScale documentation can be found on the SORT website.
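For example, a minimal sketch of checking and restarting the daemon with the sfae_config script; the subcommand names follow the Storage and Availability Management guide, so verify them for your release:

    # Check whether vxdbd is running
    /opt/VRTS/bin/sfae_config status

    # Stop and restart the daemon, for example after changing its configuration
    /opt/VRTS/bin/sfae_config stop
    /opt/VRTS/bin/sfae_config start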
InfoScale 7.0 for Windows: About the product correlation between Symantec Storage Foundation and High Availability Solutions and Veritas InfoScale Family

In the 7.0 release, the following products from Symantec Storage Foundation and High Availability Solutions are rebranded and repackaged under the Veritas InfoScale family:
- Storage Foundation (SFW)
- Storage Foundation and High Availability Solutions (SFW HA)
- Cluster Server (VCS)
- Dynamic Multi-Pathing for Windows (DMPW)

The Veritas InfoScale family consists of the following products:
- Veritas InfoScale Foundation
- Veritas InfoScale Availability
- Veritas InfoScale Storage
- Veritas InfoScale Enterprise

The following table depicts the existing-to-new product correlation:

Existing product    New product
SFW Basic           Veritas InfoScale Foundation
VCS                 Veritas InfoScale Availability
SFW                 Veritas InfoScale Storage
SFW HA              Veritas InfoScale Enterprise

As a result of the repackaging, the following changes apply:
- DMPW and VBS are not available as separate products.
- The following product options are not available as user-selectable options during the product installation. All of these features are now available by default with the relevant InfoScale product:
  - FlashSnap
  - Replace Disk Management Snap-in with SFW VEA GUI
  - Volume Replicator (VVR)
  - Fast failover
  - Global Cluster Option

For information about the simplified product packaging, changes to the user-selectable product installation options, and the components that are offered in each InfoScale product, see: Veritas InfoScale™ What's New Guide

InfoScale documentation for other platforms can be found on the SORT website.
Veritas Resiliency Platform 1.1 key components

Veritas Resiliency Platform brings separate data centers together in a resiliency domain for managing and monitoring workload automation and disaster recovery (DR). Resiliency Platform provides two types of servers, which are deployed as virtual appliances at the data centers.

Resiliency Manager
The main interface for managing the resiliency domain. A Resiliency Manager is deployed at each data center. After you complete a simple configuration on the virtual appliance, you do all further configuration and operations from an easy-to-use browser-based console. Because the built-in replication between Resiliency Managers keeps the data at each Resiliency Manager synchronized, it does not matter which Resiliency Manager you connect to in the resiliency domain; you see the same information in the browser. For more information, see: Resiliency Manager

Infrastructure Management Server (IMS)
The server that collects data from the customer assets at the data center. The IMS orchestrates with the APIs of the various platforms that Resiliency Platform integrates with, for example, virtualization platforms such as VMware vSphere and Microsoft Hyper-V, as well as array-based replication and replication appliances. Once you have added the assets that you want to monitor and protect to the IMS, the IMS continues to automatically discover their status. For more information, see: Infrastructure Management Server (IMS)

Resiliency Platform release 1.1 adds support for using the Veritas InfoScale Operations Manager server for discovery and management of Veritas InfoScale applications. For more information, see: Veritas Resiliency Platform support for InfoScale applications

You can find other versions of the Veritas Resiliency Platform documentation on the SORT documentation page.
SFW HA 6.1: Support for SmartIO

SmartIO is a new feature introduced in Symantec Storage Foundation and High Availability Solutions (SFW HA) 6.1 for Windows. SmartIO improves the I/O performance of applications and Hyper-V virtual machines by using Solid State Devices (SSDs) as a location for read-only I/O caching.

Traditional disks are often an I/O bottleneck for high-transaction applications. To compensate, administrators usually either increase the in-RAM cache size or buy expensive storage. To address this issue, SmartIO uses an SSD-based cache to drive high-performance applications. SSDs are available in many sizes and connectivity types, which adds a new layer of complexity and decentralization to the storage. SmartIO adds a central management layer between the physical SSDs and the applications that need to access them. SmartIO lets you use the SSDs to maximize application performance without requiring in-depth knowledge of the underlying technologies.

SmartIO supports volume-level read-only caching, as SSDs are primarily beneficial in high-read environments. To use SmartIO, you create a cache area (storage space allocated on the SSDs for caching) using one or more non-shared SSDs and link volumes to the cache area to enable caching for those volumes. Using SmartIO, you can also disable caching and grow, shrink, or delete a cache area.

In a clustered environment, you may create auto cache areas on all cluster nodes. After failover, the implicitly linked volumes use the auto cache area on the failover node. If the auto cache area is not present on the failover node, caching is not performed on that node. If a data volume is disconnected, caching for that volume is stopped. Caching is restarted once the volume is reconnected and brought online. If the cache area is disconnected, the cache area is taken offline and caching stops for all the volumes linked with it.

SmartIO has the following limitations:
- You cannot reserve a cache area for a particular volume. You can create a new cache area and link the volume with it.
- File pinning or block pinning is not supported.
- The cache is volatile and does not persist after the system is restarted.

For more information on the SmartIO feature, see the following sections of the "SmartIO" chapter in the Symantec Storage Foundation Administrator's Guide:
About SmartIO
Administering SmartIO through GUI
Administering SmartIO through CLI

Symantec Storage Foundation and High Availability Solutions for Windows (SFW HA) documentation for other releases and platforms can be found on the SORT website.
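As a purely illustrative sketch of the workflow described above (the command, operation, and object names are assumptions based on the sfcache utility referenced in the Administrator's Guide; consult "Administering SmartIO through CLI" for the actual syntax on your release):

    # Hypothetical sequence for volume-level read caching on an SSD
    sfcache create ssd_disk1          # create a cache area on a non-shared SSD
    sfcache link Volume1 cachearea1   # link a volume to the cache area to enable caching
    sfcache unlink Volume1            # stop caching for the volume
    sfcache delete cachearea1         # remove the cache area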
SFW 6.1: Support for Cluster Volume Manager (CVM)

Cluster Volume Manager (CVM) is a new feature introduced in Symantec Storage Foundation for Windows (SFW) 6.1. CVM is a new way to manage storage in a clustered environment. With CVM, failover capabilities are available at the volume level. Volumes under CVM allow exclusive write access across multiple nodes of a cluster.

In a Microsoft Failover Clustering environment, you can create clustered storage out of shared disks, which lets you share volume configurations and enables fast failover support at the volume level. Each node recognizes the same logical volume layout and, more importantly, the same state of all volume resources. Each node has the same logical view of the disk configuration as well as any changes to this view.

Note: CVM (and related cluster-shared disk groups) is supported only in a Microsoft Hyper-V environment. It is not supported for a physical environment.

CVM is based on a "Master and Slave" architecture. One node of the cluster acts as the Master, while the rest of the nodes are Slaves. The Master node maintains the configuration information and uses Global Atomic Broadcast (GAB) and Low Latency Transport (LLT) to transport its configuration data. Each time a Master node fails, a new Master node is selected from the surviving nodes.

With CVM, storage services provided on a per virtual machine (VM) basis for Hyper-V virtual machines protect VM data from single LUN or array failures, helping maintain availability of the critical VM data.

CVM helps you achieve the following:
- Live migration of Hyper-V virtual machines, which is supported with:
  - Virtual Hard Disks (VHDs) of a virtual machine lying on one or more SFW volumes
  - Coexistence with Cluster Shared Volumes (CSV)
  - Mapping of one cluster-shared volume to one virtual machine only
- Seamless migration between arrays:
  - Migration of volumes (hosting VHDs) from any array to another array
  - Easy administration using the Storage Migration Wizard
  - Moving of the selected virtual machines' storage to new target LUNs
  - Copying of only those NTFS blocks that contain user data, using SmartMove
- Availability of all the volume management functionality

The following are the main features supported in CVM:
- New cluster-shared disk group (CSDG) and cluster-shared volumes
- Disk group accessibility from multiple nodes in a cluster, where volumes remain exclusively accessible from only one node in the cluster
- Failover at a volume level
- All the SFW storage management features, such as:
  - SmartIO
  - Thin provisioning and storage reclamation
  - Symantec Dynamic Multi-Pathing for Windows (DMPW)
  - Site-aware allocation using the site-aware read policy
  - Storage migration
  - Standard features for fault tolerance: mirroring across arrays, hot relocation, dirty region logging (DRL), and dynamic relayout
- Microsoft Failover Clustering integrated I/O fencing
- New Volume Manager Shared Volume resource for Microsoft failover cluster
- New GUI elements in VEA related to the new disk group and volume

CVM does not support:
- Active/Passive (A/P) arrays
- Storage migration on volumes that are offline in the cluster
- Volume replication on CVM volumes using Symantec Storage Foundation Volume Replicator

For information about configuring a CVM cluster, refer to the quick start guide at: www.symantec.com/docs/DOC8119

The Storage Foundation for Windows documentation for other releases and platforms can be found on the SORT website.
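Because CVM relies on the same GAB and LLT cluster communication stack described above, a quick membership check is often a useful first step when cluster-shared disk groups misbehave. A minimal sketch, assuming the standard VCS command-line utilities are in the path:

    # Show GAB port membership (port a is GAB itself, port h is the VCS engine)
    gabconfig -a

    # Show the status of the LLT links on each node
    lltstat -nvv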
about the VCS behaviour on faulted resources

Hi Team,

Because ManageFaults is ALL by default, when a resource faults, VCS calls the Clean entry point. What is the task of the Clean entry point for this resource?

As far as I know, when VCS declares the resource as faulted, depending on the Critical attribute of the resource and the AutoFailOver attribute, VCS fails over the service group. The service group stays faulted on the first node, so I need to clear the fault manually to be able to bring the service group online on this node again. So, what is the effect of the Clean entry point if the resource faults?

If the service group is faulted on both the primary and the secondary node, as far as I know, it will not be failed over. It stays faulted on both nodes until the fault is cleared manually. Is there any way to automate the failover while the service group is faulted on both nodes?

In fact, I would like to understand the role of the Clean entry point for a resource when it faults, because I am always clearing the resource's fault manually. Is Clean called before the resource is declared faulted?

Please explain.
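For reference, this is how I clear the fault and bring the group back today; the group, resource, and node names below are just placeholders:

    # Clear the FAULTED state of the service group on the node where it faulted
    hagrp -clear mysg -sys node01

    # Or clear an individual faulted resource
    hares -clear myres -sys node01

    # Then bring the service group online on that node again
    hagrp -online mysg -sys node01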