Veritas InfoScale 7.0: Configuring I/O fencing
I/O fencing protects the data on shared disks when nodes in a cluster detect a change in cluster membership that indicates a split-brain condition. In a partitioned cluster, a split-brain condition occurs when one side of the partition assumes the other side is down and takes over its resources. When you install Veritas InfoScale Enterprise, the installer installs the I/O fencing driver, which is part of the VRTSvxfen package. After you install and configure the product, you must configure I/O fencing so that it can protect your data on shared disks. You can configure disk-based I/O fencing, server-based I/O fencing, or majority-based I/O fencing.

Before you configure I/O fencing, make sure that you meet the I/O fencing requirements. After you meet the requirements, refer to "About planning to configure I/O fencing" to perform the preparatory tasks, and then configure I/O fencing.

For more details about I/O fencing configuration, see the Cluster Server Configuration and Upgrade Guide.

Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
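For illustration only, and not a substitute for the Cluster Server Configuration and Upgrade Guide, the sketch below shows the general shape of a disk-based (SCSI-3) fencing setup. The coordinator disk group name and device names are placeholders, and the exact steps, driver restart method, and file locations vary by platform and release.

```
# Sketch only: vxfencoorddg and disk01-disk03 are placeholder names.
# 1. Put three SCSI-3 PR capable disks into a coordinator disk group and deport it.
#    (The vxfentsthdw utility can be used beforehand to test the disks.)
vxdg -o coordinator=on init vxfencoorddg disk01 disk02 disk03
vxdg deport vxfencoorddg

# 2. Point the fencing driver at the coordinator disk group and select SCSI-3 mode.
echo "vxfencoorddg" > /etc/vxfendg
cat > /etc/vxfenmode << 'EOF'
vxfen_mode=scsi3
scsi3_disk_policy=dmp
EOF

# 3. After restarting the fencing driver on each node, verify the mode and membership.
vxfenadm -d
```

Server-based and majority-based fencing are configured through the same /etc/vxfenmode file with different mode settings; refer to the guide for those variants.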
SmartIO blueprint and deployment guide for Solaris platform

SmartIO for Solaris was introduced in Storage Foundation HA 6.2. SmartIO enables data efficiency on your SSDs through I/O caching. Using SmartIO to improve efficiency, you can optimize the cost per Input/Output Operations Per Second (IOPS). SmartIO supports both read and write-back caching for VxFS file systems that are mounted on VxVM volumes, in multiple caching modes and configurations. SmartIO also supports block-level read caching for applications running on VxVM volumes.

The SmartIO Blueprint for Solaris gives an overview of the benefits of using SmartIO technology, the underlying technology, and the essential steps to configure it. The SmartIO Deployment Guide for Solaris covers multiple SmartIO deployment scenarios, and how to manage them, in detail.

Let us know if you have any questions or feedback!
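As a rough sketch only (the device, disk group, and mount point names are made up, and command options differ by release; check the SmartIO guides before use), caching on Solaris is driven by the sfcache command and a VxFS mount option:

```
# Sketch only: ssd0_0, appdg/appvol, and /app01 are placeholder names.
# Create a VxFS cache area on an SSD device that VxVM can see.
sfcache create -t VxFS ssd0_0

# Mount an application file system with write-back caching enabled.
mount -F vxfs -o smartiomode=writeback /dev/vx/dsk/appdg/appvol /app01

# List cache areas and review cache statistics.
sfcache list
sfcache stat
```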
SFHA Solutions 6.2: Configuring secure shell or remote shell communication between nodes when installing Symantec products

To install and configure Symantec software, you need to establish secure shell (ssh) or remote shell (rsh) communication with superuser privileges between the nodes where the installer is running and the target nodes. You can install products to remote systems using either ssh or rsh. Symantec recommends that you use ssh, as it is more secure than rsh.

You can set up ssh and rsh connections in the following ways:

- You can use UNIX shell commands to manually set up the connection. Using this method, you can log into and execute commands on a remote system. You first create a Digital Signature Algorithm (DSA) key pair, and then append the public key from the source system to the authorized keys file on the target systems. (A rough example of this method appears after this post.)
- You can run the installer directly to set up the ssh/rsh connection interactively during the install procedure.
- You can run the installer -comsetup command. Using this method, you can interactively set up the ssh and rsh connections.
- You can run the pwdutil.pl password utility. If you want to run the installer with a response file from your own scripts, the ssh connection should be set up before you run the installer. The password utility, pwdutil.pl, is bundled in the Symantec Storage Foundation and High Availability (SFHA) Solutions 6.2 release under the scripts directory. You can run the utility in your script to set up the ssh and rsh connection automatically.

Both the script-based and web-based installers support establishing passwordless communication.

For more information about configuring secure shell or remote shell communication between nodes, see:

- Manually configuring passwordless ssh
- Setting up ssh and rsh connection using the installer -comsetup command
- Setting up ssh and rsh connection using the pwdutil.pl utility

SFHA documentation for other releases and platforms can be found on the SORT website.
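For illustration, a minimal version of the manual (UNIX shell command) method might look like the following; sys2 is a placeholder target node name, and the linked articles above cover the full procedure and the installer-based alternatives.

```
# Sketch only: sys2 is a placeholder for a target node name.
# On the source system (where the installer runs), create a DSA key pair.
ssh-keygen -t dsa

# Append the public key to root's authorized_keys file on each target node.
cat ~/.ssh/id_dsa.pub | ssh root@sys2 'cat >> ~/.ssh/authorized_keys'

# Confirm that passwordless ssh works before starting the installer.
ssh root@sys2 uname -a

# Alternatively, let the installer set up communication interactively.
./installer -comsetup
```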
SFHA Solutions 6.0.1: Using vxcdsconvert to make Veritas Volume Manager (VxVM) disks and disk groups portable between systems for Cross-platform Data Sharing (CDS)

The vxcdsconvert command makes disks and disk groups portable between systems running VxVM with the Cross-platform Data Sharing (CDS) feature.

For more information on the CDS feature, see:

- Overview of the CDS feature
- Setting up your system to use CDS

You can resize CDS disks to larger than 1 TB. For more information, see:

- Dynamic LUN expansion

You can use the vxcdsconvert command to:

- Check whether disks and disk groups can be made portable (using the -A option).
- Convert disks and disk groups to be CDS-compatible.

For more information on the conversion procedure, see:

- Converting non-CDS disks to CDS disks
- Converting a non-CDS disk group to a CDS disk group

Note the following points:

- The vxcdsconvert command requires that disk groups be version 110 or greater.
- When a disk group is made portable, all disks within the disk group are also converted.
- Converting a disk group that contains RAID-5 volumes and logs fails if there is insufficient space in the disk group to create an additional temporary RAID-5 log.
- The default private region size increased from 512 KB to 1 MB in SFHA release 3.2, and from 1 MB to 32 MB in release 5.0.

vxcdsconvert(1M) 6.0.1 manual pages: AIX, HP-UX, Linux, Solaris

VxVM documentation for other platforms and releases can be found on the SORT website.
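As a hedged example (mydg is a placeholder disk group name; check the vxcdsconvert manual page for your platform before running it), a typical analyze-then-convert sequence looks roughly like this:

```
# Sketch only: mydg is a placeholder disk group name.
# Confirm the disk group version is 110 or greater; upgrade it if necessary.
vxdg list mydg | grep -i version
vxdg upgrade mydg

# Check whether the disk group and its disks can be made portable (analysis only).
vxcdsconvert -g mydg -A group

# Convert the disk group, and all disks in it, to be CDS-compatible.
vxcdsconvert -g mydg group
```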
Symantec High Availability 6.1 Solution: Configuring disaster recovery in a VMware environment with non-shared storage

You can use the Symantec High Availability 6.1 Solution to configure disaster recovery (DR) in a VMware environment with non-shared disks that are managed using Storage Foundation for Windows (SFW). Replication is configured using Symantec Volume Replicator (VVR) or a Hitachi TrueCopy (HTC)/EMC SRDF solution.

For information about configuring disaster recovery in a VMware environment using the Symantec High Availability Solution, see:

- Configuring disaster recovery in a VMware environment using the Symantec High Availability solution

This quick reference guide provides:

- A typical disaster recovery setup in a VMware environment involving non-shared storage
- Configuration differences between VVR-based and HTC/SRDF-based replication
- A DR configuration workflow for HTC/SRDF-based replication
- Sample service group dependency graphs for VVR-based and HTC/SRDF-based replication
- A list of reference documents and where you can download them

Storage Foundation and High Availability and ApplicationHA documentation for other releases and platforms can be found on the SORT website.
Symantec High Availability 6.1 Solution: Configuration in VMware Site Recovery Manager (SRM) environment

The Symantec High Availability 6.1 Solution provides scripts to perform some of the configuration tasks required for application monitoring continuity in a VMware SRM environment. In the event of a failure, when the SRM recovery plan is executed, the Symantec High Availability recovery command retrieves the application status, and the Symantec Cluster Server (VCS) network and application-specific agents bring the network and application components online. This ensures application monitoring continuity after the failover.

For information about configuring the Symantec High Availability Solution in a VMware SRM environment, see:

- Configuring Symantec High Availability Solution in VMware SRM environment

This quick reference guide provides:

- A complete workflow of the configuration tasks involved
- A quick start for each high-level task
- A list of reference documents and their download location

Storage Foundation and High Availability and ApplicationHA documentation for other releases and platforms can be found on the SORT website.
SFW 6.1: Configuring high availability for Microsoft failover cluster quorum

A Microsoft failover cluster uses a single physical disk resource and a basic disk volume resource for a quorum. If the physical disk fails, the quorum resource fails and the cluster becomes unusable. To avoid this failure, you must use a dynamic disk group resource. Unlike a physical disk resource that contains a single disk, a dynamic disk group resource can contain multiple disks, and it provides a high level of redundancy by allowing the mirroring of disks.

With Storage Foundation for Windows (SFW), a Microsoft failover cluster can use dynamic disks as a quorum disk resource. This lets you make your cluster quorum highly available by using host-based mirroring.

To use SFW to configure a Microsoft failover cluster quorum, you must perform the following tasks:

- Configure a Microsoft failover cluster.
- Install SFW.
- Create a cluster dynamic disk group with a volume created on it. This disk group should be separate from the disk group that you create for the application data.
- Use the Microsoft Failover Cluster Manager to add the clustered disk group as a resource to the cluster, and then configure the cluster quorum settings.

For a quick reference guide to configure a dynamic quorum resource, see:

- Configuring high availability for Microsoft failover cluster quorum

Storage Foundation for Windows documentation for other releases and platforms can be found on the SORT website.
concurrency violation

Hi Team,

This alert came in my environment on one of the AIX 6.1 servers for a concurrency violation.

Subject: VCS SevereError for Service Group sapgtsprd, Service group concurrency violation
Event Time: Wed Dec 3 06:46:54 EST 2014
Entity Name: sapgtsprd
Entity Type: Service Group
Entity Subtype: Failover
Entity State: Service group concurrency violation
Traps Origin: Veritas_Cluster_Server
System Name: mapibm625
Entities Container Name: GTS_Prod
Entities Container Type: VCS

The related engineA.log entries are:

2014/12/03 06:46:54 VCS INFO V-16-1-10299 Resource App_saposcol (Owner: Unspecified, Group: sapgtsprd) is online on mapibm625 (Not initiated by VCS)
2014/12/03 06:46:54 VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sapgtsprd
2014/12/03 06:46:54 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group sapgtsprd on all nodes
2014/12/03 06:46:55 VCS WARNING V-16-6-15034 (mapibm625) violation:-Offlining group sapgtsprd on system mapibm625
2014/12/03 06:46:55 VCS INFO V-16-1-50135 User root fired command: hagrp -offline sapgtsprd mapibm625 from localhost
2014/12/03 06:46:55 VCS NOTICE V-16-1-10167 Initiating manual offline of group sapgtsprd on system mapibm625

What is a concurrency violation in VCS? What steps should we take to resolve this? Kindly explain in detail.

Thanks,
Allaboutunix
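Not part of the original thread, but as a rough illustration, these are the kinds of commands typically used to confirm where a failover group is online after such an event; the group and node names below are taken from the log above.

```
# Sketch only: group and node names come from the log in the question.
# Check the overall cluster summary and where the group is reported online.
hastatus -sum
hagrp -state sapgtsprd

# If the group is still online on more than one node, take it offline on the
# unintended node (in the log above, VCS already did this automatically).
hagrp -offline sapgtsprd -sys mapibm625

# Review the engine log for the resource that came online outside of VCS control.
grep sapgtsprd /var/VRTSvcs/log/engine_A.log | tail -20
```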
Symantec Data Insight 4.5.1: Documentation Available

Symantec Data Insight 4.5.1 product guides (PDF and HTML pages) are now available on the SORT documentation page. The Symantec Data Insight 4.5.1 documentation set includes the following manuals:

- Symantec Data Insight Release Notes
- Symantec Data Insight Self-Service Portal Quick Reference Guide
- Symantec Data Insight Installation Guide
- Symantec Data Insight User's Guide
- Symantec Data Insight Administrator's Guide
- Third-Party Legal Notices