Veritas Risk Advisor: Working with the Comparison Module
Veritas Risk Advisor (VRA) is a data protection and downtime avoidance risk assessment solution that lets you diagnose disaster recovery and high availability (clustering) problems (also called “gaps”), optimize data protection, and reduce the risk of downtime. VRA enables enterprises to effectively manage business continuity implementations and ensure that critical business data is protected. VRA automatically detects and alerts you to any potential gaps, best practice violations, or service level agreement (SLA) breaches.

What is the Comparison module?
The Comparison module helps you identify the host configuration drifts hiding in your IT environment. Such drifts often cause cluster failover to fail and reduce the availability of your organization’s applications. In this module, you can create comparison groups that include hosts, clusters, or business entities, and easily track configuration differences between them. The Comparison module uses worksheets and comparison groups.

Worksheets
A worksheet is a logical container of comparison groups. It also contains all suppressions and difference monitoring information. Worksheets are defined and saved at the user level, which means that each user has their own worksheets.

Comparison groups
A comparison group is a dynamic group of hosts that you want to compare. The following types of comparison groups are available:
Hosts
Clusters
Business Entities
Golden Copy
Each group type behaves differently in terms of group scope and comparison functionality.

You begin by creating a worksheet, and then by creating comparison groups. Once defined, comparison groups can be assigned to a worksheet. Once the worksheets and comparison groups are created, you can compare the host configurations using the following options:
Hardware
Software
Operating System
Users and Groups
OS Kernel Parameters / Limits

Learning more
For more information on working with the Comparison module, see “Using the Comparison module” in the Veritas Risk Advisor User’s Guide. You can access the User’s Guide and other VRA documentation in the Documents area of the SORT website.
Veritas InfoScale 7.0: Configuring I/O fencing

I/O fencing protects the data on shared disks when nodes in a cluster detect a change in the cluster membership that indicates a split-brain condition. In a partitioned cluster, a split-brain condition occurs when one side of the partition assumes the other side is down and takes over its resources.

When you install Veritas InfoScale Enterprise, the installer installs the I/O fencing driver, which is part of the VRTSvxfen package. After you install and configure the product, you must configure I/O fencing so that it can protect your data on shared disks. You can configure disk-based I/O fencing, server-based I/O fencing, or majority-based I/O fencing.

Before you configure I/O fencing, make sure that you meet the I/O fencing requirements. After you meet the requirements, refer to “About planning to configure I/O fencing” to perform the preparatory tasks and then configure I/O fencing.

For more details about I/O fencing configuration, see the Cluster Server Configuration and Upgrade Guide. Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
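As a rough orientation only (the authoritative steps are in the Cluster Server Configuration and Upgrade Guide, and the installer normally drives this for you), disk-based SCSI-3 fencing is controlled by two small files on each cluster node. The coordinator disk group name below is an example value, not something taken from this post:

    # /etc/vxfendg names the coordinator disk group (example name only)
    echo "vxfencoorddg" > /etc/vxfendg
    # /etc/vxfenmode selects the fencing mode and the disk policy
    cat > /etc/vxfenmode <<'EOF'
    vxfen_mode=scsi3
    scsi3_disk_policy=dmp
    EOF
    # after restarting the fencing module on each node, verify the
    # fencing mode and cluster membership
    vxfenadm -d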
adding new volumes to a DG that has a RVG under VCS cluster

Hi,

I have a VCS cluster with GCO and VVR. On each node of the cluster I have a DG with an associated RVG. This RVG contains 11 data volumes for an Oracle database. These volumes are getting full, so I am going to add new disks to the DG and create new volumes and mount points to be used by the Oracle database.

My question: can I add the disks to the DG and the volumes to the RVG while the database is up and the replication is on? If the answer is no, please let me know what should be performed on the RVG and rlink to add these volumes, and also what to perform on the database resource group so that it does not fail over.

Thanks in advance.
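The thread's accepted answer is not reproduced here, but as a hedged sketch of the commands usually involved (the disk group, RVG, disk, and volume names below are placeholders), adding a data volume to an existing RDS typically looks like this; vradmin addvol generally expects a volume of the same name and size to exist on the secondary as well:

    # on the primary: add the new disk to the disk group and create the volume
    vxdg -g oradg adddisk newdisk01=new_array_lun
    vxassist -g oradg make datavol12 50g newdisk01
    # create a volume with the same name and size on the secondary, then
    # associate it with the RVG on both sides through the RDS
    vradmin -g oradg addvol oradb_rvg datavol12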
Veritas InfoScale 7.0 for Linux: SmartIO caching on SSD devices exported by FSS

SmartIO supports the use of Solid-State Drives (SSDs) exported by FSS to provide caching services for applications running on Veritas Volume Manager (VxVM) and Veritas File System (VxFS). In this scenario, Flexible Storage Sharing (FSS) exports SSDs from nodes that have a local SSD. FSS then creates a pool of the exported SSDs in the cluster. From this shared pool, a cache area is created for any or all nodes in the cluster. Each cache area is accessible only to the particular node for which it is created. The cache area can be a VxVM cache area or a VxFS cache area.

SmartIO supports write-back caching for local mounts on remote SSD devices exported by FSS. However, write-back caching is not supported on remote SSD devices in Cluster File System (CFS) environments.

If you plan to use only a portion of an exported SSD device for caching purposes, ensure that the volume used for caching is created on a disk group with disk group version 200 or later. The volume layout of the cache area on remote SSDs follows the simple stripe layout, not the default FSS allocation policy of mirroring across hosts. If the cache area on a remote SSD needs to be resized to meet growing needs, ensure that you specify an exported device only. The operation fails if a non-exported device is specified. The cache areas can be enabled to support warm or persistent caching across reboots.

For more information:
Setting up cache areas using SSDs exported by FSS
Status of cache areas when nodes leave or join the cluster

The complete SmartIO documentation is available on SORT.
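A very rough sketch of the flow described above, using hypothetical device names; the exact sfcache create arguments differ between VxFS and VxVM cache areas and between releases, so treat this only as orientation and follow the linked articles for the real syntax:

    # on the node that physically owns the SSD: export it for FSS use
    vxdisk export ssd_disk01
    # on the node that should consume the cache: create a VxFS cache area
    # backed by the exported SSD (illustrative invocation only)
    sfcache create -t VxFS ssd_disk01
    # list cache areas and confirm the new one is online for this node
    sfcache list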
SmartIO blueprint and deployment guide for Solaris platform

SmartIO for Solaris was introduced in Storage Foundation HA 6.2. SmartIO enables data efficiency on your SSDs through I/O caching. By using SmartIO to improve efficiency, you can optimize the cost per Input/Output Operations Per Second (IOPS). SmartIO supports both read and write-back caching for VxFS file systems that are mounted on VxVM volumes, in multiple caching modes and configurations. SmartIO also supports block-level read caching for applications running on VxVM volumes.

The SmartIO Blueprint for Solaris gives an overview of the benefits of using SmartIO technology, the underlying technology, and the essential configuration steps to configure it. In the SmartIO Deployment Guide for Solaris, multiple deployment scenarios of SmartIO and how to manage them are covered in detail.

Let us know if you have any questions or feedback!
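As a minimal sketch of what enabling write-back caching for a VxFS file system on Solaris typically looks like (the SSD device, disk group, volume, and mount point names are placeholders; the blueprint and deployment guide above give the supported procedure):

    # create a cache area on a local SSD device
    sfcache create ssd0_0
    # mount the file system with write-back caching requested
    # (smartiomode also accepts read-only caching modes)
    mount -F vxfs -o smartiomode=writeback /dev/vx/dsk/datadg/datavol /data
    # check cache statistics for the mount
    sfcache stat /data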
Oracle Data Base Replication with VVR under SFCFSHA/DR

Hi All,

We are looking into whether VVR can be used for Oracle database replication instead of an Oracle Data Guard solution. If it is used, do you know whether Veritas gives support for any problem faced? Even though VVR keeps write-order fidelity, it is not certain that database integrity will be preserved at the disaster site. Do you have any best practices, white papers, experience, or anything else you would suggest for this deployment?
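Not an answer from the thread, but a hedged sketch of the operational checks commonly used with VVR: because writes are replicated with write-order fidelity, the usual concern is whether the rlink is connected, up to date, and consistent. Disk group and RVG names below are placeholders:

    # show replication status, mode, and how far the secondary lags
    vradmin -g oradg repstatus oradb_rvg
    # show detailed rlink records (connected, up-to-date, consistent flags)
    vxprint -g oradg -Pl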
concurrency violation

Hi Team,

This alert came in my environment on one of the AIX 6.1 servers, reporting a concurrency violation:

Subject: VCS SevereError for Service Group sapgtsprd, Service group concurrency violation
Event Time: Wed Dec 3 06:46:54 EST 2014
Entity Name: sapgtsprd
Entity Type: Service Group
Entity Subtype: Failover
Entity State: Service group concurrency violation
Traps Origin: Veritas_Cluster_Server
System Name: mapibm625
Entities Container Name: GTS_Prod
Entities Container Type: VCS

The relevant engineA.log entries are:

2014/12/03 06:46:54 VCS INFO V-16-1-10299 Resource App_saposcol (Owner: Unspecified, Group: sapgtsprd) is online on mapibm625 (Not initiated by VCS)
2014/12/03 06:46:54 VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group sapgtsprd
2014/12/03 06:46:54 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group sapgtsprd on all nodes
2014/12/03 06:46:55 VCS WARNING V-16-6-15034 (mapibm625) violation:-Offlining group sapgtsprd on system mapibm625
2014/12/03 06:46:55 VCS INFO V-16-1-50135 User root fired command: hagrp -offline sapgtsprd mapibm625 from localhost
2014/12/03 06:46:55 VCS NOTICE V-16-1-10167 Initiating manual offline of group sapgtsprd on system mapibm625

What is a concurrency violation in VCS? What steps should we take to resolve it? Kindly explain in detail.

Thanks,
Allaboutunix
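The log above already shows the trigger: App_saposcol came online on mapibm625 outside of VCS control, so a failover group ended up online on more than one node at once, and the violation trigger offlined it. A hedged sketch of the usual checks and cleanup follows; group, resource, and system names are taken from the log, but the right fix depends on how the application was started outside VCS:

    # see on which systems the group and its resources are reported online
    hagrp -state sapgtsprd
    hastatus -sum
    # if the group is still online on an extra node, offline that instance
    # (VCS did this automatically in the log above)
    hagrp -offline sapgtsprd -sys mapibm625
    # make sure the application is not started manually or by OS startup
    # scripts outside VCS, and re-probe the resource if its state looks stale
    hares -probe App_saposcol -sys mapibm625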
VxVM 4.1 SAN migrate at host level: mirrored volumes are associated with DRL

Hi,

I have done host-level SAN migrations with VxVM before, but without DRL and subvolume setups; those volumes were simply mirrored between two arrays at the host level. Now I need to migrate 100+ volumes to new arrays from a different vendor. Some volumes have DRL (log-only) plexes and a few use subvolumes. Unfortunately there is no documentation on how these DRL (log-only) plexes and subvolumes can be migrated, so I am looking for some help.

The cluster is:
- Solaris 10
- Solaris Cluster 3.1 two-node campus cluster
- VxVM 4.1 with CVM
- 2 x NetApp storage arrays, mirrored at host level using VxVM

Migration plan and questions:
- 2 x new Fujitsu arrays have been installed, LUNs created and mapped to both hosts as per the existing NetApp storage
- All new LUNs are detected correctly under the OS and VxVM
- I have 4-way mirroring at the host level using VxVM for some of the volumes. How can I proceed for the DRL (log-only) plexes and the subvolume-based volumes?

Sample output:

v  lunx3      -          ENABLED  ACTIVE   2097152  SELECT   -     fsgen
pl lunx3-04   lunx3      ENABLED  ACTIVE   2097152  CONCAT   -     RW
sv lunx3-S01  lunx3-04   lunx3-L01 1       2097152  0        3/3   ENA   => subvolume
v  lunx3-L01  -          ENABLED  ACTIVE   2097152  SELECT   -     fsgen
pl lunx3-P01  lunx3-L01  ENABLED  ACTIVE   LOGONLY  CONCAT   -     RW    => LOGONLY plex
sd netapp2_datavol-64 lunx3-P01 netapp2_datavol 157523968 528 LOG FAS60400_0 ENA
pl lunx3-P02  lunx3-L01  ENABLED  ACTIVE   2097152  CONCAT   -     RW
sd netapp2_datavol-65 lunx3-P02 netapp2_datavol 157524496 2097152 0 FAS60400_0 ENA
pl lunx3-P03  lunx3-L01  ENABLED  ACTIVE   2097152  CONCAT   -     RW
sd netapp1_datavol-63 lunx3-P03 netapp1_datavol 157523968 2097152 0 FAS60401_5 ENA

v  lunx4      -          ENABLED  ACTIVE   2097152  SELECT   -     fsgen
pl lunx4-04   lunx4      ENABLED  ACTIVE   2097152  CONCAT   -     RW
sv lunx4-S01  lunx4-04   lunx4-L01 1       2097152  0        3/3   ENA   => subvolume
v  lunx4-L01  -          ENABLED  ACTIVE   2097152  SELECT   -     fsgen
pl lunx4-P01  lunx4-L01  ENABLED  ACTIVE   LOGONLY  CONCAT   -     RW    => LOGONLY plex
sd netapp1_datavol-64 lunx4-P01 netapp1_datavol 161718272 528 LOG FAS60401_5 ENA
pl lunx4-P02  lunx4-L01  ENABLED  ACTIVE   2097152  CONCAT   -     RW
sd netapp1_datavol-65 lunx4-P02 netapp1_datavol 159621120 2097152 0 FAS60401_5 ENA
pl lunx4-P03  lunx4-L01  ENABLED  ACTIVE   2097152  CONCAT   -     RW
sd netapp2_datavol-66 lunx4-P03 netapp2_datavol 159622176 2097152 0 FAS60400_0 ENA

Thanks in advance,
Ramesh Sundaram
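Not the thread's accepted answer, but a hedged sketch of the usual mirror-based approach: add a data plex and a log plex on the new array's disks, wait for the resync, then remove the plexes on the old array. For the layered layout shown above, the mirror and addlog operations generally apply to the lower-level volume (for example lunx3-L01) rather than the top-level volume; the disk group and target disk names below are placeholders:

    # mirror the lower-level volume onto a disk from the new array
    vxassist -g datadg mirror lunx3-L01 fujitsu_disk01
    # add a DRL log plex on the new array as well
    vxassist -g datadg addlog lunx3-L01 logtype=drl fujitsu_disk01
    # monitor the resynchronization
    vxtask list
    # once the new plexes are ENABLED/ACTIVE, dissociate and remove the
    # plexes that live on the old NetApp disks
    vxplex -g datadg -o rm dis lunx3-P03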
SFHA Solutions 6.2: Configuring secure shell or remote shell communication between nodes when installing Symantec products

To install and configure Symantec software, you need to establish secure shell (ssh) or remote shell (rsh) communication with superuser privileges between the nodes where the installer is running and the target nodes. You can install products to remote systems using either ssh or rsh. Symantec recommends that you use ssh, as it is more secure than rsh.

You can set up ssh and rsh connections in the following ways:

You can use UNIX shell commands to manually set up the connection. Using this method, you can log into and execute commands on a remote system. You first create a Digital Signature Algorithm (DSA) key pair, and then append the public key from the source system to the authorized keys file on the target systems.

You can run the installer directly to set up the ssh/rsh connection interactively during the install procedure.

You can run the installer -comsetup command. Using this method, you can interactively set up the ssh and rsh connections using the installer -comsetup command.

You can run the pwdutil.pl password utility. If you want to run the installer with a response file from your own scripts, the ssh connection should be set up before running the installer. The password utility, pwdutil.pl, is bundled in the Symantec Storage Foundation and High Availability (SFHA) Solutions 6.2 release under the scripts directory. You can run the utility in your script to set up the ssh and rsh connections automatically.

Both the script-based and web-based installers support establishing passwordless communication.

For more information about configuring secure shell or remote shell communication between nodes, see:
Manually configuring passwordless ssh
Setting up ssh and rsh connection using the installer -comsetup command
Setting up ssh and rsh connection using the pwdutil.pl utility

SFHA documentation for other releases and platforms can be found on the SORT website.
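As a minimal sketch of the manual method (the full procedure is in "Manually configuring passwordless ssh"; the target host name below is a placeholder):

    # on the source (installer) node: generate a DSA key pair
    ssh-keygen -t dsa -f ~/.ssh/id_dsa
    # copy the public key to each target node and append it to the
    # root user's authorized_keys file
    scp ~/.ssh/id_dsa.pub root@target_node:/tmp/id_dsa.pub
    ssh root@target_node "mkdir -p ~/.ssh && cat /tmp/id_dsa.pub >> ~/.ssh/authorized_keys"
    # verify that root can now run a command without a password prompt
    ssh root@target_node uname -a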
Unmirror Second disk (sdb)

Hi,

I'm trying to unmirror my second disk (sdb) in the rootdg but was unable to do it correctly. Any help would be appreciated. The Veritas version is 4.1 and the system is installed with the following RPMs on RHEL 5.0:

VRTSvxvm-platform-4.1.40.00-MP4_RHEL5
VRTSvxfs-platform-4.1.40.00-MP4_RHEL5
VRTSvxfen-4.1.40.00-MP4_RHEL5
VRTSvxvm-common-4.1.40.00-MP4_RHEL5
VRTSvxfs-common-4.1.40.00-MP4_RHEL5

# vxprint -hvt -g rootdg
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO

v  dump         -            DISABLED ACTIVE   37748736 SELECT    -        fsgen
v  oradump      -            DISABLED ACTIVE   4194304  SELECT    -        fsgen
v  oravol       -            DISABLED ACTIVE   20971520 SELECT    -        fsgen
v  rootvol      -            ENABLED  ACTIVE   41945652 ROUND     -        root
pl rootvol-01   rootvol      ENABLED  ACTIVE   41945652 CONCAT    -        RW
sd sda-02       rootvol-01   sda      0        41945652 0         sda      ENA
v  swapvol      -            ENABLED  ACTIVE   16384000 ROUND     -        swap
pl mirswapvol-01 swapvol     DISABLED STALE    0        CONCAT    -        RW
pl swapvol-01   swapvol      ENABLED  ACTIVE   16384000 CONCAT    -        RW
sd sda-01       swapvol-01   sda      41945652 4192965  0         sda      ENA
sd sda-04       swapvol-01   sda      67110137 12191035 4192965   sda      ENA

When I attempted to dissociate the mirswapvol-01 plex, I got the following error:

# vxplex -g rootdg dis mirswapvol-01
VxVM vxplex ERROR V-5-1-621 Enabled Volume swapvol would be left with no active restricted plexes

When I try to re-mirror the disk, it says sdb is already in the rootdg:

# vxrootmir -t rootmirror sdb rootmir
sda is your root disk AND sdb is the dmp disk on which root disk is to be mirrored
Is this correct ? [y,n,q,?] (default: n) y
VxVM vxrootmir ERROR V-5-2-2238 Disk sdb is already being used as auto:sliced disk - in disk group -; Disk cannot be used for mirroring root.
VxVM vxrootmir INFO V-5-2-0 To use this disk for root disk mirroring, remove it from the VxVM disk group using the 'vxdg rmdisk' command and/or remove it from the VxVM configuration with the 'vxdiskunsetup' utility

If I try to remove sdb from rootdg, it says sdb is not in the configuration:

# vxdg -g rootdg rmdisk sdb
VxVM vxdg ERROR V-5-1-555 Disk sdb not found in configuration
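The thread's accepted answer is not shown here, but a hedged sketch of the usual cleanup follows. Two details in the output matter: mirswapvol-01 is a DISABLED/STALE zero-length plex, so it can typically be dissociated and removed in one step, and the vxdg rmdisk error suggests the disk media name in rootdg is not literally "sdb", so it has to be looked up first. The media name used below is a placeholder:

    # dissociate and remove the stale mirror plex in one operation
    vxplex -g rootdg -o rm dis mirswapvol-01
    # find the disk media name that rootdg actually uses for the sdb device
    vxdisk -o alldgs list
    vxprint -g rootdg -dt
    # remove the disk from the disk group using its media name, then clear
    # the VxVM private region so the device is free for re-use by vxrootmir
    vxdg -g rootdg rmdisk rootmir01        # placeholder media name
    /etc/vx/bin/vxdiskunsetup sdb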