Veritas InfoScale 7.0: Licensing Veritas InfoScale 7.0
You require a license to install and use Veritas InfoScale products. There are two ways you can register the Veritas InfoScale product license keys:

Use key-based licensing
When you purchase a Veritas InfoScale product, you receive a License Key certificate. The certificate specifies the product keys and the number of product licenses purchased.

Use keyless licensing
The license is registered based on the product you install. Within 60 days of choosing this option, you must install a valid license key corresponding to the license level, or continue with keyless licensing by managing the systems with Veritas InfoScale Operations Manager.

You can register the product license keys either manually or by using the installer. You can use the vxlicinstupgrade utility if you want to:

Upgrade to another Veritas InfoScale product
Upgrade a temporary license to a permanent license
Manage co-existence of multiple licenses

For more information on licensing Veritas InfoScale products, see:

About Veritas InfoScale product licensing
Registering Veritas InfoScale using product license keys
Registering Veritas InfoScale product using keyless licensing
Updating your product licenses
Using the vxlicinstupgrade utility

Veritas InfoScale documentation can be found on the SORT website.
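As a minimal command-line sketch of the key-based path and the vxlicinstupgrade utility mentioned above, the steps below report, register, and upgrade a license; the key strings are placeholders, and the exact options should be verified against the vxlicinst and vxlicinstupgrade manual pages for your release.

# Report the licenses currently installed on this host.
vxlicrep

# Register a newly purchased license key (placeholder key shown).
vxlicinst -k XXXX-XXXX-XXXX-XXXX-XXXX-XXXX

# Move to another InfoScale product level or replace a temporary key with a
# permanent one; vxlicinstupgrade also handles co-existence of multiple licenses.
vxlicinstupgrade -k XXXX-XXXX-XXXX-XXXX-XXXX-XXXX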
SFHA 6.0.2: Configuring VCS on Linux guests in a VMware environment

To ensure failover of critical applications running inside virtual machines in a VMware environment, you must install Veritas Cluster Server (VCS) 6.0.2 or later on the virtual machines (guests). VCS is distributed as part of the Storage Foundation and High Availability (SFHA) stack of products.

Prior to the 6.0.2 release, if you installed VCS in a VMware virtual environment, application failover could not co-exist with VMware vMotion. The limitation existed because application failover requires shared storage between failover systems, while vMotion does not support shared storage. VCS 6.0.2 introduces the VCS VMwareDisks agent, which works around this limitation. The agent performs a storage disk attach-detach operation as required to support application failover on VCS cluster nodes. For more information on the VCS VMwareDisks agent, see the Storage agents section of the VCS Bundled Agents Reference Guide.

Another important feature of VCS 6.0.2 for the VMware environment is that you can perform several VCS installation, configuration, and administrative tasks directly from your vSphere Client. Before you can perform the tasks from the vSphere Client, you must install the Symantec High Availability Console on an independent Windows system in your datacenter. When installed, the Symantec High Availability Console registers a plug-in with the vCenter Server. As a result, the Symantec High Availability tab appears in the vSphere Client Inventory view. You can perform most VCS tasks from the tab.

To learn how to install the Symantec High Availability Console, see:
Installing the Symantec High Availability Console

For more information on the Symantec High Availability Console, see:
Symantec High Availability Console Installation Guide
Symantec High Availability Console Readme

You can perform the following tasks from the vSphere Client Symantec High Availability tab:
Install VCS guest components
Configure high availability of VCS service groups for Oracle database service groups using the Symantec High Availability wizard
Configure high availability of VCS service groups for SAP Application Server using the Symantec High Availability wizard
Configure high availability of VCS service groups for WebSphere MQ using the Symantec High Availability wizard
Configure high availability of generic applications using the Symantec High Availability wizard
Administer application availability

The complete VCS 6.0.2 documentation set is available here. VCS documentation for other releases and platforms can be found on the SORT website.
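As a rough sketch of how the VMwareDisks agent described above might be added to a service group from the VCS command line: the service group name, resource name, and disk path below are placeholders, and the attribute names and value formats should be confirmed against the VCS Bundled Agents Reference Guide for your release before use.

# Add a VMwareDisks resource to an existing service group (names are placeholders).
hares -add app_vmdk_res VMwareDisks app_sg

# DiskPaths holds the VMDK path(s) the agent attaches and detaches during
# failover; the value is illustrative only. ESXDetails (the ESX/vCenter host
# and credentials) must also be set as documented in the agent reference.
hares -modify app_vmdk_res DiskPaths "[datastore1] guest1/guest1_1.vmdk"
hares -modify app_vmdk_res Enabled 1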
SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products

Group Membership and Atomic Broadcast (GAB) is a kernel component of Veritas Cluster Server (VCS) that provides globally-ordered messages that keep nodes synchronized. GAB maintains the cluster state information and the correct membership on the cluster. However, GAB needs another kernel component, Low Latency Transport (LLT), to send messages to the nodes and keep the cluster nodes connected.

How GAB and LLT function together in a VCS cluster

VCS uses GAB and LLT to share data among nodes over private networks. LLT is the transport protocol responsible for fast kernel-to-kernel communications. GAB carries the state of the cluster and the cluster configuration to all the nodes in the cluster. These components provide the performance and reliability that VCS requires. In a cluster, nodes must share the groups, resources, and the resource states. LLT and GAB help the nodes communicate.

For information on LLT, GAB, and private networks, see:
About LLT and GAB
About network channels for heartbeating

GAB seeding

The GAB seeding function ensures that a new cluster starts with an accurate membership count of the number of nodes in the cluster. It prevents your cluster from a preexisting network partition upon initial start-up. A preexisting network partition refers to a failure in the communication channels that occurs while the nodes are down and VCS cannot respond. When the nodes start, GAB seeding reduces the vulnerability to network partitioning, regardless of the cause of the failure. GAB services are used by all Veritas Storage Foundation and High Availability (SFHA) products.

For information about preexisting network partitions, and how seeding functions in VCS, see:
About preexisting network partitions
About VCS seeding

Enabling automatic seeding of GAB

If I/O fencing is configured in the enabled mode, you can edit the /etc/vxfenmode file to enable automatic seeding of GAB. If the cluster is stuck with a preexisting split-brain condition, I/O fencing allows automatic seeding of GAB. You can set the minimum number of nodes to form a cluster for GAB to seed by configuring the control port seed and quorum flag parameters in the /etc/gabtab file. Quorum is the number of nodes that need to join a cluster for GAB to complete seeding.
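As an illustration of the seeding controls described above, here is a minimal sketch for a hypothetical two-node cluster: /etc/gabtab typically contains a single gabconfig line, and membership can be checked or manually seeded from the command line. Manual seeding should only be used when you are certain the peer nodes are actually down.

# A typical /etc/gabtab entry for a two-node cluster: GAB seeds only
# after 2 nodes (the quorum) have joined.
/sbin/gabconfig -c -n2

# Check the current GAB port membership on a running node.
gabconfig -a

# Manually seed the cluster (override the node count) if the peer nodes
# are known to be down and will not be joining.
gabconfig -x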
For information on configuring the autoseed_gab_timeout parameter in the /etc/vxfenmode file, see:
About I/O fencing configuration files

For information on configuring the control port seed and the quorum flag parameters in GAB, see:
About GAB run-time or dynamic tunable parameters

For information on split-brain conditions, see:
About the Steward process: Split-brain in two-cluster global clusters
How I/O fencing works in different event scenarios
Example of a preexisting network partition (split-brain)

Role of GAB seeding in cluster membership

For information on how the nodes gain cluster membership, seeding a cluster, and manual seeding of a cluster, see:
About cluster membership
Initial joining of systems to cluster membership
Seeding a new cluster
Seeding a cluster using the GAB auto-seed parameter through I/O fencing
Manual seeding of a cluster

Troubleshooting issues that are related to GAB seeding and preexisting network partitions

For information on the issues that you may encounter when GAB seeds a cluster and on preexisting network partitions, see:
Examining GAB seed membership
Manual GAB membership seeding
Waiting for cluster membership after VCS start-up
Summary of best practices for cluster communications
System panics to prevent potential data corruption
Fencing startup reports preexisting split-brain
Clearing preexisting split-brain condition
Recovering from a preexisting network partition (split-brain)
Example Scenario I – Recovering from a preexisting network partition
Example Scenario II – Recovering from a preexisting network partition
Example Scenario III – Recovering from a preexisting network partition

gabconfig (1M) 6.0.1 manual pages:
AIX
Solaris

For more information on seeding clusters to prevent preexisting network partitions, see:
Veritas Cluster Server Administrator's Guide
Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.
Veritas InfoScale Enterprise 7.1: Block-level encryption of VxVM data volumes

VxVM provides advanced security for data at rest through encryption of VxVM data volumes. Encryption is a technology that converts data or information into code that can be decrypted only by authorized users. You can encrypt VxVM data volumes to:

• Protect sensitive data from unauthorized access
• Retire disks from use or ship them for replacement without the overhead of secure wiping of content

The implementation uses the Advanced Encryption Standard (AES) cryptographic algorithm with a 256-bit key size, validated by the Federal Information Processing Standard (FIPS) Publication 140-2 (FIPS PUB 140-2) security standard. You can encrypt volumes or disk groups in your storage environment.

VxVM generates a volume encryption key at the time of volume creation. The volume encryption key is secured (wrapped) using a key wrap. The wrapped key is stored with the volume record. The volume encryption key is not stored on disk. You can secure the volume encryption key using one of the following methods:

Using passphrases (PBE)
Using a Key Management Server for encryption

If you encrypt a disk group, all volumes in the disk group are encrypted. Any volume created later on the disk group will also be encrypted by default. Only new volumes that are created using disk group version 220 or later can be encrypted by VxVM. When you start an encrypted volume, VxVM uses the key wrap to retrieve the volume encryption key and enable access to the volume.

Some of the administrative tasks you can perform are as follows:
Creating encrypted volumes
Viewing encrypted volumes
Automating startup for encrypted volumes
Configuring a Key Management Server

Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
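A minimal sketch of creating and checking an encrypted volume might look like the following. The disk group and volume names are placeholders, and the encrypted=on option should be verified against the InfoScale 7.1 Administrator's Guide for your platform before use.

# Create a 10 GB volume with block-level encryption enabled
# (prompts for a passphrase when PBE is used; names are placeholders).
vxassist -g datadg make encvol 10g encrypted=on

# List the volumes in the disk group and confirm the new volume is present.
vxprint -g datadg -v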
InfoScale 7.0 for UNIX: Overview of product family and licensing

From release 7.0 onwards, the Storage Foundation and High Availability product suite is rebranded as InfoScale. With InfoScale 7.0, you choose from a simplified product suite comprising four products: InfoScale Enterprise, InfoScale Availability, InfoScale Storage, and InfoScale Foundation. With the simplification of products, there are only four licenses to choose from, one for each product. Each InfoScale product addresses use cases from the storage and/or availability domains. The following mapping lists the products and their components to give you an overall view of the capabilities of each product.

InfoScale Family - product and component mapping

InfoScale Foundation
  Components: Storage Foundation
  Value proposition: Basic offering targeting storage management

InfoScale Storage
  Components: Storage Foundation, SFCFS
  Value proposition: Comprehensive storage management

InfoScale Availability
  Components: Cluster Server (HA/DR)
  Value proposition: Application availability and disaster recovery

InfoScale Enterprise
  Components: Storage Foundation, SFHA, SFCFSHA, SFRAC, SFSYBASECE, Cluster Server (HA/DR)
  Value proposition: Application availability in addition to comprehensive storage management

During product license registration, the license program registers all the components that are bundled for a product. For example, when you register the license for InfoScale Storage, the license program registers licenses for the Storage Foundation and Storage Foundation Cluster File System components. Once you install the product, you can choose which components to configure based on your business goals and workload requirements.

The mode of purchasing a license has not changed. While installing the product, you can either enter the license key and install the product, or opt for keyless licensing, in which case you select a product level to license a particular product. Note that with keyless licensing, you have a period of 60 days to purchase the license. With keyless licensing, you must also register the license in the Veritas InfoScale Operations Manager (VIOM) management tool.

Ways to license an InfoScale product

You can register a license through the installer script, through manual installation, or during an upgrade to an InfoScale product:
The installer script prompts you to enter the license during the installation process.
With the manual license registration procedure, you run the vxlicinst command to enter the license key and register the product.
When you upgrade to an InfoScale product, you run the vxlicinstupgrade command to enter the license key and register the product. The vxlicinstupgrade command also allows an upgrade from a temporary license to a permanent license, and manages co-existence of InfoScale products. For more details, see the vxlicinstupgrade binary.

For more information on the product overview and licensing, refer to the following sections:
About Veritas InfoScale product licensing
Registering Veritas InfoScale using product license keys
Registering Veritas InfoScale product using keyless licensing
Updating your product licenses
Using the vxlicinstupgrade utility
About the VRTSvlic RPM

The links in this article are specific to the Linux platform. Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
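As a sketch of the keyless path described above, the vxkeyless utility can display and set the product level after installation. The ENTERPRISE level shown here is an assumption; the levels valid for your release are listed in the licensing documentation.

# List the keyless product levels currently set on this host.
vxkeyless display

# Set a keyless product level (level name is an assumption). The host must
# then be managed by Veritas InfoScale Operations Manager within 60 days.
vxkeyless set ENTERPRISE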
Veritas Resiliency Platform 1.1 key components

Veritas Resiliency Platform brings separate data centers together in a resiliency domain for managing and monitoring workload automation and disaster recovery (DR). Resiliency Platform provides two types of servers, which are deployed as virtual appliances on the data centers.

Resiliency Manager

The main interface for managing the resiliency domain. A Resiliency Manager is deployed at each data center. After you complete a simple configuration on the virtual appliance, you do all further configuration and operations from an easy-to-use browser-based console. Since the built-in replication between Resiliency Managers keeps the data at each Resiliency Manager synchronized, it does not matter which Resiliency Manager you connect to in the resiliency domain - you see the same information on the browser.

Infrastructure Management Server (IMS)

The server that collects data from the customer assets at the data center. The IMS orchestrates with the APIs of the various platforms that Resiliency Platform integrates with, for example virtualization platforms, such as VMware vSphere and Microsoft Hyper-V, as well as array-based replication and replication appliances. Once you have added the assets that you want to monitor and protect to the IMS, the IMS continues to automatically discover their status.

Resiliency Platform release 1.1 adds support for using the Veritas InfoScale Operations Manager server for discovery and management of Veritas InfoScale applications. See: Veritas Resiliency Platform support for InfoScale applications.

You can find other versions of Veritas Resiliency Platform documentation on the SORT documentation page.
Adding Shared Storage - Possible Inconsistency?

Hi Folks,

Can you clarify my understanding? I have added an extra disk into a server; however, when I view it from my little expensive server02 I get a different view. See below:

DEVICE       TYPE            DISK          GROUP        STATUS
disk_0       auto:none       -             -            online invalid
fas31400_1   auto:cdsdisk    netbackup02   netbackup    online thinrclm nohotuse
fas31400_2   auto:cdsdisk    netbackup01   netbackup    online thinrclm nohotuse
fas31400_3   auto:cdsdisk    netbackup03   netbackup    online thinrclm
fas31400_4   auto            -             -            error
[root@server01 ~]#

DEVICE       TYPE            DISK          GROUP        STATUS
disk_0       auto:none       -             -            online invalid
fas31400_1   auto:cdsdisk    -             (netbackup)  online thinrclm
fas31400_2   auto:cdsdisk    -             (netbackup)  online thinrclm
fas31400_3   auto            -             -            error
fas31400_4   auto:cdsdisk    -             (netbackup)  online thinrclm
[root@server02 ~]#

group           resource                  system       message
--------------- ------------------------- ------------ --------------------
                                          server01     RUNNING
                                          server02     RUNNING
nbu                                       server01     ONLINE
nbu                                       server02     OFFLINE
-------------------------------------------------------------------------
                nbu_dg                    server01     ONLINE
                nbu_dg                    server02     OFFLINE
                nbu_ip                    server01     ONLINE
                nbu_ip                    server02     OFFLINE
                nbu_mount                 server01     ONLINE
-------------------------------------------------------------------------
                nbu_mount                 server02     OFFLINE
                nbu_server                server01     ONLINE
                nbu_server                server02     OFFLINE
                nbu_CISN-STOR-UNIX_proxy  server01     ONLINE
                nbu_CISN-STOR-UNIX_proxy  server02     ONLINE
-------------------------------------------------------------------------
                nbu_bond0_proxy           server01     ONLINE
                nbu_bond0_proxy           server02     ONLINE
                nbu_ie1csnap002_proxy     server01     ONLINE
                nbu_ie1csnap002_proxy     server02     ONLINE
                nbu_vol                   server01     ONLINE
-------------------------------------------------------------------------
                nbu_vol                   server02     OFFLINE

I don't understand why I am seeing different views of the storage, unless server02 is looking at it down a different SCSI bus? Any help appreciated.

--Steve
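A minimal sketch of the kind of refresh one might run on server02 in a situation like this, assuming the new LUN is actually zoned and presented to both hosts:

# On server02, rescan for newly presented devices and refresh
# VxVM's device discovery layer.
vxdisk scandisks
vxdctl enable

# Re-check the device list afterwards, including deported disk groups.
vxdisk -o alldgs list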
CFS NFS configuration

Hello team,

I'm trying to configure a CFS NFS HA service, so I've executed this command on the cluster:

cfsshare config -p nfs apbasedg locks /locks

and then configured all the shares. The cfsnfs_locks volume was already created and mounted on the VCS cluster, but now it is not able to mount it anymore:

# cfsshare display
  VCS WARNING V-16-1-10260 Resource does not exist: cfsnfs_locks
  CNFS metadata filesystem :
  Protocols Configured : none
  #RESOURCE                         MOUNTPOINT           PROTOCOL  OPTIONS
  scs-ESP-nfsshare_ESPgoldclient    /ESP/goldclient      NFS
  scs-ESP-nfsshare_ESPusrsaptrans   /ESP/usr/sap/trans   NFS
  scs-ESP-nfsshare_sapmntESP        /sapmnt/ESP          NFS
  scs-ESP-nfsshare_userdataESP      /userdata/ESP        NFS

# cfsmntadm display
  List of mount points registered with cluster-configuration but not associated with any node:
  /locks

# vxdisk -g apbasedg list
DEVICE       TYPE            DISK         GROUP       STATUS
vmdk0_10     auto:sliced     apbasedg_0   apbasedg    online shared

This is the disk group where the cfsnfs_locks volume is available:

v  cnfs_locks      fsgen        ENABLED  2097152  -  ACTIVE  -  -
pl cnfs_locks-01   cnfs_locks   ENABLED  2097152  -  ACTIVE  -  -

but I'm not able to mount the volume:

# cfsmount /locks
  Error: V-35-33: cfsmount: Cluster mount /locks not associated with any node

It worked at first, but now it no longer does. Can you please help?

Thanks
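A minimal sketch of one way to re-associate the lock mount with the cluster nodes, assuming the cnfs_locks volume in apbasedg is healthy; the exact cfsmntadm syntax, and whether the CNFS configuration should instead be reset with cfsshare, should be checked against the cfsmntadm and cfsshare manual pages for your release.

# Check the CFS cluster and mount status on each node.
cfscluster status

# Register the /locks mount point with all cluster nodes
# (disk group and volume names taken from the output above).
cfsmntadm add apbasedg cnfs_locks /locks all=

# Then retry the cluster mount.
cfsmount /locks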
SFHA Solutions 6.2: Installing Storage Foundation and High Availability (SFHA) using Red Hat Satellite server

Beginning in this release, you can install Storage Foundation and High Availability (SFHA) on your system with Red Hat Satellite server 5.6. Red Hat Satellite server installation is supported for the RHEL6 and RHEL7 operating systems.

Red Hat Satellite server is a systems management solution. You can manage the system by creating a channel, which is a collection of software RPMs. Using the channel, you can segregate the RPMs by defining some rules. For example, a channel may contain RPMs only from a specific Red Hat distribution, or a channel may contain SFHA RPMs for custom usage in your organization's network. You can install RPMs and rolling patches on the systems that the Red Hat Satellite server manages.

You can use the Red Hat Satellite server to:
Inventory the hardware and the software information of your systems.
Install and update software on systems.
Collect and distribute custom software RPMs into manageable groups.
Provision (Kickstart) systems.
Manage and deploy configuration files to systems.
Monitor your systems.
Provision virtual guests.
Start, stop, and configure virtual guests.

For more information on the Red Hat Satellite server, see:
Installing SFHA using the Red Hat Satellite server
Using Red Hat Satellite server to install SFHA products

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
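As a hedged sketch of what consuming such a channel might look like from a managed RHEL client, the following registers the system against a Satellite 5.x server with an activation key and then installs packages from the custom SFHA channel. The server URL, activation key, and package set are placeholders; the actual RPM list for your SFHA version is documented in the installation guide.

# Register this RHEL system with the Red Hat Satellite server
# (URL and activation key are placeholders).
rhnreg_ks --serverUrl=https://satellite.example.com/XMLRPC \
          --activationkey=1-sfha-custom-channel

# Install SFHA RPMs published to the custom channel
# (package names shown are illustrative).
yum install VRTSvlic VRTSvxvm VRTSvxfs VRTSvcs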