SmartIO blueprint and deployment guide for the Solaris platform
SmartIO for Solaris was introduced in Storage Foundation HA 6.2. SmartIO enables data efficiency on your SSDs through I/O caching, so you can optimize the cost per Input/Output Operations Per Second (IOPS). SmartIO supports both read and write-back caching for VxFS file systems that are mounted on VxVM volumes, in multiple caching modes and configurations. SmartIO also supports block-level read caching for applications running on VxVM volumes.

The SmartIO Blueprint for Solaris gives an overview of the benefits of using SmartIO technology, the underlying technology, and the essential steps to configure it (a short command-line sketch follows at the end of this post). The SmartIO Deployment Guide for Solaris covers multiple SmartIO deployment scenarios, and how to manage them, in detail.

Let us know if you have any questions or feedback!
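For a flavor of the configuration steps the blueprint walks through, here is a minimal command-line sketch. It assumes an SSD already initialized under VxVM with the hypothetical device name ssd0_1, and a hypothetical VxFS file system on volume datavol in disk group datadg; see the blueprint and deployment guide for the authoritative procedure.

    # Create a VxFS cache area on the SSD (device name is hypothetical)
    sfcache create -t VxFS ssd0_1

    # Mount the file system with write-back caching enabled
    mount -F vxfs -o smartiomode=writeback /dev/vx/dsk/datadg/datavol /data1

    # List cache areas and verify the cache is online
    sfcache list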
Symantec ApplicationHA 6.2: Monitoring applications with Intelligent Monitoring Framework

Introduced in this release, the Intelligent Monitoring Framework (IMF) feature improves ApplicationHA efficiency with:

Faster detection of application faults
Ability to monitor a large number of application components, with minimal effect on performance

IMF is automatically enabled if you use the Symantec High Availability Wizard to configure an application for monitoring. The feature was introduced in ApplicationHA 6.1 for Windows. In ApplicationHA 6.2, it is extended to AIX, Linux, and Solaris.

For details, see the following topics:

How intelligent monitoring works: AIX, Linux (KVM), Linux (VMware), and Solaris.
Enabling debug logs for IMF: AIX, Linux (KVM), Linux (VMware), and Solaris.
Gathering IMF information for support analysis: AIX, Linux (KVM), Linux (VMware), and Solaris.

This release introduces IMF support for the following ApplicationHA agents:

Apache HTTP Server
DB2 Database (not applicable to the Oracle VM Server for SPARC environment)
Oracle Database
Generic (custom) applications

The following topics describe how to use the Symantec High Availability wizard to configure each supported application for IMF-enabled monitoring:

Configuring application monitoring for Apache: AIX, Linux (KVM), Linux (VMware), and Solaris.
Configuring application monitoring for DB2: AIX, Linux (KVM), and Linux (VMware).
Configuring application monitoring for Oracle: AIX, Linux (KVM), Linux (VMware), and Solaris.
Configuring application monitoring for generic applications: AIX, Linux (KVM), Linux (VMware), and Solaris.

You can use Symantec Cluster Server (VCS) commands to perform more advanced IMF actions (an example follows at the end of this article). ApplicationHA and VCS documentation is available on the SORT website.
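As an example of such an advanced action, the sketch below enables IMF on the Apache agent type with the standard hatype command. The IMF keys shown (Mode 3 registers both online and offline monitoring; MonitorFreq is counted in monitor cycles) follow the usual VCS IMF attribute convention, but verify the supported values for your agent and release against the agent documentation.

    # Make the cluster configuration writable
    haconf -makerw

    # Enable intelligent monitoring for the Apache agent type
    hatype -modify Apache IMF -update Mode 3 MonitorFreq 5 RegisterRetryLimit 3

    # Save the configuration and make it read-only again
    haconf -dump -makero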
SFHA Solutions 6.1: Using AdaptiveHA to select the largest system for failover

Symantec Cluster Server (VCS) service groups are virtual containers that manage groups of resources required to run a managed application. The FailOverPolicy service group attribute governs how VCS determines the target system for failover.

For more information, see:

About service groups
Service group attributes
Cluster attributes
About defining failover policies

When you set FailOverPolicy to BiggestAvailable, AdaptiveHA enables VCS to dynamically select the cluster node with the most available resources to fail over an application. VCS monitors and forecasts the unused capacity of systems in terms of CPU, Memory, and Swap to select the largest available system. If you set FailOverPolicy to BiggestAvailable for a service group, you must specify the load values, in units such as 1 CPU, 1 GB RAM, and 1 GB swap, in the Load service group attribute. You only need to specify the resources that the service group actually uses. For example, if the service group does not use swap, specify only the CPU and Memory resources in the Load attribute (see the main.cf sketch at the end of this article).

Note: The Load FailOverPolicy is being deprecated after this release. Symantec recommends that you change to the BiggestAvailable FailOverPolicy to enable AdaptiveHA.

For more information, see:

About AdaptiveHA
Enabling AdaptiveHA for a service group

If you upgrade VCS manually, ensure that you update the VCS configuration file (main.cf) to enable AdaptiveHA. When you upgrade from an older version of VCS using the installer, the main.cf file is upgraded automatically.

For more information, see Manually upgrading the VCS configuration file to the latest version.

VCS documentation for other platforms and releases can be found on the SORT website.
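To make the FailOverPolicy and Load settings above concrete, here is a minimal main.cf sketch; the group name, system names, and load values are hypothetical, and Mem is assumed to be expressed in MB:

    group appsg (
        SystemList = { sys1 = 0, sys2 = 1 }
        FailOverPolicy = BiggestAvailable
        Load = { CPU = 2, Mem = 2048 }
        )

Because this group declares no Swap entry, VCS considers only CPU and memory when it forecasts the biggest available system.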
SFHA Solutions 6.0.1: About GAB seeding and its role in VCS and other SFHA products

Group Membership and Atomic Broadcast (GAB) is a kernel component of Veritas Cluster Server (VCS) that provides globally-ordered messages that keep nodes synchronized. GAB maintains the cluster state information and the correct membership on the cluster. However, GAB needs another kernel component, Low Latency Transport (LLT), to send messages to the nodes and keep the cluster nodes connected.

How GAB and LLT function together in a VCS cluster

VCS uses GAB and LLT to share data among nodes over private networks. LLT is the transport protocol responsible for fast kernel-to-kernel communications. GAB carries the state of the cluster and the cluster configuration to all the nodes in the cluster. These components provide the performance and reliability that VCS requires. In a cluster, nodes must share the groups, resources, and resource states; LLT and GAB are what let the nodes communicate.

For information on LLT, GAB, and private networks, see:

About LLT and GAB
About network channels for heartbeating

GAB seeding

The GAB seeding function ensures that a new cluster starts with an accurate membership count of the number of nodes in the cluster. It protects your cluster from a preexisting network partition upon initial start-up. A preexisting network partition refers to a failure in the communication channels that occurs while the nodes are down and VCS cannot respond. When the nodes start, GAB seeding reduces the vulnerability to network partitioning, regardless of the cause of the failure. GAB services are used by all Veritas Storage Foundation and High Availability (SFHA) products.

For information about preexisting network partitions, and how seeding functions in VCS, see:

About preexisting network partitions
About VCS seeding

Enabling automatic seeding of GAB

If I/O fencing is configured in the enabled mode, you can edit the /etc/vxfenmode file to enable automatic seeding of GAB. If the cluster is stuck with a preexisting split-brain condition, I/O fencing allows automatic seeding of GAB. You can set the minimum number of nodes to form a cluster for GAB to seed by configuring the control port seed and quorum flag parameters in the /etc/gabtab file. Quorum is the number of nodes that need to join a cluster for GAB to complete seeding.
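As a sketch of what that looks like in practice, a typical /etc/gabtab contains a single gabconfig line; the node count of 4 is hypothetical and should match your cluster:

    # /etc/gabtab: configure the GAB driver (-c) and seed only
    # after 4 nodes have joined (-n4)
    /sbin/gabconfig -c -n4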
For information on configuring the autoseed_gab_timeout parameter in the /etc/vxfenmode file, see:

About I/O fencing configuration files

For information on configuring the control port seed and the quorum flag parameters in GAB, see:

About GAB run-time or dynamic tunable parameters

For information on split-brain conditions, see:

About the Steward process: Split-brain in two-cluster global clusters
How I/O fencing works in different event scenarios
Example of a preexisting network partition (split-brain)

Role of GAB seeding in cluster membership

For information on how the nodes gain cluster membership, seeding a cluster, and manual seeding of a cluster, see:

About cluster membership
Initial joining of systems to cluster membership
Seeding a new cluster
Seeding a cluster using the GAB auto-seed parameter through I/O fencing
Manual seeding of a cluster

Troubleshooting issues that are related to GAB seeding and preexisting network partitions

For information on the issues that you may encounter when GAB seeds a cluster, and on preexisting network partitions, see:

Examining GAB seed membership
Manual GAB membership seeding
Waiting for cluster membership after VCS start-up
Summary of best practices for cluster communications
System panics to prevent potential data corruption
Fencing startup reports preexisting split-brain
Clearing preexisting split-brain condition
Recovering from a preexisting network partition (split-brain)
Example Scenario I – Recovering from a preexisting network partition
Example Scenario II – Recovering from a preexisting network partition
Example Scenario III – Recovering from a preexisting network partition

gabconfig (1M) 6.0.1 manual pages:

AIX
Solaris

For more information on seeding clusters to prevent preexisting network partitions, see:

Veritas Cluster Server Administrator's Guide
Veritas Cluster Server Installation Guide

Veritas Cluster Server documentation for other releases and platforms can be found on the SORT website.
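As a practical footnote to the membership and manual-seeding topics above, these are the two gabconfig invocations most often involved. Force-seeding a cluster that is actually partitioned risks data corruption, so run -x only after confirming that the remaining nodes are genuinely down rather than unreachable:

    # Examine current GAB port memberships (port a is GAB itself,
    # port h is the VCS engine)
    gabconfig -a

    # Manually seed the cluster, overriding the configured node count
    gabconfig -x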
Adding Shared Storage - Possible Inconsistency?

Hi Folks,

Can you clarify my understanding? I have added an extra disk into a server, however when I view it from my little expensive server02 I get a different view, see below.

From server01:

DEVICE       TYPE           DISK         GROUP      STATUS
disk_0       auto:none      -            -          online invalid
fas31400_1   auto:cdsdisk   netbackup02  netbackup  online thinrclm nohotuse
fas31400_2   auto:cdsdisk   netbackup01  netbackup  online thinrclm nohotuse
fas31400_3   auto:cdsdisk   netbackup03  netbackup  online thinrclm
fas31400_4   auto           -            -          error
[root@server01 ~]#

From server02:

DEVICE       TYPE           DISK   GROUP        STATUS
disk_0       auto:none      -      -            online invalid
fas31400_1   auto:cdsdisk   -      (netbackup)  online thinrclm
fas31400_2   auto:cdsdisk   -      (netbackup)  online thinrclm
fas31400_3   auto           -      -            error
fas31400_4   auto:cdsdisk   -      (netbackup)  online thinrclm
[root@server02 ~]#

Cluster status:

group           resource                  system     message
--------------- ------------------------- ---------- ----------
                                          server01   RUNNING
                                          server02   RUNNING
nbu                                       server01   ONLINE
nbu                                       server02   OFFLINE
-------------------------------------------------------------------------
                nbu_dg                    server01   ONLINE
                nbu_dg                    server02   OFFLINE
                nbu_ip                    server01   ONLINE
                nbu_ip                    server02   OFFLINE
                nbu_mount                 server01   ONLINE
-------------------------------------------------------------------------
                nbu_mount                 server02   OFFLINE
                nbu_server                server01   ONLINE
                nbu_server                server02   OFFLINE
                nbu_CISN-STOR-UNIX_proxy  server01   ONLINE
                nbu_CISN-STOR-UNIX_proxy  server02   ONLINE
-------------------------------------------------------------------------
                nbu_bond0_proxy           server01   ONLINE
                nbu_bond0_proxy           server02   ONLINE
                nbu_ie1csnap002_proxy     server01   ONLINE
                nbu_ie1csnap002_proxy     server02   ONLINE
                nbu_vol                   server01   ONLINE
-------------------------------------------------------------------------
                nbu_vol                   server02   OFFLINE

I don't understand why I am seeing different views of the storage, unless server02 is looking at it down a different SCSI bus? Any help appreciated.

--Steve
SFW 6.1: Support for Cluster Volume Manager (CVM)

Cluster Volume Manager (CVM) is a new feature introduced in Symantec Storage Foundation for Windows (SFW) 6.1. CVM is a new way to manage storage in a clustered environment. With CVM, failover capabilities are available at the volume level. Volumes under CVM allow exclusive write access across multiple nodes of a cluster.

In a Microsoft Failover Clustering environment, you can create clustered storage out of shared disks, which lets you share volume configurations and enables fast failover support at the volume level. Each node recognizes the same logical volume layout and, more importantly, the same state of all volume resources. Each node has the same logical view of the disk configuration as well as any changes to this view.

Note: CVM (and related cluster-shared disk groups) is supported only in a Microsoft Hyper-V environment. It is not supported for a physical environment.

CVM is based on a "master and slave" architecture. One node of the cluster acts as the master, while the rest of the nodes are slaves. The master node maintains the configuration information. The master node uses Group Membership and Atomic Broadcast (GAB) and Low Latency Transport (LLT) to transport its configuration data. Each time a master node fails, a new master node is selected from the surviving nodes.

With CVM, storage services on a per virtual machine (VM) basis for Hyper-V virtual machines protect VM data from single LUN or array failures, helping maintain availability of the critical VM data.

CVM helps you achieve the following:

Live migration of Hyper-V virtual machines, which is supported with:
- Virtual Hard Disks (VHDs) of virtual machines lying on one or more SFW volumes
- Coexistence with Cluster Shared Volumes (CSV)
- Mapping of one cluster-shared volume to one virtual machine only

Seamless migration between arrays:
- Migration of volumes (hosting VHDs) from any array to another array
- Easy administration using the Storage Migration Wizard
- Moving of the selected virtual machines' storage to new target LUNs
- Copying of only those NTFS blocks that contain user data, using SmartMove

Availability of all the volume management functionality

The following are the main features supported in CVM:

New cluster-shared disk group (CSDG) and cluster-shared volumes
Disk group accessibility from multiple nodes in a cluster, where volumes remain exclusively accessible from only one node in the cluster
Failover at a volume level
All the SFW storage management features, such as:
- SmartIO
- Thin provisioning and storage reclamation
- Symantec Dynamic Multi-Pathing for Windows (DMPW)
- Site-aware allocation using the site-aware read policy
- Storage migration
- Standard features for fault tolerance: mirroring across arrays, hot relocation, dirty region logging (DRL), and dynamic relayout
Microsoft Failover Clustering integrated I/O fencing
New Volume Manager Shared Volume resource for Microsoft failover clusters
New GUI elements in VEA related to the new disk group and volume

CVM does not support:

Active/Passive (A/P) arrays
Storage migration on volumes that are offline in the cluster
Volume replication on CVM volumes using Symantec Storage Foundation Volume Replicator

For information about configuring a CVM cluster, refer to the quick start guide at:
www.symantec.com/docs/DOC8119

The Storage Foundation for Windows documentation for other releases and platforms can be found on the SORT website.
SFHA Solutions 6.0.1: About managing Virtual Business Services using VOM and the VBS command line interface

Virtual Business Services (VBS) is a feature that represents a multi-tier application as a single consolidated entity in Veritas Operations Manager (VOM). It builds on the high availability and disaster recovery features provided by Symantec products such as Veritas Cluster Server (VCS) and Symantec ApplicationHA. VBS enables administrators to improve the operational efficiency of managing a heterogeneous multi-tier application.

You can control VBS from the VOM graphical user interface and the VBS command line interface (CLI). When you install SFHA, the VBS installation packages, VRTSvbs and VRTSsfmh, are automatically installed on the nodes (a quick package check appears later in this article). From the VOM interface, you can define a VBS that consists of service groups from multiple clusters. You can also use the VBS CLI to perform command-line operations on that VBS.

The clustering solutions that are offered today can only manage applications running on the same operating system, so deploying them for a multi-tier, cross-platform setup can be difficult. VBS works across a heterogeneous environment to enable IT organizations to ensure that the applications across tiers can be made highly available. A typical multi-tier environment comprises a database on a UNIX server, applications running in a Kernel-based Virtual Machine (KVM) on a Linux server, and a Web server on a VMware virtual machine. VBS works across the heterogeneous environment to communicate between local operating systems, to see the end-to-end state of multi-tier applications, and to control the start and stop ordering of the applications.

With VBS there are relationships between tiers that you can customize to fit your environment. You can set up policies for the events that result in a failure, or for specific events that happen on tiers. For example, you can set up a policy that restarts the application service groups when the database service group fails over to another node.

For more information about VBS features, components, and workflow, see:

Features of Virtual Business Services
Sample Virtual Business Service configuration
Virtualization support in Virtual Business Services
About the Veritas Operations Manager policy checks for Virtual Business Services
About the Virtual Business Services components
Virtual Business Services workflow
Support matrix for VBS
Prerequisites for using VBS

Availability Add-on

You can configure and manage a VBS created in VOM by using the VOM VBS Availability Add-on utility. You can also control a VBS from the VBS CLI, but you cannot create a VBS from the VBS CLI.

The VBS Availability Add-on utility enables you to:

Start or stop service groups associated with a VBS.
Establish service group relationships that decide the order in which service groups are brought online or taken offline.
Decide the reaction of application components in each tier when an event fault occurs on a tier.
Recover a VBS from a remote site when a disaster occurs.
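Before configuring a VBS, it can be useful to confirm that the VRTSvbs and VRTSsfmh packages mentioned above are actually present on each managed node. A minimal check using standard operating-system package queries (no VBS-specific command is assumed):

    # On a Solaris node
    pkginfo -l VRTSvbs VRTSsfmh

    # On a Linux node
    rpm -q VRTSvbs VRTSsfmh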
For more information about installing the VBS add-on packages and configuring a VBS using VOM, see:

Installing Veritas Operations Manager Virtual Business Services Availability Add-on
Installing the VRTSvbs package using Veritas Operations Manager
Configuring a Virtual Business Service

For more information on managing VBS using VOM and the VBS command line, see:

Operations using Veritas Operations Manager and command line
Starting and stopping Virtual Business Services
Viewing the overview of a Virtual Business Service
Viewing the Virtual Business Service status from the command line
Enabling fault management for a Virtual Business Service
Disabling fault management for a Virtual Business Service
Fault management overview

For more information on VBS commands, troubleshooting issues, and recovery operations, see:

Virtual Business Services commands
Troubleshooting Virtual Business Services
Virtual Business Services log files

For more information on managing VBS using VOM and the VBS command line, see the Virtual Business Service-Availability User's Guide.

Virtual Business Services documentation for other SFHA releases can be found on the SORT website.
SFHA Solutions 6.0.1: About Veritas Cluster Server service groups

Veritas Cluster Server service groups are virtual containers that manage groups of resources required to run a managed application. You can create multiple service groups on a single node, and they can function independently of each other. You can also assign dependencies among the service groups, depending on the complexity of your managed application.

For more information, see About service groups.

VCS supports three types of service groups:

Failover service groups are configured for applications that do not support simultaneous access from multiple systems. A failover service group requires fewer resources, but you must factor in some downtime when the service group fails over to another node.

Parallel service groups are configured if applications can run on multiple nodes without data corruption and allow simultaneous access from multiple machines. A parallel service group is especially useful for high availability applications, as there is no downtime.

A hybrid service group is for replicated data clusters and is a combination of the failover and parallel service groups. It behaves as a failover group within a system zone and as a parallel group across system zones. A hybrid service group cannot fail over across system zones. VCS allows a switch operation on a hybrid group only if both systems are within the same system zone. If no systems exist within a zone for failover, VCS calls the nofailover trigger on the lowest numbered node. Hybrid service groups adhere to the same rules governing group dependencies as do parallel groups.

For more information on failover service groups, parallel service groups, and hybrid service groups, see:

About failover service groups
About parallel service groups
About hybrid service groups

For more information on campus clusters, see Setting up campus clusters.

VCS components are configured using attributes. Attributes contain data about the cluster, systems, service groups, resources, resource types, agents, and heartbeats if you use global clusters. The value assigned to the Parallel attribute determines whether the service group is failover, parallel, or hybrid (a minimal main.cf sketch appears below).

For more information, see About VCS attributes and Service group attributes.

Administering service groups

You can administer service groups using the Java console, the VCS command line interface, or Veritas Operations Manager (VOM). Symantec recommends using VOM to manage Cluster Server environments. The following are links to the topics addressing administration of service groups with each method.

Components for administering VCS
About Veritas Operations Manager

Administering service groups from Veritas Operations Manager (VOM)

To administer service groups from VOM, refer to Managing service groups.
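As noted above, the Parallel attribute is what distinguishes the service group types in main.cf. A minimal sketch with hypothetical group and system names; Parallel = 1 declares a parallel group, while omitting the attribute (or setting it to 0) leaves the default failover type:

    group websg (
        SystemList = { sys1 = 0, sys2 = 1 }
        AutoStartList = { sys1, sys2 }
        Parallel = 1
        )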
Administering service groups from the command line

(A brief hagrp sketch appears at the end of this article.)

Adding and deleting service groups
Bringing service groups online
Taking service groups offline
Switching service groups
Freezing and unfreezing service groups
Enabling and disabling service groups
Clearing faulted resources in a service group
Flushing service groups
Linking and unlinking service groups

Administering service groups from the Java Console

Adding a service group
Deleting a service group
Bringing a service group online
Taking a service group offline
Switching a service group
Freezing a service group
Unfreezing a service group
Enabling a service group
Disabling a service group
Autoenabling a service group
Flushing a service group
Linking service groups
Unlinking service groups

Veritas Cluster Server documentation for other platforms and releases can be found on the SORT website.
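To give a flavor of the command-line operations listed above, here is a brief sketch using the standard hagrp command; the group and system names are hypothetical:

    # Bring a service group online on a specific system
    hagrp -online appsg -sys sys1

    # Take the group offline
    hagrp -offline appsg -sys sys1

    # Switch a failover group to another system
    hagrp -switch appsg -to sys2

    # Freeze a group (disable onlining and failover), then unfreeze it
    hagrp -freeze appsg
    hagrp -unfreeze appsg

    # Flush a group that is stuck in a transition state on a system
    hagrp -flush appsg -sys sys1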