adding new volumes to a DG that has an RVG under VCS cluster
Hi, I have a VCS cluster with GCO and VVR. On each node of the cluster I have a DG with an associated RVG; the RVG contains 11 data volumes for an Oracle database. These volumes are getting full, so I am going to add new disks to the DG and create new volumes and mount points to be used by the Oracle database. My question: can I add the disks to the DG and the volumes to the RVG while the database is up and replication is on? If the answer is no, please let me know what should be performed on the RVG and rlink to add these volumes, and also what to perform on the database resource group so that it does not fail over. Thanks in advance.
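For reference only (not a verified answer for this environment), a rough sketch of the commands typically involved, assuming a hypothetical disk group oradg, RVG ora_rvg, new disk access name disk_5, and new volume datavol12. The new volume must also exist with the same name and size on the secondary before it is associated with the RVG, and whether the association can be done with replication active depends on the VVR version, so confirm against the VVR administrator's guide:

#vxdg -g oradg adddisk oradg12=disk_5
#vxassist -g oradg make datavol12 200g
#vradmin -g oradg addvol ora_rvg datavol12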
SmartIO blueprint and deployment guide for Solaris platform

SmartIO for Solaris was introduced in Storage Foundation HA 6.2. SmartIO enables data efficiency on your SSDs through I/O caching. Using SmartIO to improve efficiency, you can optimize the cost per Input/Output Operations Per Second (IOPS). SmartIO supports both read and write-back caching for VxFS file systems that are mounted on VxVM volumes, in multiple caching modes and configurations. SmartIO also supports block-level read caching for applications running on VxVM volumes. The SmartIO Blueprint for Solaris gives an overview of the benefits of using SmartIO, the underlying technology, and the essential steps to configure it. The SmartIO Deployment Guide for Solaris covers multiple deployment scenarios of SmartIO, and how to manage them, in detail. Let us know if you have any questions or feedback!
SFHA Solutions 6.2: New VCS configuration wizards introduced on AIX, Linux, and Solaris

The Symantec Cluster Server (VCS) Cluster Configuration Wizard and the Symantec High Availability Configuration Wizard are introduced on all supported AIX, Linux, and Solaris distributions in this release. The two new wizards replace the earlier Symantec High Availability Configuration Wizard, which provided a combined workflow for cluster configuration and application (high availability) configuration and was supported only on Linux. You can launch the new wizards from the Symantec High Availability view. In a VMware virtual environment, you can launch the wizards from the Symantec High Availability view in the VMware vSphere Web Client. Symantec has recently launched an add-on for Veritas Operations Manager called Symantec HA Plug-in for vSphere Web Client. The add-on lets you integrate VCS and ApplicationHA tasks with the VMware GUI, while eliminating the need to install the Symantec High Availability Console. For more information on the add-on, see the following technical note.

For steps to launch the VCS Cluster Configuration Wizard, see: Launching the VCS Cluster Configuration wizard (AIX, Linux, Solaris)
For steps to configure a cluster using the VCS Cluster Configuration wizard, see: Configuring a cluster by using the VCS Cluster configuration wizard (AIX, Linux, Solaris)
For steps to launch the Symantec High Availability Configuration Wizard, see: Launching the Symantec High Availability Configuration wizard (AIX, Linux, Solaris)
For the wizard-based steps to configure the following applications for availability monitoring with VCS, see:
Configuring application monitoring for generic applications (Linux, Solaris, AIX)
Configuring the agent to monitor Oracle (Linux)
Configuring the agent to monitor SAP (Linux)
Configuring the agent to monitor WebSphere MQ (Linux)
SFHA Solutions 6.2 (AIX and Solaris): Share local storage across the network using Flexible Storage Sharing

Cluster File System (CFS) 6.2 brings the Flexible Storage Sharing (FSS) feature to Solaris and AIX environments, enabling you to share Direct Attached Storage (DAS) across nodes in the cluster and to run in SAN-free or hybrid modes. FSS takes advantage of high-speed interconnects to allow shared access to local storage, enabling you to create logical volumes in both shared and shared-nothing storage configurations and to build a high-performance, highly available shared namespace. With FSS, enterprises can use software to provide data redundancy, high availability, and disaster recovery capabilities, without requiring physically shared storage.

For more information about FSS, see:
Flexible Storage Sharing use cases
Limitations of Flexible Storage Sharing
Optimizing storage with Flexible Storage Sharing

Installing Storage Foundation Cluster File System High Availability (SFCFSHA) or Storage Foundation for Oracle RAC (SF Oracle RAC) automatically enables the FSS feature. No additional installation steps are required. The fencing coordination points can either be SCSI-3 PR capable shared storage or CP servers.

For information on administering FSS (a brief command sketch follows at the end of this article), see:
Administering Flexible Storage Sharing
About Flexible Storage Sharing disk support
About the volume layout for Flexible Storage Sharing disk groups
Setting the host prefix
Exporting a disk for Flexible Storage Sharing
Setting the Flexible Storage Sharing attribute on a disk group
Using the host disk class and allocating storage
Administering mirrored volumes using vxassist
Displaying exported disks and network shared disk group

For more information on Flexible Storage Sharing, see the following related Symantec Connect articles:
Flexible Storage Sharing: DAS Cluster Demo
Demo: Adding Compute Nodes with Flexible Storage Sharing
Clustered NFS on DAS Storage
Remove the Rust: Unlock DAS and go SAN-Free
High Availability and Performance Oracle Configuration with Flexible Storage Sharing in a SAN-Free Environment using Intel SSDs
Commoditizing High Availability and Storage using Flexible Storage Sharing
Growing my Commoditized Storage and HA Environment with an Extra Node
Building Application and Data Availability without SAN
Veritas Operations Manager 6.1: Managing Flexible Storage Sharing
Configure Flexible Storage Sharing using Veritas Operations Manager 6.1

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
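As a rough illustration of the exporting and disk group steps listed above: the commands below assume a hypothetical local disk disk1 and disk group fssdg, and the exact syntax may vary by release, so treat this as a sketch and confirm against the administrator's guide before use.

#vxdisk export disk1
#vxdg -s init fssdg disk1
#vxdg -g fssdg set fss=on
#vxassist -g fssdg make fssvol 100g mirror=host

Mirroring across hosts (mirror=host) is what provides redundancy when each node contributes only its own local storage.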
SFHA Solutions 6.2: VCS support for SmartIO

The SmartIO feature on Linux was introduced in Storage Foundation and High Availability (SFHA) 6.1. Beginning in this release, SmartIO is also supported on AIX and Solaris. SmartIO enables data efficiency on your solid state drives (SSDs) through I/O caching. For information about administering SmartIO, see the Symantec Storage Foundation and High Availability Solutions SmartIO for Solid State Drives Solutions Guide.

In an SFHA environment, applications can fail over to another node. On AIX, Linux, and Solaris, beginning in this release, the SFCache agent allows you to enable caching for an application if there are caching devices. The SFCache agent also allows you to fail over the application to a node that does not have caching devices.

The SFCache agent monitors:
Read caching for Veritas Volume Manager (VxVM) cache
Read and writeback caching for Veritas File System (VxFS) cache

For volume-level caching, the cache objects are disk groups and volumes. For file system level caching, the cache object is the mount point.

You can:
Modify the caching mode at runtime
Set the default caching mode when you mount the VxFS file system
Configure the MountOpt attribute of the Mount agent to specify the default caching mode using the smartiomode option (a short example follows at the end of this article)

For more information about the smartiomode option, see the mount_vxfs(1m) manual page. If the cache faults, the application still runs without any issues on the same system, but with degraded I/O performance. You can configure the SFCache agent's CacheFaultPolicy attribute and choose to either ignore the fault or initiate failover. If SmartIO is not enabled on a node, the SFCache resource acts as a dummy resource and is reported as ONLINE or OFFLINE depending on the group state, but caching-related operations are not performed.

For more information, see:
SFCache agent
Mount agent

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
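As an illustration of the MountOpt usage described above, a minimal sketch of setting the default caching mode on a Mount resource; the resource, disk group, volume, and mount point names are hypothetical placeholders, so adjust them to your configuration:

#haconf -makerw
#hares -modify oradata_mnt MountOpt "smartiomode=writeback"
#haconf -dump -makero

The same option can also be passed directly at mount time, for example on Solaris:

#mount -F vxfs -o smartiomode=writeback /dev/vx/dsk/oradg/oradatavol /oradata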
SFW HA 6.1: Support for SmartIO

SmartIO is a new feature introduced in Symantec Storage Foundation and High Availability Solutions (SFW HA) 6.1 for Windows. SmartIO improves the I/O performance of applications and Hyper-V virtual machines by using Solid State Devices (SSDs) as a caching location for read-only I/O caching.

Traditional disks are often an I/O bottleneck for high-transaction applications. To compensate for this, administrators usually either increase the in-RAM cache size or buy expensive storage. To address this issue, SmartIO uses an SSD-based cache to drive high-performance applications. SSDs are available in many sizes and connectivity types. This adds a new layer of complexity and decentralization of the storage. SmartIO adds a central management layer between the physical SSDs and the applications that need to access them. SmartIO lets you use the SSDs to maximize application performance without requiring in-depth knowledge of the technologies. SmartIO supports volume-level read-only caching, as SSDs are primarily beneficial in high-read environments.

To use SmartIO, you create a cache area (storage space allocated on the SSDs for caching) using one or more non-shared SSDs and link volumes to the cache area to enable caching for the volumes. Using SmartIO, you can also disable caching and grow, shrink, or delete a cache area.

In a clustered environment, you may create auto cache areas on all cluster nodes. After failover, the implicitly linked volumes use the auto cache area on the failover node. If the auto cache area is not present on the failover node, then caching is not performed on the failover node. If the data volume is disconnected, caching for that volume is stopped. Caching is restarted once the volume is reconnected and brought online. If the cache area is disconnected, the cache area is taken offline and caching stops for all the volumes linked with it.

SmartIO has the following limitations:
You cannot reserve a cache area for a particular volume. You can create a new cache area and link the volume with it.
File pinning or block pinning is not supported.
The cache is volatile and does not persist after the system is restarted.

For more information on the SmartIO feature, see the following sections of the "SmartIO" chapter in the Symantec Storage Foundation Administrator's Guide:
About SmartIO
Administering SmartIO through GUI
Administering SmartIO through CLI

Symantec Storage Foundation and High Availability Solutions for Windows (SFW HA) documentation for other releases and platforms can be found on the SORT website.
SFW 6.1: Support for Cluster Volume Manager (CVM)

Cluster Volume Manager (CVM) is a new feature introduced in Symantec Storage Foundation for Windows (SFW) 6.1. CVM is a new way to manage storage in a clustered environment. With CVM, failover capabilities are available at the volume level. Volumes under CVM allow exclusive write access across multiple nodes of a cluster.

In a Microsoft Failover Clustering environment, you can create clustered storage out of shared disks, which lets you share volume configurations and enables fast failover support at the volume level. Each node recognizes the same logical volume layout and, more importantly, the same state of all volume resources. Each node has the same logical view of the disk configuration as well as any changes to this view.

Note: CVM (and the related cluster-shared disk groups) is supported only in a Microsoft Hyper-V environment. It is not supported in a physical environment.

CVM is based on a "Master and Slave" architecture pattern. One node of the cluster acts as a Master, while the rest of the nodes are Slaves. The Master node maintains the configuration information. The Master node uses Global Atomic Broadcast (GAB) and Low Latency Transport (LLT) to transport its configuration data. Each time a Master node fails, a new Master node is selected from the surviving nodes.

With CVM, storage services on a per virtual machine (VM) basis for Hyper-V virtual machines protect VM data from single LUN/array failures, helping maintain availability of the critical VM data.

CVM helps you achieve the following:
Live migration of Hyper-V virtual machines, which is supported with the following:
  Virtual Hard Disks (VHDs) of virtual machines lying on one or more SFW volumes
  Coexistence with Cluster Shared Volumes (CSV)
  Mapping of one cluster-shared volume to one virtual machine only
Seamless migration between arrays:
  Migration of volumes (hosting VHDs) from any array to another array
  Easy administration using the Storage Migration Wizard
  Moving of the selected virtual machines' storage to new target LUNs
  Copying of only those NTFS blocks that contain user data using SmartMove
Availability of all the volume management functionality

The following are the main features supported in CVM:
New cluster-shared disk group (CSDG) and cluster-shared volumes
Disk group accessibility from multiple nodes in a cluster, where volumes remain exclusively accessible from only one node in the cluster
Failover at a volume level
All the SFW storage management features, such as:
  SmartIO
  Thin provisioning and storage reclamation
  Symantec Dynamic Multi-Pathing for Windows (DMPW)
  Site-aware allocation using the site-aware read policy
  Storage migration
  Standard features for fault tolerance: mirroring across arrays, hot relocation, dirty region logging (DRL), and dynamic relayout
Microsoft Failover Clustering integrated I/O fencing
New Volume Manager Shared Volume resource for Microsoft failover cluster
New GUI elements in VEA related to the new disk group and volume

CVM does not support:
Active/Passive (A/P) arrays
Storage migration on volumes that are offline in the cluster
Volume replication on CVM volumes using Symantec Storage Foundation Volume Replicator

For information about configuring a CVM cluster, refer to the quick start guide at: www.symantec.com/docs/DOC8119

The Storage Foundation for Windows documentation for other releases and platforms can be found on the SORT website.
How to verify VCS installation on a system

Symantec recommends that you verify your installation of Symantec Cluster Server (VCS) on a system before you install or upgrade VCS. This allows you to know about the product prerequisites, the installed product version, and the configuration. You can verify installation of VCS on a system using the following techniques:
Operating System (OS) commands
Script-based installer
Symantec Operations Readiness Tools (SORT) checks
VCS command validation

OS commands

You can run native OS commands on the system to verify whether VCS is installed. The following commands verify the VCS installation and the VCS version and patches installed on the system.

Verifying VCS installation:
  AIX:      lslpp -l VRTSvcs
  HP-UX:    swlist VRTSvcs
  Linux:    rpm -qi VRTSvcs
  Solaris:  pkginfo -l VRTSvcs (Solaris 10) or pkg info VRTSvcs (Solaris 11)

Verifying VCS version and patches:
  AIX:      lslpp -l VRTSvcs
  HP-UX:    swlist VRTSvcs
  Linux:    rpm -qi VRTSvcs
  Solaris:  showrev -p | grep VRTSvcs

You can use these commands to verify which product packages are installed on the system. To get a complete list of required and optional packages for VCS, see the product release notes on the SORT website.

Note: On Linux, there is no sparse patch or patch ID. Therefore, the package version itself indicates the patch version of the installed VCS.

Advantage of using the OS command technique:
By default, native commands are available on a system and can be used with ease.

Limitations of using the OS command technique:
You must run OS commands as root on the cluster nodes.
OS commands are useful for package and patch validation. However, these commands do not provide complete information about the VCS product installation.
You need to run multiple commands to validate whether the required packages are installed on the system.

Script-based installer

Symantec recommends that you use the script-based installer to install Symantec products. The script-based installer can be used to identify which products from the Storage Foundation and High Availability (SFHA) family are installed on the system. The installer script can be executed to get a list of VCS packages and their versions installed on the system. These commands can be executed on AIX, HP-UX, Linux, and Solaris. The installer also allows you to configure the product, verify the pre-installation requisites, and view the description of the product.

The following command provides the major version of the product and the packages installed on the system. However, it does not provide details such as the join version, build date, and patches installed on the other nodes in the cluster. To use this command, VCS must already be installed on the system.

To use the script-based installer to verify the version of VCS installed on the system, run the following command:
#/opt/VRTS/install/installvcs<version> -version
Where version is the specific release version. For example, to validate the VCS 6.1 installation on the system, run the following command:
#/opt/VRTS/install/installvcs61 -version

To initiate the VCS installation validation using the product DVD media provided by Symantec, run the following installer script:
#<dvd-media-path>/installer -version
The installer script lists the Symantec products installed on the system along with the version details of the products. You can also use this script to perform a pre-check of the required package dependencies to install the product.
If the product is already installed on the system and you want to validate the list of packages and patches along with their versions, run the following command:
#/opt/VRTS/install/showversion
This command provides details of the product installed on all the nodes in a cluster. This information includes the product name, the required and optional packages installed on the system, installed and available product updates, the version, and the product license key.

Advantage of using the script-based installer:
A single script validates all nodes in the cluster. Therefore, it does not need any platform-specific commands for performing validation.

Limitation of using the script-based installer:
The VRTSsfcpi package must be installed on the systems.
Note: The VRTSsfcpi package was first released in VCS 6.0 and is available in later versions. For earlier versions, use the installer from the DVD media. As an alternative, you can launch the installer from the DVD provided by Symantec, regardless of the product version.
For more information about installing VCS using the installer, see Installing VCS using the installer.

SORT checks

SORT provides a set of web-based tools to automate and simplify time-consuming administrator tasks. For example, the data collector tool gathers system-related information and generates web-based and text-based custom reports. These reports capture the system and platform-related configuration details and list the Symantec products installed on the system.

SORT generates the following custom reports:
Installation and Upgrade
Risk Assessment
License/Deployment

You can generate and view custom reports to check which Symantec products are installed on a system. These reports list the passed and failed checks and other significant details you can use to assess the system. The checks and recommendations depend on the installed product. For SORT checks, see System Assessments.

To generate a SORT custom report:
1. On the Data Collector tab, download the appropriate data collector for your environment.
2. Follow the instructions in the README file to install the data collector.
3. Run the data collector. It analyzes the nodes in the cluster and stores the results in an XML file.
4. On the Upload Reports tab, upload the XML file to the SORT website. SORT generates a custom report with recommendations and links to the related information.
For more information about custom reports, visit https://sort.symantec.com.

Advantage of using the SORT checks:
SORT checks provide comprehensive information about the installed product.

Limitation of using the SORT checks:
The SORT data collector is not a part of the product media and must be downloaded and installed on the system to generate reports.

VCS command validation

VCS provides a set of commands to validate and provide additional details of the components installed as a part of the VCS product installation. For more information about verifying the VCS installation using VCS commands, see the Symantec™ Cluster Server 6.1 Administrator's Guide. The VCS command validation method allows you to check whether VCS is correctly configured on the nodes in a cluster. To verify the status of the VCS components such as Low-Latency Transport (LLT), Group Membership Services/Atomic Broadcast (GAB), and the VCS engine, you can inspect the content of the key VCS configuration files or run the following VCS commands.
Component     Command          Provides
GAB           #gabconfig -W    GAB protocol version
LLT           #lltconfig -W    LLT protocol version
VCS engine    #had -version    HAD engine version and join version
Cluster       #hasys -state    Cluster state

Advantages of using VCS commands:
VCS commands provide comprehensive information about the cluster.
VCS commands can be used for configuring the cluster.

Limitation of using VCS commands:
VCS commands can be used only after the VCS product is completely installed and configured on the system.

Frequently asked questions

The following is a list of VCS installation-related frequently asked questions:

Where do I check the availability of the CPI installer on a system?
The installer script is located at /opt/VRTS/install.

Where are the CPI installation logs located?
The installation logs are located at /opt/VRTS/install.

Where do I find information about SORT checks and reports?
For information about SORT checks and reports, visit https://sort.symantec.com.

How do I validate a system before installing VCS?
Before you install VCS, you must make sure the system is ready. To validate the system, use the installer script on the Symantec DVD. To start the pre-installation validation on the system and verify whether the system meets the product installation requirements, run the following command:
#installer -precheck
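For example, the pre-installation check can be run against all intended cluster nodes in one pass; the node names sys1 and sys2 below are placeholders:

#<dvd-media-path>/installer -precheck sys1 sys2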
SFRAC 6.1 on Solaris 10 Supportability

I have a customer who is upgrading a critical SFRAC cluster with global apps and has a question about the supportability of 6.1 on a Solaris 10 U8 base. Our documentation states that 6.1 is compatible with Solaris 10 update 9 onward. Does this mean a Solaris 10 U9 BASE build, or a kernel patch level that serves U9 and higher? Their current nodes are running a base Solaris 10 U8 build but will be patched to get to the ~U11 release; however, the base still remains U8 if referring to the /etc/release file. Need some clarification on this as soon as possible. Thank you!
Sharing a SFHA/CFS filesystem between Physical and VM servers

Hello, and good afternoon to you all. For one of our cluster deployments, we've been requested to create a Cluster Shared Filesystem including 9 physical servers and 3 VMware VMs (vSphere 5.0 or 5.5, if this matters), all running RHEL 6.5. The VMs would be using RDM-P and adhering to all the CFS-inside-a-VM stuff, as always. Performance and sizing issues aside, is this a supported configuration for SFCFSHA? I know using this for VCS is fully supported (and even advertised), but I can't seem to find any mention of this for CFS mounts. Thanks for the attention.