Suggestion Regarding Icons in Documentation
Hi team, I just wanted to share a link to a recent discussion in the SF Windows forum about icons, which included a couple of suggestions for the documentation: http://www.symantec.com/connect/forums/veritas-enteprise-administrator-icon-meanings Best, Kimberley

Veritas InfoScale 7.0 (Linux and Windows): Changes in Dynamic Multi-Pathing for VMware
With the introduction of the Veritas InfoScale 7.0 product family, Dynamic Multi-Pathing (DMP) for VMware is now included as a component in the Veritas InfoScale Foundation, Storage, and Enterprise products. The license for this component is included as part of the InfoScale license on both Linux and Windows. You can use either the Linux license or the Windows license to enable DMP functionality on the ESXi hypervisor.

The DMP for VMware component consists of the following:
- vSphere offline DMP bundle: DMP components installed on the ESXi hypervisor
- vSphere UI plug-in: installed on a Windows physical machine or on a virtual machine; serves as an interface between ESXi and vCenter
- Remote CLI package: an optional command-line interface for managing ESXi hosts from a Linux or Windows system

The components can be installed using the command line or the VMware vSphere Update Manager. To install the Dynamic Multi-Pathing for VMware component on ESXi hosts, use one of the following:
- Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.zip
- Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.iso

For more information on installing the DMP for VMware component, refer to the Dynamic Multi-Pathing 7.0 Installation Guide - VMware ESXi.

From this release onwards:
1) DMP for VMware supports SanDisk (FusionIO) PCIe-attached SSD cards.
2) The DMP plug-in for the VMware vCenter web client:
- Displays the I/O rate as a line graph.
- Displays the mapping of LUNs and VMDKs used by a virtual machine.
- Has a SmartPool tab for a virtual machine for easy configuration of the SmartPool assigned to it.
- No longer requires ESXi login credentials on the Manage Device Support page.

For more information about the features and enhancements, refer to the Dynamic Multi-Pathing 7.0 Administrator's Guide - VMware ESXi.

Veritas Operations Manager 6.1: Managing LUN classifications
Using Veritas Operations Manager Management Server, you can classify the LUNs created in your data center. You can classify them based on one or more parameters, such as:
- LUN name
- Vendor
- Product
- Array name
- Replication status of the LUN: whether or not the LUN is replicated
- LUN type: thick or thin
- RAID group LUN
- RAID level

When you define classifications for LUNs, Veritas Operations Manager stores these definitions as rules in the Management Server. LUN classification is available only when the Storage Insight Add-on is installed and configured on the Management Server. To learn more about LUN classification, see:
About LUN classification

Using the Storage perspective of the Management Server console, you can create, modify, refresh, and delete LUN classifications. You must have Administrator privileges on the Storage perspective to perform this task. To learn more, see:
Creating LUN classifications
Modifying LUN classifications
Deleting LUN classifications
Refreshing LUN classifications

Along with these operations, you can also modify the order in which the LUN classifications are applied to the LUNs. When multiple classifications are applicable to a LUN, Veritas Operations Manager uses the classification that appears first to classify the LUN. For more information about LUN classification order, see:
Modifying the order of the LUN classifications

Storage Foundation and High Availability and Veritas Operations Manager documentation for other releases and platforms can be found on the SORT website.

Symantec Data Insight 4.5.1: Documentation Available
Symantec Data Insight 4.5.1 product guides (PDF and HTML pages) are now available on the SORT documentation page. The Symantec Data Insight 4.5.1 documentation set includes the following manuals:
Symantec Data Insight Release Notes
Symantec Data Insight Self-Service Portal Quick Reference Guide
Symantec Data Insight Installation Guide
Symantec Data Insight User's Guide
Symantec Data Insight Administrator's Guide
Third-Party Legal Notices

SDRO 6.1: How to configure SQL Server for disaster recovery or migration to Microsoft Azure
Symantec Disaster Recovery Orchestrator 6.1 (SDRO) lets you configure your on-premises SQL Server 2008 R2 or 2012 instances for disaster recovery (DR) or migration to the Microsoft Azure cloud. You need to make identical SQL Server configurations, including users and privileges, on the on-premises and the cloud hosts. The Disaster Recovery Orchestrator Configuration Wizard lets you select the SQL Server instances that you want to configure for monitoring, as well as the detail monitoring options. The wizard detects the application data folders and automatically selects them for replication.

For more information, refer to the following guides:
Symantec Disaster Recovery Orchestrator 6.1 Agent for Microsoft SQL Server 2008 R2 Configuration Guide
Symantec Disaster Recovery Orchestrator 6.1 Agent for Microsoft SQL Server 2012 Configuration Guide

Additional SDRO documentation can be found on the SORT website.

SFHA Solutions 6.0.1: Adding a node to SFHA clusters
After you install Storage Foundation and High Availability (SFHA) and create a cluster, you can add nodes to and remove nodes from the cluster. You can create clusters of up to 64 nodes.

To add a node to an existing SFHA cluster:

1) Complete the prerequisites and preparatory tasks. Before adding a node to an existing SFHA cluster, verify that you have met the hardware and software requirements. You also need to prepare the new node. For information on preparing to add a node to an existing cluster, see:
Before adding a node to a cluster

2) Add the new node to the cluster. There are three ways to add a new node:
Adding a node to a cluster using the SFHA installer
Adding a node using the Web-based installer
Adding the node to a cluster manually

3) Update the repository database (only if you are using the Storage Foundation for Databases (SFDB) tools). If you are using Database Storage Checkpoints, Database FlashSnap, or SmartTier for Oracle in your configuration, you have to update the SFDB repository to enable access for the new node after it is added to the cluster. For more information on updating the SFDB repository, see:
Updating the Storage Foundation for Databases (SFDB) repository after adding a node

Veritas Storage Foundation and High Availability documentation for other platforms and releases can be found on the SORT website.

Symantec Disaster Recovery Orchestrator 6.1.1 Documentation Available
Documentation for Symantec Disaster Recovery Orchestrator (SDRO) 6.1.1 for Amazon Web Services (AWS) is now available at the following locations:
Product guides (PDFs and HTML pages): SORT documentation page
Software compatibility list: http://www.symantec.com/docs/TECH228292
Late breaking news: http://www.symantec.com/docs/TECH209084

The SDRO 6.1.1 for AWS documentation set includes the following manuals:
SDRO Release Notes
SDRO Getting Started Guide
SDRO Configuration Guide for SQL Server 2008 R2
SDRO Configuration Guide for SQL Server 2012
SDRO Administration Guide
SDRO Deployment Guide
Third-Party Software License Agreements

SFW HA 6.1: Support for SmartIO
SmartIO is a new feature introduced in Symantec Storage Foundation and High Availability Solutions (SFW HA) 6.1 for Windows. SmartIO improves the I/O performance of applications and Hyper-V virtual machines by using Solid State Devices (SSDs) as a caching location for read-only I/O caching.

Traditional disks are often an I/O bottleneck for high-transaction applications. To compensate for this, administrators usually either increase the in-RAM cache size or buy expensive storage. To address this issue, SmartIO uses an SSD-based cache to drive high-performance applications. SSDs are available in many sizes and connectivity types, which adds a new layer of complexity and decentralization to the storage. SmartIO adds a central management layer between the physical SSDs and the applications that need to access them, letting you use the SSDs to maximize application performance without requiring in-depth knowledge of the technologies.

SmartIO supports volume-level read-only caching, as SSDs are primarily beneficial in high-read environments. To use SmartIO, you create a cache area (storage space allocated on the SSDs for caching) using one or more non-shared SSDs, and then link volumes to the cache area to enable caching for those volumes. Using SmartIO, you can also disable caching and grow, shrink, or delete a cache area.

In a clustered environment, you may create auto cache areas on all cluster nodes. After failover, the implicitly linked volumes use the auto cache area on the failover node. If the auto cache area is not present on the failover node, caching is not performed on that node. If a data volume is disconnected, caching for that volume is stopped; caching is restarted once the volume is reconnected and brought online. If the cache area is disconnected, the cache area is taken offline and caching stops for all the volumes linked with it.

SmartIO has the following limitations:
- You cannot reserve a cache area for a particular volume. You can, however, create a new cache area and link the volume with it.
- File pinning or block pinning is not supported.
- The cache is volatile and does not persist after the system is restarted.

For more information on the SmartIO feature, see the following sections of the "SmartIO" chapter in the Symantec Storage Foundation Administrator's Guide:
About SmartIO
Administering SmartIO through GUI
Administering SmartIO through CLI

Symantec Storage Foundation and High Availability Solutions for Windows (SFW HA) documentation for other releases and platforms can be found on the SORT website.

SFW 6.1: Support for Cluster Volume Manager (CVM)
Cluster Volume Manager (CVM) is a new feature introduced in Symantec Storage Foundation for Windows (SFW) 6.1. CVM is a new way to manage storage in a clustered environment. With CVM, failover capabilities are available at the volume level. Volumes under CVM allow exclusive write access across multiple nodes of a cluster.

In a Microsoft Failover Clustering environment, you can create clustered storage out of shared disks, which lets you share volume configurations and enables fast failover support at the volume level. Each node recognizes the same logical volume layout and, more importantly, the same state of all volume resources. Each node has the same logical view of the disk configuration, as well as of any changes to this view.

Note: CVM (and the related cluster-shared disk groups) is supported only in a Microsoft Hyper-V environment. It is not supported in a physical environment.

CVM is based on a "Master and Slave" architecture pattern. One node of the cluster acts as the Master, while the rest of the nodes are Slaves. The Master node maintains the configuration information and uses Global Atomic Broadcast (GAB) and Low Latency Transport (LLT) to transport its configuration data. Each time the Master node fails, a new Master node is selected from the surviving nodes.

With CVM, storage services on a per virtual machine (VM) basis for Hyper-V virtual machines protect VM data from single LUN or array failures, helping maintain the availability of critical VM data.
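The Master and Slave behavior described above can be illustrated with a small sketch. Note that the node model below is purely hypothetical and illustrates only the re-election rule (a surviving node is promoted when the Master fails); it is not the actual CVM implementation, which uses GAB and LLT internally.

```python
# Illustrative sketch of CVM-style Master re-election: one node acts as
# the Master; if it fails, a new Master is selected from the survivors.
# Hypothetical model only -- not actual CVM code.

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)     # surviving nodes, in join order
        self.master = self.nodes[0]  # one node acts as the Master

    def fail(self, node):
        """Remove a failed node; re-elect a Master if the Master failed."""
        self.nodes.remove(node)
        if node == self.master:
            if not self.nodes:
                raise RuntimeError("no surviving nodes to promote")
            self.master = self.nodes[0]  # promote a surviving node

cluster = Cluster(["node1", "node2", "node3"])
cluster.fail("node1")   # the Master fails
print(cluster.master)   # -> node2 (a surviving node was promoted)
```

The design point this models is that re-election is automatic and internal: Slaves do not need reconfiguration when the Master changes, which matches the behavior described for CVM above.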
CVM helps you achieve the following:
- Live migration of Hyper-V virtual machines, which is supported with the following:
  - Virtual Hard Disks (VHDs) of a virtual machine lying on one or more SFW volumes
  - Coexistence with Cluster Shared Volumes (CSV)
  - Mapping of one cluster-shared volume to one virtual machine only
- Seamless migration between arrays:
  - Migration of volumes (hosting VHDs) from any array to another array
  - Easy administration using the Storage Migration Wizard
  - Moving of the selected virtual machines' storage to new target LUNs
  - Copying of only those NTFS blocks that contain user data, using SmartMove
- Availability of all the volume management functionality

The following are the main features supported in CVM:
- New cluster-shared disk group (CSDG) and cluster-shared volumes
- Disk group accessibility from multiple nodes in a cluster, where volumes remain exclusively accessible from only one node in the cluster
- Failover at a volume level
- All the SFW storage management features, such as:
  - SmartIO
  - Thin provisioning and storage reclamation
  - Symantec Dynamic Multi-Pathing for Windows (DMPW)
  - Site-aware allocation using the site-aware read policy
  - Storage migration
  - Standard features for fault tolerance: mirroring across arrays, hot relocation, dirty region logging (DRL), and dynamic relayout
- Microsoft Failover Clustering integrated I/O fencing
- New Volume Manager Shared Volume resource for Microsoft failover clusters
- New GUI elements in VEA related to the new disk group and volume

CVM does not support:
- Active/Passive (A/P) arrays
- Storage migration on volumes that are offline in the cluster
- Volume replication on CVM volumes using Symantec Storage Foundation Volume Replicator

For information about configuring a CVM cluster, refer to the quick start guide at: www.symantec.com/docs/DOC8119

The Storage Foundation for Windows documentation for other releases and platforms can be found on the SORT website.

SDRO 6.1: Hotfix available for using SFW or VSS COW snapshots
A new hotfix for Symantec Disaster Recovery Orchestrator (SDRO) 6.1 lets you correctly handle snapshots. Without this hotfix, SDRO cannot correctly handle snapshots created using Symantec Storage Foundation for Windows (SFW) or Microsoft Volume Shadow Copy (VSS) Copy-on-Write (COW). You can use SFW or VSS COW to create snapshots of the application data volumes or the replication volumes that are associated with an application recovery configuration. However, after you create the snapshots, if you start, stop, pause, or resume the replication activity, the system crashes.

Hotfix_6_1_00001_129_3504056 addresses this issue by updating the kernel component that manages the file replication. The VxRep.sys file is updated to version 6.1.1.129. After you apply this hotfix, SDRO 6.1 can correctly handle the snapshots of application data volumes and replication volumes that are created using SFW or VSS COW.

For information about the pre-installation, installation, and post-installation procedures, refer to the Readme file that is available with the hotfix at: https://sort.symantec.com/patch/detail/8784

For information about SDRO 6.1, refer to the product guides (PDFs and HTML pages) that are available on the SORT documentation page.
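As a quick post-installation sanity check, you could compare the file version that Windows reports for VxRep.sys against the hotfix's 6.1.1.129. The sketch below is a hypothetical helper, not part of SDRO or the hotfix Readme; only the 6.1.1.129 version number comes from the hotfix description above.

```python
# Hypothetical post-hotfix sanity check: compare a reported VxRep.sys
# file version against the version shipped with the hotfix. This helper
# is illustrative and is not part of SDRO itself.

def version_at_least(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically, field by field."""
    return tuple(int(p) for p in installed.split(".")) >= \
           tuple(int(p) for p in required.split("."))

HOTFIX_VXREP_VERSION = "6.1.1.129"  # from the hotfix description

# A pre-hotfix driver version fails the check; the updated one passes.
print(version_at_least("6.1.0.100", HOTFIX_VXREP_VERSION))  # -> False
print(version_at_least("6.1.1.129", HOTFIX_VXREP_VERSION))  # -> True
```

Comparing the versions as tuples of integers (rather than as strings) matters here, because string comparison would mis-order fields such as "10" versus "9".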