Veritas InfoScale 7.1: Documentation available
The documentation for Veritas InfoScale 7.1 is now available at the following locations:

• PDF and HTML versions: SORT documentation page
• Late Breaking News: https://www.veritas.com/support/en_US/article.0001072139
• Hardware Compatibility List: https://www.veritas.com/support/en_US/article.000107677
• Software Compatibility List: https://www.veritas.com/support/en_US/article.000107212
• Manual pages: AIX, Linux, Solaris

The Veritas InfoScale 7.1 documentation set includes the following manuals:

Getting Started
• Veritas InfoScale What's New
• Veritas InfoScale Solutions Getting Started Guide
• Veritas InfoScale Readme First

Release Notes
• Veritas InfoScale Release Notes

Installation Guide
• Veritas InfoScale Installation Guide

Configuration and Upgrade Guides
• Storage Foundation Configuration and Upgrade Guide
• Storage Foundation and High Availability Configuration and Upgrade Guide
• Storage Foundation Cluster File System High Availability Configuration and Upgrade Guide
• Storage Foundation for Oracle RAC Configuration and Upgrade Guide
• Storage Foundation for Sybase ASE CE Configuration and Upgrade Guide
• Cluster Server Configuration and Upgrade Guide

Legal Notices
• Veritas InfoScale Third-party Software License Agreements

For the complete Veritas InfoScale documentation set, see the SORT documentation page.

Understanding NFS and NFSRestart agent
Network File System (NFS) allows network users to access shared files stored on an NFS server. NFS allows you to manage shared files transparently, as if the files were on a local disk. For more information about NFS and configuring NFS service groups, refer to the following sections:

• About NFS
• Configuring NFS service groups

The NFSRestart agent is a Cluster Server (VCS) bundled agent. This agent is installed on the system when you install VCS. The NFSRestart agent is configured in conjunction with the NFS agent for failover-type shared service groups (a minimal configuration sketch appears at the end of this article). The NFSRestart agent manages the essential NFS locking services, the network status monitor, and the network lock manager. The agent also manages the NFS lock recovery service by recovering NFS record locks after a sudden server crash. The agent prevents potential NFS ACK storms by terminating NFS server services before closing all Transmission Control Protocol (TCP) connections with the NFS client.

Refer to the following sections for more information about the NFSRestart agent:

• Dependencies
• Agent functions
• State definitions
• Attributes
• Resource type definition
• Notes for NFSRestart agent
• Sample configurations
• Debug log levels

In a Veritas Cluster File System environment, instead of the NFSRestart agent, you must configure preonline and postonline triggers to manage NFS services. For more information, refer to the following section:

• Understanding how Clustered NFS works

The links in this article are specific to the Linux platform. Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
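As a closing illustration of how the NFSRestart agent pairs with the NFS agent, here is a minimal, hypothetical main.cf sketch for a failover NFS service group. The group name, resource names, path, and attribute values are assumptions made for this example, not taken from the bundled agents guide; see the Sample configurations section referenced above for authoritative examples:

group nfs_sg (
    SystemList = { node1 = 0, node2 = 1 }
    )

    NFS nfs_res (
        Nservers = 16                // number of NFS daemons; illustrative value
        )

    NFSRestart nfsrestart_res (
        NFSRes = nfs_res             // the NFS resource this agent works with
        LocksPathName = "/locks"     // shared directory for lock state; placeholder path
        NFSLockFailover = 1          // enable NFS lock recovery across failover
        )

    // a real group would also contain Share, IP, and storage resources
    nfsrestart_res requires nfs_res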
Understanding the V-16-1-40130 error message for UNIX

The hares command administers resources in a Veritas Cluster Server (VCS) cluster. Resources are individual representations of the elements that are required for a service group to be available, such as a volume, a database, or an IP address. Among the many hares options, -display and -state are frequently used to display resource information:

# hares -display resource_name
# hares -state resource_name

Error conditions

Sometimes, when you run these commands, you get the following message:

VCS WARNING V-16-1-40130 Resource name does not exist in the local cluster

The error occurs when the resource is not configured in the local cluster. When the resource is configured in a remote cluster, the message is also displayed, but it is followed by the resource state in the remote cluster.

Investigating the error

To check which resources are configured in the local cluster or in a global group, execute the following command (a worked example appears at the end of this article):

# hares -list

Then you can examine any resource that appears in the output by re-entering either of the previous commands:

For resources in the local cluster:

# hares -display resource_name
# hares -state resource_name

For resources in a global group:

# hares -display resource_name -clus local_cluster_name/remote_cluster_name
# hares -state resource_name -clus local_cluster_name/remote_cluster_name

If you need to, you can also administer the resources by:

• Adding resources
• Deleting resources
• Modifying resources
• Linking or unlinking resources
• Bringing resources online
• Taking resources offline

You can find more Veritas InfoScale and SFHA Solutions documentation on the SORT website.
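As a hedged walk-through of the investigation above, consider a resource named webip that is not configured in the local cluster. The resource names and the exact output layout are illustrative assumptions; output varies between VCS releases:

# hares -state webip
VCS WARNING V-16-1-40130 Resource webip does not exist in the local cluster

# hares -list
webdg      node1
webdg      node2
webvol     node1
webvol     node2

The resource webip is absent from the list, which explains the warning: it was either never added to this cluster or is configured elsewhere. For a resource that does appear in the list, the state query succeeds:

# hares -state webdg
#Resource    Attribute    System    Value
webdg        State        node1     ONLINE
webdg        State        node2     OFFLINE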
Using Dynamic Multi-Pathing (DMP) devices with Oracle Automatic Storage Management (ASM)

DMP supports Oracle ASM. You can:

• Enable DMP devices for use with Oracle ASM: make DMP devices visible to ASM as available disks to enable their use with Oracle ASM. DMP support for ASM is available for char devices (/dev/vx/rdmp/*). For more information, see Enabling DMP devices for use with Oracle ASM. A rough sketch of this operation appears at the end of this article.

• Migrate Oracle ASM disk groups from operating system devices to DMP devices: if an existing ASM disk group uses operating system native devices as disks, you can migrate these devices to DMP control. If the OS devices are controlled by other multi-pathing drivers, this operation requires system downtime to migrate the devices to DMP control. For more information, see Migrating Oracle ASM disk groups from operating system devices to DMP devices.

• Remove DMP devices from the listing of Oracle ASM disks: remove DMP devices from the listing of ASM disks by disabling DMP support for ASM from the device. You cannot remove DMP support for ASM from a device that is in an ASM disk group. For more information, see Removing DMP devices from the listing of Oracle ASM disks.

The links in this article are specific to the AIX platform. Veritas InfoScale documentation for other releases and platforms can be found on the SORT website.
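As a rough sketch of the first operation above (enabling DMP devices for use with ASM), the sequence below grants the Oracle user access to a DMP character device and then points ASM discovery at the rdmp tree. The user name, group name, permission mode, and device name are all placeholders for illustration; follow the linked section for the exact procedure on your release:

# vxdmpraw enable oracle dba 765 emc_clariion0_893
(make the DMP char device under /dev/vx/rdmp/ accessible to the oracle user; every argument here is a placeholder)

Then set the ASM instance's ASM_DISKSTRING initialization parameter to /dev/vx/rdmp/* so that ASM discovers the DMP devices as available disks.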
SFHA Solutions/InfoScale: How to clean up the operating system device tree

A device tree is a data structure that describes the LUNs in your environment. When you remove LUNs dynamically from an existing target ID, the operating system SCSI device tree must be cleaned up. This releases the SCSI target ID for reuse if a new LUN is added to the host later. How you do this in Storage Foundation/InfoScale depends on your version: with Storage Foundation and High Availability (SFHA) 6.0 or earlier, you must clean up the device tree manually; in SFHA 6.0.1 and later, the device tree is cleaned up dynamically.

For SFHA 6.0 and earlier

For SFHA 6.0 and earlier, you need to clean up the operating system device tree manually after you remove LUNs from an existing target ID. To completely remove LUNs from an existing target ID, perform the following steps (a rough Linux sketch appears at the end of this article):

1. Remove the LUNs.
2. Clean up the operating system device tree.
3. Scan the operating system device tree.

[Diagram: the process of removing LUNs dynamically from an existing target ID]

For SFHA 6.0.1 and later, and InfoScale

In SFHA 6.0.1, the Dynamic Reconfiguration operations option (DR tool) was introduced in the vxdiskadm menu to simplify the removal of LUNs from an existing target ID. It saves the manual work of device scanning and cleanup. Through the DR tool, Dynamic Multi-Pathing (DMP) updates and cleans up the operating system device tree automatically. For more information about using the DR tool to remove LUNs, see: Removing LUNs dynamically from an existing target ID.

[Diagram: the process of using the DR tool to remove LUNs dynamically from an existing target ID, which the tool simplifies]

Related links:

• Reconfiguring a LUN online that is under DMP control (SFHA 6.0.1 and later)
• Reconfiguring a LUN online that is under DMP control (SFHA 6.0 and earlier)

The links in this article are specific to the Linux platform. You can find SFHA Solutions/InfoScale documents for other versions and platforms on the SORT documents page.
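Here is a hedged sketch of the manual procedure on a Linux host running SFHA 6.0 or earlier. The disk and device names are placeholders, and you should confirm the exact steps in your release's administrator's guide before removing devices on a live system:

# vxdisk rm emc0_0123
(remove the LUN from VxVM/DMP control; the disk access name is a placeholder)

# echo 1 > /sys/block/sdc/device/delete
(delete the stale SCSI device node for each path to the removed LUN; sdc is a placeholder)

# vxdisk scandisks
(rescan the operating system device tree so DMP sees the updated configuration)

On SFHA 6.0.1 and later, the same outcome is menu-driven: run vxdiskadm and select the Dynamic Reconfiguration operations option.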
Veritas InfoScale 7.0.1 (for AIX/Linux/Solaris): Documentation Available

The Veritas InfoScale 7.0.1 patch for AIX/Linux/Solaris is a maintenance-level release, which provides several updates on top of the InfoScale 7.0 products. The highlights of this release are:

• The bundled OpenSSL is upgraded from 1.0.1m to 1.0.1p.
• The IPM protocol is no longer supported for CP server and InfoScale Availability clusters.
• The following platforms are newly supported:
  - Linux: SLES 11 SP4, RHEL 6u7, and OL 6u7
  - Solaris: Solaris 10 x64 is supported on InfoScale Availability
  - Virtualization: VMware vSphere 6.0 u1

The documentation for Veritas InfoScale 7.0.1 is now available at the following locations:

• PDF and HTML versions: SORT documentation page
• Late Breaking News: http://www.symantec.com/docs/TECH230620
• Hardware Compatibility List: http://www.symantec.com/docs/TECH230646
• Software Compatibility List: http://www.symantec.com/docs/TECH230619
Understanding V-16-2-13074: The monitoring program for resource name has consistently failed to determine the resource status within the expected time. Agent is restarting (attempt number number of total) the resource

In Veritas Cluster Server (VCS), an agent monitors the status of resources. A resource can be in one of the following states:

• ONLINE
• OFFLINE
• UNKNOWN

However, sometimes a problem can occur. If the agent cannot determine the status of the resource within a predetermined time, the agent restarts the resource. Problems can occur when:

• An ONLINE resource is reported OFFLINE without any offline operation performed from within VCS, or the resource is taken offline outside of VCS control. This is known as an unexpected offline.
• FaultOnMonitorTimeouts (FOMT) is reached, that is, the monitor entry point has timed out FOMT times.
• The OFFLINE entry point is ineffective and the monitor entry point timed out after that. This is not a case of unexpected offline.

The error message V-16-2-13074 appears in the log if all the following conditions are met, in order:

• The monitored resource is faulted.
• ManageFaults != NONE.
• The clean entry point is executed successfully. (If clean fails, the clean entry point is called only until the CleanRetryLimit attribute is reached.)
• It is not a case of unexpected offline.
• The agent restarts the resource. (This happens only if the RestartLimit attribute is not reached.)

This error applies to all platforms and releases. To solve this issue (see the command sketch at the end of this article):

• Check whether the return status of the monitor returns OFFLINE.
• Check whether the monitor has timed out. Multiple monitor timeouts result from an ineffective offline entry point or from a heavily loaded system.

For details about monitoring the resources, see:

• How VCS handles resource faults
• Controlling VCS behavior

For more information, go to the SORT website.
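To inspect or tune the attributes involved in this message on a live cluster, commands like the following can help. The type, resource, and group names (Oracle, ora_res, ora_grp) are placeholders; the attribute names themselves (FaultOnMonitorTimeouts, RestartLimit, ManageFaults) are standard VCS attributes:

# hatype -display Oracle -attribute FaultOnMonitorTimeouts
(show how many monitor timeouts are tolerated before the resource is declared faulted)

# hares -override ora_res RestartLimit
# hares -modify ora_res RestartLimit 2
(make RestartLimit overridable at the resource level, then allow up to two restarts before faulting)

# hagrp -display ora_grp -attribute ManageFaults
(confirm whether VCS is allowed to act on faults for this service group)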
Veritas InfoScale 7.0: Storage management feature support matrix for Veritas InfoScale products on Oracle

In the Veritas InfoScale 7.0 release, the Storage Foundation High Availability (SFHA) product portfolio has been replaced with a more simplified, robust, and comprehensive Veritas InfoScale portfolio that includes the following four products:

• Veritas InfoScale Foundation
• Veritas InfoScale Storage
• Veritas InfoScale Availability
• Veritas InfoScale Enterprise

The core features of SFHA, storage management and availability, are consolidated in the Veritas InfoScale portfolio. This simplifies product licensing and installation, and improves performance. With this release, the storage management feature support map for Veritas InfoScale products on Oracle has been rearranged to suit the new Veritas InfoScale product model.

[Table in the original article: mapping of the storage management features supported by each product]

For information on the use cases of the storage management features, see the Veritas InfoScale™ 7.0 Storage and Availability Management for Oracle Databases document.

For more information on Veritas InfoScale products, see the following sections in the Veritas InfoScale What's new in this release document:

• About the Veritas InfoScale product suite
• About Veritas InfoScale Foundation
• About Veritas InfoScale Storage
• About Veritas InfoScale Availability
• About Veritas InfoScale Enterprise

Veritas InfoScale documentation can be found on the SORT website.
SFHA Solutions 6.2: VCS support for SmartIO

The SmartIO feature on Linux was introduced in Storage Foundation and High Availability (SFHA) 6.1. Beginning in this release, SmartIO is also supported on AIX and Solaris. SmartIO enables data efficiency on your solid state drives (SSDs) through I/O caching. For information about administering SmartIO, see the Symantec Storage Foundation and High Availability Solutions SmartIO for Solid State Drives Solutions Guide.

In an SFHA environment, applications can fail over to another node. On AIX, Linux, and Solaris, beginning in this release, the SFCache agent allows you to enable caching for an application if there are caching devices. The SFCache agent also allows you to fail over the application to a node that does not have caching devices.

The SFCache agent monitors:

• Read caching for Veritas Volume Manager (VxVM) cache
• Read and writeback caching for Veritas File System (VxFS) cache

For volume-level caching, the cache objects are disk groups and volumes. For file-system-level caching, the cache object is the mount point. You can:

• Modify the caching mode at runtime
• Set the default caching mode when you mount the VxFS file system
• Configure the MountOpt attribute of the Mount agent to specify the default caching mode using the smartiomode option (see the sketch at the end of this article)

For more information about the smartiomode option, see the mount_vxfs(1m) manual page.

If the cache faults, the application still runs on the same system without any issues, but with degraded I/O performance. You can configure the SFCache agent's CacheFaultPolicy attribute and choose to either ignore the fault or initiate failover. If SmartIO is not enabled on a node, the SFCache resource acts as a dummy resource: it is reported as ONLINE or OFFLINE depending on the group state, but caching-related operations are not performed.

For more information, see:

• SFCache agent
• Mount agent

Symantec Storage Foundation and High Availability documentation for other releases and platforms can be found on the SORT website.
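As a hedged sketch of the MountOpt usage described above, the main.cf Mount resource below requests writeback caching as the default mode when the VxFS file system is mounted. The resource name, paths, and cache mode choice are illustrative assumptions; valid smartiomode values and their semantics are documented in mount_vxfs(1m):

Mount app_mnt (
    MountPoint = "/app/data"                       // placeholder mount point
    BlockDevice = "/dev/vx/dsk/appdg/appvol"       // placeholder VxVM volume
    FSType = vxfs
    MountOpt = "smartiomode=writeback"             // assumption: writeback caching as the default mode
    FsckOpt = "-y"
    )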