Veritas InfoScale 7.0 (Linux and Windows): Changes in Dynamic Multi-Pathing for VMware
With the introduction of the Veritas InfoScale 7.0 product family, Dynamic Multi-Pathing (DMP) for VMware is now included as a component in the Veritas InfoScale Foundation, Storage, and Enterprise products. The license for this component is included as part of the InfoScale license on both Linux and Windows. You can use either the Linux license or the Windows license to enable DMP functionality on the ESXi hypervisor.

The DMP for VMware component consists of the following:
- vSphere offline DMP bundle — DMP components installed on the ESXi hypervisor
- vSphere UI plug-in — installed on a Windows physical machine or on a virtual machine; serves as an interface between ESXi and vCenter
- Remote CLI package — optional command-line interface to manage ESXi hosts from a Linux system or a Windows system

The components can be installed using the command line or the VMware vSphere Update Manager. To install the Dynamic Multi-Pathing for VMware component on ESXi hosts, use one of the following:
- Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.zip
- Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.iso

For more information on installing the DMP for VMware component, refer to the Dynamic Multi-Pathing 7.0 Installation Guide - VMware ESXi.

From this release onwards:
1) DMP for VMware supports SanDisk (FusionIO) PCIe-attached SSD cards.
2) The DMP plug-in for the VMware vCenter web client:
   - Displays the I/O rate as a line graph.
   - Displays the mapping of LUNs and VMDKs used by a virtual machine.
   - Has a SmartPool tab for a virtual machine, for easy configuration of the SmartPool assigned to it.
   - No longer requires ESXi login credentials in the Manage Device Support page.

For more information about the features and enhancements, refer to the Dynamic Multi-Pathing 7.0 Administrator's Guide - VMware ESXi.
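A command-line install of the offline bundle might look like the sketch below. The `esxcli` subcommands are standard VMware ESXi commands, but the datastore path is a placeholder and the exact prerequisites (such as maintenance mode) should be taken from the Installation Guide; the `run` wrapper only echoes each step so the sequence can be reviewed before executing it for real.

```shell
# Dry-run sketch of installing the DMP offline bundle on an ESXi host.
# The datastore path is hypothetical. "run" echoes instead of executing;
# replace the echo with "$@" to run the commands on a real host.
run() { echo "+ $*"; }

BUNDLE=/vmfs/volumes/datastore1/Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.zip

run esxcli system maintenanceMode set --enable true
run esxcli software vib install --depot "$BUNDLE"
run esxcli system maintenanceMode set --enable false
```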
How To Manage Multiple VCS Clusters from a Central Location

What is the preferred tool or process for managing multiple VCS clusters? We have several clusters in our environment, and to date the administrators RDP to one of the nodes in the cluster and open the Veritas Cluster Manager - Java Console. What is the solution for seeing all clusters in a single window or dashboard, and for getting notification of cluster events? Where is the documentation on how to set up the solution? Are there any videos that demonstrate how to set it up or how the solution works?
‘vxdmppr’ utility information

Hello,

With VxVM, we get the ‘vxdmppr’ utility, which performs SCSI-3 PR operations on disks, similar to sg_persist on Linux. But we don’t find much documentation around this utility. In one blog we saw that it is an unsupported utility. Can someone shed light on it? Has anyone used it in the past? Does anyone know how this utility is used within VxVM? How do we find out whether it is supported or not?

Rafiq
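For comparison, the read-only SCSI-3 PR queries that the post likens vxdmppr to look like this with sg_persist from sg3_utils. The device path is a placeholder, and the `run` wrapper only echoes the commands so nothing touches a real disk; vxdmppr itself is not shown here because its options are undocumented.

```shell
# Read-only SCSI-3 Persistent Reservation queries via sg_persist,
# for comparison with vxdmppr. /dev/sdb is a placeholder device;
# "run" echoes instead of executing.
run() { echo "+ $*"; }

DEV=/dev/sdb
run sg_persist --in --read-keys "$DEV"          # list registered PR keys
run sg_persist --in --read-reservation "$DEV"   # show the current reservation
```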
SFHA Solutions 6.0.1: About the Veritas Cluster Server (VCS) startup sequence

Communication among VCS components

When you install VCS, user-space components and kernel-space components are installed on a system. The VCS engine, also known as the high availability daemon (HAD), exists in the user space. The HA daemon contains the decision logic for the cluster and maintains a view of the cluster. The VCS engine on each system in the cluster maintains a synchronized view of the cluster. For example, when you take a resource offline or bring a system in the cluster online, VCS on each system updates its view of the cluster.

The kernel-space components consist of the Group Membership Services and Atomic Broadcast (GAB) and Low Latency Transport (LLT) modules. Each system that has the VCS engine installed on it communicates through GAB and LLT. GAB maintains cluster membership and cluster communications. LLT maintains the traffic on the network and communicates the heartbeat signal information of each system to GAB.

See:
- About Group Membership Services and Atomic Broadcast (GAB)
- About Low Latency Transport (LLT)

About the VCS startup sequence

The start and stop variables for the Asynchronous Monitoring Framework (AMF), LLT, GAB, I/O fencing (VxFEN), and VCS engine modules define the default behavior of these modules during a system restart or a system shutdown. For a clean VCS startup or shutdown, you must either enable or disable the startup and shutdown modes for all of these modules. VCS startup depends on the kernel-space modules and other user-space modules starting in a specific order.
The VCS startup sequence is as follows:
1. LLT
2. GAB
3. I/O fencing
4. AMF
5. VCS

For more information on setting the start and stop environment variables, VCS modules, and starting and stopping VCS, see:
- Environment variables to start and stop VCS modules
- About the I/O fencing module
- About the IMF notification module
- About the high availability daemon (HAD)
- Administering the AMF kernel driver
- Starting VCS
- Stopping VCS
- Stopping the VCS engine and related processes

In a single-node cluster, you can disable the start and stop environment variables for LLT, GAB, and VxFEN if you have not configured these kernel modules. If you disable LLT and GAB, set the ONENODE variable to Yes in the /etc/default/vcs file.

The following topics provide information on troubleshooting startup issues:
- VCS:10622 local configuration missing
- VCS:10623 local configuration invalid
- VCS:11032 registration failed. Exiting
- Waiting for cluster membership
- Enabling debug logs for the VCS engine
- LLT startup script displays errors
- Fencing startup reports preexisting split-brain
- Issues during fencing startup on VCS cluster nodes set up for server-based fencing

VCS documentation for other releases and platforms can be found on the SORT website.
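The startup order can be sketched as a manual start sequence on a Linux cluster node. The init-script paths shown are typical for VCS on Linux but may differ by release and platform, so treat this as an illustration rather than the documented procedure; the `run` wrapper only echoes each step.

```shell
# Start the VCS stack in dependency order: LLT first, the VCS engine last.
# Dry-run: "run" echoes each command instead of executing it.
run() { echo "+ $*"; }

for svc in llt gab vxfen amf vcs; do
    run /etc/init.d/$svc start
done
```

Stopping the stack cleanly is the same list walked in reverse order.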
SFHA Solutions 6.0.1: Displaying disk group information with vxprint and vxinfo

The vxprint and vxinfo utilities let you display information about your Veritas Volume Manager (VxVM) disk group configurations.

vxprint

The vxprint utility displays complete or partial information from records in disk group configurations. You can select records by name or with special search expressions. vxprint can display disk group, disk media, volume, plex, subdisk/subvolume, data change object (DCO), link object, and snap object records. vxprint is often used to display information about objects before or after performing a task.

Common vxprint tasks include:
- Displaying an individual data volume
- Displaying a volume set

You can also use vxprint for troubleshooting VxVM. These tasks include:
- Investigating disk failures
- Displaying volume and plex states
- Recovering from an incomplete disk group move
- Recovering from failure of a DCO move

Note: vxprint cannot display disk access records. Use the vxdisk list command to display disk access records, or physical disk information.

vxprint(1M) manual pages for SFHA 6.0.1: AIX, HP-UX, Linux, Solaris

vxinfo

The vxinfo utility displays the accessibility and usability of one or more volumes in a disk group. You specify a volume operand to identify which volumes to report on. If you do not specify a volume operand, a volume condition report is provided for each volume in the selected disk group. You can only run vxinfo on one disk group at a time.

vxinfo is useful for troubleshooting. For more information, see:
- Listing unstartable volumes

vxinfo(1M) manual pages for SFHA 6.0.1: AIX, HP-UX, Linux, Solaris

The manual pages for other releases can be found on the SORT website.

You can also use the Veritas Volume Manager (VxVM) vxdisk list command to display the devices visible to VxVM, their current disk formats, corresponding disk groups if any, and their status, whether online or failing.
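A few common invocations of the utilities described above can be sketched as follows. The disk group and volume names are placeholders, and the `run` wrapper only echoes the commands so the examples can be reviewed without a VxVM installation.

```shell
# Dry-run examples of common vxprint/vxinfo queries. "mydg" and "vol01"
# are placeholder names; "run" echoes instead of executing.
run() { echo "+ $*"; }

run vxprint -g mydg -ht      # hierarchical listing of all records in mydg
run vxprint -g mydg vol01    # records for a single volume
run vxinfo -g mydg           # condition report for every volume in mydg
run vxinfo -g mydg vol01     # condition report for one volume
run vxdisk list              # disk access records, which vxprint cannot show
```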
For more information, see the forum post: SFHA 6.0.1 Solutions: Using the vxdisk list command to display status and to recover from errors on Veritas Volume Manager disks