Veritas cluster issue
Hi,

We have a two-node VCS cluster with two file systems configured under VCS on VxVM volumes. One file system became 100% full. We rebooted the cluster node and the mount points appeared, but after some time that file system disappeared. Checking the associated disk group, we found it had become disabled. We imported it manually, which succeeded, but after starting the volumes, mounting the file system at the specified mount point fails with the error below:

# mount /dev/vx/dsk/bgw1dg/vol01 /var/opt/BGw/Server1
mount: /dev/vx/dsk/bgw1dg/vol01 is not this fstype
#

Kindly suggest what the cause could be.

Regards,
Arup
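The "is not this fstype" message usually means mount was not told the file system type and fell back to the system default. A minimal diagnostic sketch, assuming a VxFS file system on Solaris and the device names from the post:

```sh
# Confirm what file system type is actually on the volume
fstyp /dev/vx/rdsk/bgw1dg/vol01

# Mount with the file system type stated explicitly
mount -F vxfs /dev/vx/dsk/bgw1dg/vol01 /var/opt/BGw/Server1

# If the mount still fails, check the file system after the
# disk-group outage before retrying the mount
fsck -F vxfs -y /dev/vx/rdsk/bgw1dg/vol01
```

If fstyp does not report vxfs, the volume itself may be damaged or the wrong volume may have been started, and the disk group state should be investigated before any repair.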
VCS verification and Automation

Hello guys,

We configure many VCS clusters, and we would like to verify the entire configuration before handing it over to the customer, and to automate that verification. Here is my plan for the basic checks:

- Verify the LLT configuration
- Verify the GAB configuration
- Check main.cf, and confirm that no volume group or VIP is configured to start at boot outside of VCS

Is there anything else I should check? Feel free to share any ideas.
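The checks above can be sketched as a short script. This is only an outline, assuming the standard VCS commands are in PATH; expected output varies by platform and cluster size:

```sh
#!/bin/sh
# Basic post-install VCS verification sketch

echo "== LLT node and link status =="
lltstat -nvv            # every private link should show UP for each node

echo "== GAB port memberships =="
gabconfig -a            # expect at least ports a (GAB) and h (HAD)

echo "== Cluster summary =="
hastatus -sum           # systems RUNNING, groups ONLINE where expected

echo "== main.cf syntax =="
hacf -verify /etc/VRTSvcs/conf/config && echo "main.cf OK"
```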
SQL Server Service Group can't be brought online on the second node

Hello everyone,

I need a solution for my SQL cluster in a test environment. The service group cannot be brought online on the second node, although on the first node it comes online without errors. To test the integrity of the data on the iSCSI partition, I deleted the service group, mounted the partition on the second node, and recreated the service group; it then came online easily. But when I switch it to the other node (the first node in the cluster), the same error occurs and it cannot be brought online. Below are some errors from the Event Viewer about this problem:

Thank you in advance for your help.
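Before changing the configuration, it helps to find which resource in the group actually faults on the failing node. A sketch, assuming a group named SQL_SG and a node named NODE1 (substitute the real names):

```sh
hastatus -sum                    # group and resource state on each system
hares -state | grep -i fault     # locate the faulted resource
hagrp -clear SQL_SG -sys NODE1   # clear the FAULTED flag
hagrp -online SQL_SG -sys NODE1  # retry and watch the logs
```

The VCS engine log (engine_A.txt) and the per-agent logs under the VCS log directory normally name the exact reason the online entry point failed.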
Veritas InfoScale 7.0 (Linux and Windows): Changes in Dynamic Multi-Pathing for VMware

With the introduction of the Veritas InfoScale 7.0 product family, Dynamic Multi-Pathing (DMP) for VMware is now included as a component in the Veritas InfoScale Foundation, Storage, and Enterprise products. The license for this component is included as part of the InfoScale license on both Linux and Windows. You can use either the Linux license or the Windows license to enable DMP functionality on the ESXi hypervisor.

The DMP for VMware component consists of the following:

- vSphere offline DMP bundle: DMP components installed on the ESXi hypervisor
- vSphere UI plug-in: installed on a Windows physical machine or on a virtual machine; serves as an interface between ESXi and vCenter
- Remote CLI package: optional command-line interface to manage ESXi hosts from a Linux system or a Windows system

The components can be installed using the command line or the VMware vSphere Update Manager. To install the Dynamic Multi-Pathing for VMware component on ESXi hosts, use one of the following:

- Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.zip
- Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.iso

For more information on installing the DMP for VMware component, refer to the Dynamic Multi-Pathing 7.0 Installation Guide - VMware ESXi.

From this release onwards:

1) DMP for VMware supports SanDisk (FusionIO) PCIe-attached SSD cards.
2) The DMP plug-in for the VMware vCenter web client:
   - Displays the I/O rate as a line graph.
   - Displays the mapping of LUNs and VMDKs used by a virtual machine.
   - Has a SmartPool tab for a virtual machine, for easy configuration of the SmartPool assigned to it.
   - No longer requires ESXi login credentials in the Manage Device Support page.

For more information about the features and enhancements, refer to the Dynamic Multi-Pathing 7.0 Administrator's Guide - VMware ESXi.
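As a rough sketch of the command-line install path (the datastore path is an example, and the Installation Guide remains the authoritative procedure), an offline-bundle install from the ESXi shell typically looks like:

```sh
# Copy the offline bundle to a datastore visible to the host, then:
esxcli software vib install \
    -d /vmfs/volumes/datastore1/Veritas_InfoScale_Dynamic_Multi-Pathing_7.0_VMware.zip

# Reboot the host so the DMP modules load
reboot
```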
Regarding resource online operations

Hi,

Suppose a resource in a service group has faulted and I need to bring it online again. Should I do it this way:

hagrp -clear <group> [-sys <system>]
hagrp -online <group> -sys <system>

or this way?

1. Flush the SG: hagrp -flush <group> -sys <system>
2. Clear the faulted resource: hares -clear <resource> [-sys <system>]
3. Bring it online: hares -online <resource> -sys <system>

Is flushing the SG required? Kindly advise how I should proceed in these cases.
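A common recovery sequence, sketched with an example group app_sg, resource app_db, and node node1 (names are illustrative). Flushing is normally needed only when the group is stuck in a transient state, such as waiting to go online or offline:

```sh
# Only if the group is stuck mid-transition:
hagrp -flush app_sg -sys node1

# Clear the FAULTED flag (a single resource, or the whole group):
hares -clear app_db -sys node1
hagrp -clear app_sg -sys node1

# Bring the group online; VCS onlines members in dependency order:
hagrp -online app_sg -sys node1
```

Onlining at the group level is generally preferable to hares -online, since it lets VCS honour the resource dependencies defined in main.cf.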
Welcome to the Veritas Resiliency Platform Forum - Join the Conversation!

Welcome to the Veritas Resiliency Platform Forum! Here you can share thoughts, ask questions, and access product experts. Join an existing discussion or start your own. Below you will find some basic information for your reading pleasure.

Resources:

- Blog: Transforming IT Service Continuity for the Enterprise with Veritas Resiliency Platform
- Data Sheet: Veritas Resiliency Platform
- Video: Simplifying IT Service Continuity with Veritas Resiliency Platform
- Video: Transforming IT Service Continuity for the Enterprise with Veritas Resiliency Platform

Thank you for your interest in the Veritas Resiliency Platform, and we look forward to your ongoing questions and discussion.
SQL Server 2008 on Veritas Cluster

Hello,

To test the new version of Veritas cluster, I created two nodes containing the SQL Server binaries. The two nodes are identical in configuration and in their attachment to the SAN over iSCSI. The official documentation recommends placing the instance data from the first node on the shared storage, while for the second node the data can reside on the local disk. I did this, but while configuring the service group, when selecting the systems (nodes) that can participate in the cluster service group, an error message appeared telling me that the nodes are not identical and the second node cannot find the instance.

Can you please help me with this problem? Thank you.
Proxy Vs Phantom

I am trying to understand how Proxy and Phantom resources work. From my understanding, if we want to use a NIC resource in different SGs, we create a dedicated SG for the NIC and reference that resource from the other applications as a dependency. Am I right? And how does Phantom work?
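The usual pattern can be sketched in main.cf (names here are illustrative, not from any real configuration): a parallel group holds the real NIC resource plus a Phantom, and other groups mirror the NIC's state through a Proxy instead of monitoring the device a second time:

```
group nicgrp (
    SystemList = { nodeA = 0, nodeB = 1 }
    Parallel = 1
    )

    NIC pub_nic (
        Device = bge0
        )

    Phantom nic_phantom (
        )

group appgrp (
    SystemList = { nodeA = 0, nodeB = 1 }
    )

    Proxy nic_proxy (
        TargetResName = pub_nic
        )
```

The Phantom is there because a group containing only persistent resources (such as NIC) has no OnOff resource from which VCS can derive a group state; the Proxy simply reflects the state of pub_nic, so the device is monitored once rather than in every group that depends on it.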
VCS - Error with the MultiNIC resource

Hi,

I got this error when I restarted a box and it re-entered the cluster:

2014/01/28 21:26:37 VCS ERROR V-16-10001-6505 MultiNICB:MultiNICB_Pub:monitor:The mpathd process (/usr/lib/inet/in.mpathd) does not exist
2014/01/28 21:26:37 VCS WARNING V-16-10001-6506 MultiNICB:MultiNICB_Pub:monitor:Will try to restart mpathd with (/usr/lib/inet/in.mpathd)

I just want to check whether this error is relevant and what its cause could be. The node in question is in the cluster, but currently no SG is running on it, only a parallel "nic" SG that contains the MultiNICB resource. In main.cf it looks like this:

group nic (
    SystemList = { DP-node4 = 0, DP-node5 = 1, DP-node6 = 2, DP-node8 = 3, dp-node9 = 4 }
    Parallel = 1
    )

    MultiNICB MultiNICB_Pub (
        UseMpathd = 1
        ConfigCheck = 0
        Device @DP-node4 = { nxge0 = 0, nxge4 = 1 }
        Device @DP-node5 = { nxge0 = 0, nxge4 = 1 }
        Device @DP-node6 = { nxge0 = 0, nxge4 = 1 }
        Device @DP-node8 = { nxge0 = 0, bge0 = 0 }
        Device @dp-node9 = { igb0 = 0 }
        IgnoreLinkStatus = 0
        NetworkTimeout = 300
        GroupName = Public_Network
        )

    Phantom nic_phantom (
        Critical = 0
        )

Currently the resource is online on the node with the error (dp-node9). Any suggestions?

Thanks,
Joao
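With UseMpathd = 1, the MultiNICB agent delegates link monitoring to Solaris IPMP, so in.mpathd must be running on the node. A quick check on the affected node (a sketch for Solaris, using the interface and group names from the post):

```sh
# Is in.mpathd running at all?
pgrep -fl in.mpathd

# in.mpathd only starts once an interface belongs to an IPMP group;
# confirm the interface's group assignment:
ifconfig igb0 | grep groupname

# If no group is set, assign it (and persist it in /etc/hostname.igb0):
ifconfig igb0 group Public_Network
```

If the interface was never placed in the IPMP group after the reboot, the agent's monitor entry point reports exactly this error until in.mpathd comes up.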