Unable to configure multiple applications for monitoring
Hello all. I am using ApplicationHA 6.1 on Windows Server 2012 R2. I read this document (http://www.symantec.com/docs/TECH159846) and performed the commands to add an application to an existing configuration. However, the command line returned an error message. How can I resolve this error?

---------------
>XprtlC.exe -l https://localhost:5634/vcs/admin/createAppMonHBSG.pl -d params="<Cmd><ID>CreateVMWHBSG</ID><ServiceGroups><Name>SQLServer2012_SG</Name></ServiceGroups></Cmd>"

<Cmd><ID>createAppMonHBSG</ID><ReturnCode>11</ReturnCode><Message>[SQLServer2012_SG] Group is not available in configuration. Failed to configure application heartbeat service group(s).</Message></Cmd>
---------------

Thank you.
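For reference, return code 11 with "[SQLServer2012_SG] Group is not available in configuration" suggests the name passed in <ServiceGroups><Name> does not match any service group in the running configuration. A minimal sanity check before retrying (a sketch, assuming the VCS command-line utilities bundled with ApplicationHA are on the PATH; SQLServer2012_SG is the name from the post):

REM List the service groups known to the local configuration and check the exact name.
hagrp -list
REM Confirm the state of the group you intend to monitor; group names are case sensitive.
hagrp -state SQLServer2012_SG
REM If the group is missing, configure the SQL Server application first, then re-run
REM the XprtlC.exe call with the exact group name reported by "hagrp -list".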
Failed to bring up HeartBeat interface with GAB config

Dear Experts, I'm facing an issue with an 8+2 cluster setup. I managed to create a 1+1 cluster, but when trying to add the 3rd node I got stuck at the gabconfig step. Below is the /var/adm/messages output:

Jun 21 22:49:50 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Starting hardware (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy capabilities (1) (supported 0x00000f50 0x00000000) (link_config 0x01000000 0x00000000)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.warning] WARNING: bnxe3: Phy 10000fdx requested but not supported
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy requesting link config of 0x00000006
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: 10Gb Full Duplex Rx Flow ON Tx Flow ON Link Up
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Hardware started (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartL2: L2 started (clients L2)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: Stopping L2 (clients L2)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Stopping hardware (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Link Down
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Hardware stopped (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: L2 stopped (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartL2: Starting L2 (clients None)
Jun 21 22:49:51 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Starting hardware (clients None)
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy capabilities (1) (supported 0x00000f50 0x00000000) (link_config 0x01000000 0x00000000)
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.warning] WARNING: bnxe3: Phy 10000fdx requested but not supported
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Phy requesting link config of 0x00000006
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: 10Gb Full Duplex Rx Flow ON Tx Flow ON Link Up
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartCore: Hardware started (clients None)
Jun 21 22:49:52 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStartL2: L2 started (clients L2)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: Stopping L2 (clients L2)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Stopping hardware (clients None)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: Link Down
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopCore: Hardware stopped (clients None)
Jun 21 22:49:53 MM-BL2 bnxe: [ID 801725 kern.info] NOTICE: bnxe3: BnxeHwStopL2: L2 stopped (clients None)

Note that I have the same network interface profiles for all blades (HP ProLiant blades). Any ideas how this can be fixed?

BR
Mahmoud
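The bnxe3 messages show the heartbeat NIC repeatedly coming up at 10Gb and then being stopped (Link Up followed by Link Down), and the "Phy 10000fdx requested but not supported" warning points at a forced speed/duplex setting the adapter will not accept, so the link itself may be flapping. Before GAB can seed, both LLT links on the new node have to stay up and the LLT/GAB files have to match the existing nodes. A minimal check on the new node (a sketch, assuming a standard SFHA LLT/GAB installation):

# The cluster ID and node IDs must be identical across all nodes.
cat /etc/llthosts
cat /etc/llttab
# Both private links should show UP for every peer node; if a link flaps here,
# fix the NIC speed/duplex setting first.
lltstat -nvv | more
# GAB membership: port a should list the new node once LLT is stable.
gabconfig -a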
Missing disks and reboot won't solve it

I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO, all of the virtual fibre channel adapters are logged in, but the LPAR fails to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME    ENCLR_TYPE    STATE      ENCLR_NAME      PATH_COUNT
=========================================================================
fscsi2       Hitachi_VSP   ENABLED    hitachi_vsp0    44
fscsi3       Hitachi_VSP   ENABLED    hitachi_vsp0    44
fscsi4       Hitachi_VSP   ENABLED    hitachi_vsp0    44
fscsi5       Hitachi_VSP   ENABLED    hitachi_vsp0    44
fscsi6       Hitachi_VSP   ENABLED    hitachi_vsp0    44
fscsi7       Hitachi_VSP   ENABLED    hitachi_vsp0    44

As you can see above, fscsi0 and fscsi1 are seen by the OS but not by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO and the LPAR, and that does not seem to help. FWIW, I deleted the disks that were in Defined state. Usually when MPIO is in use and we lose a path, deleting the disks and the virtual fibre channel adapter and running cfgmgr solves the issue, but that does not seem to help here.
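DMP only lists a controller once it has at least one claimed path on it, so the first thing to confirm is whether AIX itself sees any hdisks behind fscsi0/fscsi1; if it does not, the problem sits below DMP (NPIV mapping, zoning or LUN masking) rather than in VxVM. A rescan sequence worth trying (a sketch; the controller names are the ones from the post):

# Reconfigure the two adapters and check for hdisks behind them.
cfgmgr -l fscsi0
cfgmgr -l fscsi1
lsdev -Cc disk
# If hdisks appear, ask VxVM/DMP to rediscover them.
vxdctl enable
vxdisk scandisks
# Verify DMP now shows the controllers and their paths.
vxdmpadm listctlr all
vxdmpadm getsubpaths ctlr=fscsi0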
VVR paused due to network disconnection

Hi all. I have a global cluster with two minicluster systems running Solaris 10 (SPARC):

primary: MIVDB01S - 172.22.8.132
secondary: MILDB08S - 10.66.11.148

I stopped the secondary (init 0) for 3 days. After that I started the secondary again (boot from the OK prompt), and after one more day I checked the state of the replication:

MIVDB01S root # vradmin -g datadg printrvg datarvg
Replicated Data Set: datarvg
Primary:
        HostName: 172.22.8.132 <localhost>
        RvgName: datarvg
        DgName: datadg
Secondary:
        HostName: 10.66.11.148
        RvgName: datarvg
        DgName: datadg

# vxrlink -g datadg status datarlk
Wed Jan 21 09:51:08 2015
VxVM VVR vxrlink INFO V-5-1-12887 DCM is in use on rlink datarlk. DCM contains 874432 Kbytes (1%) of the Data Volume(s).

# vradmin -g datadg repstatus datarvg
Replicated Data Set: datarvg
Primary:
        Host name: 172.22.8.132
        RVG name: datarvg
        DG name: datadg
        RVG state: enabled for I/O
        Data volumes: 1
        VSets: 0
        SRL name: srl_vol
        SRL size: 1.00 G
        Total secondaries: 1
Secondary:
        Host name: 10.66.11.148
        RVG name: datarvg
        DG name: datadg
        Data status: consistent, behind
        Replication status: paused due to network disconnection (dcm resynchronization)
        Current mode: asynchronous
        Logging to: DCM (contains 874432 Kbytes) (SRL protection logging)
        Timestamp Information: N/A

# vxprint -Pl | grep flags
flags: write enabled attached consistent disconnected asynchronous dcm_logging resync_paused

The options I am considering:

----- 1st solution
MIVDB01S root # vradmin -g datadg resync datarvg

----- 2nd solution
Stop VVR on the secondary, then on the primary:
# /usr/sbin/vxstart_vvr stop
Start VVR on the secondary, then on the primary:
# /usr/sbin/vxstart_vvr start

Can you help me?
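The resync_paused flag together with "disconnected" on the rlink means the DCM resynchronization cannot make progress until the rlink reconnects, so the first thing to verify is network connectivity between the two sites on the VVR ports. A check sequence from the primary (a sketch; the addresses are the ones above and the port numbers assume VVR defaults):

# Basic reachability to the secondary.
ping 10.66.11.148
# Ports VVR is configured to use on this node (defaults: 4145 for data/heartbeat, 8199 for vradmind).
vrport
# Wait for the rlink flags to show "connected" instead of "disconnected".
vxprint -g datadg -Pl datarlk | grep flags
# Then restart the DCM resynchronization (your 1st solution) and watch the DCM drain.
vradmin -g datadg resync datarvg
vxrlink -g datadg status datarlk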
Network adapter unavailable

Hi all. After upgrading a cluster server from Windows 2008 R1 to Windows 2008 R2, the server can no longer connect to the network. The adapter is still there and the heartbeats are working, but the public network cannot be reached. The server is an HP machine and the Veritas cluster software is 5.1 SP2. Any suggestions on how to fix this? Since the Symantec portal is unavailable for logging a case, I need some help here.
AppHA, SRM, VOM VBS Orchestration

Hello all, I'm working through a complex implementation and have hit a couple of snags along the way. Perhaps people who have worked on various pieces of this can contribute to an overall solution.

The goal of this endeavor is to create a flexible environment between two regional datacenters that gives us the ability to bring up a complex multi-tiered application pod at either location. Physical servers meeting this HA criteria run VCS geo clusters; some VMs run VCS geo clusters with VVR, while other VMs run AppHA using SRM. They are all logically grouped within two VOM VBS groups. The Home VBS consists of the AppHA VMs plus the service groups from the VCS nodes in the home datacenter, while the DR VBS consists of the AppHA VMs plus the matching DR geo-cluster service groups.

AppHA seems to work as planned. VMs move through SRM between the datacenters, register fine in the SymHA consoles, and report their current status through vCenter. My primary hurdle is the control of those AppHA VMs from the VOM VBS. VBS control relies on a control host scan, and when that scan takes upwards of 6 hours to run and update status, the time is better spent doing a manual startup sequence. Again, VOM communication and dynamic updating of the virtualization environment seem to be the issue. The VCS-based elements are not a problem, I believe because they are based on a static infrastructure. I have successfully started up and shut down the home VBS, but once it moves to the opposing datacenter, VOM can't find the VM and the AppHA consoles can't tie updates to records.

Has anyone done VBS orchestration of AppHA assets with SRM handling the transfers?

VOM 5.0
VMware 5.1
SRM 5.1
AppHA 6.0
VCS 5.1 SP2 / 6.0.1

Many thanks
VxDMP and SCSI ALUA handler (scsi_dh_alua)

Hi, I have a question about the SCSI ALUA handler and VxDMP on Linux. Linux has the scsi_dh_alua device handler, which handles the ALUA-related check conditions sent by the target controllers, so these are dealt with in the SCSI layer itself and are not propagated to the upper layers. Does VxDMP have anything similar to handle the ALUA-related errors reported by the SCSI layer, or does it depend on the scsi_dh_alua handler to handle the ALUA-related check conditions from the target and retry at the SCSI layer itself?

Thanks,
Inbaraj.
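In VxDMP, array-specific behaviour (including ALUA) is generally supplied by the ASL, which claims the LUNs and reports the array type, and the APM, which implements the error handling and failover logic for that array type; whether a given host also loads scsi_dh_alua alongside DMP is worth confirming locally. A way to see what is in effect on a host (a sketch; "enclosure_name" is a placeholder for the name shown by listenclosure):

# Enclosures and the array type DMP claimed them as (ALUA, A/A, A/P, ...).
vxdmpadm listenclosure all
# ASLs installed and the arrays they support.
vxddladm listsupport all
# APMs loaded and whether they are Active.
vxdmpadm listapm all
# I/O policy currently applied to a specific enclosure.
vxdmpadm getattr enclosure enclosure_name iopolicy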
After mirroring plex, ssh service is not running

Hi all, I have some production servers that are using an SFHA cluster. I need to mirror a volume to new storage using:

# vxassist -g DGname mirror VOLname alloc="vxdisk1 vxdisk2 . . . . vxdisk13"

After 190 minutes the ssh service could no longer be accessed. I aborted the process because I'm afraid it will affect other service groups and make the server panic and then reboot. Does anyone have an alternative approach, for example mirroring at the subdisk level?
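One way to keep the host responsive is to run the mirror attach as a background VxVM task and pause or resume it around busy periods, instead of holding the session for the whole copy. A sketch (DGname, VOLname and the disk list are the ones from the post; <taskid> is the id reported by vxtask list):

# -b returns immediately and leaves the plex attach running as a background task.
vxassist -b -g DGname mirror VOLname alloc="vxdisk1 vxdisk2 . . . . vxdisk13"
# Find the task id of the copy job and watch its progress.
vxtask list
vxtask monitor <taskid>
# Pause the copy while the server is busy and resume it later.
vxtask pause <taskid>
vxtask resume <taskid>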
Hostmode setting on Sun STK 6180 after upgrade no longer supported

Good afternoon, we are in the process of upgrading our Sun STK 6180 disk array to firmware version 07.84.44.10. In the release notes for this firmware we ran into the following challenge:

Solaris Issues
Solaris with Veritas DMP or other host type
Bug 15840516 - With the release of firmware 07.84.44.10, the host type 'Solaris (with Veritas DMP or other)' is no longer a valid host type.
Workaround - If you are using Veritas with DMP, refer to Veritas support (http://www.symantec.com/support/contact_techsupp_static.jsp) for a recommended host type.

What host type should we choose after the upgrade? The connected systems are running Veritas cluster with DMP.

Please advise,
Remco
vxddladm shows DMP state as not active

Good morning, I have an issue that I can't seem to solve and I'm in dire need of assistance. I have Veritas Cluster 5.1 running on Solaris 10, connected to a 6180 storage array. The array is directly connected to 2 hosts (no switch!):

Controller port 1A is connected to host A
Controller port 1B is connected to host A
Controller port 2A is connected to host B
Controller port 2B is connected to host B

DMP is taking care of the multipathing and looks OK, however I see that the state is set to Not-Active.

Output from vxddladm listsupport libname=libvxlsiall.so:

LIB_NAME          ASL_VERSION          Min. VXVM version
==========================================================
libvxlsiall.so    vm-5.1.100-rev-1     5.1

Output from vxdmpadm list dmpEngenio:

Filename:               dmpEngenio
APM name:               dmpEngenio
APM version:            1
Feature:                VxVM
VxVM version:           51
Array Types Supported:  A/PF-LSI
Depending Array Types:  A/P
State:                  Not-Active

Output from vxdctl mode:
mode: enabled

Both hosts show the same result, state: Not-Active.

So my question is: how do I set the state to Active? Bear in mind that this is a full production system, so I have to make sure that any commands given will not disrupt production; I will schedule downtime if needed. Can someone assist me?

Many thanks!
Remco
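An APM usually shows Active only when DMP has actually claimed an enclosure of the matching array type, so the Not-Active state by itself is not necessarily a fault; the thing to confirm is how the 6180 LUNs are being claimed. Some read-only checks (a sketch; none of these commands change any state):

# Every APM and whether it is Active.
vxdmpadm listapm all
# How the 6180 is claimed: enclosure name and array type.
vxdmpadm listenclosure all
# Confirm the LSI/Engenio ASL is installed and claiming the array.
vxddladm listsupport all | grep -i lsi
# Confirm the LUNs are visible and claimed by VxVM.
vxdisk -o alldgs list
# If the enclosure shows a generic type (e.g. Disk or OTHER_DISKS) instead of an
# LSI/Engenio type, the ASL claim or the array host-type setting is what to fix first.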