cannot configure vxfen after reboot
Hello,

We physically moved a server, and after the reboot we cannot configure vxfen:

# vxfenconfig -c
VXFEN vxfenconfig ERROR V-11-2-1002 Open failed for device: /dev/vxfen with error 2

My vxfen.log:

Wed Aug 19 13:17:09 CEST 2015 Invoked vxfen. Starting
Wed Aug 19 13:17:23 CEST 2015 return value from above operation is 1
Wed Aug 19 13:17:23 CEST 2015 output was VXFEN vxfenconfig ERROR V-11-2-1041 Snapshot for this node is different from that of the running cluster.
Log Buffer: 0xffffffffa0c928a0
VXFEN vxfenconfig NOTICE Driver will use customized fencing - mechanism cps
Wed Aug 19 13:17:23 CEST 2015 exiting with 1

Engine version 6.0.10.0, RHEL 6.3.

Any idea to help me get vxfen running (and had after that ...)?
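A minimal diagnostic sketch for this situation, to be run on the affected node (not a definitive fix): open error 2 is ENOENT, which usually means /dev/vxfen is absent because the vxfen kernel module is not loaded, while V-11-2-1041 indicates this node's view of the coordination points differs from what the running cluster is using, so the fencing configuration files are worth comparing against a healthy peer node.

```shell
# Is the vxfen kernel module loaded? If not, start the fencing driver.
lsmod | grep vxfen || /etc/init.d/vxfen start

# Compare the fencing configuration with a healthy cluster node;
# with mechanism "cps" the CP server entries must match the peers.
cat /etc/vxfenmode
cat /etc/vxfentab

# On a node where fencing is already running: current membership
# and coordination-point state, for comparison.
vxfenadm -d
```

If /etc/vxfenmode was restored or edited during the move, bringing it back in line with the running nodes before retrying vxfenconfig -c is the first thing to check.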
missing disks and reboot won't solve it

I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO, all of the virtual fibre channel adapters are logged in, but on the LPAR it is failing to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME    ENCLR_TYPE    STATE      ENCLR_NAME     PATH_COUNT
=========================================================================
fscsi2       Hitachi_VSP   ENABLED    hitachi_vsp0   44
fscsi3       Hitachi_VSP   ENABLED    hitachi_vsp0   44
fscsi4       Hitachi_VSP   ENABLED    hitachi_vsp0   44
fscsi5       Hitachi_VSP   ENABLED    hitachi_vsp0   44
fscsi6       Hitachi_VSP   ENABLED    hitachi_vsp0   44
fscsi7       Hitachi_VSP   ENABLED    hitachi_vsp0   44

Above you will see that fscsi0 and fscsi1, which the OS sees, are not being seen by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO and LPAR and that doesn't seem to help. FWIW, I deleted the disks that were in Defined state. Usually when MPIO is being used and we lose a path, deleting the disks and the virtual fibre channel adapter and running cfgmgr solves the issue, but that doesn't seem to help here.
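A sketch of a rediscovery sequence worth trying, assuming the fabric zoning and VIO mappings are actually good (DMP can only see disks the OS first configures as hdisks, so the AIX layer has to come back before the VxVM layer can):

```shell
# Re-walk the device tree under the affected adapters on the LPAR.
cfgmgr -l fscsi0
cfgmgr -l fscsi1
lsdev -Cc disk        # confirm the hdisks are back in Available state

# Then ask VxVM/DMP to rescan and rebuild its device database.
vxdctl enable         # refresh the DMP view of OS devices
vxdisk scandisks      # pick up newly discovered disks
vxdmpadm listctlr     # fscsi0/fscsi1 should now be listed
```

If cfgmgr configures no hdisks at all on fscsi0/fscsi1, the problem is below Veritas (NPIV mapping, zoning, or array host-group masking for those two WWPNs), and no amount of VxVM rescanning will help.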
SFHACFS vs Hitachi-VSP & Host mode option 22

Hi! I'm installing several new SFHACFS clusters, and during failover testing I ran into an annoying problem: when I fence one node of the cluster, DMP logs a high number of path down/path up events, which in the end causes the disks to disconnect even on the other, active nodes. We found out that our disks were exported without host mode option 22, so we fixed this on the storage. Even after this, the clusters behaved the same. Later I read somewhere on the internet that it's a good idea to relabel the disks, so I requested new disks from storage and did a vxevac to the new disks. This fixed two of the clusters we have, but the other two still behave the same.

Has anybody experienced anything similar? Do you know anything I can test/check on the servers to determine the difference? The only difference between the environments is that the non-working clusters have their disks mirrored from two storage systems, while the working ones have data disks from only one storage.
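A hypothetical checklist for comparing a working cluster against a non-working one (tunable names as in SF 6.x; this is a diff-the-environments sketch, not a known fix for this symptom):

```shell
# DMP error-handling and path-restore behaviour -- values such as
# dmp_health_time, dmp_path_age, dmp_restore_interval and
# dmp_fast_recovery directly influence path up/down flapping.
vxdmpadm gettune all

# Are both arrays claimed identically, with the same array type
# and I/O policy, on working and non-working clusters?
vxdmpadm listenclosure all

# Which ASL is claiming the VSP devices on each cluster?
vxddladm listsupport all | grep -i hitachi

# Native device <-> DMP node mapping, for spotting odd claiming.
vxdisk -e list
```

Since the broken clusters are the ones mirrored across two storage systems, diffing the enclosure attributes and ASL versions per array between the two environments is the most likely place for the difference to show up.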
Application HA clustering has dropped a disk... :-/

Hi,

After I completed the ApplicationHA clustering of SQL 2008 across two Windows 2008 R2 nodes, I found that the SQL installation's virtual backup disk (M: for reference) remained on the first node after I'd initiated a switch to the second node via the High Availability tab in the vSphere Client. The other disks re-attached to the second node OK.

On closer inspection in Cluster Explorer on one of the nodes, I discovered that the mount point for that disk was completely missing! I attempted to manually create the mount point and brought the resource online in the cluster on the first node, but attempting the switch operation again failed, as the virtual disk failed to re-attach to the second node.

How do I fix this without blatting the clustering / SQL installations? Thanks!
Running HA as nobody:nobody

I am running 5.1 HA on a RHEL setup. Right now I have a working cluster with application failover. All our applications are set to run as the nobody:nobody account on the system. As part of the HA integration, a requirement came up that we need to be able to start and stop services and run commands as the nobody:nobody account. Because VCS is all set up to run as root, what is the best way to accomplish this?
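One angle worth noting: VCS itself (had and the agents) has to run as root, but the bundled Application agent has a User attribute that makes it execute the start/stop/monitor programs as that user rather than as root. A sketch of such a resource, with hypothetical program paths:

```
Application app_as_nobody (
    User = nobody
    StartProgram = "/opt/myapp/bin/start"
    StopProgram = "/opt/myapp/bin/stop"
    MonitorProcesses = { "/opt/myapp/bin/myappd" }
    )
```

For ad-hoc commands outside the agent (e.g. in trigger scripts), wrapping the command as `su - nobody -c "..."` from the root-owned script achieves the same effect.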
VCS WARNING V-16-10031-8503 NotifierMngr:notifier:monitor:Expected correct SNMP and | or SMTP options

I've got this warning message in my engine_A.log and NotifierMngr_A.log. My ClusterService group has been OFFLINE since I rebooted the server, and it is not possible to probe the resource on either of my two servers. Any suggestion to remove the warning and/or restart the service would be appreciated.

# hastatus -sum

-- SYSTEM STATE
-- System          State      Frozen
A  tol031          RUNNING    0
A  tol032          RUNNING    0

-- GROUP STATE
-- Group           System    Probed    AutoDisabled    State
B  ClusterService  tol031    N         N               OFFLINE
B  ClusterService  tol032    N         N               OFFLINE
B  lan             tol031    Y         N               ONLINE
B  lan             tol032    Y         N               ONLINE
B  touapf          tol031    Y         N               PARTIAL
B  touapf          tol032    Y         N               OFFLINE
B  vxfen           tol031    Y         N               ONLINE
B  vxfen           tol032    Y         N               ONLINE

-- RESOURCES NOT PROBED
-- Group           Type          Resource       System
E  ClusterService  NotifierMngr  notifier       tol031
E  ClusterService  NotifierMngr  notifier       tol032
E  touapf          Application   touapfxcamApp  tol031
E  touapf          Application   touapfxcamApp  tol032
E  vxfen           CoordPoint    coordpoint     tol031
E  vxfen           CoordPoint    coordpoint     tol032

PS: the PARTIAL state of my touapf group is known; it has nothing to do with this issue.
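This particular warning is usually raised when the NotifierMngr resource has neither SNMP consoles nor a complete SMTP configuration defined, so the monitor cannot tell what the notifier should do. A sketch of a minimally configured resource (hostnames, addresses and severities here are placeholders, not values from this cluster):

```
NotifierMngr notifier (
    SmtpServer = "smtp.example.com"
    SmtpRecipients = { "admin@example.com" = Warning }
    SnmpConsoles = { "snmphost.example.com" = Error }
    )
```

After setting at least one of the SMTP pair (SmtpServer plus SmtpRecipients) or SnmpConsoles via `hares -modify`, `hares -probe notifier -sys tol031` (and likewise for tol032) should clear the NOT PROBED state and let the ClusterService group come online.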
Problem with the Oracle VCS 5.1 resource and DetailMonitor = 1

Hi,

I am attempting to configure the DetailMonitor = 1 option of the Oracle resource in a service group, but am running into "LogonUser failed" error messages in the Oracle_A.txt file. The issue is with the user/domain and password I am using and, I assume, the inability of VCS to connect to the installed Oracle database I have specified under SID. To check this I have tried using my own administration account, but am having no luck; the account in question does work when running "sqlplus / as sysdba", and I am able to run the "select * from v$database;" query that VCS is attempting to execute on the Oracle database.

My first question: does anyone know what command is being run by VCS to connect to the database? It's not shown in the log files.

My second question: does anyone know what I am doing wrong, and what permissions are required by the account VCS uses? I am in a highly locked-down environment, so I have to work within the security framework.

main.cf:

Oracle XXX_SG-Oracle (
    ServiceName = OracleServiceXXX
    DelayAfterOnline = 30
    DelayAfterOffline = 30
    DetailMonitor = 1
    Domain = "xxx.xxx.xxx.xxx"
    SID = xxx
    UserName = xxxxxxxxxx
    EncryptedPasswd = xxxxxxxxxxx
    SQLFile = "\"C:\\Program Files\\Veritas\\Cluster Server\\bin\\Oracle\\Check.sql\""
    SQLTimeOut = 30
    )

It should be noted that the resource works fine with DetailMonitor = 0.

Help :)

Regards,
Paul
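One observation that may narrow this down: LogonUser is the Win32 API for validating a set of credentials, so "LogonUser failed" means the agent is being rejected at the Windows logon step, before any SQL is attempted. In locked-down environments the configured account commonly lacks the "Log on as a batch job" (or "Log on as a service") user right, and a wrongly generated EncryptedPasswd has the same effect. A hypothetical manual reproduction of the detail monitor, under the same credentials as the resource (placeholders reused from the main.cf above; I can't confirm the exact connect string the agent builds internally):

```
rem Start a shell as the account configured on the resource.
runas /user:xxx.xxx.xxx.xxx\xxxxxxxxxx cmd

rem From that shell, run the configured SQLFile against the SID.
sqlplus xxxxxxxxxx/<password>@xxx @"C:\Program Files\Veritas\Cluster Server\bin\Oracle\Check.sql"
```

Also worth checking: EncryptedPasswd must be generated with `vcsencrypt -agent`, not typed as plain text, or the decrypted value passed to LogonUser will be garbage.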
Guest - Guest clustering on single Hardware

Hi Community,

I found a document mentioning the limitations of guest-to-guest clustering in a virtualized environment (i.e. VMware ESXi) because of the I/O fencing problems in Veritas Cluster Server 5.1. My question is: do we still have this same limitation in v6.1?

You can find the article below:
https://www.google.com.tr/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&uact=8&ved=0CBsQFjAA&url=http%3A%2F%2Fwww.symantec.com%2Fbusiness%2Fsupport%2Fresources%2Fsites%2FBUSINESS%2Fcontent%2Flive%2FTECHNICAL_SOLUTION%2F169000%2FTECH169366%2Fen_US%2FTechNote_VMware_IOFencing.pdf&ei=T0sIVNXKDOXNygPimoHQCw&usg=AFQjCNFmH2mS-AfGYlJ9WPGzoZwDo4yl7Q&bvm=bv.74649129,d.bGQ
VxDMP on top of RDAC/MPP

Hiya,

I have a setup that I need to 'fix'... Currently it appears the servers have RDAC/MPP as the primary multipathing; however, for whatever reason, they also have Veritas Volume Manager sitting on top of this! MPP is dealing with the pathing and presenting VxDMP with one 'virtual' path. Below is a vxdisk list example.

Device:     disk_0
devicetag:  disk_0
type:       auto
clusterid:  x_1
disk:       name=x1-1 id=x2
group:      name=xdg id=x1
info:       format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:      online ready private autoconfig shared autoimport imported
pubpaths:   block=/dev/vx/dmp/disk_0s3 char=/dev/vx/rdmp/disk_0s3
guid:       -
udid:       IBM%5FVirtualDisk%5FDISKS%5F600A0B80006E0140000035F24C090009
site:       -
version:    3.1
iosize:     min=512 (bytes) max=2048 (blocks)
public:     slice=3 offset=65792 len=1140661648 disk_offset=0
private:    slice=3 offset=256 len=65536 disk_offset=0
update:     time=1299520130 seqno=0.29
ssb:        actual_seqno=0.0
headers:    0 240
configs:    count=1 len=51360
logs:       count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   1
sdb         state=enabled

I am going to be removing the MPP layer to allow VxDMP to take over; however, my question is how this will affect my VxVM config. I'm hoping VxVM will pick up the new paths upon reboot, but will this affect in any way the DGs that sit on top of this? Note these are shared LUNs across a cluster. Do I need to worry about the dmppolicy and disk.info files etc.?

I hope the question makes sense! Thanks
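On the disk-group question: VxVM identifies disks by the private-region contents (disk ID and UDID), not by OS device names, so in principle the DG configuration should survive the repathing as long as the same LUNs reappear. A hedged verification sketch for after the MPP layer has been removed via the vendor's own removal procedure (which is deliberately not reproduced here):

```shell
# Rescan devices and rebuild the DMP database.
vxdctl enable
vxdisk scandisks

# The 'Multipathing information' section should now report
# numpaths greater than 1, with DMP owning the real paths.
vxdisk list disk_0

# Per-path state for the DMP node, to confirm all paths are enabled.
vxdmpadm getsubpaths dmpnodename=disk_0
```

Doing this one cluster node at a time (with that node out of the cluster) before touching its peers keeps the shared DGs importable from the untouched nodes if anything goes wrong.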