Missing disks and reboot won't solve it
I am very new to Veritas. We have an AIX 7.1 server using Veritas DMP. When I look at the VIO, all of the virtual Fibre Channel adapters are logged in, but the LPAR is failing to see any disks on fscsi0 and fscsi1. I have been going back and forth with IBM and Symantec and cannot get this resolved, so I decided to pick your brains here.

# lsdev | grep fscsi
fscsi0 Available 01-T1-01 FC SCSI I/O Controller Protocol Device
fscsi1 Available 02-T1-01 FC SCSI I/O Controller Protocol Device
fscsi2 Available 03-T1-01 FC SCSI I/O Controller Protocol Device
fscsi3 Available 04-T1-01 FC SCSI I/O Controller Protocol Device
fscsi4 Available 05-T1-01 FC SCSI I/O Controller Protocol Device
fscsi5 Available 06-T1-01 FC SCSI I/O Controller Protocol Device
fscsi6 Available 07-T1-01 FC SCSI I/O Controller Protocol Device
fscsi7 Available 08-T1-01 FC SCSI I/O Controller Protocol Device

# vxdmpadm listctlr
CTLR_NAME  ENCLR_TYPE   STATE    ENCLR_NAME    PATH_COUNT
=========================================================================
fscsi2     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi3     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi4     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi5     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi6     Hitachi_VSP  ENABLED  hitachi_vsp0  44
fscsi7     Hitachi_VSP  ENABLED  hitachi_vsp0  44

Above you can see that fscsi0 and fscsi1, which the OS sees, are not being seen by Veritas. How can I force them into Veritas? I have already tried rebooting the VIO and the LPAR, and that does not help. FWIW, I deleted the disks that were in Defined state. Usually when MPIO is in use and we lose a path, deleting the disks and the virtual Fibre Channel adapter and running cfgmgr solves the issue, but that doesn't seem to help here.
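A possible next step (a sketch only, not verified against your environment): after cfgmgr has rebuilt the child devices of the two problem adapters at the OS level, ask VxVM/DMP to rescan instead of relying on a reboot. These are all standard AIX and VxVM commands; the adapter names are taken from your lsdev output.

# rebuild the OS view of the two problem adapters
cfgmgr -l fscsi0
cfgmgr -l fscsi1
lsdev -p fscsi0          # confirm hdisks show up Available under each adapter

# force VxVM/DMP to rediscover the new OS device nodes
vxdisk scandisks new     # scan only devices not yet known to VxVM
vxdctl enable            # rebuild the DMP database and re-enable controllers
vxdmpadm listctlr all    # fscsi0/fscsi1 should now appear if paths are healthy

If the two controllers show up DISABLED rather than missing, "vxdmpadm enable ctlr=fscsi0" may bring them back; if they still do not appear at all, DMP has no OS devices to claim on those adapters, which would point back at the SAN/NPIV zoning for those two virtual adapters.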
Missing VCS "VVRTypes.cf"

Need help/advice on how to recover a missing VVRTypes.cf.

System info:
OS: RHEL 6.1
VCS version: 5.1 RP3

Objective: enable VVR in a VCS-GCO environment.

While creating the RVG for the VVR service group, I ran the following checks and found that:
(i) VVRTypes.cf is MISSING from the /etc/VRTSvcs/conf/config directory
(ii) /etc/VRTSvcs/conf/sample_vvr/addVVRTypes.sh is NOT there
(iii) re-installing the VCS 5.1 RP3 patch DOESN'T work; it prompts "SFHA ver 5.1.133.000 is already installed. No rpm will be installed."

Can anyone provide advice or help? SOS. Thanks in advance.
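A hedged starting point (the package name VRTSvcsvr is an assumption; it is the rpm that has historically shipped the VCS agents for VVR and their types file, so verify it against your install media): check whether any installed package delivered the types file, copy it into the config directory, and reference it from main.cf.

# locate the template, if any installed VRTS package delivered it
rpm -qa | grep -i VRTS | xargs rpm -ql | grep VVRTypes.cf

# if found outside the config dir (commonly /etc/VRTSvcs/conf), copy it in
cp /etc/VRTSvcs/conf/VVRTypes.cf /etc/VRTSvcs/conf/config/

# add: include "VVRTypes.cf" next to the other includes in main.cf, then verify
hacf -verify /etc/VRTSvcs/conf/config

If nothing on the box has the file, it could also be copied from another node at the same version, or extracted from the rpm on the install media with rpm2cpio piped into cpio.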
VCS AutoStartList ungraceful failover, part 2

I have two systems, cms-app-49-51 and cms-app-49-52, between which I have been testing an ungraceful shutdown (hard power off). After failing over from cms-app-49-52 to cms-app-49-51, I brought cms-app-49-52 back up and cleared all errors (or so I thought). I checked the system and everything looked to be fine, all faults cleared.

[root@cms-app-49-52 ~]# hastatus -sum

-- SYSTEM STATE
-- System            State    Frozen
A  cms-app-49-51     FAULTED  0
A  cms-app-49-52     RUNNING  0

-- GROUP STATE
-- Group            System         Probed  AutoDisabled  State
B  CMSApp_Cluster   cms-app-49-52  Y       N             OFFLINE
B  CMSApp_Notifier  cms-app-49-52  Y       N             ONLINE

[root@cms-app-49-52 ~]# hagrp -display -attribute AutoDisabled
#Group           Attribute     System         Value
CMSApp_Cluster   AutoDisabled  cms-app-49-51  0
CMSApp_Cluster   AutoDisabled  cms-app-49-52  0
CMSApp_Notifier  AutoDisabled  cms-app-49-51  0
CMSApp_Notifier  AutoDisabled  cms-app-49-52  0

However, CMSApp_Cluster on cms-app-49-52 never started automatically until I ran "hagrp -online CMSApp_Cluster -sys cms-app-49-52". Attached are the log files from both systems; please look around 2012/12/06 14:58:38.
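One possibility worth checking (a guess from the output above, not a confirmed diagnosis): the group may still have been recorded as FAULTED on the powered-off node, and a group that VCS considers faulted somewhere is not a candidate for autostart until the fault is cleared. The commands below are standard ha commands; only the interpretation is speculative.

# is a fault still recorded for the group on either system?
hagrp -display CMSApp_Cluster -attribute State

# clear the lingering fault on the failed node, then let VCS retry autostart
hagrp -clear CMSApp_Cluster -sys cms-app-49-51
hagrp -autoenable CMSApp_Cluster -sys cms-app-49-52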
Failover to secondary system upon ungraceful shutdown

I have configured a cluster with monitored services, and it works as intended with one exception: if I hard/ungracefully power down a node, the cluster does not fail over to the working system and start the services automatically. Below is the script I use to create the cluster; I am hoping I just missed a setting. hastatus displays the following as its last message:

App_Cluster SYSTEM_1 *FAULTED* OFFLINE

If I bring the downed system back up, then the cluster decides to fail over to the other box.

This is the script I use to configure the cluster:

haconf -makerw
haclus -modify PrintMsg 0
haclus -modify UserNames admin xxxxxxxxxxxxx
haclus -modify ClusterAddress "127.0.0.1"
haclus -modify Administrators admin
haclus -modify SourceFile "./main.cf"
haclus -modify ClusterName CMSApp
hasys -add SYSTEM_1
hasys -modify SYSTEM_1 SourceFile "./main.cf"
hasys -add SYSTEM_2
hasys -modify SYSTEM_2 SourceFile "./main.cf"
hagrp -add APP_CLUSTER
hagrp -modify APP_CLUSTER SystemList SYSTEM_1 0 SYSTEM_2 1
hagrp -modify APP_CLUSTER VCSi3Info "" "" -sys SYSTEM_1
hagrp -modify APP_CLUSTER AutoStartList SYSTEM_1 SYSTEM_2
hares -add Virtual_IP IP APP_CLUSTER
hares -local Virtual_IP Device
hares -modify Virtual_IP Device eth0 -sys SYSTEM_1
hares -modify Virtual_IP Device eth0 -sys SYSTEM_2
hares -modify Virtual_IP Address "192.168.0.3"
hares -modify Virtual_IP NetMask "255.255.255.0"
hares -modify Virtual_IP PrefixLen 1000
hares -modify Virtual_IP Enabled 1
hares -add Network_Card NIC APP_CLUSTER
hares -local Network_Card Device
hares -modify Network_Card Device eth0 -sys SYSTEM_1
hares -modify Network_Card Device eth0 -sys SYSTEM_2
hares -modify Network_Card PingOptimize 1
hares -modify Network_Card Mii 1
hares -modify Network_Card Enabled 1

thx
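Two gaps in the script stand out (suggestions only, not a guaranteed root cause): the two resources are never linked, so VCS has no dependency between the virtual IP and the NIC it rides on, and the in-memory configuration is never dumped back to disk. A minimal addition at the end of the script would be:

# make the virtual IP depend on the NIC resource
hares -link Virtual_IP Network_Card

# write the in-memory configuration to main.cf and close it read-only
haconf -dump -makero

Beyond the script, failover on a node fault depends on GAB/LLT membership actually declaring the powered-off node dead; "gabconfig -a" on the survivor should show the membership drop, and if it does not, the heartbeat links are the place to look.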