Deleting an rlink that has the "secondary_config_err" flag
Hello, in my VCS global cluster the ORAGrp resource group is partially online because the rvgres resource is offline. I suspect the problem lies with the rlink below. I am trying to dissociate this rlink (rlk_sys1-DB-rep_DB_r) and detach it so that I can delete it, but I am not succeeding. Here is some output from the system:

root@sys2# vxprint -P
Disk group: DBhrDG

TY NAME                  ASSOC   KSTATE  LENGTH  PLOFFS  STATE   TUTIL0  PUTIL0
rl rlk_sys1-DB-rep_DB_r  DB_rvg  CONNECT -       -       ACTIVE  -       -
rl rlk_sys1-rep_DB-rvg   DB-rvg  ENABLED -       -       PAUSE   -       -

root@sys2# vxrlink -g DBhrDG dis rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-3520 Rlink rlk_sys1-rep_DB-rvg can not be dissociated if it is attached

root@sys2# vxrlink -g DBhrDG det rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-10128 Operation not allowed with attached rlinks

root@sys2# vxedit -g DBhrDG rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3540 Rlink rlk_sys1-rep_DB-rvg is not disabled, use -f flag

root@sys2# vxedit -g DBhrDG -f rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3541 Rlink rlk_sys1-rep_DB-rvg is not dissociated

root@sys2# vxprint -Vl
Disk group: DBhrDG

Rvg: DB-rvg
info:    rid=0.1317 version=0 rvg_version=41 last_tag=11
state:   state=CLEAN kernel=DISABLED
assoc:   datavols=(none) srl=(none) rlinks=rlk_sys1-rep_DB-rvg exports=(none) vsets=(none)
att:     rlinks=rlk_sys1-rep_DB-rvg
flags:   closed secondary disabled detached passthru logging
device:  minor=26012 bdev=343/26012 cdev=343/26012 path=/dev/vx/dsk/DBhrDG/DB-rvg
perms:   user=root group=root mode=0600

Rvg: DB_rvg
info:    rid=0.1386 version=13 rvg_version=41 last_tag=12
state:   state=ACTIVE kernel=ENABLED
assoc:   datavols=sys1_DB_Process,sys1_DB_Script,... srl=sys1_DB_SRL rlinks=rlk_sys1-DB-rep_DB_r exports=(none) vsets=(none)
att:     rlinks=rlk_sys1-DB-rep_DB_r
flags:   closed secondary enabled attached logging
device:  minor=26014 bdev=343/26014 cdev=343/26014 path=/dev/vx/dsk/DBhrDG/DB_rvg
perms:   user=root group=root mode=0600

Please advise.
Regards
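For reference, the sketch below is the order of operations I understand rlink removal normally follows (detach, then dissociate, then remove), using the names from the output above. The recover step is my own assumption, since DB-rvg shows kernel=DISABLED and may need to be brought back before a detach is accepted; please verify against the VVR administrator's guide for your release.

# Sketch only, using the disk group and rlink names from the output above.
# The recover step is an assumption on my part (the RVG is DISABLED/CLEAN).
vxrvg -g DBhrDG recover DB-rvg               # bring the RVG out of its disabled state (assumed prerequisite)
vxrlink -g DBhrDG det rlk_sys1-rep_DB-rvg    # detach the rlink from the RVG
vxrlink -g DBhrDG dis rlk_sys1-rep_DB-rvg    # dissociate it once detached
vxedit -g DBhrDG rm rlk_sys1-rep_DB-rvg      # finally remove the rlink record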
Linux host shows more paths than actually exist

My Linux host (RHEL 6.4) has EMC Symmetrix DMX800 LUNs provisioned. Two HBA ports see the LUNs on two array ports, which should make four paths per LUN. Instead, the OS sees 36 paths. The same LUNs are also assigned to another host, and there I correctly see 4 paths. I have checked the zoning and masking several times and have reimaged the host, but this has not resolved the issue. Is there any setting on the Symmetrix side that could cause this?
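In case it helps to narrow down where the extra paths appear, these are generic checks showing what the OS itself reports versus what DMP has built per LUN. This is only a diagnostic sketch; the DMP node name below is a placeholder, not from my system.

# Count the SCSI devices the OS itself sees, before any multipathing:
lsscsi | wc -l
grep -c Vendor /proc/scsi/scsi

# Show what DMP has built per LUN; "emc_dmx0_1" is a placeholder,
# substitute a real DMP node name taken from "vxdisk list":
vxdisk list
vxdisk list emc_dmx0_1                        # the path section at the end shows numpaths
vxdmpadm getsubpaths dmpnodename=emc_dmx0_1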
Patch missing for some components

My query: I was updating SFHA 5.0 MP3RP3 with the 5.0 MP4RP1 rolling patch. The patch I downloaded did not contain any installmp or installrp script; there were only per-product directories containing RPMs. I had to run rpm -Uvh *.rpm to install them, which left me unsure whether I should install all the RPMs in all the directories or only those for the products I am using. My setup is Global Cluster Server and replication: a 2-node primary and a 1-node DR.

What I did was install only the RPMs from the Storage Foundation and Veritas Cluster Server directories, although when I tried to install the Cluster Server RPMs after the Storage Foundation ones, I was told the RPMs were already installed. Please clear up the confusion above.

Further, I have a similar replicated environment with another client who is already on SFHA 5.0 MP4RP1. I compared the output of rpm -qa | grep VRTS to see if I had missed any RPMs and found four that were at an older version in my new environment. I searched the entire patch directories manually with the find command but could not locate them. The RPMs are:

VRTSmapro-common-5.0.3.0-RHEL4
VRTSvcsmg-5.0.40.00-MP4_GENERIC
VRTSvcsmn-5.0.40.00-MP4_GENERIC
VRTSvcsvr-5.0.40.00-MP4_GENERIC

I have similar packages in my environment but at different versions. Where can I find these RPMs, and why were they not included in the 5.0 MP4RP1 rolling patch I downloaded from SORT?

Thanks
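For reference, this is roughly how I compared the two environments and searched the patch tree; a sketch only, with placeholder file and directory names.

# Compare the installed VRTS package sets (run on each host, then diff the files;
# the file names and the patch directory path are placeholders):
rpm -qa | grep '^VRTS' | sort > /tmp/vrts_new_env.txt
rpm -qa | grep '^VRTS' | sort > /tmp/vrts_ref_env.txt    # on the reference host
diff /tmp/vrts_new_env.txt /tmp/vrts_ref_env.txt

# Search the extracted patch tree for the packages that appear to be missing:
find /path/to/extracted_patch -name 'VRTSvcsmg*.rpm' -o -name 'VRTSvcsvr*.rpm'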
What is the difference between the sfha, vm and fs rolling patches for SFHA?

I have an environment running SFHA 5.0MP3 on Red Hat 5.3 64-bit and I am planning to upgrade to 5.0MP3RP3. While searching for the RP3 patch I found three rolling patches:

sfha-rhel5_x86_64-5.0MP3RP3
vm-rhel5_x86_64-5.0MP3RP3
fs-rhel5_x86_64-5.0MP3RP3

Please explain the difference between them, and whether I have to apply all three.

P.S. I know newer versions are already available, but I have a specific requirement for this version and therefore need to do this urgently. Please guide me as soon as possible, experts out there.

Thanks
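For what it is worth, my understanding (worth confirming against the README inside each bundle) is that the vm bundle patches only the Volume Manager packages, the fs bundle only the File System packages, and the sfha bundle is the combined patch for the full Storage Foundation HA stack, which is normally what a cluster running SF and VCS together would apply. A quick way to see which component packages are installed, as a plain sketch:

# Sketch: list the installed core VRTS component packages to see which
# stacks (Volume Manager, File System, Cluster Server) the patch must cover.
rpm -qa | egrep '^VRTS(vxvm|vxfs|vcs)' | sort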
Correct Failover Mode for CLARiiON CX, RHEL 6.2 and Volume Manager 6.0

Hello, as mentioned in the title we are using Veritas Cluster 6.0, and therefore Veritas Volume Manager 6.0, on our Red Hat 6.2 systems. A CLARiiON CX4-240 storage system provides the LUNs for the servers. Everything works fine apart from several I/O errors in the messages log:

Aug 29 16:17:44 nwj8cnp001 kernel: end_request: I/O error, dev sdp, sector 0
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Device not ready
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Sense Key : Not Ready [current]
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Add. Sense: Logical unit not ready, manual intervention required
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] CDB: Read(10): 28 00 3e 7f ff f8 00 00 08 00

All the devices listed in the messages are the secondary paths of the LUNs. We are using Failover Mode 1; does anyone know if this is what is causing these error messages? I found the following information in an EMC document: "Failovermode may be set to 4 for ALUA behavior, or Failovermode may be set to 1 for active/passive behavior." Do we have to change to Failovermode 4?

Regards, Martin
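For completeness, this is what I can run to show how DMP has claimed the array and the state of each path; a diagnostic sketch only, and the DMP node name below is a placeholder.

# How DMP sees the CLARiiON enclosure (array type and attributes):
vxdmpadm listenclosure all

# Path states for one LUN; "emc_clariion0_12" is a placeholder,
# use a real DMP node name from "vxdisk list":
vxdmpadm getsubpaths dmpnodename=emc_clariion0_12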
RHEL 5.6 and VRTS 5.1

Hi, last weekend I ran yum updates on 7 RHEL (Enterprise Server) systems, from kernel 2.6.18-238.12.1.el5 to 2.6.18-308.8.2.el5. All of these servers run Veritas Storage Foundation 5.1 with Veritas Cluster Server 5.1 on top. The problem we ran into was at the VRTS level: for example, a simple vxdisk list command takes 5 to 10 minutes to produce its output. From the Symantec SORT site we found the patches for this version and installed them: VRTSvxvm-5.1.132.300-SP1RP2P3_RHEL5, VRTSvxfen-5.1.132.200-SP1RP2P2_RHEL5 and VRTSvxfs-5.1.101.000-RP1_RHEL5. But with no results. At the end of the night we booted the RHEL servers on the previous kernel, 2.6.18-238.12.1.el5, and the Veritas disk commands ran as fast as before. Is the new kernel in the compatibility matrix, or does someone have a brilliant idea? (I need a solution more than I would merely like to have one. (°_°))

Regards
Ivo
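These are the generic checks I would run to see whether the loaded VxVM kernel modules actually match the running kernel; a sketch only, nothing below is specific to our systems.

# Sketch: confirm the running kernel and the VxVM kernel modules loaded for it.
uname -r
lsmod | egrep 'vxio|vxdmp|vxspec'
modinfo vxio | head          # module version and vermagic should match the running kernel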
cfsumount hangs on RHEL 6.1 with Veritas 5.1SP1PR2

Hello, I'm using two RHEL 6.1 servers with Veritas 5.1SP1PR2 Cluster File System. The cluster starts OK and I can switch services to the other node and so on, but when I execute hastop -local or hastop -all the cluster hangs. It even makes one of the nodes hang completely, so that no commands can be executed as root. What I can see is that it hangs when it runs cfsumount for the shared mount points. Some of them (it changes from time to time, so not always the same mount points) hang and appear again in the mount list, not as real mounts but as copies of the root filesystem /. Example below:

-bash-4.1# df -k
Filesystem                        1K-blocks   Used     Available  Use%  Mounted on
/dev/mapper/VolGroup00-LogVol00   8256952     5710140  2127384    73%   /
tmpfs                             18503412    424      18502988   1%    /dev/shm
/dev/sda1                         253871      37102    203662     16%   /boot
/dev/mapper/VolGroup00-LogVol04   12385456    863508   10892804   8%    /opt
/dev/mapper/VolGroup00-LogVol02   8256952     428260   7409264    6%    /tmp
/dev/mapper/VolGroup00-LogVol03   12385456    5429820  6326492    47%   /var
tmpfs                             4           0        4          0%    /dev/vx
/dev/vx/dsk/mountdg/mymount       8256952     5710140  2127384    73%   /mymount

Here you can see that the usage figures are identical for my CFS mount (/mymount) and the root filesystem /. I can recover from this with the command umount /mymount. It throws the error "umount: /mymount: not mounted", but after that the cluster continues to go down. This is just a workaround and I do not want to leave it like this. Any ideas how to fix this? Is there a patch for it, or should I change something in RHEL or in Veritas?

br, JP
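In case someone wants the state I can collect while the hang is happening, these are the generic commands I would run; the log path is the standard VCS location.

# Sketch: gather state while a hastop is hanging.
hastatus -sum                              # which groups/resources are stuck offlining
ps -ef | grep umount | grep -v grep        # are any umount processes wedged?
mount | grep vxfs                          # what the kernel still thinks is mounted
tail -50 /var/VRTSvcs/log/engine_A.log     # VCS engine log around the hang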
Announcing Storage Foundation High Availability Customer Forum 2012

We are excited to announce the Storage Foundation High Availability Customer Forum 2012, a free learning event by the engineers, for the engineers. Registration is open; register now to get on the priority list. The forum will take place at our Mountain View, CA offices on March 14th and 15th, 2012. Join your peers and our engineers for two days of learning and knowledge sharing. The event features highly technical sessions to help you get more out of your days.

- Learn and share best practices from your peers in the industry and build a long-lasting support network in the process
- Become a power user by significantly increasing your troubleshooting and diagnostic skills as well as your product knowledge
- Engage with the engineers who architected and wrote the code for the products

Please see here for the event agenda and session details. More questions? See our events page.
SF CFS for RAC without the GCO option

Hi everyone, I have a cluster setup with a two-node primary SF CFS for RAC cluster (on RHEL) and a single node as DR for it. I am using VVR to replicate the volumes across to DR. I do not have the GCO option license. My question is: what VCS resources are recommended in the configuration if I am not using GCO? I have attached my main.cf file, which shows my current config. Any suggestions would be appreciated. Thank you.

WBR
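Not having seen the attached main.cf, the sketch below only illustrates the kind of VVR-related resources that are typically added on the primary when GCO is not in play: an RVGShared resource in a parallel group for the shared RVG, and an RVGLogowner resource in a failover group for the replication log owner. Every group, resource and attribute value below is a placeholder, the attribute names should be checked against the VVR bundled-agents guide for the installed release, and without GCO the actual failover to DR would be a manual step (for example via vradmin) rather than a VCS-driven one.

# Sketch only; group, resource and attribute values are placeholders.
haconf -makerw

# Parallel group holding the shared RVG (would depend on the CVM group):
hagrp -add rvg_grp
hagrp -modify rvg_grp SystemList node1 0 node2 1
hagrp -modify rvg_grp Parallel 1
hares -add db_rvg_res RVGShared rvg_grp
hares -modify db_rvg_res RVG db_rvg
hares -modify db_rvg_res DiskGroup db_dg

# Failover group for the VVR log owner:
hagrp -add rvg_logowner_grp
hagrp -modify rvg_logowner_grp SystemList node1 0 node2 1
hares -add db_logowner_res RVGLogowner rvg_logowner_grp
hares -modify db_logowner_res RVG db_rvg
hares -modify db_logowner_res DiskGroup db_dg

haconf -dump -makero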
Veritas Storage Foundation Cluster File System

I have two Linux nodes on which I need to install Veritas Storage Foundation Cluster File System, and after that create a shared file system: /data1 should sit on shared storage and be available on both cluster nodes simultaneously as a shared file system. Is this possible? If so, please send me a configuration document or any useful link so I can configure this on my servers.
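Yes, simultaneous mounting on both nodes is exactly what CFS provides. As a rough sketch only (assuming the SFCFS installer has already configured CVM/CFS across both nodes, and using made-up names "datadg" and "datavol" and a made-up size), creating a cluster-wide /data1 usually looks something like the following; the Installation and Administrator's Guides on SORT cover the full procedure.

# Sketch only; the disk, disk group, volume names and the size are placeholders.
# Run on the CVM master node:
vxdg -s init datadg disk01               # create a shared (CVM) disk group
vxassist -g datadg make datavol 10g      # create a volume in it
mkfs -t vxfs /dev/vx/rdsk/datadg/datavol

# Register the mount with the cluster and mount it on all nodes:
cfsmntadm add datadg datavol /data1 all=rw
cfsmount /data1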