Linux host shows more paths than actual
My Linux host (RHEL 6.4) has EMC Symmetrix DMX800 LUNs provisioned. Two HBA ports see the LUNs on two array ports, which should make 4 paths per LUN. Instead, the OS sees 36 paths. The same LUNs are also assigned to another host, and there I correctly see 4 paths. I have checked the zoning and masking several times and reimaged the host, but this has not resolved the issue. Is there any setting on the Symmetrix that could cause this?

What is the difference between the sfha, vm & fs titled rolling patches for SFHA
I have an environment on SFHA 5.0MP3 and I am planning to upgrade to 5.0MP3RP3 on RedHat 5.3 64-bit. While searching for the RP3 patch, I found three rolling patches:

sfha-rhel5_x86_64-5.0MP3RP3
vm-rhel5_x86_64-5.0MP3RP3
fs-rhel5_x86_64-5.0MP3RP3

Please explain the difference between them. Further, will I have to apply all three of them?

P.S. I know newer versions are already available, but I have a specific requirement for this version and need to perform the upgrade urgently. Please advise as soon as possible, experts out there. Thanks

deleting an rlink that has the "secondary_config_err" flag
Hello, in my VCS global cluster my ORAGrp resource group is partially online because my rvgres resource is offline. I suspect the issue is in the rlink below. I am trying to dissociate this rlink (below: rlk_sys1-DB-rep_DB_r) and detach it in order to delete it, but I am not able to succeed. Below is some output from the system.

root@sys2# vxprint -P
Disk group: DBhrDG

TY NAME                 ASSOC   KSTATE  LENGTH PLOFFS STATE  TUTIL0 PUTIL0
rl rlk_sys1-DB-rep_DB_r DB_rvg  CONNECT -      -      ACTIVE -      -
rl rlk_sys1-rep_DB-rvg  DB-rvg  ENABLED -      -      PAUSE  -      -

root@sys2# vxrlink -g DBhrDG dis rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-3520 Rlink rlk_sys1-rep_DB-rvg can not be dissociated if it is attached

root@sys2# vxrlink -g DBhrDG det rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-10128 Operation not allowed with attached rlinks

root@sys2# vxedit -g DBhrDG rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3540 Rlink rlk_sys1-rep_DB-rvg is not disabled, use -f flag

root@sys2# vxedit -g DBhrDG -f rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3541 Rlink rlk_sys1-rep_DB-rvg is not dissociated

root@sys2# vxprint -Vl
Disk group: DBhrDG

Rvg: DB-rvg
info: rid=0.1317 version=0 rvg_version=41 last_tag=11
state: state=CLEAN kernel=DISABLED
assoc: datavols=(none) srl=(none) rlinks=rlk_sys1-rep_DB-rvg exports=(none) vsets=(none)
att: rlinks=rlk_sys1-rep_DB-rvg
flags: closed secondary disabled detached passthru logging
device: minor=26012 bdev=343/26012 cdev=343/26012 path=/dev/vx/dsk/DBhrDG/DB-rvg
perms: user=root group=root mode=0600

Rvg: DB_rvg
info: rid=0.1386 version=13 rvg_version=41 last_tag=12
state: state=ACTIVE kernel=ENABLED
assoc: datavols=sys1_DB_Process,sys1_DB_Script,...
srl=sys1_DB_SRL rlinks=rlk_sys1-DB-rep_DB_r exports=(none) vsets=(none)
att: rlinks=rlk_sys1-DB-rep_DB_r
flags: closed secondary enabled attached logging
device: minor=26014 bdev=343/26014 cdev=343/26014 path=/dev/vx/dsk/DBhrDG/DB_rvg
perms: user=root group=root mode=0600

Please advise. Regards

Veritas Storage Foundation Cluster File System
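For the rlink-deletion question above: the error chain indicates the rlink is still attached to its RVG, and VVR refuses to dissociate or remove an attached rlink. Below is a hedged sketch of the commonly documented teardown order, using the object names from the vxprint output; verify each step against the VVR administrator's guide for your release, as the exact preconditions vary between versions.

```shell
# Hedged sketch of the commonly documented removal order for a
# secondary rlink, using the names from the vxprint output above.
# Verify each step against the VVR administrator's guide for your
# release before running; preconditions differ between versions.
vxrlink -g DBhrDG pause rlk_sys1-rep_DB-rvg   # no-op if already paused
vxrlink -g DBhrDG det rlk_sys1-rep_DB-rvg     # detach from the RVG
# If det still reports "attached rlinks", the RVG itself may need to be
# stopped first (vxrvg -g DBhrDG stop DB-rvg), then retry the detach.
vxrlink -g DBhrDG dis rlk_sys1-rep_DB-rvg     # dissociate from the RVG
vxedit  -g DBhrDG rm rlk_sys1-rep_DB-rvg      # finally remove the record
```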
I have 2 Linux nodes on which I need to install Veritas Storage Foundation Cluster File System, and after that create a shared file system: /data1 should be on shared storage and be available on the two cluster nodes simultaneously as a shared file system. Is this possible? If so, please send me the configuration doc or any useful link so I can configure the same on my servers.

Patch missing for some components
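Regarding the shared /data1 question above: yes, this is exactly what SFCFS is for; once the cluster stack (LLT/GAB/VCS with CVM) is running, a VxFS file system on a shared disk group can be mounted on both nodes at once. The sketch below outlines the usual steps; the disk, disk group and volume names are illustrative, and the cfsmntadm syntax should be checked against the SFCFS administrator's guide for your version.

```shell
# Hedged sketch of the usual SFCFS flow once VCS/CVM is up on both
# nodes; disk, disk group and volume names here are illustrative.
vxdg -s init datadg disk01            # -s creates a *shared* disk group (run on the CVM master)
vxassist -g datadg make datavol 10g   # carve a volume for the file system
mkfs -t vxfs /dev/vx/rdsk/datadg/datavol
cfsmntadm add datadg datavol /data1 all=   # register the cluster mount for all nodes
cfsmount /data1                            # mount it cluster-wide
```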
My query: I was updating SFHA 5.0 MP3RP3 with the rolling patch for MP4RP1. The patch I downloaded did not contain any installmp or installrp script; there were only product directories in which RPMs were available. I had to run rpm -Uvh *.rpm to install them, which left me confused: should I install all the RPMs in all the directories, or only those for the products I am using? My setup is Global Cluster Server and Replication, with a 2-node primary and a 1-node DR.

What I did was install only the RPMs in the Storage Foundation and Veritas Cluster Server directories, though when I tried to install RPMs from the Cluster Server directory after installing those from the Storage Foundation directory, I received a message that the RPMs were already installed. Please clear up the above confusion.

Further, I maintain a similar replica environment with another client who is already on SFHA 5.0 MP4RP1. I compared the output of rpm -qa | grep VRTS to see if I had missed any RPMs and found 4 RPMs that were of older versions in my new environment. I searched the entire patch directory tree with the find command but was unable to locate them. The RPMs are:

VRTSmapro-common-5.0.3.0-RHEL4
VRTSvcsmg-5.0.40.00-MP4_GENERIC
VRTSvcsmn-5.0.40.00-MP4_GENERIC
VRTSvcsvr-5.0.40.00-MP4_GENERIC

I have similar packages in my environment, but of different versions. Kindly tell me where I can find these RPMs, and why they were not in the 5.0 MP4RP1 rolling patch I downloaded from SORT. Thanks

Correct Failover Mode for Clariion CX, RHEL6.2 & Volume Manager 6.0
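On the missing-RPMs question above: a quick way to see exactly which VRTS packages the already patched environment has that the new one lacks is to diff the two sorted package lists. The snippet below is a hedged sketch with illustrative stand-in lists; in practice each list would come from `rpm -qa | grep VRTS | sort` run on the respective host.

```shell
# Hedged sketch: compare installed VRTS packages on two hosts to spot
# what the patched environment has that the new one is missing.
# The two lists are illustrative stand-ins for `rpm -qa | grep VRTS | sort`.
printf '%s\n' VRTSvxvm-5.0.40.00 VRTSvcsmg-5.0.40.00-MP4_GENERIC \
              VRTSvcsvr-5.0.40.00-MP4_GENERIC | sort > reference.txt
printf '%s\n' VRTSvxvm-5.0.40.00 | sort > mine.txt
# comm -23 prints lines only in the first file, i.e. packages present
# on the reference host but missing here
comm -23 reference.txt mine.txt
```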
Hello, as mentioned in the title, we are using Veritas Cluster 6.0 and therefore Veritas Volume Manager 6.0 on our Red Hat 6.2 systems. A CLARiiON CX4-240 storage system provides some LUNs for the servers. Everything is working fine apart from several I/O errors in the messages log:

Aug 29 16:17:44 nwj8cnp001 kernel: end_request: I/O error, dev sdp, sector 0
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Device not ready
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Sense Key : Not Ready [current]
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Add. Sense: Logical unit not ready, manual intervention required
Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] CDB: Read(10): 28 00 3e 7f ff f8 00 00 08 00

All devices listed in the messages are the secondary paths of the LUNs. We are using failover mode 1; does anyone know if this is causing those error messages? I found the following information in an EMC document: "Failovermode may be set to 4 for ALUA behavior, or Failovermode may be set to 1 for active/passive behavior." Do we have to change to failover mode 4? Regards, Martin

SF CFS for RAC without GCO option
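On the failover-mode question above: with failover mode 1 (active/passive, non-ALUA) the non-owning storage processor rejects I/O, so "Logical unit not ready, manual intervention required" on the secondary paths is typically probe noise rather than a real fault; ALUA (mode 4) is generally what newer DMP ASLs expect, but confirm against the EMC/Veritas support matrix before changing it. To cross-check which devices are affected, the hedged sketch below pulls the device names out of a messages-style log (the variable is a stand-in for the real file) so they can be compared with the passive paths shown by `vxdmpadm getsubpaths`.

```shell
# Hedged sketch: list the devices logging "Logical unit not ready" so
# they can be compared with the passive paths from `vxdmpadm getsubpaths`.
# The variable below is a stand-in for /var/log/messages.
log='Aug 29 16:17:44 nwj8cnp001 kernel: sd 8:0:5:0: [sdp] Add. Sense: Logical unit not ready, manual intervention required
Aug 29 16:17:45 nwj8cnp001 kernel: sd 8:0:6:0: [sdq] Add. Sense: Logical unit not ready, manual intervention required'
printf '%s\n' "$log" |
awk '/Logical unit not ready/ {for (i = 1; i <= NF; i++) if ($i ~ /^\[sd/) {gsub(/[][]/, "", $i); print $i}}' |
sort -u
```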
Hi everyone, I have a cluster setup with a 2-node primary SF CFS for RAC cluster (on RHEL) and a single node as DR for the same. I am using VVR to replicate the volumes across to DR. I do not have the GCO option license. My question is: what are the recommended VCS resources to add to the configuration if I am not using GCO? I have attached my main.cf file showing my current config. Any suggestions would be appreciated. Thank you. WBR

rhel 5.6 and vrts 5.1
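On the VVR-without-GCO question above: without GCO, the usual approach is still to use the bundled VVR agents; an RVG resource manages the RVG inside a replication service group, and an RVGPrimary resource in the application group handles migrate/takeover when the group comes online at the DR node, with site failover itself staying manual (e.g. `vradmin migrate`). The fragment below is a hedged illustration only: the group, resource and object names are invented, and the attribute details should be checked against the VCS Agents for VVR guide for your 5.x release.

```
// Hedged illustration, not a drop-in config (names are invented):
group VVRRepGrp (
    SystemList = { node1 = 0, node2 = 1 }
    )
    RVG rvgres (
        RVG = db_rvg
        DiskGroup = dbdg
        )

group AppGrp (
    SystemList = { node1 = 0, node2 = 1 }
    )
    RVGPrimary rvgprimres (
        RvgResourceName = rvgres
        )
    requires group VVRRepGrp online local hard
```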
Hi, last weekend I did yum updates on 7 RHEL (Enterprise Server) hosts, from kernel 2.6.18-238.12.1.el5 to 2.6.18-308.8.2.el5. All of these servers have Veritas Storage Foundation 5.1 and Veritas Cluster 5.1 on top. The problem we ran into was at the VRTS level: for example, a simple vxdisk list command took 5 to 10 minutes to produce its output. From the Symantec SORT site we found the patches for version 5.1 and installed them: VRTSvxvm-5.1.132.300-SP1RP2P3_RHEL5, VRTSvxfen-5.1.132.200-SP1RP2P2_RHEL5 and VRTSvxfs-5.1.101.000-RP1_RHEL5. But with no results. At the end of the night we booted the RHEL servers back to the previous kernel, 2.6.18-238.12.1.el5, and the Veritas disk commands ran as before. Is the new kernel in the compatibility matrix, or does someone have a brilliant idea? (I need a solution, or rather I would like to have a solution (°_°)) Regards, Ivo

I need VxFS 5.0 & VxVM 5.0 for RHEL5 i686
Hi, I am getting frustrated with the 'Storage Foundation and Veritas Cluster Server High Availability Solutions' versions and their OS and platform compatibility, and I need your kind help. I have the following infrastructure:

OS: RHEL 5
# uname -a
Linux vcsnode1.cts.com 2.6.18-53.el5 #1 SMP Wed Oct 10 16:34:02 EDT 2007 i686 athlon i386 GNU/Linux
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.1 (Tikanga)

I want to download the appropriate/compatible version of 'Storage Foundation and Veritas Cluster Server High Availability Solutions' so that I can install VCS, VxFS & VxVM. A lot of confusing download options are available on the Symantec site; please help me find the exact version by sending the exact link. Note, I have the following folders in the installation folder:

rhel4_i686
rhel4_x86_64
rhel5_i686
rhel5_x86_64

I successfully installed VCS 5.0 from the rhel5_i686 folder, but I am not getting VxFS and VxVM. I can see that VxFS and VxVM are in the rhel4_x86_64 & rhel5_x86_64 folders, but I am not able to install them, as the pre-installation check fails due to an incompatible kernel version. Please help. Thanks and regards, Prasenjit Basak.

CVM 4.1 Master Role
Is there any way to choose the next CVM master? Who would become the next CVM master when vfscbe4 stops its CVM service group? Can I change a config file to make, let us say, sjmemfs07 the next CVM master? Here is my nidmap as reported by vxclustadm:

# vxclustadm nidmap
Name           CVM Nid   CM Nid   State
vfscbe1-prod   0         0        Joined: Slave
vfscbe2-prod   1         1        Joined: Slave
vfscbe3-prod   11        2        Out of Cluster
vfscbe4-prod   2         3        Joined: Master
vfscbe5-prod   3         4        Joined: Slave
sjmemfs01      4         5        Joined: Slave
sjmemfs02      6         6        Joined: Slave
sjmemfs03      8         7        Joined: Slave
sjmemfs04      7         8        Out of Cluster
sjmemfs05      5         9        Joined: Slave
sjmemfs06      10        10       Joined: Slave
sjmemfs07      9         11       Joined: Slave
sjmemfs08      12        12       Out of Cluster
sjmemfs09      13        13       Out of Cluster
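On the CVM master question: in CVM 4.1 the master is elected when nodes join the cluster, and as far as I know there is no supported config file to pin it, so the practical lever is the order in which nodes join; a `vxclustadm setmaster` operation to move the role on demand only appeared in later releases (5.x, if I remember correctly). What can always be done is scripting against the nidmap output, e.g. to find the current master before stopping a node's CVM group. The hedged sketch below reuses a few lines of the pasted output as sample data.

```shell
# Hedged sketch: extract the current CVM master from `vxclustadm nidmap`
# output. The variable holds sample lines from the question; on a live
# node you would pipe the real command instead:
#   vxclustadm nidmap | awk '/Joined: Master/ {print $1}'
nidmap='Name CVM Nid CM Nid State
vfscbe1-prod 0 0 Joined: Slave
vfscbe4-prod 2 3 Joined: Master
sjmemfs07 9 11 Joined: Slave'
printf '%s\n' "$nidmap" | awk '/Joined: Master/ {print $1}'
```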