VVR Replication configuration
Hi all, I am configuring VVR replication on our two-node cluster, but adding the secondary node fails with the error below. I also cannot find anything useful in engine.log or RVG.log. Please help.

[root@AOSCEDA01 ~]# vradmin -g jceda_dg addsec jceda_rvg AOSCEDA01 AOSCEDA02 prlink=to_AOSCEDA01 srlink=to_AOSCEDA02
VxVM VVR vradmin ERROR V-5-52-417 RVG jceda_rvg already exists in disk group jceda_dg.
VxVM VVR vradmin ERROR V-5-52-802 Cannot start command execution on Secondary.
[root@AOSCEDA01 ~]#

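In case it helps others hitting the same pair of errors: V-5-52-417 suggests an RVG with that name already exists somewhere (often left over from an earlier attempt), and V-5-52-802 usually points at vradmin not being able to reach the secondary. A minimal diagnostic sketch, run on both nodes, using the object names from the post (adjust to your environment); nothing here changes configuration:

# Confirm the VVR administration daemon is running on each node
ps -ef | grep vradmind

# List the RVG and rlink records that already exist in the disk group
vxprint -g jceda_dg -Vl
vxprint -g jceda_dg -Pl

# Show the replicated data sets vradmin already knows about
vradmin -g jceda_dg printrvg

As a side note, the prlink/srlink names conventionally point at the remote host (for example prlink=to_AOSCEDA02 on the primary), so the values in the original command may be worth double-checking as well.
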
CVM won't start on remote node with an FSS diskgroup

I am testing FSS (Flexible Shared Storage) on SF 6.1 on RHEL 5.5 in VirtualBox VMs, and when I try to start CVM on the remote node I get:

VCS ERROR V-16-20006-1005 (r55v61b) CVMCluster:cvm_clus:monitor:node - state: out of cluster
reason: Disk for disk group not found: retry to add a node failed

Here is my setup. Node A is the master, with a local disk (sdd) and a remote disk (B_sdd):

[root@r55v61a ~]# vxdctl -c mode
mode: enabled: cluster active - MASTER
master: r55v61a

[root@r55v61a ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
B_sdd        auto:cdsdisk    -            -            online remote
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online exported

Node B is the slave and sees a local disk (sdd) and a remote disk (A_sdd):

[root@r55v61b ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
A_sdd        auto:cdsdisk    -            -            online remote
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online exported

On node A, I create an FSS disk group, so on node A the disk is local:

[root@r55v61a ~]# vxdg -s -o fss=on init fss-dg fd1_La=sdd
[root@r55v61a ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
B_sdd        auto:cdsdisk    -            -            online remote
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    fd1_La       fss-dg       online exported shared

And on node B the disk in fss-dg is remote:

[root@r55v61b ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
A_sdd        auto:cdsdisk    fd1_La       fss-dg       online shared remote
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online exported

I then stop and start VCS on node B, which is when I see the issue:

2014/05/13 12:05:23 VCS INFO V-16-2-13716 (r55v61b) Resource(cvm_clus): Output of the completed operation (online)
==============================================
ERROR:
==============================================
2014/05/13 12:05:24 VCS ERROR V-16-20006-1005 (r55v61b) CVMCluster:cvm_clus:monitor:node - state: out of cluster
reason: Disk for disk group not found: retry to add a node failed

If I destroy the fss-dg disk group on node A, CVM will start on node B, so the issue is the FSS disk group: it seems CVM cannot find the remote disk in the disk group.

I can also work around the issue by stopping VCS on node A, after which CVM will start on node B:

[root@r55v61b ~]# hagrp -online cvm -sys r55v61b
[root@r55v61b ~]# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online exported

If I then start VCS on node A, node B is able to see the FSS disk group:

[root@r55v61b ~]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
A_sdd        auto:cdsdisk    fd1_La       fss-dg       online shared remote
sda          auto:none       -            -            online invalid
sdb          auto:none       -            -            online invalid
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online exported

I can stop and start VCS on each node when the disks are just exported, and VCS is able to see the disk from the other node, but once I create the FSS disk group, CVM won't start on the node that holds the remote disk. Does anybody have any ideas why?

Mike

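For anyone debugging a similar join failure: when CVM reports "Disk for disk group not found", it helps to confirm what the joining node actually sees at that moment. A rough checklist, using the node names from the post (vxclustadm output and keywords can differ slightly between releases):

# On the node that fails to join (r55v61b)

# Check CVM membership and this node's cluster state
vxclustadm nidmap
vxclustadm nodestate

# Confirm the exported/remote FSS disks are visible before onlining CVM
vxdisk -o alldgs list

# Retry the CVM service group once the remote disk shows up
hagrp -online cvm -sys r55v61b
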
SF 6.0 does not recognize SF 4.1 Version 120 simple disk?

I am currently preparing to replace two old SLES 9 systems with new SLES 11 machines. The new ones run SF 6.0 Basic and are able to see and read (dd) the disks currently in production on the SLES 9 systems (FC SAN LUNs). The disks are Version 120, originally created and still in use by SF 4.1:

# vxdisk list isar1_sas_2
Device:    isar1_sas_2
devicetag: isar1_sas_2
type:      simple
hostid:    example4
disk:      name=isar1_sas_2 id=1341261625.7.riser5
group:     name=varemadg id=1339445883.17.riser5
flags:     online ready private foreign autoimport imported
pubpaths:  block=/dev/disk/by-name/isar1_sas_2 char=/dev/disk/by-name/isar1_sas_2
version:   2.1
iosize:    min=512 (bytes) max=1024 (blocks)
public:    slice=0 offset=2049 len=33552383 disk_offset=0
private:   slice=0 offset=1 len=2048 disk_offset=0
update:    time=1372290815 seqno=0.83
ssb:       actual_seqno=0.0
headers:   0 248
configs:   count=1 len=1481
logs:      count=1 len=224
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-001498[001250]: copy=01 offset=000231 enabled
 log      priv 001499-001722[000224]: copy=01 offset=000000 enabled

# vxdg list varemadg | grep version
version: 120

But on the new systems, SF 6.0 does not recognize the disk group at all:

# vxdisk list isar1_sas_2
Device:    isar1_sas_2
devicetag: isar1_sas_2
type:      auto
info:      format=none
flags:     online ready private autoconfig invalid
pubpaths:  block=/dev/vx/dmp/isar1_sas_2 char=/dev/vx/rdmp/isar1_sas_2
guid:      -
udid:      Promise%5FVTrak%20E610f%5F49534520000000000000%5F22C90001557951EC
site:      -

When I do a hexdump of the first few sectors, it looks pretty much the same on both machines. According to articles like TECH174882, SF 6.0 should be more than happy to recognize any disk layout between Version 20 and 170. Any hints on what I might be doing wrong?

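The old output shows the disk as type "simple" with the "foreign" flag and paths under /dev/disk/by-name, while the new host scans it as "auto" with format=none, so the two hosts are not probing the disk the same way. A few non-destructive checks on the SF 6.0 host, as a starting point rather than a confirmed fix:

# Re-scan devices and refresh VxVM's device discovery
vxdisk scandisks
vxdctl enable

# See whether the old disk group becomes detectable at all
vxdisk -o alldgs list

If the disk still shows format=none, one hypothesis worth testing against the 6.0 documentation is that the device needs to be presented the same way it was on SF 4.1, i.e. defined as a foreign device (vxddladm addforeign with the same block and character paths) so that the simple-disk private region is read from the expected offset.
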
ERROR V-5-52-12

Environment:
SFHA = 6.1
OS = RHEL 6.3

Error:
# vradmin -g DG createpri RVG VOL1 VOL1-SRL
VxVM VVR vradmin ERROR V-5-52-12 vradmind server not running on this system

I tried the commands below, but with no success:
# /usr/sbin/vxstart_vvr stop
# /usr/sbin/vxstart_vvr start
# /etc/init.d/vras-vradmind.sh stop
# /etc/init.d/vras-vradmind.sh start

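If the start scripts return cleanly but vradmin still reports V-5-52-12, it is worth checking whether vradmind actually stays up and whether the host can resolve its own name, since the daemon depends on that. A small checklist; the port number is only what I have seen as the default, so verify it for 6.1:

# Is the daemon actually running after the restart?
ps -ef | grep vradmind

# vradmind listens on a TCP port (8199 by default on releases I have seen)
netstat -an | grep 8199

# vradmind is sensitive to hostname resolution; make sure the local
# hostname resolves consistently on both nodes
hostname
getent hosts $(hostname)
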
vxlicrep ERROR V-21-3-1015 Failed to prepare report for key

Dear all,

We received an INFOSCALE FOUNDATION LNX 1 CORE ONPREMISE STANDARD PERPETUAL LICENSE (corporate). I installed the key using vxlicinst -k <key>, but when I check it with vxlicrep I get this error for the given key:

vxlicrep ERROR V-21-3-1015 Failed to prepare report for key = <key>

We are running Veritas Volume Manager 5.1 (VRTSvxvm-5.1.100.000-SP1_RHEL5 and VRTSvlic-3.02.51.010-0) on RHEL 5.7, 64-bit. I have read that the next step is to run vxkeyless set NONE, but I am afraid to run that until vxlicrep reports the license correctly.

What can I do to fix this? Thank you in advance.

Kind regards,
Laszlo

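Before changing the keyless setting, it is worth confirming what the licensing layer currently reports across all keys, not just the new one. A small verification sketch using the standard tools; note also that the key is for InfoScale 7.x while the installed VRTSvlic package is from the 5.1 era, which is the kind of version mismatch worth raising with support before going further:

# Report every installed license key
vxlicrep

# Show which keyless licenses (if any) are currently enabled
vxkeyless display

# Only after vxlicrep reports the permanent key correctly:
# vxkeyless set NONE
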
Need to download Veritas Storage Foundation suite for Linux

Hello,

I have been trying to install the Veritas Storage Foundation HA suite on my Linux VMs. The tarball I downloaded from https://sort.veritas.com/agents contains the RPMs, but it does not provide a menu-based installation wizard that resolves dependencies during installation. Installing the RPMs individually is a back-breaking task!

Would somebody please give me the right URL and the name of the package I should be downloading for RHEL 6 x86-64? Any help would be much appreciated. Thank you.

Regards,
Sean

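For what it is worth, the agents area on SORT only carries agent packages; the full Storage Foundation HA media obtained through the Veritas licensing/entitlement portal normally ships with a menu-driven installer at the top of the extracted tree, which handles RPM ordering and dependencies. A sketch of the usual flow; the archive name below is a placeholder and the directory layout varies by release:

# Extract the product media (archive name is a placeholder)
tar xzf storage_foundation_ha_media.tar.gz
cd dvd1-redhatlinux/rhel6_x86_64    # layout varies by release

# Optional pre-installation check of the target systems
./installer -precheck

# Menu-driven installation, which resolves package dependencies
./installer
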
What is the difference between SFCFS and SFCFSHA?

Hi all,

We are considering a cluster file system. I used SF for Oracle about two years ago and have been looking at the cluster file system solutions from Symantec. I found two solutions that include a cluster file system (am I right that there are two?):

Symantec Storage Foundation Cluster File System (SFCFS)
Symantec Storage Foundation Cluster File System HA (SFCFSHA)

And a few questions:
What is the difference between the two?
Is the Veritas cluster file system active-active or active-passive?

Thanks in advance.

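On the active-active question: CFS mounts the same VxFS file system on every node in the cluster at the same time, so all nodes can read and write it concurrently (how the application uses that is a separate design question). A minimal illustration of a cluster mount; the disk group and volume names here are placeholders:

# Mount a VxFS file system in cluster (shared) mode on this node;
# the same mount is then issued on the other cluster nodes as well
mount -t vxfs -o cluster /dev/vx/dsk/shared_dg/vol1 /mnt/shared
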
Why it happens - logging to DCM (needs dcm resynchronization)

Replicated Data Set: RVG
Primary:
  Host name:          X.X.X.X
  RVG name:           RVG
  DG name:            DG
  RVG state:          enabled for I/O
  Data volumes:       1
  VSets:              0
  SRL name:           SRL
  SRL size:           100.00 G
  Total secondaries:  1
Secondary:
  Host name:          X.X.X.X
  RVG name:           RVG
  DG name:            DG
  Data status:        consistent, behind
  Replication status: logging to DCM (needs dcm resynchronization)
  Current mode:       asynchronous
  Logging to:         DCM (contains 89718720 Kbytes) (SRL protection logging)
  Timestamp Information: N/A

I know the problem can be resolved by running the command below, but why does it happen in the first place? And can the resync be automated?

vxrvg -g dg resync rvg

(The client does not report any disconnection between the primary and the DR site.)

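Replication typically falls back to DCM logging when the SRL overflows while SRL protection (srlprot) is set to dcm or autodcm, for example when the secondary cannot keep up with the write rate for a while even without an outright network outage. As for automation, I am not aware of a built-in option that starts the resync by itself, but a small watchdog run from cron is a common workaround. A rough sketch, using the DG/RVG names from the post; test it carefully before relying on it:

#!/bin/sh
# Hypothetical watchdog: start a DCM resync when replication has
# fallen back to DCM logging.
DG=dg
RVG=rvg

STATUS=$(vradmin -g "$DG" repstatus "$RVG" 2>/dev/null)

if echo "$STATUS" | grep -qi "needs dcm resynchronization"; then
    # Replay the DCM so the secondary catches up again
    vxrvg -g "$DG" resync "$RVG"
fi
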
SFHA Solutions 6.2: Symantec Storage plug-in for OEM 12c

The Symantec Storage plug-in for OEM 12c lets you view and manage Storage Foundation and VCS objects through the Oracle Enterprise Manager 12c (OEM) graphical interface. It works with the Symantec Storage Foundation and High Availability 6.2 product suite.

The Symantec Storage plug-in allows you to:

SmartIO: manage Oracle database objects using SmartIO.
Snapshot: create point-in-time copies (Storage Checkpoint, Database FlashSnap, Space-Optimized Snapshot, and FileSnap) of Oracle databases using SFDB features.
Cluster: view cluster-specific information.

You can get the plug-in by downloading the attached file. For more information about installing and using the plug-in, download the attached Application Note.

Terms of use for this information are found in Legal Notices.

InfoScale 7.1 and large disks (8Tb) with FSS

Hi everyone,

I had been successfully running FSS with (thin) 8 TB disk drives on SFCFSHA 6.1 and 6.2.1 (see: http://vcojot.blogspot.ca/2015/01/storage-foundation-ha-61-and-flexible.html).

I am trying to reproduce the same kind of setup with InfoScale 7.1, and it seems to have issues with 8 TB drives. Here is the full setup:

2 x RHEL 6.8 hosts with 16 GB RAM.
4 LSI virtual adapters, each with 15 drives.
c0* and c1* have 2 TB drives.
c2* and c3* have 8 TB drives.

Both the 2 TB and 8 TB drives are 'exported' and the cluster is stable. Here is what I noticed: creating an FSS DG works on the 2 TB drives but not on the 8 TB drives (it used to work on 6.1 and 6.2.1):

[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_2T_00
[root@vcs18 ~]# vxdg list FSS00dg
Group:     FSS00dg
dgid:      1466522672.427.vcs18
import-id: 33792.426
flags:     shared cds
version:   220
alignment: 8192 (bytes)
local-activation: shared-write
cluster-actv-modes: vcs18=sw vcs19=sw
ssb:            on
autotagging:    on
detach-policy:  local
dg-fail-policy: obsolete
ioship:    on
fss:       on
storage-sources: vcs18
copies:    nconfig=default nlog=default
config:    seqno=0.1027 permlen=51360 free=51357 templen=2 loglen=4096
config disk ssd_2T_00 copy 1 len=51360 state=clean online
log disk ssd_2T_00 copy 1 len=4096

On the 8 TB drives, it fails with:

[root@vcs18 ~]# vxdg destroy FSS00dg
[root@vcs18 ~]# /usr/sbin/vxdg -s -o fss=on init FSS00dg ssd_8T_00
VxVM vxdg ERROR V-5-1-585 Disk group FSS00dg: cannot create: Record not in disk group

One thing I noticed is that the 8 TB drives, even though exported, do not show up on the remote machine:

[root@vcs18 ~]# vxdisk list | grep _00
ssd_2T_00    auto:cdsdisk  -  -  online exported
ssd_2T_00_1  auto:cdsdisk  -  -  online remote
ssd_8T_00    auto:cdsdisk  -  -  online exported

Another thing to note is that the 'connectivity' looks wrong on the 8 TB drives:

[root@vcs18 ~]# vxdisk list ssd_2T_00 | grep conn
connectivity: vcs18
[root@vcs18 ~]# vxdisk list ssd_2T_00_1 | grep conn
connectivity: vcs19
[root@vcs18 ~]# vxdisk list ssd_8T_00 | grep conn
connectivity: vcs18 vcs19

That is (IMHO) an error, since those 'virtual' drives are local to each node and the SCSI buses are not shared; vcs18 and vcs19 are two fully independent VMware machines.

This looks like a bug to me, but since I no longer work for a company with a VxVM support contract, I cannot report the issue.

Thanks for reading,
Vincent
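One way to narrow this down is to compare the full vxdisk records for a working 2 TB disk and a failing 8 TB disk, and to re-export the 8 TB disk while watching the remote node. A rough sketch using the device names from the post; the unexport/export step changes state, so only do it on a test cluster like this one (the keywords are the FSS ones documented for 6.x, assumed to still apply to 7.1):

# On vcs18: compare the full records for a working and a failing disk
vxdisk list ssd_2T_00
vxdisk list ssd_8T_00

# Re-export the 8 TB disk, then re-check visibility from the other node
vxdisk unexport ssd_8T_00
vxdisk export ssd_8T_00

# On vcs19: does the 8 TB disk now show up as a remote device?
vxdisk -o alldgs list | grep 8T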