Migrate ApplicationHA to new vCenter server
We're consolidating a number of vCenter servers into a single instance. One of the clusters we want to move has VMs managed by AppHA 6.0.1. Is there an easy way to migrate the cluster configuration from the old vCenter server to the new one without causing any cluster outages or losing the cluster configuration? I've found this article on how to deal with the permanent loss of a vCenter server, but I am unsure what might happen if the existing vCenter server stays online.
https://sort.symantec.com/public/documents/appha/6.0/windows/productguides/html/appha_userguide_60_win/apas11.htm
I want to switch the application to minos2, but I cannot select the system

I want to switch the application to minos2, but no system is offered for selection. Why?

root@minos1(/)# hastatus -sum

-- SYSTEM STATE
-- System               State         Frozen
A  minos1               RUNNING       0

-- GROUP STATE
-- Group           System   Probed  AutoDisabled  State
B  ClusterService  minos1   Y       N             ONLINE
B  EMS             minos1   Y       N             ONLINE
B  replication     minos1   Y       N             ONLINE

-- REMOTE CLUSTER STATE
-- Cluster   State
N  minos2    RUNNING

-- REMOTE SYSTEM STATE
-- cluster:system   State     Frozen
O  minos2:minos2    RUNNING   0
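A hedged guess based on the output above: minos2 appears only under REMOTE CLUSTER STATE, which suggests a global (multi-cluster) setup. A group can only be switched locally to systems in its own SystemList (here only minos1); moving it to minos2 is a cross-cluster switch, which is a different operation. A sketch, using the EMS group from the output as an example (check which of your groups are actually configured as global service groups):

```shell
# Show which systems the group can run on within the local cluster:
hagrp -display EMS -attribute SystemList

# For a global service group, switch it to the remote cluster:
hagrp -switch EMS -any -clus minos2
```

If the group is not configured as a global service group, no remote system will ever be selectable for it.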
Automatic Failover

I would like to use Veritas Cluster Server to achieve high availability and automatic failover for my applications. These are Java applications with some services, with Sybase as the backend database. The Java applications can be broken into two parts: a web layer and an application layer. The underlying infrastructure is RHEL Linux for both the Java applications and the Sybase database.

My question is: does VCS support seamless automatic failover for the Java services, including the database services, without requiring manual intervention? What I want to achieve is: after I set up active-passive for the application layer, I expect the active node to automatically fail over to the passive node, and the passive node to immediately become the active node.
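VCS does this with failover service groups: when a resource or node faults, the group is taken offline and brought online on the other node with no manual step. A minimal main.cf sketch of an active-passive application-layer group (all node names, addresses, and script paths here are invented for illustration; a Sybase group would follow the same pattern using the VCS agent for Sybase):

```
group appsg (
    SystemList = { node1 = 0, node2 = 1 }
    AutoStartList = { node1 }
    )

    IP app_ip (
        Device = eth0
        Address = "10.0.0.50"
        NetMask = "255.255.255.0"
        )

    Application app_layer (
        StartProgram = "/opt/app/bin/start.sh"
        StopProgram = "/opt/app/bin/stop.sh"
        MonitorProcesses = { "/opt/app/bin/appserver" }
        )

    app_layer requires app_ip
```

Failover is automatic but not instantaneous: the time to detect the fault (monitor interval) plus the stop and start times of the resources determines how long the service is unavailable.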
Import disk group failure

Hello everyone! After I finished the disk group configuration, I could not see the disk group until I imported it manually, but after rebooting the server I couldn't find the disk group again unless I re-imported it. Also, the RVG was DISABLED until I started it, and it is DISABLED again after rebooting. My operations are below. Any help and suggestions would be appreciated.

[root@u31_host Desktop]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:cdsdisk    -            -            online
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online
[root@u31_host Desktop]# vxdg list
NAME         STATE           ID
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:cdsdisk    -            (netnumendg) online
sdc          auto:cdsdisk    -            (netnumendg) online
sdd          auto:cdsdisk    -            (netnumendg) online
[root@u31_host dsk]# vxdg import netnumendg
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:cdsdisk    netnumendg01 netnumendg   online
sdc          auto:cdsdisk    netnumendg02 netnumendg   online
sdd          auto:cdsdisk    netnumendg03 netnumendg   online
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg  1  DISABLED CLEAN   primary  2  srl_vol
[root@u31_host dsk]# vxrvg -g netnumendg start netnumenrvg
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg  1  ENABLED  ACTIVE  primary  2  srl_vol

After rebooting the server:

[root@u31_host Desktop]# vxprint
[root@u31_host Desktop]# vxdg list
NAME         STATE           ID
[root@u31_host Desktop]# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:cdsdisk    -            -            online
sdc          auto:cdsdisk    -            -            online
sdd          auto:cdsdisk    -            -            online
[root@u31_host Desktop]# vxprint -rt | grep ^rv
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
sda          auto:none       -            -            online invalid
sdb          auto:cdsdisk    -            (netnumendg) online
sdc          auto:cdsdisk    -            (netnumendg) online
sdd          auto:cdsdisk    -            (netnumendg) online
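One hedged explanation for this symptom: at boot, VxVM auto-imports only disk groups whose on-disk hostid matches the local host, so a hostid mismatch (renamed host, disks last imported by another machine) leaves the group deported after every reboot. Checks worth running, with device and group names taken from the output above:

```shell
# Hostid known to the local vxconfigd:
vxdctl list

# Hostid and disk group recorded on one of the disks:
vxdisk -s list sdb

# Re-importing stamps the local hostid on the disks:
vxdg import netnumendg

# Confirm the group is not flagged to skip auto-import
# (look for "noautoimport" in the flags line):
vxdg list netnumendg
```

Separately, an RVG is not started automatically by VxVM at boot; in a VVR/VCS setup the RVG agent in the service group normally handles that, so if this host belongs to a cluster, letting the service group online the disk group and RVG is the usual pattern.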
setting llt (low prio net) in a bond configured in a bridge

Hi, I want to set up a low-priority LLT network over a bond which is part of a bridge. I have a cluster with two nodes. This is the network configuration (it is the same for both nodes):

node 2:

[root@node2 ]# cat /proc/version
Linux version 2.6.32-504.el6.x86_64 (mockbuild@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Tue Sep 16 01:56:35 EDT 2014
[root@node2 ]# ifconfig | head -n 24
bond0     Link encap:Ethernet  HWaddr 52:54:00:14:13:21
          inet6 addr: fe80::5054:ff:fe14:1321/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:761626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:605968 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1188025449 (1.1 GiB)  TX bytes:582093867 (555.1 MiB)

br0       Link encap:Ethernet  HWaddr 52:54:00:14:13:21
          inet addr:10.10.11.102  Bcast:10.10.11.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe14:1321/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:49678 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50264 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:44061727 (42.0 MiB)  TX bytes:28800387 (27.4 MiB)

eth0      Link encap:Ethernet  HWaddr 52:54:00:14:13:21
          UP BROADCAST RUNNING PROMISC SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:761626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:605968 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1188025449 (1.1 GiB)  TX bytes:582093867 (555.1 MiB)

[root@node2 ]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.525400141321       no              bond0
[root@node2 ]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:14:13:21
Slave queue ID: 0

node 1:

[root@node1]# ifconfig | head -n 24
bond0     Link encap:Ethernet  HWaddr 52:54:00:2E:6D:23
          inet6 addr: fe80::5054:ff:fe2e:6d23/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:816219 errors:0 dropped:0 overruns:0 frame:0
          TX packets:668207 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1194971130 (1.1 GiB)  TX bytes:607831273 (579.6 MiB)

br0       Link encap:Ethernet  HWaddr 52:54:00:2E:6D:23
          inet addr:10.10.11.101  Bcast:10.10.11.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe2e:6d23/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:813068 errors:0 dropped:0 overruns:0 frame:0
          TX packets:640374 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1181039350 (1.0 GiB)  TX bytes:604216197 (576.2 MiB)

eth0      Link encap:Ethernet  HWaddr 52:54:00:2E:6D:23
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:816219 errors:0 dropped:0 overruns:0 frame:0
          TX packets:668207 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1194971130 (1.1 GiB)  TX bytes:607831273 (579.6 MiB)

[root@node1]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.5254002e6d23       no              bond0
[root@node1 ]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: eth0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:2e:6d:23
Slave queue ID: 0

The llt configuration files are the following:

[root@node1 ]# cat /etc/llttab
set-node node2
set-cluster 1042
link eth3 eth3-52:54:00:c3:a0:55 - ether - -
link eth2 eth2-52:54:00:35:f6:a5 - ether - -
link-lowpri bond0 bond0 - ether - -
[root@node1 ]# cat /etc/llttab
set-node node1
set-cluster 1042
link eth3 eth3-52:54:00:bc:9b:e5 - ether - -
link eth2 eth2-52:54:00:31:fb:31 - ether - -
link-lowpri bond0 bond0 - ether - -

However, this does not seem to be working. When I check the LLT status, each node thinks the bond0 interface is down on the other one:

[root@node2 ]# lltstat -nvv | head
LLT node information:
    Node                 State    Link   Status  Address
     0 node1             OPEN
                                  eth3   UP      52:54:00:BC:9B:E5
                                  eth2   UP      52:54:00:31:FB:31
                                  bond0  DOWN
   * 1 node2             OPEN
                                  eth3   UP      52:54:00:C3:A0:55
                                  eth2   UP      52:54:00:35:F6:A5
                                  bond0  UP      52:54:00:14:13:21
[root@node2 ]# lltstat -nvv | head
LLT node information:
    Node                 State    Link   Status  Address
   * 0 node1             OPEN
                                  eth3   UP      52:54:00:BC:9B:E5
                                  eth2   UP      52:54:00:31:FB:31
                                  bond0  UP      52:54:00:2E:6D:23
     1 node2             OPEN
                                  eth3   UP      52:54:00:C3:A0:55
                                  eth2   UP      52:54:00:35:F6:A5
                                  bond0  DOWN

Do you know if I have something wrong? Is this a valid configuration?

Thanks,
Javier
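A hedged thought: br0 owns bond0 and its MAC address, so binding LLT's low-priority link directly to the enslaved bond0 may be why each node sees the peer's bond0 as DOWN (frames arriving on bond0 are handed to the bridge). One thing to try, sketched below for node1, is pointing link-lowpri at the bridge interface instead; whether LLT over a bridge device is supported at all should be confirmed against the SFHA hardware/software compatibility list for your version:

```
set-node node1
set-cluster 1042
link eth3 eth3-52:54:00:bc:9b:e5 - ether - -
link eth2 eth2-52:54:00:31:fb:31 - ether - -
link-lowpri br0 br0 - ether - -
```

The same change would be made on node2, and LLT restarted on both nodes for the new llttab to take effect.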
Replacing node

Hi, I am using VCS on a two-node cluster running Solaris 10 SPARC. The primary node has 13 CPUs and 52 GB RAM; the secondary node has 4 CPUs and 20 GB RAM. In case of a primary node failure, the secondary node is unable to handle the load, so we want to replace the secondary with a node of the same capacity as the primary. The new node I am preparing is from the same hardware family, with 16 CPUs and 64 GB RAM, and I have installed the OS on it. My question is about the VCS installation and configuration on the new node: what is the correct process for installing VCS so that the new node becomes an exact replica of the secondary node, with minimal downtime? Please advise.

Thanks,
Amit
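A hedged outline of the usual approach, assuming the replacement node takes over the old secondary's hostname and connections (verify each step against the VCS installation guide for your exact version):

```shell
# On the new node, after OS install and cabling the private heartbeat links:

# 1. Install exactly the same VCS version and patch level as the primary;
#    the product installer can add a node to a running cluster:
./installvcs -addnode

# Manual alternative: copy /etc/llthosts and /etc/gabtab from the primary,
# create /etc/llttab with set-node set to this node's name, then start
# LLT, GAB and VCS. When had starts and joins the cluster, it fetches the
# running configuration (main.cf) from the primary, so the primary keeps
# serving the application throughout and downtime is limited to a planned
# switchover test at the end.
```

Either way, the primary node stays online during the procedure; the only downtime is whatever switchover testing you schedule afterwards.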
cannot configure vxfen after reboot

Hello, we physically moved a server, and after reboot we cannot configure vxfen:

# vxfenconfig -c
VXFEN vxfenconfig ERROR V-11-2-1002 Open failed for device: /dev/vxfen with error 2

My vxfen.log:

Wed Aug 19 13:17:09 CEST 2015  Invoked vxfen. Starting
Wed Aug 19 13:17:23 CEST 2015  return value from above operation is 1
Wed Aug 19 13:17:23 CEST 2015  output was VXFEN vxfenconfig ERROR V-11-2-1041 Snapshot for this node is different from that of the running cluster.
Log Buffer: 0xffffffffa0c928a0
VXFEN vxfenconfig NOTICE Driver will use customized fencing - mechanism cps
Wed Aug 19 13:17:23 CEST 2015  exiting with 1

Engine version 6.0.10.0, RHEL 6.3.

Any idea how to get vxfen running (and had after that)?
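V-11-2-1041 indicates this node's fencing configuration snapshot no longer matches the running cluster's, and with customized fencing (mechanism cps) a physical move often breaks reachability of the coordination point servers. A hedged checklist (other_node and cp_server below are placeholders for your actual hosts):

```shell
# 1. Compare the fencing configuration with a healthy cluster node:
diff /etc/vxfenmode <(ssh other_node cat /etc/vxfenmode)

# 2. Check each CP server is reachable from here; IP/VLAN changes
#    after a physical move are a common culprit:
cpsadm -s cp_server -a ping_cps

# 3. Check the fencing state as seen by the running nodes:
vxfenadm -d
```

The earlier V-11-2-1002 (open of /dev/vxfen failed with error 2) also suggests the vxfen kernel module was not loaded at that point; on RHEL the vxfen startup script normally loads it, so it is worth checking that the script ran cleanly before retrying vxfenconfig -c.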
VCS 6.2 Agents for Adabas and MariaDB

Hi colleagues,

For a customer I have to integrate two separate databases into VCS 6.2 running on Oracle Linux. However, I have no experience with either of them, and I could not find anything on SORT. Could someone point me in the right direction regarding the VCS agents?

MariaDB

The first one is MariaDB, but I cannot find anything about MariaDB together with VCS. As I found out, MariaDB is a spinoff from MySQL since the Oracle/Sun merger, and it's very similar to MySQL. Does this mean I can use the same database resource as one would use for MySQL, or is there a separate one? I assume MariaDB works seamlessly with VCS 6.2, since I saw the SFHA 6.2 Release Notes stating "Performance improvements for MySQL and MariaDB applications". So I would guess it works great, but I cannot find any agent.
Source: https://www-secure.symantec.com/connect/sfha62

Adabas

The other database to integrate is Adabas (from Software AG). From what I learned from Google, there was a Partner Agent for VCS 4.1 MP1, but only for Solaris on SPARC or Solaris on x86, not for Linux. That is confirmed by the list of Partner Agents in the "vcs_agent_support_matrix" spreadsheet (only Solaris on SPARC or x86, no Linux). I also found a Symantec article about an issue with the Adabas Agent for VCS 6.1 on SPARC/Solaris, but I cannot find the Linux agent on SORT. According to this Symantec article, the agent should be released under STEP (Symantec™ Technology Enabled Program).
Source: https://support.symantec.com/en_US/article.TECH60665.html
But when I look on SORT at the agents for STEP, no agents are listed.
Source: https://sort.symantec.com/agents/detail/1142
The manufacturer, Software AG, also has a wiki page dedicated to Adabas with VCS, but no info regarding the agent.
Source: https://techcommunity.softwareag.com/pwiki/-/wiki/Main/Using+Veritas+to+Enable+Adabas+High+Availability

Any tips or links to the correct agents?
Thanks in advance,
Sander Fiers
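Until a dedicated agent turns up, one common fallback (a hedged suggestion, not an official recommendation) is the bundled Application agent, which can put any database with start/stop/monitor scripts under VCS control, Adabas and MariaDB alike. The user and script paths below are invented for illustration:

```
Application adabas_db (
    User = sag
    StartProgram = "/opt/softwareag/scripts/startdb.sh"
    StopProgram = "/opt/softwareag/scripts/stopdb.sh"
    MonitorProgram = "/opt/softwareag/scripts/checkdb.sh"
    )
```

For MariaDB specifically, given its protocol compatibility with MySQL, trying the MySQL agent is plausible, but whether it is supported with MariaDB should be confirmed with Symantec support before relying on it.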
deleting rlink that's having "secondary_config_err" flag

Hello, in my VCS global cluster my ORAGrp resource group is partially online since my rvgres resource is offline. I suspect the issue is in the rlink below. I am trying to dissociate this rlink (rlk_sys1-rep_DB-rvg) and detach it in order to delete it, but I have not been able to succeed. Below is some output from the system.

root@sys2# vxprint -P
Disk group: DBhrDG

TY NAME                 ASSOC   KSTATE   LENGTH  PLOFFS  STATE   TUTIL0  PUTIL0
rl rlk_sys1-DB-rep_DB_r DB_rvg  CONNECT  -       -       ACTIVE  -       -
rl rlk_sys1-rep_DB-rvg  DB-rvg  ENABLED  -       -       PAUSE   -       -

root@sys2# vxrlink -g DBhrDG dis rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-3520 Rlink rlk_sys1-rep_DB-rvg can not be dissociated if it is attached
root@sys2# vxrlink -g DBhrDG det rlk_sys1-rep_DB-rvg
VxVM VVR vxrlink ERROR V-5-1-10128 Operation not allowed with attached rlinks
root@sys2# vxedit -g DBhrDG rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3540 Rlink rlk_sys1-rep_DB-rvg is not disabled, use -f flag
root@sys2# vxedit -g DBhrDG -f rm rlk_sys1-rep_DB-rvg
VxVM vxedit ERROR V-5-1-3541 Rlink rlk_sys1-rep_DB-rvg is not dissociated

root@sys2# vxprint -Vl
Disk group: DBhrDG

Rvg: DB-rvg
info:    rid=0.1317 version=0 rvg_version=41 last_tag=11
state:   state=CLEAN kernel=DISABLED
assoc:   datavols=(none) srl=(none)
         rlinks=rlk_sys1-rep_DB-rvg
         exports=(none) vsets=(none)
att:     rlinks=rlk_sys1-rep_DB-rvg
flags:   closed secondary disabled detached passthru logging
device:  minor=26012 bdev=343/26012 cdev=343/26012 path=/dev/vx/dsk/DBhrDG/DB-rvg
perms:   user=root group=root mode=0600

Rvg: DB_rvg
info:    rid=0.1386 version=13 rvg_version=41 last_tag=12
state:   state=ACTIVE kernel=ENABLED
assoc:   datavols=sys1_DB_Process,sys1_DB_Script,...
         srl=sys1_DB_SRL
         rlinks=rlk_sys1-DB-rep_DB_r
         exports=(none) vsets=(none)
att:     rlinks=rlk_sys1-DB-rep_DB_r
flags:   closed secondary enabled attached logging
device:  minor=26014 bdev=343/26014 cdev=343/26014 path=/dev/vx/dsk/DBhrDG/DB_rvg
perms:   user=root group=root mode=0600

Please advise.

Regards
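The errors above follow from the ordering rules: an rlink can only be dissociated once detached, and can only be detached once its RVG is no longer using it. A hedged sketch of the usual teardown sequence (verify against the VVR administrator's guide for your version before running it):

```shell
# 1. Stop data access on the owning RVG if it is still active
#    (vxprint above shows DB-rvg as DISABLED/CLEAN already):
vxrvg -g DBhrDG stop DB-rvg

# 2. Detach, then dissociate, then remove the rlink, in that order:
vxrlink -g DBhrDG det rlk_sys1-rep_DB-rvg
vxrlink -g DBhrDG dis rlk_sys1-rep_DB-rvg
vxedit -g DBhrDG rm rlk_sys1-rep_DB-rvg
```

If the detach is still refused, the kernel state and the configuration records may be out of sync (the flags line shows "detached" while the att line still lists the rlink), in which case the full vxprint -l output of the rlink, and possibly a support case, would be the next step.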