Requirements for my master server (RAM and processors)
Hi all. I have a question. I am implementing a new platform with 15 clients (Sun X4270). How much RAM should the NetBackup master server that manages these clients have? Does the master server's RAM depend on the clients? And do the processor requirements also depend on the clients? This is of course in addition to the requirements of the applications already installed. Thank you and good day.
MSMQ Outgoing Queues referencing failed EV server - using Building Blocks for High Availability

Hi Everyone. We use Building Blocks to provide high availability in our EV environment. We have come across a peculiar problem and wanted to know if anyone here has seen a similar issue.

Scenario: We are in the archiving schedule and there are (say) 5 EV Mailbox Archiving servers archiving 5 Exchange servers. There are cross-references between EV and Exchange servers, i.e. the Storage Service for an archive is on an EV server other than the one running the Archival Task.

Problem: If an EV Mailbox Archiving server fails, we use Building Blocks to quickly bring its services up on a standby server by running Update Service Locations. However, we notice that items remain in archive-pending state in user mailboxes archived by the other EV servers, and new items also go into archive-pending state across all user mailboxes.

Analysis: We have already run a flushdns (or Clear-DnsClientCache in PowerShell) across all EV servers. In the MSMQ Outgoing Queues we noticed queues that still resolve to the old, failed servers and sit in "Failed to Connect" or "Waiting to Connect" state, since MSMQ resolves the EV server hostnames rather than the EV aliases.

Solution: To resolve the issue, we have to add a hosts file entry resolving the hostname of the failed server to the IP address of the now-active server and restart the Storage/Task Controller services.

Has anyone come across a similar situation? We would like to avoid making hosts file entries and make the failover process seamless.
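Concretely, the workaround we apply looks roughly like the sketch below. The hostname EVSERVER01 and the address 10.0.0.25 are made up for illustration, and the service display names may differ slightly between EV versions, so treat this as an outline rather than an exact procedure.

# Run in an elevated PowerShell prompt on each EV server that still queues to the failed host
Add-Content -Path "$env:SystemRoot\System32\drivers\etc\hosts" -Value "10.0.0.25    EVSERVER01"
# Restart the affected services so MSMQ re-resolves the queue destination
Restart-Service -DisplayName "Enterprise Vault Storage Service"
Restart-Service -DisplayName "Enterprise Vault Task Controller Service"

After this, the stuck queues in the MSMQ Outgoing Queues view should leave the "Waiting to Connect" state and the archive-pending items should drain.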
Migrate ApplicationHA to new vCenter server

We're consolidating a number of vCenter servers into a single instance. One of the clusters we want to move has VMs managed by AppHA 6.0.1. Is there an easy way to migrate the cluster configuration from the old vCenter server to the new one without causing any cluster outages or losing the cluster configuration? I've found this article on how to deal with the permanent loss of a vCenter server, but am unsure what might happen if the existing vCenter server stays online.
https://sort.symantec.com/public/documents/appha/6.0/windows/productguides/html/appha_userguide_60_win/apas11.htm
I want to switch the application to minos2, but I cannot select the system

I want to switch the application to minos2, but I cannot select the system. No system is offered as a target — why?

root@minos1(/)# hastatus -sum

-- SYSTEM STATE
-- System           State      Frozen
A  minos1           RUNNING    0

-- GROUP STATE
-- Group            System     Probed   AutoDisabled   State
B  ClusterService   minos1     Y        N              ONLINE
B  EMS              minos1     Y        N              ONLINE
B  replication      minos1     Y        N              ONLINE

-- REMOTE CLUSTER STATE
-- Cluster          State
N  minos2           RUNNING

-- REMOTE SYSTEM STATE
-- cluster:system   State      Frozen
O  minos2:minos2    RUNNING    0
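From the hastatus output, minos2 is not a system in the local cluster at all: it is a node in a remote cluster (a global cluster configuration), which is why it never shows up in the local system list when you try to switch. A rough, hedged sketch of how one might verify this and switch a group across clusters is below; the group name EMS is only taken from the output above as an example, and the group must actually be configured as a global group for this to apply.

# ClusterList is populated only for global service groups
hagrp -value EMS ClusterList
# Switch the group to any eligible system in the remote cluster
hagrp -switch EMS -any -clus minos2

If ClusterList is empty, the group is local-only and can only be switched between systems of its own cluster.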
Automatic Failover

I would like to use Veritas Cluster Server to achieve high availability and automatic failover for my applications. These are Java applications with several services, with backend databases on Sybase. The Java applications can be broken into two parts: a web layer and an application layer. The underlying infrastructure is RHEL Linux for both the Java applications and the database (Sybase).

My question is: does VCS support seamless automatic failover for the Java services, including the database services, without requiring manual intervention? What I want to achieve is this: after I set up active-passive for the application layer, I expect the active node to fail over automatically to the passive node, and the passive node to immediately become the active node.
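That is the standard VCS failover model: you place the application and its dependencies in a failover service group, and when the active node faults VCS brings the group online on the other node without manual intervention. A minimal sketch using the bundled Application agent is below; the group name, node names and script paths are invented for illustration, and in practice the Sybase database would normally be controlled by the dedicated VCS agent for Sybase rather than generic start/stop scripts.

haconf -makerw
hagrp -add app_sg
hagrp -modify app_sg SystemList node1 0 node2 1
hagrp -modify app_sg AutoStartList node1
hares -add java_app Application app_sg
hares -modify java_app StartProgram "/opt/myapp/bin/start.sh"
hares -modify java_app StopProgram "/opt/myapp/bin/stop.sh"
hares -modify java_app MonitorProcesses "/usr/bin/java -jar /opt/myapp/app.jar"
hares -modify java_app Enabled 1
haconf -dump -makero

With the group online on node1, a fault on node1 causes VCS to take the group offline there and bring it online on node2 automatically; this behaviour is governed by group attributes such as AutoFailOver, which defaults to enabled.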
Import disk group failure

Hello everyone! After I finished the disk group configuration I could not see the disk group until I imported it manually, but after rebooting the server I again could not find it unless I imported it once more. My commands are below. The RVG was also DISABLED until I started it, and it is DISABLED again after rebooting. Any help and suggestions would be appreciated.

[root@u31_host Desktop]# vxdisk list
DEVICE       TYPE            DISK          GROUP        STATUS
sda          auto:none       -             -            online invalid
sdb          auto:cdsdisk    -             -            online
sdc          auto:cdsdisk    -             -            online
sdd          auto:cdsdisk    -             -            online
[root@u31_host Desktop]# vxdg list
NAME         STATE           ID
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE       TYPE            DISK          GROUP        STATUS
sda          auto:none       -             -            online invalid
sdb          auto:cdsdisk    -             (netnumendg) online
sdc          auto:cdsdisk    -             (netnumendg) online
sdd          auto:cdsdisk    -             (netnumendg) online
[root@u31_host dsk]# vxdg import netnumendg
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE       TYPE            DISK          GROUP        STATUS
sda          auto:none       -             -            online invalid
sdb          auto:cdsdisk    netnumendg01  netnumendg   online
sdc          auto:cdsdisk    netnumendg02  netnumendg   online
sdd          auto:cdsdisk    netnumendg03  netnumendg   online
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg  1  DISABLED  CLEAN   primary  2  srl_vol
[root@u31_host dsk]# vxrvg -g netnumendg start netnumenrvg
[root@u31_host dsk]# vxprint -rt | grep ^rv
rv netnumenrvg  1  ENABLED   ACTIVE  primary  2  srl_vol

After rebooting the server:

[root@u31_host Desktop]# vxprint
[root@u31_host Desktop]# vxdg list
NAME         STATE           ID
[root@u31_host Desktop]# vxdisk list
DEVICE       TYPE            DISK          GROUP        STATUS
sda          auto:none       -             -            online invalid
sdb          auto:cdsdisk    -             -            online
sdc          auto:cdsdisk    -             -            online
sdd          auto:cdsdisk    -             -            online
[root@u31_host Desktop]# vxprint -rt | grep ^rv
[root@u31_host Desktop]# cd /dev/vx/dsk/
[root@u31_host dsk]# ls
[root@u31_host dsk]# vxdisk -o alldgs list
DEVICE       TYPE            DISK          GROUP        STATUS
sda          auto:none       -             -            online invalid
sdb          auto:cdsdisk    -             (netnumendg) online
sdc          auto:cdsdisk    -             (netnumendg) online
sdd          auto:cdsdisk    -             (netnumendg) online
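A rough set of checks, assuming nothing else (for example VCS or a VVR failover script) is supposed to be importing this disk group for you: VxVM only auto-imports a disk group at boot when the hostid recorded in the disk group matches the hostid in /etc/vx/volboot and the group was not imported temporarily (with -t). Something like the following can help confirm that; the device and names are taken from the output above.

# hostid this host writes into disk groups it imports (from /etc/vx/volboot)
vxdctl list | grep hostid
# hostid currently recorded on one of the disks of the group
vxdisk list sdb | grep hostid
# import normally (no -t), check the autoimport flag, then start volumes and the RVG
vxdg import netnumendg
vxdg list netnumendg | grep flags
vxvol -g netnumendg startall
vxrvg -g netnumendg start netnumenrvg

If the disk group is meant to be managed by a cluster (VCS DiskGroup resource), it is normal for it not to auto-import at boot: the cluster agent imports it and starts the volumes when the service group comes online, so the behaviour above would be expected outside cluster control.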
setting llt (low prio net) in a bond configured in a bridge

Hi, I want to set a low prio network over a bond which is part of a bridge. I have a cluster with two nodes. This is the network configuration (it is the same for both nodes):

node 2:

[root@node2 ]# cat /proc/version
Linux version 2.6.32-504.el6.x86_64 (mockbuild@x86-023.build.eng.bos.redhat.com) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-11) (GCC) ) #1 SMP Tue Sep 16 01:56:35 EDT 2014

[root@node2 ]# ifconfig | head -n 24
bond0     Link encap:Ethernet  HWaddr 52:54:00:14:13:21
          inet6 addr: fe80::5054:ff:fe14:1321/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:761626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:605968 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1188025449 (1.1 GiB)  TX bytes:582093867 (555.1 MiB)

br0       Link encap:Ethernet  HWaddr 52:54:00:14:13:21
          inet addr:10.10.11.102  Bcast:10.10.11.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe14:1321/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:49678 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50264 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:44061727 (42.0 MiB)  TX bytes:28800387 (27.4 MiB)

eth0      Link encap:Ethernet  HWaddr 52:54:00:14:13:21
          UP BROADCAST RUNNING PROMISC SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:761626 errors:0 dropped:0 overruns:0 frame:0
          TX packets:605968 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1188025449 (1.1 GiB)  TX bytes:582093867 (555.1 MiB)

[root@node2 ]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.525400141321       no              bond0

[root@node2 ]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:14:13:21
Slave queue ID: 0

node 1:

[root@node1]# ifconfig | head -n 24
bond0     Link encap:Ethernet  HWaddr 52:54:00:2E:6D:23
          inet6 addr: fe80::5054:ff:fe2e:6d23/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:816219 errors:0 dropped:0 overruns:0 frame:0
          TX packets:668207 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1194971130 (1.1 GiB)  TX bytes:607831273 (579.6 MiB)

br0       Link encap:Ethernet  HWaddr 52:54:00:2E:6D:23
          inet addr:10.10.11.101  Bcast:10.10.11.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe2e:6d23/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:813068 errors:0 dropped:0 overruns:0 frame:0
          TX packets:640374 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1181039350 (1.0 GiB)  TX bytes:604216197 (576.2 MiB)

eth0      Link encap:Ethernet  HWaddr 52:54:00:2E:6D:23
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:816219 errors:0 dropped:0 overruns:0 frame:0
          TX packets:668207 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1194971130 (1.1 GiB)  TX bytes:607831273 (579.6 MiB)

[root@node1]# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.5254002e6d23       no              bond0

[root@node1 ]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)
Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: eth0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Slave Interface: eth0
MII Status: up
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 52:54:00:2e:6d:23
Slave queue ID: 0

The LLT configuration files are the following:

[root@node2 ]# cat /etc/llttab
set-node node2
set-cluster 1042
link eth3 eth3-52:54:00:c3:a0:55 - ether - -
link eth2 eth2-52:54:00:35:f6:a5 - ether - -
link-lowpri bond0 bond0 - ether - -

[root@node1 ]# cat /etc/llttab
set-node node1
set-cluster 1042
link eth3 eth3-52:54:00:bc:9b:e5 - ether - -
link eth2 eth2-52:54:00:31:fb:31 - ether - -
link-lowpri bond0 bond0 - ether - -

However, this does not seem to be working. When I check the LLT status, each node thinks the bond0 link is down on the other one:

[root@node2 ]# lltstat -nvv | head
LLT node information:
    Node          State    Link   Status   Address
     0 node1      OPEN     eth3   UP       52:54:00:BC:9B:E5
                           eth2   UP       52:54:00:31:FB:31
                           bond0  DOWN
   * 1 node2      OPEN     eth3   UP       52:54:00:C3:A0:55
                           eth2   UP       52:54:00:35:F6:A5
                           bond0  UP       52:54:00:14:13:21

[root@node1 ]# lltstat -nvv | head
LLT node information:
    Node          State    Link   Status   Address
   * 0 node1      OPEN     eth3   UP       52:54:00:BC:9B:E5
                           eth2   UP       52:54:00:31:FB:31
                           bond0  UP       52:54:00:2E:6D:23
     1 node2      OPEN     eth3   UP       52:54:00:C3:A0:55
                           eth2   UP       52:54:00:35:F6:A5
                           bond0  DOWN

Do you know if I have something wrong? Is this a valid configuration? Thanks, Javier
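One thing worth checking, offered only as a hedged suggestion rather than a confirmed fix: because bond0 is enslaved to br0, the bridge is the device that actually owns the MAC address and receives inbound frames, so an LLT link bound to the bridge port may never see the peer's heartbeats even though outbound traffic leaves fine. A variant sometimes tried in this situation is to bind the low-priority link to the bridge device itself instead of the bond (untested sketch; verify against the LLT documentation for your version and platform):

# /etc/llttab on each node - low-priority link over the bridge instead of the enslaved bond
link-lowpri br0 br0 - ether - -

After changing /etc/llttab you would need to restart LLT (and GAB/VCS above it) on that node, then confirm with lltstat -nvv that the low-priority link shows UP for both nodes.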
Replacing node

Hi, I am using VCS in a two-node cluster running Solaris 10 SPARC. The primary node has 13 CPUs and 52 GB RAM, and the secondary node has 4 CPUs and 20 GB RAM. In case of a primary node failure the secondary node is unable to handle the load, so we want to replace the secondary node with one of the same capacity as the primary. The new node I am preparing is from the same hardware family, with 16 CPUs and 64 GB RAM, and I have installed the OS on it. My question is about the VCS installation and configuration on the new node: what is the correct process to install VCS so that the new node becomes an exact replica of the current secondary node, with minimal downtime? Please revert ASAP. Thanks, Amit
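At a high level, this is an add-node followed by a remove-node operation rather than a reinstall of the cluster, so the service groups keep running throughout. A hedged sketch is below; the node name newnode and group name appsg are placeholders, and the exact installer script name depends on the VCS version and media you have.

# From a node already in the cluster, run the installer's add-node flow
# (script name varies by version, e.g. installvcs<NN> under /opt/VRTS/install)
/opt/VRTS/install/installvcs -addnode
# Make the new node eligible for the service groups the old secondary runs
haconf -makerw
hagrp -modify appsg SystemList -add newnode 1
haconf -dump -makero
# After test failovers succeed, remove the old secondary from the SystemList
# and from the cluster with the installer's remove-node option.

The only downtime is whatever your test failover to the new node takes; the primary keeps serving while the new node is joined and validated.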
High availability Diagram review requested

Dear Tech Gurus, could you please review the attached diagram of high availability for Enterprise Vault? I wanted you professionals to review it once before I present it. Thank you in advance for your assistance and feedback. Regards, Gautam