NEW RELEASE: Introducing Symantec Storage Foundation High Availability and Symantec ApplicationHA 6.1 for Windows
April 7th, 2014: Symantec today announces the release of Symantec Storage Foundation High Availability for Windows 6.1 and Symantec ApplicationHA 6.1 for Windows.
When life gives you lemons, make cliff notes!
At Symantec, we are always looking for ways to empower you to do more with less! What better way to do that than to create cliff notes that quickly guide you through some Storage Foundation (SF) and Veritas Cluster Server (VCS) tasks? Thanks to the Symantec Education team, attached are four quick reference guides! Go ahead and use them, and you will have plenty of time to make that lemonade!
Shutdown and restart cluster with VEA
Hi all, I have to shut down our hardware for a while and I'm looking for the best way to do that. I have two SAN storage arrays connected to two Microsoft cluster nodes (Windows Server 2003) with VEA 3.2. I plan to shut the systems down in this order:
1. Passive cluster node
2. Active cluster node
3. First SAN controller
4. Second SAN controller
5. Both storage arrays
6. FC switches
and then restart in this order:
1. FC switches
2. Both storage arrays
3. Both SAN controllers
4. Both cluster nodes
Is there anything else I have to take into account? Will VEA start resynchronizing for hours after the restart, or will it simply reconnect? I hope you can help me. Thank you!
Adding a new Veritas Cluster Server node with a different hardware specification
Dear Experts, I need your suggestion on the following. We currently have a two-node Veritas Cluster 6.2 setup running Windows 2008 R2, hosted on HPE DL380 G7 servers. We are planning a hardware refresh and want to move all workloads to new HPE DL380 G9/G10 servers, with Veritas Cluster 6.2 deployed on Windows 2008 R2. This is only a hardware refresh, with no application or OS upgrade. Oracle 10gR2 is currently configured in failover cluster mode, and the application binaries are installed on the C:\ drive of all cluster nodes. I would like to know whether I can deploy a new VCS 6.2 node on a new HPE DL380 G9/G10 server and add it to the existing cluster. If this is possible, what is the procedure, or will it not work at all? I tried to search for articles, but had no luck. Since the hardware architecture will be different, what will be the consequences when we fail over manually, or when we take the resource group offline and bring it online on the newly deployed server? I appreciate your feedback, answers, and any ideas for a new approach. Thanks, Rane
New to the Storage Management Community? Start Here...
The Symantec Connect site consists of six communities, including Storage and Clustering. This blog provides an overview of the key information you need to participate in the Connect Storage and Clustering community and to navigate the Symantec Connect site.
On Windows, how to import a disk group made up of cloned disks on the same host as the original disk group
On UNIX we can run: vxdg -n newckdg -o useclonedev=on -o updateid import ckdg, or write a new UDID to a disk using "vxdisk updateudid" or "vxdisk set clone=on". I do not see these command options on Windows, so how do you import a disk group containing cloned disks on Windows? I did not find any posting on this topic. Thanks.
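For reference, here is a minimal sketch of the UNIX-side workflow the question describes, assuming a cloned copy of disk group ckdg is visible on the same host (the disk name disk_1 is only a placeholder):

# Import the cloned copy under a new name and regenerate its disk group ID
vxdg -n newckdg -o useclonedev=on -o updateid import ckdg
# Alternatively, write a new UDID to an individual cloned disk before importing
vxdisk updateudid disk_1
# Or explicitly flag an individual disk as a clone
vxdisk set disk_1 clone=on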
Questions about Volume Manager
Hello everyone, I have two questions about the Volume Manager within InfoScale Storage Foundation: 1) Is the Volume Manager supported on Windows Server 2012 R2 with an Oracle database? 2) Did I understand correctly that, if you build a volume from e.g. 8 SAN LUNs, you can expand the volume by growing those 8 LUNs on the SAN side, without having to recreate the whole volume? (In other words, can the volume be expanded by increasing the size of the existing LUNs?) Thanks in advance! Regards, Ville
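On the second question, the usual sequence on Storage Foundation for Windows is to rescan after the array-side resize and then grow the volume into the new space. The following is only a hedged sketch: the disk group name OraDG, the volume name OraVol, and the length argument are placeholders, and the exact vxassist syntax should be verified against the documentation for your InfoScale version.

rem Rescan so Volume Manager sees the new LUN sizes
vxassist rescan
rem Grow the existing volume into the newly available space
vxassist -g OraDG growby OraVol <length>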
How to add virtual fencing disk to KVM guest
Hi, I'm trying to install Veritas Storage Foundation Cluster File System HA 6.0.3 on 6 KVM RHEL guests. I installed RHEL 6.4 and created 3 virtual fencing disks using qemu-img. I added them to the node as SCSI disks and ran vxdisksetup on them. Now, when I try to configure fencing, I get the error below. I'm wondering whether anyone has installed Cluster File System in a KVM guest environment and could tell me how to add the virtual fencing disks so that fencing configuration succeeds.

This is what I tried in the KVM guest definition:

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/home/VM/VMImages/IOFencing1.img'/>
  <target dev='sda' bus='scsi'/>
  <shareable/>
  <alias name='scsi0-0-0-0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/home/VM/VMImages/IOFencing2.img'/>
  <target dev='sdb' bus='scsi'/>
  <shareable/>
  <alias name='scsi0-0-0-1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/home/VM/VMImages/IOFencing3.img'/>
  <target dev='sdc' bus='scsi'/>
  <shareable/>
  <alias name='scsi0-0-0-2'/>
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>

Here is what the disks look like:

[root@node1 installsfcfsha601-201605190528YQa]# vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:cdsdisk    -            -            online
disk_1       auto:cdsdisk    -            -            online
disk_2       auto:cdsdisk    -            -            online
disk_3       auto:none       -            -            online invalid
disk_4       auto:none       -            -            online invalid
vda          auto:none       -            -            online invalid

[root@node1 installsfcfsha601-201605190528YQa]# cat /etc/vxfentab
#
# /etc/vxfentab:
# DO NOT MODIFY this file as it is generated by the
# VXFEN rc script from the file /etc/vxfendg.
#
/dev/vx/rdmp/disk_0 QEMU%5FQEMU%20HARDDISK%5FDISKS%5Fdrive-scsi0-0-0-0
/dev/vx/rdmp/disk_1 QEMU%5FQEMU%20HARDDISK%5FDISKS%5Fdrive-scsi0-0-0-1
/dev/vx/rdmp/disk_2 QEMU%5FQEMU%20HARDDISK%5FDISKS%5Fdrive-scsi0-0-0-2

The error I get:

kernel: I/O Fencing DISABLED! VXFEN INFO V-11-1-35 Fencing driver going into RUNNING state
kernel: GAB INFO V-15-1-20032 Port b closed
kernel: GAB INFO V-15-1-20229 Client VxFen deiniting GAB API
kernel: VXFEN INFO V-11-1-36 VxFEN configured at protocol version 30
kernel: GAB INFO V-15-1-20230 Client VxFen inited GAB API with handle ffff8803b8288ac0
kernel: GAB INFO V-15-1-20036 Port b[VxFen (refcount 2)] gen 93aa25 membership 0
kernel: GAB INFO V-15-1-20038 Port b[VxFen (refcount 2)] gen 93aa25 k_jeopardy ;12345
kernel: GAB INFO V-15-1-20040 Port b[VxFen (refcount 2)] gen 93aa25 visible ;12345
kernel: VXFEN WARNING V-11-1-12 Potentially a preexisting split-brain. Dropping out of cluster. Refer to user documentation for steps required to clear preexisting split-brain.
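One point worth flagging: disk-based VxFEN fencing depends on SCSI-3 persistent reservations, and qemu-img file-backed disks attached over the emulated SCSI bus do not normally pass SCSI-3 PR commands through to the guest. A commonly documented libvirt approach, shown here only as a hedged sketch with placeholder device paths, is to present each coordinator disk to every guest as a SCSI LUN on a virtio-scsi controller with unfiltered SG_IO, so the guest itself can issue the reservation commands:

<!-- One virtio-scsi controller per guest -->
<controller type='scsi' index='0' model='virtio-scsi'/>
<!-- Repeat one <disk> element per coordinator LUN; the source path is a placeholder
     for a shared block device visible on the KVM host -->
<disk type='block' device='lun' sgio='unfiltered'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/coordinator-lun-1'/>
  <target dev='sdb' bus='scsi'/>
  <shareable/>
</disk>

Treat this strictly as a starting point: the "preexisting split-brain" warning can also persist until the coordinator disks themselves hold no stale reservation keys.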