Installation of Storage Foundation on Win 2003 Ent. R2
Hi, we have two servers, each running two non-standard applications. The operating system is Windows 2003 Ent. R2, with FC-based storage connected to both servers. We now want a DR site with host-to-host replication. Will this solution work, and are there any cautions to take care of? I came across some docs saying that the C drive needs to be on a different array. Is that so? Is there anything else to take care of? Alternatively, can we use Backup Exec to take a backup and create a DR site that way? Please help. :) Thanks.
Installing Veritas Cluster Server 6.0 (with VVR and GCO) and Symantec Endpoint Protection Manager on the same server
Hi everybody! I have two clusters, A and B, with a single node in each (I am using the VVR and GCO components). I am wondering about the possibility of installing VCS 6.0 and SEPM 11 on one server (the single member of cluster A), and installing VCS 6.0 and the SEP 11 client on the other server (a member of cluster B). If so, what should I take care of to avoid the two products interfering with each other? Best regards, Marlon
Storage Foundation vs VMware and RDMs
Hi, we need to share a disk between a physical server and a virtual server. My proposed solution is to use Storage Foundation to share the RDM disk between the physical and virtual servers. However, our VMware team is completely against using RDMs; they say it is not best practice and can break VMware. Is that true? Or maybe there is a better way of sharing a disk between physical and virtual servers? Regards, Marius Gordon
MSSQL 2012 cluster High Availability on VMware
Hi, I would like to know when SQL 2012 will be supported by the HA Console wizard. We need to implement VCS with SQL 2012 in a VMware environment so we can use VCS with vMotion, DRS, etc. I know that I can configure the VCS cluster manually without a wizard, but this requires in-depth knowledge of VCS and a high-level overview of the process, so currently this is not an option. Thanks!
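For anyone attempting the manual route in the meantime, the sketch below shows the general shape of building a service group from the VCS command line. The group, resource, and node names are placeholders, and the exact SQL Server agent type name depends on the agent version installed, so treat this as an outline to check against the bundled agents guide rather than a working configuration.

    rem Open the VCS configuration for writing
    haconf -makerw
    rem Create the service group and list the nodes allowed to host it
    hagrp -add SQL2012_SG
    hagrp -modify SQL2012_SG SystemList NODE1 0 NODE2 1
    rem Add storage resources using the usual SFW HA bundled types
    hares -add SQL2012_DG VMDg SQL2012_SG
    hares -add SQL2012_MountV MountV SQL2012_SG
    rem Link them so the disk group is online before the mount
    hares -link SQL2012_MountV SQL2012_DG
    rem Save and close the configuration
    haconf -dump -makero

The SQL Server resource itself (and its dependencies on the IP, Lanman and MountV resources) would be added the same way once the correct agent type for SQL 2012 is confirmed.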
Veritas Storage Foundation Basic 32-bit required
Hi all, I am doing experiments with Windows Server 2003 EE 32-bit. I want to use Veritas Storage Foundation Basic with 32-bit Windows. Can anybody please let me know if a 32-bit version of SFW Basic is available? Thanks
VM backup failing after installing SF Basic on Hyper-V Core server
Hi all, we have a Hyper-V server with 10 virtual machines and use NetBackup 7.1.0.4 as the backup software. Before installing SF Basic 6.0, backups of all VMs were going fine. Now the backup of every VM that is in the running state fails with error: Snapshot error encountered (156). Backups of VMs that are not running still succeed. I installed SF Basic on a second Hyper-V server and got the same situation; after I uninstalled SF Basic from that server, all backups worked fine again. In the VEA console I can see the snapshots for the VM volumes being created, and after 2 minutes they are deleted (screenshot in attachment). This is the error in the bpfis log from the NetBackup client:

10:58:01.940 [996.2672] <2> onlfi_vfms_logf: INF - snapshot services: vss:Thu May 10 2012 10:58:01.940000 <Thread id - 2672> VSS API ERROR:- API [QueryStatus] status = 42309 [VSS_S_ASYNC_PENDING]
10:58:31.332 [996.5336] <2> send_keep_alive: INF - sending keep alive
10:59:32.174 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:00:33.017 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:01:33.860 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:02:34.702 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:03:35.544 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:04:36.385 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:05:17.632 [996.4896] <2> onlfi_vfms_logf: INF - snapshot services: vss:Thu May 10 2012 11:05:17.632000 <Thread id - 4896> VssNode::getSelectedWriterStatus: GetWriterStatus FAILED for Selected writer [Microsoft Hyper-V VSS Writer], writer is in state [5] [VSS_WS_WAITING_FOR_BACKUP_COMPLETE]. hrWriterFailure [800423f4] [-2147212300]
11:05:17.632 [996.4896] <2> onlfi_vfms_logf: INF - snapshot services: vss:Thu May 10 2012 11:05:17.632000 <Thread id - 4896> VssNode::make getSelectedWriterStatus FAILED
11:05:17.632 [996.4896] <2> onlfi_vfms_logf: INF - vfm_freeze_commit: vfm_method_errno=[38]
11:05:17.632 [996.4896] <32> onlfi_fim_split: FTL - VfMS error 11; see following messages:
11:05:17.632 [996.4896] <32> onlfi_fim_split: FTL - Fatal method error was reported
11:05:17.632 [996.4896] <32> onlfi_fim_split: FTL - vfm_freeze_commit: method: Hyper-V, type: FIM, function: Hyper-V_make

Does anyone know what the problem could be? Are there any debug logs in Storage Foundation that could help me with this, and where can I enable/find them?
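Since the log shows the Microsoft Hyper-V VSS Writer stuck in VSS_WS_WAITING_FOR_BACKUP_COMPLETE, one diagnostic step (my suggestion, not from the original post) is to query the VSS state directly on the Hyper-V host right after a failed backup:

    rem List all VSS writers with their current state and last error
    vssadmin list writers
    rem List the registered VSS providers; if SF Basic registered its own
    rem provider, snapshots may be routed through it instead of the default one
    vssadmin list providers

Comparing the provider list before and after installing SF Basic would show whether the product changed which VSS provider handles the Hyper-V volume snapshots.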
Multiple Shares issue
We have four Windows 2008 R2 SP1 SFW 5.1 HA clusters sharing files. They all have the same configuration in terms of service pack, SFW version (5.1 CP8) and Windows 2008 R2 SP1; however, the way sharing works differs between these clusters. If SFW creates a share on the cluster 1 nodes, the share is also available when addressed by the node hostname. For example, we create a virtual name with a Lanman resource, \\VIRTUAL01\SHARE01. When the share resource is brought up on cluster #1, we can access SHARE01 using \\CLUSTER01\SHARE01 as well as \\VIRTUAL01\SHARE01. When the same share resource is brought up on cluster #2, we cannot access it via \\CLUSTER02\SHARE01 and can only do so via \\VIRTUAL01\SHARE01. CLUSTER01 and CLUSTER02 are the physical nodes' hostnames. There is also another difference: when Storage Foundation creates a Lanman virtual name on cluster #1, we can see all the root Windows drives under that virtual name, e.g. \\VIRTUAL01\c$. When the same virtual name is created on cluster #2, we cannot access the root drives. This essentially leads to interesting scenarios: on the node where SFW does not mix with Windows sharing, we can create two identical share names pointing to different locations. E.g. on cluster node 2 we can create "SHARE01" pointing to c:\Dir1 from Windows, and also bring up a virtual cluster resource and create "SHARE01" pointing to D:\Dir2, accessible only via the virtual Lanman resource name created by Storage Foundation. When looking at Share and Storage Management from Windows, we can see both identically named shares pointing to different locations. We cannot do this on cluster 1, as creating the same share name conflicts with the existing share name. We are trying to nail this down, but so far we have gone through the setup (which was done from the same OS image, with the same features enabled), server policy, and the SFW 5.1 version and updates, all to no avail. We need to understand this behaviour and what it could be related to. Any feedback is appreciated. Thank you.
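One way to compare the two clusters (a generic suggestion, using the names from the post as placeholders) is to enumerate what each node actually publishes, both under its own hostname and under the virtual name, while the share resource is online:

    rem On each physical node, list the shares published by the local server service
    net share
    rem From another machine, list what is visible through each name
    net view \\CLUSTER01
    net view \\CLUSTER02
    net view \\VIRTUAL01

If the share appears in the local net share output on cluster 1's nodes but not on cluster 2's, that would indicate the difference lies in whether SFW registers its shares with the local server service on each cluster.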
Veritas and Windows iSCSI initiator
Hi all, I have only one question. I have Veritas Enterprise Administrator 3.2 running on my Windows 2003 cluster with some volumes. Now I would like to install the Microsoft iSCSI initiator to present a NAS to the cluster. Is this possible, or are there any issues between these two applications? Thank you
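If the combination turns out to be supported (worth confirming against the hardware/software compatibility list for your SFW release first), the basic console steps with the Microsoft initiator on Windows 2003 look roughly like this; the portal address and target name are placeholders:

    rem Register the iSCSI target portal exposed by the NAS
    iscsicli QAddTargetPortal 192.168.1.50
    rem List the targets discovered through that portal
    iscsicli ListTargets
    rem Log in to a discovered target (quick-connect form)
    iscsicli QLoginTarget iqn.2012-01.example:target0

Once the session is logged in, the new LUN should appear as an ordinary disk that VEA can rescan and bring under Volume Manager control.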
V-76-58645-585: Failed to reserve a majority of disks in cluster dynamic disk group
Hi everyone, I am getting the following error when trying to import a disk group as a cluster disk group: V-76-58645-585: Failed to reserve a majority of disks in cluster dynamic disk group. The disk group can be imported as a dynamic disk group, but not as a cluster disk group. The platform is Windows Server 2003 SP2 32-bit; the SFW HA version is 5.1 SP2. I have followed the steps in this technote (http://www.symantec.com/business/support/index?page=content&id=TECH88082) but the issue still persists. The hardware is a small PC with three separate SATA hard disks (no RAID controller). The OS resides on one disk, while the other two disks are in a single DG. To troubleshoot, I also removed one disk from the DG, but the clustered import still fails. The cluster has a single node only. I have enabled the UseSystemBus attribute as well, and tried the force import option too. Is there a hardware limitation in place here? Does anyone have any suggestions (to isolate the issue)? Thanks. WBR, SW
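To help isolate this, it may be worth capturing the disk group and disk state from the SFW command line before and after the failed import; both commands below are part of the standard SFW CLI, though the exact output fields vary by release:

    rem Show all dynamic disk groups known to SFW and their import state
    vxdg list
    rem Show each disk, its group membership and its bus type; this matters
    rem here because cluster disk groups rely on SCSI reservations, which plain
    rem SATA disks without a supported controller generally cannot honour
    vxdisk list

The reservation point is the likely answer to the hardware question: the error is reported when SFW cannot place a SCSI reservation on a majority of the disks, and desktop SATA disks typically do not support the required reservation commands.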
SFW Dynamic Multi Pathing Feature required for MSCS setup with MS MPIO?
Hi, having implemented a two-node Server 2008 R2-based MSCS with multipathing (MS MPIO), I experienced some problems creating a cluster dynamic disk group while configuring SFW. Doing some tests, also with the SCSIcmd tool, I was able to identify the multipathing as a potential cause of the problems. I found the following page: https://www-secure.symantec.com/connect/articles/veritas-storage-foundation-windows-dynamic-multi-pathing-competitive-comparisons but it doesn't state that the SFW Dynamic Multi-Pathing feature would generally be a REQUIREMENT to operate SFW in an MSCS. So my first question is whether the Symantec SFW Dynamic Multi-Pathing feature is required to operate my setup. The second question is what I can do if my disk arrays aren't listed in the DSM support list.
Environment:
Nodes: 2x Dell PowerEdge T710 servers with Broadcom 10G dual-port iSCSI HBAs; Windows Server 2008 R2 SP1, fully patched.
Storage: EonStor InforTrend S16E-G1240 disk arrays, each configured as RAID5; EonPath (the InforTrend DSM for MPIO) is installed; node addressing is done via Microsoft iSCSI initiators.
Both the Microsoft cluster and the MPIO setup work perfectly without SFW. SFW is required for storage virtualization and VVR. Thanks in advance! Bob
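As a diagnostic for whether MS MPIO itself is getting in the way, the built-in mpclaim tool on Server 2008 R2 shows how each LUN is currently claimed; this is a generic suggestion rather than anything from the thread, and the disk number is a placeholder:

    rem Summary of MPIO-managed disks, their load-balance policy and path counts
    mpclaim -s -d
    rem Detailed path listing for one MPIO disk (disk number is a placeholder)
    mpclaim -s -d 0

If the LUNs intended for the cluster disk group show multiple active paths here, that supports the suspicion that SFW without its own DMP sees the multipathed devices in a way it cannot reserve correctly.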