VM backup failing after installing SF Basic on Hyper-V Core server
Hi all,

We have a Hyper-V server with 10 virtual machines and use NetBackup 7.1.0.4 as our backup software. Before installing SF Basic 6.0, backups of all VMs completed fine. Now the backup of every VM that is in a running state fails with the error: Snapshot error encountered (156). Backups of VMs that are not running still complete fine.

I installed SF Basic on a second Hyper-V server and got the same behaviour. After I uninstalled SF Basic from that server, all backups worked fine again.

In the VEA console I can see the snapshots for the VM volumes being created, and after 2 minutes they are deleted (screenshot in attachment).

This is the error in the bpfis log from the NetBackup client:

10:58:01.940 [996.2672] <2> onlfi_vfms_logf: INF - snapshot services: vss:Thu May 10 2012 10:58:01.940000 <Thread id - 2672> VSS API ERROR:- API [QueryStatus] status = 42309 [VSS_S_ASYNC_PENDING]
10:58:31.332 [996.5336] <2> send_keep_alive: INF - sending keep alive
10:59:32.174 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:00:33.017 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:01:33.860 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:02:34.702 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:03:35.544 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:04:36.385 [996.5336] <2> send_keep_alive: INF - sending keep alive
11:05:17.632 [996.4896] <2> onlfi_vfms_logf: INF - snapshot services: vss:Thu May 10 2012 11:05:17.632000 <Thread id - 4896> VssNode::getSelectedWriterStatus: GetWriterStatus FAILED for Selected writer [Microsoft Hyper-V VSS Writer], writer is in state [5] [VSS_WS_WAITING_FOR_BACKUP_COMPLETE]. hrWriterFailure [800423f4] [-2147212300]
11:05:17.632 [996.4896] <2> onlfi_vfms_logf: INF - snapshot services: vss:Thu May 10 2012 11:05:17.632000 <Thread id - 4896> VssNode::make getSelectedWriterStatus FAILED
11:05:17.632 [996.4896] <2> onlfi_vfms_logf: INF - vfm_freeze_commit: vfm_method_errno=[38]
11:05:17.632 [996.4896] <32> onlfi_fim_split: FTL - VfMS error 11; see following messages:
11:05:17.632 [996.4896] <32> onlfi_fim_split: FTL - Fatal method error was reported
11:05:17.632 [996.4896] <32> onlfi_fim_split: FTL - vfm_freeze_commit: method: Hyper-V, type: FIM, function: Hyper-V_make

Does anyone know what the problem could be? Are there any debug logs in Storage Foundation that could help me with this, and where can I enable/find them?
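One diagnostic step worth trying (my suggestion, not from the thread): check the state of the Hyper-V VSS writer on the host right after a failed job. The log above shows the writer stuck in VSS_WS_WAITING_FOR_BACKUP_COMPLETE, and the standard vssadmin utility will show whether it ever returns to a stable state, and whether the SF Basic VSS provider registered itself alongside the Microsoft one. From an elevated prompt on the Hyper-V host:

    # Query all VSS writers and their current state; look for the
    # "Microsoft Hyper-V VSS Writer" entry and its last error field
    vssadmin list writers

    # List registered VSS providers; after installing SF Basic there
    # should be a Veritas/Symantec provider next to the Microsoft
    # Software Shadow Copy provider
    vssadmin list providers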
Storage Foundation 5.1 and MSCS not working

Hi. We want to use Storage Foundation 5.1 to manage our dual-path disks. We are clustering MS BizTalk 2006, so we need to use MSCS to cluster BizTalk. The problem is that MSCS cannot see the disk group created in SF. We are using Windows Server 2003 Enterprise R2.

We installed SF first, and the disk groups appear fine. Then we tried to configure MSCS, and it could not see the disks. During our investigation we discovered we needed the MSCS option, so we added our "VRTS STORAGE FOUNDATION OPTION MICROSOFT CLUSTER 5.1 WIN FOR OS TIER ENTERPRISE EDITION STD LIC" license key, and it indicated that the MSCS option is available. However, when we go back to Add/Remove Programs to select this option, it does not appear as a choice in the add/remove SF options.

How should we proceed? Is there a separate plug-in we need to download, or should it appear in the list of options on the SF installation screen? Do we need to have MSCS configured before we install SF so it can detect it? Any advice would be appreciated.

Warren
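One quick check before rerunning the installer (my suggestion; I am assuming the vxlicrep license-reporting utility that normally ships with Storage Foundation is present on the node): confirm the new key actually registered the MSCS option.

    # Print all installed Veritas license keys and the features each
    # one enables; the Microsoft Cluster (MSCS) option should show as
    # Enabled if the key took effect
    vxlicrep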
shutdown and restart Cluster with VEA

Hi all, I have to shut down our hardware for a while and am looking for the best way to do it. I have two SANs connected to two physical Microsoft cluster nodes (Windows Server 2003) with VEA 3.2. I plan to shut the systems down in this order:

1. passive cluster node
2. active cluster node
3. first SAN controller
4. second SAN controller
5. both storage arrays
6. FC switches

and then restart them in this order:

1. FC switches
2. both storage arrays
3. both SAN controllers
4. both cluster nodes

Is there anything else I have to consider? Will VEA start a resync after the restart that runs for hours, or will it simply reconnect? I hope you can help me. Thank you!
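Before powering the nodes off, it is worth taking the cluster groups offline cleanly so nothing tries to fail over mid-shutdown. A minimal sketch using the standard cluster.exe utility on Windows Server 2003 (the group name is a placeholder):

    # Show the current state of all cluster groups
    cluster group

    # Take each group offline cleanly before shutting the nodes down
    # ("Cluster Group" is an example name)
    cluster group "Cluster Group" /offline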
How do I migrate from MSCS to SFWHA?

Hi, I'm in the middle of a proposal, and the customer has asked us to describe, step by step, how to migrate their MSCS (2003) cluster to SFW HA (VCS). Do you know if Symantec has a documented procedure? Any recommendations? The customer requires minimal impact and a smooth process. Thanks for your help.

Gonzalo Gomez
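Whatever procedure you end up following, a post-migration sanity check with the standard VCS command-line tools is cheap and easy to script (a sketch; the service group name is a placeholder):

    # Summary of cluster, node, and service group states
    hastatus -sum

    # State of a specific service group on every system
    # ("App_SG" is an example name)
    hagrp -state App_SG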
HA/DR With Veritas Volume Replication for Windows 2003/2008

Hello house, I am planning to install VCS 6.0 HA and DR with VVR on Windows in my office. We have two data centres (production and DR), and there will be replication from production to DR. I have implemented a similar project before on UNIX (Solaris): a global cluster with Veritas Volume Replicator configured, replicating the application disk group to the DR cluster. Is there any difference when doing this on Windows, especially in the areas below?

1. VCS compatibility list
2. MPxIO/DMP - I know DMP is enabled by default in VCS 6.0
3. Provisioning shared storage
4. Heartbeat communication (see the sketch at the end of this post)
5. Installation order, e.g. SF, VCS, CFS/CVM, VVR, and configure-only SF - which package should be installed in which order?

Regards,
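On the heartbeat point: as far as I know, VCS on Windows uses the same LLT/GAB stack as on Solaris, so the familiar verification commands should apply once the cluster is up (a sketch, assuming the VCS binaries are on the PATH):

    # Show LLT link status for all cluster nodes
    lltstat -nvv

    # Confirm GAB port membership (port a = GAB, port h = VCS engine)
    gabconfig -a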
Multiple Shares issue

We have four Windows 2008 R2 SP1 SFW 5.1 HA clusters sharing files. They all have the same configuration in terms of service pack, SFW 5.1 version (CP8), and Windows 2008 R2 SP1; however, the way sharing works differs between these clusters.

If SFW creates a share on the Cluster 1 nodes, the share is also available when addressed by the node hostname. For example, we create a virtual name with a Lanman resource: \\VIRTUAL01\SHARE01. When the share resource is brought up on cluster #1, we can access SHARE01 using \\CLUSTER01\SHARE01 as well as \\VIRTUAL01\SHARE01. When the same share resource is brought up on cluster #2, we cannot access it using \\CLUSTER02\SHARE01 and can only do so via \\VIRTUAL01\SHARE01. CLUSTER01 and CLUSTER02 are the physical nodes' hostnames.

There is also another difference. When Storage Foundation creates a Lanman virtual name on cluster #1, we can see all the root Windows drives under that virtual name, e.g. \\VIRTUAL01\c$. When the same virtual name is created on cluster #2, we cannot access the root drives.

This leads to interesting scenarios: on the nodes where SFW does not mix with Windows sharing, we can create two identically named shares pointing to different locations. E.g. on cluster node 2 we can create "SHARE01" pointing to c:\Dir1 from Windows, and also bring up a virtual cluster resource and create "SHARE01" pointing to D:\Dir2, accessible only via the virtual Lanman resource name created by Storage Foundation. When looking at Share and Storage Management from Windows, we see both identically named shares pointing to different locations. We cannot do this on cluster 1, as creating the same share name conflicts with the existing share name.

We are trying to nail this down, but have so far gone through the setup (which was done from the same OS image with the same features enabled), the server policy, and the SFW 5.1 version and updates - all to no avail. We need to understand this behaviour and what it could be related to. Any feedback is appreciated. Thank you.
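A way to compare the clusters node by node (my suggestion; the registry path is the standard LanmanServer location, but whether it explains the difference is an open question): dump the actual share list and the Server service parameters on each node and diff the output.

    # List every share the Server service is exposing on this node,
    # including ones created by SFW FileShare resources
    net share

    # Dump the Server service parameters (compare across all nodes)
    reg query HKLM\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters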
Storage Foundation Basic

Hi,

We have a temporary requirement to transfer large files in a short space of time between HP-UX and Windows. We have gigabit between the servers, but the transfer times are too long for the migration window we have, so I'm looking to see whether I can do something clever using VxFS. VxFS is the standard filesystem for HP-UX (but not VxVM - we use LVM). What I'd like to do, if possible, is as follows (see the sketch at the end of this post):

* Create a VxFS filesystem on an HP-UX 11.31 server, using disk layout version 7, which is supported by HP-UX and also, I believe, by VxFS 6.x.
* Populate the filesystem.
* Unmount and unpublish the SAN LUN, and re-publish it to a Windows server.
* Mount the filesystem.
* Unmount and unpublish the SAN LUN, and re-publish it to the HP-UX server.

Because Storage Foundation Basic doesn't support LVM, we would create the filesystem on the entire LUN. The question is: will Storage Foundation Basic be able to mount this filesystem directly from a whole SAN LUN? HP-UX doesn't support MBR partitions etc., nor do we have VxVM, so whole-disk is the only option. What I don't know is whether Windows will try to "stamp" on the disk because it has no partition table, or whether Storage Foundation will be able to mount it.

I did download Storage Foundation Basic, but the download is version 6.0.2, which only supports Windows 2012. If there's any chance of this working, I need to get hold of 6.0.1, which I understand supports 2008. Is there a link for this somewhere?

Thanks
Simon
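For what it's worth, the HP-UX side of the plan would look roughly like this (a sketch under the stated assumptions; the agile device path is a placeholder, and -o version=7 selects disk layout version 7):

    # Create a VxFS filesystem with disk layout version 7 directly on
    # the whole LUN (character device path is an example)
    mkfs -F vxfs -o version=7 /dev/rdisk/disk10

    # Mount it and populate the data before moving the LUN
    mount -F vxfs /dev/disk/disk10 /migrate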
SFW Dynamic Multi Pathing Feature required for MSCS setup with MS MPIO?

Hi, having implemented a two-node Server 2008 R2-based MSCS with multipathing (MS MPIO), I experienced some problems creating a dynamic disk cluster group while configuring SFW. Doing some tests, also with the SCSIcmd tool, I was able to identify multipathing as the potential cause of the problems. I found the following page: https://www-secure.symantec.com/connect/articles/veritas-storage-foundation-windows-dynamic-multi-pathing-competitive-comparisons but it doesn't state that the SFW Dynamic Multi-Pathing feature would generally be a REQUIREMENT to operate SFW on an MSCS.

So: my first question is whether the Symantec SFW Dynamic Multi-Pathing feature is required to operate my setup. The second question is what I can do if my disk arrays aren't listed in the DSM support list.

Environment:
Nodes: 2x Dell PowerEdge T710 servers with Broadcom 10G dual-port iSCSI HBAs; Windows Server 2008 R2 SP1, fully patched.
Storage: EonStor InforTrend S16E-G1240 disk arrays, each configured as RAID5; EonPath (the InforTrend DSM for MPIO) is installed; node addressing via Microsoft iSCSI initiators.

Both the Microsoft cluster and the MPIO setup work perfectly without SFW. SFW is required for storage virtualization and VVR. Thanks in advance!

Bob
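When comparing notes, it may help to capture how MPIO is claiming the EonStor LUNs before and after installing SFW (my suggestion; mpclaim ships with the MPIO feature on Server 2008 R2):

    # Show all MPIO-claimed disks and the DSM that owns each one
    mpclaim -s -d

    # Show path details for one MPIO disk (0 is an example disk
    # number taken from the previous listing)
    mpclaim -s -d 0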
Storage Foundation 5.1

Hi, I am using SFW HA for Windows with HP storage, but I need to replace the HP array with NetApp. To do this, I mirrored both volumes, and after the mirror completed successfully I broke it. When I broke the HP plex off the mirror, it became a separate volume. Now I want to move this old volume to another DG, and for that I need to split the DG, but when I try to split it I get an error. My question: before splitting the DG, can I take the DG down and then split it? I need a procedure.

Regards,
Sajid
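For reference, a sketch of the split from the SFW command line. It is an assumption on my part that vxdg split applies to your case, and all names below (SourceDG, NewDG, Harddisk5) are placeholders; the exact error message from your failed split would help pin down the real cause:

    # Inspect the source disk group before splitting
    vxdg -gSourceDG dginfo

    # Split the disks that carry the old HP volume into a new disk
    # group; the split set must be self-contained (every disk a
    # volume touches has to move together)
    vxdg -gSourceDG -nNewDG split Harddisk5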
Installing Veritas Cluster Server 6.0 (with VVR and GCO) and Symantec Endpoint Protection Manager in the same server

Hi everybody! I have 2 clusters, A and B, with a single node in each one (I am using the VVR and GCO components). So I am wondering about the possibility of installing VCS 6.0 and SEPM 11 on a single server that is a member of cluster A, and installing VCS 6.0 and the SEP 11 client on the other server (this server is a member of cluster B). If so, what should I take care of to avoid them interfering with each other?

Best Regards
Marlon
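Not an answer, but a cheap pre-check I would do (my suggestion): record which ports SEPM's services listen on before layering VCS/VVR/GCO onto the same box, since port conflicts between the management consoles and the replication/GCO daemons are the most likely source of interference.

    # List listening TCP endpoints with owning process IDs, then map
    # PIDs to service names; run before and after installing SEPM
    netstat -ano | findstr LISTENING
    tasklist /svc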