[Snapshot Manager] Inconsistency between Cloud and Storage sections
Hello! Looking for help, please. My situation is the following: I was faced with an environment with an old CloudPoint server that failed during upgrade, resulting in the loss of its images and configuration. After a fresh installation of a new Snapshot Manager 10.3 VM, I promptly configured the Cloud section of the primary server's Web UI and added the provider configuration (Azure). All the required Azure permissions have been granted to the Snapshot Manager. Protection plans are created and the assets to protect are selected.

The problem is that even though the jobs complete with status 0, I am unable to find any recovery points for the assets. While investigating, I found under Storage -> Snapshot Manager that the primary server is configured as a snapshot server with the old version (10.0). This came from the old configuration and I have no idea why it is still there. Trying to connect fails with error code 25, as does retrieving version information. Trying to add the new Snapshot Manager results in an "Entity already exists" error message. Could this storage configuration be related? If so, any suggestions on how to fix it? (I am also unable to delete the old CloudPoint from the Web UI, but it is disabled.)

Primary server version: 10.3
New Snapshot Manager: 10.3
Old CloudPoint: 10.0, already decommissioned.

Thank you!
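For reference, one way to see exactly which Snapshot Manager / CloudPoint servers the primary still has registered (including the stale 10.0 entry) is to query its REST API. Below is a minimal sketch only: the login payload is the standard NetBackup API login, but the snapshot-manager listing path is an assumption — check the API reference for your NetBackup version before relying on it. Hostname and credentials are placeholders.

```python
# Rough sketch: list Snapshot Manager entries known to the primary server.
# ASSUMPTION: SNAPSHOT_MGR_ENDPOINT is a placeholder path -- verify it against
# the NetBackup 10.3 API documentation before use.
import requests

PRIMARY = "https://primary.example.com:1556/netbackup"            # placeholder host
SNAPSHOT_MGR_ENDPOINT = "/config/servers/snapshot-mgmt-servers"   # assumed path, verify

def get_token(user: str, password: str, domain: str) -> str:
    """Authenticate against the NetBackup API gateway and return a JWT."""
    resp = requests.post(
        f"{PRIMARY}/login",
        json={"userName": user, "password": password,
              "domainName": domain, "domainType": "NT"},
        verify=False,  # lab only; use the proper CA bundle in production
    )
    resp.raise_for_status()
    return resp.json()["token"]

def list_snapshot_managers(token: str) -> None:
    """Print every Snapshot Manager entry the primary server knows about."""
    resp = requests.get(
        f"{PRIMARY}{SNAPSHOT_MGR_ENDPOINT}",
        headers={"Authorization": token},
        verify=False,
    )
    resp.raise_for_status()
    for entry in resp.json().get("data", []):
        print(entry)

if __name__ == "__main__":
    jwt = get_token("apiuser", "secret", "EXAMPLEDOMAIN")
    list_snapshot_managers(jwt)
```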
Does Infoscale Storage (VVR) support cascaded space-optimized snapshot?

Configuration: InfoScale Storage 8.0 on Linux.

InfoScale Storage Foundation supports cascaded snapshots of a volume using the vxsnap infrontof= attribute. The InfoScale Storage (with Volume Replicator) documentation does not describe cascaded snapshots, and the vxrvg man page has no infrontof attribute. Does that mean cascaded space-optimized snapshots are not supported/permitted on an RVG?
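To make the contrast in the question concrete, here is a small sketch of the per-volume cascaded, space-optimized snapshot the vxsnap infrontof= attribute provides; the RVG-level vxrvg snapshot command, as noted above from its man page, exposes no equivalent attribute. Disk group, volume, snapshot, and cache-object names are placeholders, and the exact attribute string should be checked against `man vxsnap` on InfoScale 8.0 before use.

```python
# Sketch only: issue the per-volume cascaded space-optimized snapshot command
# that the question refers to. All object names are placeholders.
import subprocess

DG = "datadg"            # placeholder disk group (the one backing the RVG)
VOL = "datavol"          # placeholder data volume
FIRST_SNAP = "snap1_datavol"
SECOND_SNAP = "snap2_datavol"
CACHE_OBJ = "snapcache"  # placeholder cache object for space-optimized snapshots

def run(cmd: list[str]) -> None:
    """Echo and run a command, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Second space-optimized snapshot cascaded in front of the first one.
# This is a per-volume vxsnap operation, not an RVG-wide vxrvg one.
run([
    "vxsnap", "-g", DG, "make",
    f"source={VOL}/newvol={SECOND_SNAP}/infrontof={FIRST_SNAP}/cache={CACHE_OBJ}",
])
```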
File Server - 1.5TB
The file server has the operating system disk and the file data.

In the backup selection it is not possible to choose only one of them, is it?
A:
Is it recommended to take a snapshot of a disk of this size?
A:
What would be the recommended maximum size for taking a snapshot?
A:
One solution would be to create an agent-based backup with two jobs: one job only for the files and one for the operating system, for DR purposes. What do you think?
A:
NetBackup error VM backup ESX 6.0

Hi, my VM backup is failing with error code 4274, but I have never seen this error before: "vim.fault.GenericVmConfigFault <33>", or this one: "status=36 (SYM_VMC_TASK_REACHED_ERROR_STATE). Error Details: [A general system error occurred: vim.fault.GenericVmConfigFault]." I cannot find the exact error in the Veritas or VMware forums. Does anyone have experience with the same problem?

06/12/2017 21:12:52 - Info bpbrm (pid=14468) INF - vmwareLogger: WaitForTaskComplete: A general system error occurred: vim.fault.GenericVmConfigFault <33>
06/12/2017 21:12:52 - Info bpbrm (pid=14468) INF - vmwareLogger: WaitForTaskComplete: SYM_VMC_ERROR: TASK_REACHED_ERROR_STATE
06/12/2017 21:12:52 - Info bpbrm (pid=14468) INF - vmwareLogger: removeVirtualMachineSnapshot: SYM_VMC_ERROR: TASK_REACHED_ERROR_STATE
06/12/2017 21:12:52 - Critical bpbrm (pid=14468) from client PEPKPDB011: FTL - vSphere_freeze: Unable to remove virtual machine snapshot, status=36 (SYM_VMC_TASK_REACHED_ERROR_STATE). Error Details: [A general system error occurred: vim.fault.GenericVmConfigFault].
06/12/2017 21:12:52 - Critical bpbrm (pid=14468) from client PEPKPDB011: FTL - vfm_freeze: method: VMware_v2, type: FIM, function: VMware_v2_freeze
06/12/2017 21:12:52 - Critical bpbrm (pid=14468) from client PEPKPDB011: FTL - vfm_freeze: method: VMware_v2, type: FIM, function: VMware_v2_freeze
06/12/2017 21:12:53 - Critical bpbrm (pid=14468) from client PEPKPDB011: FTL - snapshot processing failed, status 4274
06/12/2017 21:12:53 - Critical bpbrm (pid=14468) from client PEPKPDB011: FTL - snapshot creation failed, status 4274
06/12/2017 21:12:53 - Warning bpbrm (pid=14468) from client PEPKPDB011: WRN - ALL_LOCAL_DRIVES is not frozen
06/12/2017 21:12:53 - Info bpfis (pid=1544) done. status: 4274
06/12/2017 21:12:53 - end Application Snapshot: Create Snapshot; elapsed time 0:00:18
06/12/2017 21:12:53 - Info bpfis (pid=1544) done. status: 4274: Failed to remove virtual machine snapshot
06/12/2017 21:12:53 - end writing Operation Status: 4274
06/12/2017 21:12:53 - end Parent Job; elapsed time 0:00:18
06/12/2017 21:12:53 - begin Application Snapshot: Stop On Error Operation Status: 0
06/12/2017 21:12:53 - end Application Snapshot: Stop On Error; elapsed time 0:00:00
06/12/2017 21:12:53 - begin Application Snapshot: Cleanup Resources
06/12/2017 21:12:53 - end Application Snapshot: Cleanup Resources; elapsed time 0:00:00
06/12/2017 21:12:53 - begin Application Snapshot: Delete Snapshot
06/12/2017 21:12:54 - started process bpbrm (pid=16472)
06/12/2017 21:12:55 - Info bpbrm (pid=16472) Starting delete snapshot processing
06/12/2017 21:12:57 - Info bpfis (pid=11140) Backup started
06/12/2017 21:12:57 - Warning bpbrm (pid=16472) from client PEPKPDB011: cannot open C:\Program Files\Veritas\NetBackup\online_util\fi_cntl\bpfis.fim.PEPKPDB011_1497276755.1.0
06/12/2017 21:12:57 - Info bpfis (pid=11140) done. status: 4207
06/12/2017 21:12:57 - end Application Snapshot: Delete Snapshot; elapsed time 0:00:04
06/12/2017 21:12:57 - Info bpfis (pid=11140) done. status: 4207: Could not fetch snapshot metadata or state files
06/12/2017 21:12:57 - end writing Operation Status: 4207
Operation Status: 4274
Failed to remove virtual machine snapshot (4274)
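Since status 4274 means the backup-created snapshot could not be removed, a common first check is whether snapshots have been left behind on the VM. A minimal pyVmomi sketch for that is below; the vCenter hostname and credentials are placeholders, and the VM name is taken from the job log above.

```python
# Troubleshooting sketch only: list any snapshots still present on the VM
# named in the failing job. Hostname and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"   # placeholder vCenter
VM_NAME = "PEPKPDB011"            # client named in the job log

def walk(tree, depth=0):
    """Recursively print a VM's snapshot tree."""
    for snap in tree:
        print("  " * depth + f"{snap.name} (created {snap.createTime})")
        walk(snap.childSnapshotList, depth + 1)

def main() -> None:
    ctx = ssl._create_unverified_context()  # lab only; validate certs in production
    si = SmartConnect(host=VCENTER, user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        for vm in view.view:
            if vm.name == VM_NAME:
                if vm.snapshot is None:
                    print(f"{VM_NAME}: no snapshots present")
                else:
                    walk(vm.snapshot.rootSnapshotList)
    finally:
        Disconnect(si)

if __name__ == "__main__":
    main()
```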
Hello all. We have a 2012 R2 Hyper-V cluster, all patched to date. The cluster has 6 nodes, each one connected via FC to a 3PAR 7400c all-flash array. Each cluster node hosts about 20 VMs and 2 or 3 CSVs (so each VM role has direct access to its storage via FC, as the node is its own coordinator node). With that in mind, we installed BEX 15 FP5 on each cluster node so that we could back up the VMs directly through FC and not via Ethernet (it's way faster and we have a lot of data).

Since implementation, everything has been going smoothly (with occasional issues) on all cluster nodes except one. Every cluster node has either an FC tape library or a SAS drive (all LTO6/LTO5, all firmware up to date). This cluster node, VHT06, shows erratic behavior: it starts the granular job but hangs on a random VM in "Snapshot Processing". We can't cancel the job unless we stop the "vmms.exe" service or reboot the cluster node. If we manage to cancel the job, the VM gets stuck in "Backing up..." within Hyper-V Manager and can't reboot.

We thought it could be the tape library, so we changed it. Still the same issue. We even evicted and reprovisioned the cluster node two times. Backup through RAWS works fine.

Marco
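One thing routinely checked when a Hyper-V node hangs in snapshot processing and a VM sticks in "Backing up..." is the state of the VSS writers on that node, in particular the Hyper-V writer. The sketch below just wraps `vssadmin list writers`; it assumes Python is available on VHT06, and running the command directly in an elevated prompt works equally well.

```python
# Small sketch: parse `vssadmin list writers` (run elevated on the Hyper-V node)
# and print each VSS writer's state, so the Microsoft Hyper-V VSS Writer's
# condition is visible while a GRT job is hung in Snapshot Processing.
import re
import subprocess

def vss_writer_states() -> dict[str, str]:
    """Return {writer name: state} parsed from `vssadmin list writers` output."""
    out = subprocess.run(
        ["vssadmin", "list", "writers"],
        capture_output=True, text=True, check=True,
    ).stdout
    states: dict[str, str] = {}
    current = None
    for line in out.splitlines():
        name = re.search(r"Writer name:\s*'(.+)'", line)
        state = re.search(r"State:\s*(.+)", line)
        if name:
            current = name.group(1)
        elif state and current:
            states[current] = state.group(1).strip()
    return states

if __name__ == "__main__":
    for writer, state in vss_writer_states().items():
        print(f"{state:35} {writer}")
```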