BE 2014 Slow Hyper-V Incremental Backups
Hi all,

There seem to be a lot of threads regarding this problem; however, I've not found a definitive solution.

Environment: physical backup server running BE 2014 on Windows 2008 R2 (fully patched), 16 GB RAM, 2 x 6-core CPUs, backing up a mixture of physical and Hyper-V virtual Windows servers running Windows 2008/2012.

We recently upgraded to Backup Exec 2014, and our Hyper-V incremental backups now run extremely slowly, at around 200 MB/min. I understand that if I enable Microsoft incremental backups I need to upgrade my server to 2012 to get GRT backups, but I've not read anything that says this resolves the issue, and it may impact server performance due to a snapshot always running. Any help would be great.

Thanks, Paul.

BE2014 cannot restore NDMP data if that NDMP server no longer exists
I opened a case with Backup Exec support on 9/3/2014 regarding restoring NDMP data from servers that no longer exist. In a nutshell, we have lots and lots of historical NetApp/NDMP backups on tape of servers that have either been renamed or decommissioned. When I load those tapes into BE2014 and perform an inventory and catalog, it does not create any of the decommissioned NetApp servers, so I cannot select the server to perform the restore. Better yet, when I right-click on the actual tape and select restore, I get the message "No backup sets found". I can perform these same steps with the same tapes in BE2010 R3 without issues. If anyone else out there has NDMP backups of servers that no longer exist, I'd be curious whether you see the same problem.

Symantec was able to replicate my issue in their lab, and this was the response I got back when they closed the case:

"I am writing you in reference to the open case mentioned above. Symantec Corporation has acknowledged the issue you are experiencing is present in the current version of the product. Symantec Corporation is committed to product quality and satisfied customers. This issue is currently under investigation by Symantec Corporation. Pending the outcome of the investigation, this issue may be resolved by way of a service pack in current or future revisions of the software. However, this particular issue is not currently scheduled for any release. If you feel this issue has a direct business impact for you and your continued use of the product, please contact your Symantec Sales representative or the Symantec Sales group to discuss these concerns. For information on how to contact Symantec Sales, please see http://www.symantec.com"

I'm looking for any assistance I can get, for I cannot upgrade to BE2012 for the same reasons most of the community can't, and I cannot purchase licenses for BE2010 because Symantec only supports the prior version, which is BE2012 at this point. Any help is appreciated.
Chris Stanger

Backup Exec 2014: Application Error
Yesterday evening I upgraded my 2012 installation to 2014. The upgrade went fine. I manually started a VM backup job, and then this morning one of the BE services was down. I looked at the event logs and found that the application had crashed:

Faulting application name: beserver.exe, version: 14.1.1786.0, time stamp: 0x5371d6c1
Faulting module name: beserver.exe, version: 14.1.1786.0, time stamp: 0x5371d6c1
Exception code: 0xc0000409
Fault offset: 0x0000000000660b64
Faulting process id: 0x1514
Faulting application start time: 0x01cf80bb86e30700
Faulting application path: D:\Program Files\Symantec\Backup Exec\beserver.exe
Faulting module path: D:\Program Files\Symantec\Backup Exec\beserver.exe
Report Id: dd7d0a26-ecae-11e3-b916-2c59e54739cb

Also, I had to switch the interface to English (it was in French). I couldn't edit my jobs with the French interface; everything is offset when you edit the vCenter jobs.

My List of Problems With Backup Exec
I've used Backup Exec for years at numerous employers on numerous platforms, in both virtual and non-virtual environments, backing up everything from app servers to file servers, SQL servers to Exchange servers, domain controllers and VM snapshots. I've used Backup Exec v7, v9, 2010, 2012, and 2014 across every major version of Windows Server from 2000 through 2012 R2.

Backup Exec is the monster I love to hate. When it works, it works; but when it doesn't, it's infuriating. On one hand, I've seen it perfectly restore complex application servers to pre-crash states. On the other, I've seen it completely unable to perform the simplest tasks.

This article is NOT intended to be a rambling complaint. Instead, I am posting it as an official record of my own experiences with the product. The following issues are not isolated; rather, they have been observed across many different platforms and clean installs, and are easily reproduced. Some are bugs. Others are just bad design. My hope is that the folks over at Veritas will take notice. At the very least, you may find you're not alone regarding one or more issues you've encountered yourself.

NOTE: Having said the above, it's possible that some issues are unique to my environment (that is, my network, my Active Directory domain, etc.). Many of these problems have been observed across domains, and all of them have been observed across multiple backup servers, but I won't rule out the possibility that some of these are unique to my domain (if that's possible).

GENERAL

When modifying a backup job containing multiple target systems, even minor changes (such as job name or scheduling changes) often result in 'Unable to create the Backup Items' or 'The method Backup Items was called with invalid arguments' error messages, forcing you to completely delete and recreate the backup job.
This is unacceptable, and is half the reason I rarely group backups for different servers (if I can't update the job later, I'm only asking for trouble).

Selecting 'Do not run a delayed catalog operation' suggests that a catalog operation won't be run. In fact, it merely ensures the catalog operation is run in parallel with the job itself. I understand the need to run the catalog, but this is a highly misleading option.

VSS provider selections are poorly implemented, especially with regard to Exchange backup jobs. When backing up (A) Exchange databases on (B) a virtual server, you must select 'System – Use Microsoft Software Shadow Copy Provider'. If your server isn't virtual, or if you're backing up file system and System State data on a virtual server, you must select 'Automatic – Allow VSS to select the snapshot provider'. This is an incredibly obscure setting that isn't well documented. The software knows the Exchange server is virtual, and it knows what I'm backing up. Why can't it just use that information to select the best VSS provider?

The jobs don't offer nearly enough granular control over different resources. For instance, I can't set up one Exchange backup job to back up 5 databases in full and another 3 as incrementals. It's all or nothing. Added flexibility in the job design would be nice.

JOB MONITOR FILTERS

Filters are a good idea, but they inevitably disappear without any explanation. This happens on both servers and client-side installations.

The filters are poorly designed insofar as their categorization options. For instance, you can build a filter to show you either (A) active or (B) pending backup jobs, but you can't build one filter to show you both.

MICROSOFT EXCHANGE

You cannot perform granular Exchange restores from an incremental Exchange backup, even with GRT enabled. This seems to be a bug.
[NOTE: Per comments added to this thread below, it has been pointed out that incremental GRT backups are not supported when using deduplication, but for B2D backups they should work fine. In my environment, while (A) I can verify that they do work when run against B2D storage devices, (B) the GRT processing operation often fails for one or more databases, rendering those emails non-restorable from a GRT standpoint, and thus rendering the GRT function itself useless. After observing this over a long period of time in my environment, my best estimate is that incremental GRT backups to B2D fail the GRT processing phase for between 40% and 50% of the databases backed up, whereas the failure rate drops to around 10% of databases when those same backup jobs are switched from incremental to full.]

Incremental Exchange backups often hang backup jobs, specifically during the cataloging operation.

There are no safeguards in place to prevent two jobs from backing up the same Exchange database at the same time. While not a bug, this would be a welcome improvement.

Exchange backup jobs often report as having completed successfully, but examination of the job logs shows that one or more backup selections were not processed for GRT. Why do the jobs report a successful completion if the GRT aspect failed?

The option to use the search function to find objects backed up as part of an Exchange GRT backup (versus manually navigating databases/mailboxes/etc.) is not present after cataloging old media. I'm not sure if this is a limit tied to data restores from tape versus hard drives, or if it is related to reintroducing old media and cataloging.

The Exchange search options are not very useful. You can search for message title, size, and modified time (I assume this is the date/time the message was created), but you cannot search for things like recipients (TO, CC, BCC), sender, or message body.
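For comparison, the kind of richer search being asked for is easy to express. A minimal, purely illustrative Python sketch over a hypothetical list of message records (the field names are my own, not Backup Exec's):

```python
# Illustrative only: searching hypothetical message records by sender,
# recipient, or body text -- the criteria the product's search lacks.
messages = [
    {"subject": "Q3 budget", "sender": "alice@example.com",
     "to": ["bob@example.com"], "body": "Draft attached."},
    {"subject": "Lunch", "sender": "carol@example.com",
     "to": ["bob@example.com", "dave@example.com"], "body": "Noon?"},
]

def search(msgs, sender=None, recipient=None, body_contains=None):
    """Return messages matching every criterion that was supplied."""
    hits = []
    for m in msgs:
        if sender and m["sender"] != sender:
            continue
        if recipient and recipient not in m["to"]:
            continue
        if body_contains and body_contains.lower() not in m["body"].lower():
            continue
        hits.append(m)
    return hits

print([m["subject"] for m in search(messages, sender="alice@example.com")])
# → ['Q3 budget']
```

Nothing exotic: a handful of optional filter criteria combined with AND semantics would cover the "one email amidst tens of thousands" case.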
As a result, finding that one email lost amidst tens of thousands can become a very tedious task. Expanding on the current search options would be a welcome addition.

SCHEDULING

Every incremental/differential job must be directly tied to a corresponding full job. This robs you of tremendous flexibility when it comes to running a full backup once a week with multiple incremental backups on multiple schedules throughout the week. It also means that if you want to back up a specific piece of data on a server, you must back it up fully and incrementally/differentially outside of the usual backup job.

There is no way to link backup jobs outside of some fancy PowerShell commands, and even then, the GUI doesn't indicate that the jobs are linked. The app does a great job of visualizing different stages of a single job, but it would be great if it included the ability to link completely unrelated jobs and present this visually as well.

DUPLICATION

LTO duplication jobs behave very erratically. Consider the following scenario: you have a tape library with three tapes (let's call them tapes A, B, and C). Each tape has been partially written to but still has plenty of free space, and each is still appendable. Your goal is to copy the data off tape C over to either tape A or tape B, in order to consolidate data and use fewer tapes for that particular backup set. You browse the backup sets on tape C, highlight all the entries, then right-click and choose the duplication option.

[PROBLEM 1] Backup Exec doesn't allow you to target a specific LTO tape as the destination. Alright, I'll admit this is more of an annoyance than a problem, but it still complicates the ordeal and leads directly into...

[PROBLEM 2] Backup Exec will load the source tape, but then it might grab *ANY* tape as the target tape. So you don't know if your backups are going to be written to tape A or tape B.
Worse, they *MIGHT* even be written to another tape, possibly one not written to at all, even when you've told the job to append to an existing tape first. This logic results in lots of wasted tape space.

DEDUPLICATION

There's no way to enable client-side deduplication on a single job containing multiple servers. To use client-side deduplication you must separate every server into its own job.

Deduplication databases have a habit of not mounting after reboots, forcing numerous service stops/restarts, each of which can take 30-40 minutes. Or they don't mount at all. Either way, this introduces huge amounts of downtime during something as simple as a reboot. For those of you thinking the database or the backup server must be bad, I have observed this over several years across multiple backup servers and numerous deduplication databases.

Client-side deduplication works intermittently at best. A job may utilize client-side deduplication one day, then fail the next, then run successfully the day after that. Veritas techs have told me to just use server-side deduplication, since client-side deduplication doesn't do much anyway and the errors aren't worth the effort, but if this is the case, why does the app offer client-side dedup at all?

After importing an old deduplication database, you cannot write to it. All you can do is read from it. Why? Does this mean every time you build a new backup server and migrate a deduplication database you have to start building a new database from scratch? That's a huge waste of space that some admins can't afford. Not only that, but because Backup Exec only allows you to have one deduplication database at a time, the act of importing an old database shoots you in the foot, preventing you from backing up to a new deduplication database while you reference the old one.

RESTORES

After cataloging old media, the old server name appears in the 'Backup and Restore' server list.
However, you cannot initiate restores from here; the option is present but grayed out. Instead, you must manually browse to the loaded media, find the backup set in question, and perform the restore from there. It makes no sense to add the server to the server list if you can't use it, especially since when I'm done restoring I have to go in there and blow away the server anyway. It's a useless feature that adds more work and can lead you to believe you can't restore the server at all.

The Exchange GRT search function is only available when you right-click a server on the server list page and perform a restore from there. If you right-click the backup media itself (a must if you've recently cataloged old media), the search function doesn't exist.

Backup Exec provides no way of restoring Exchange mailbox data directly to a PST file. The only way to do this is to install the backup agent on a client system (probably a PC) with a copy of Outlook, add the system to your servers list, then perform a restore from Exchange to that client system. Not only does this mean you need a secondary PC in play to achieve the export, but the requirement that the version of Outlook be 2010 or earlier means that you need a specific PC. In my case, my team of techs have Outlook 2010 and 2013, depending on which tech you're talking about, and that means my 2010 tech is the only person whose PC supports this function. So every time I restore to PST I have to make sure he's around and plugged in, with no intention of rebooting any time soon or leaving the office. It's stupid to have to involve any client PC in the first place, but to further limit us by restricting versions of Outlook is insane.

SERVICES

Backup Exec services don't always start properly. Often you must configure them for a delayed start. This has been observed across multiple backup servers and multiple clean installs of Backup Exec.
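The delayed-start workaround can be scripted rather than clicked through services.msc. A sketch that only builds the `sc.exe` command lines instead of running them; the service names are assumptions based on a default Backup Exec install and should be verified locally with `sc query` first:

```python
# Build (but do not execute) the sc.exe commands that switch each Backup
# Exec service to delayed automatic start. The service names below are
# assumptions from a default install -- verify with "sc query" before use.
ASSUMED_SERVICES = [
    "BackupExecJobEngine",
    "BackupExecRPCService",         # the beserver.exe service
    "BackupExecDeviceMediaService",
    "BackupExecAgentBrowser",
]

def delayed_start_commands(services):
    # Note: sc.exe requires the space after "start=".
    return [f'sc.exe config "{name}" start= delayed-auto' for name in services]

for cmd in delayed_start_commands(ASSUMED_SERVICES):
    print(cmd)
```

Running the printed commands from an elevated prompt (or pushing them out via a startup script) saves repeating the Services-console dance on every rebuilt backup server.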
Oddly, this problem rarely arises immediately upon install, so there may be mitigating factors.

STORAGE DEVICES

Pending jobs claim available read operations against disk storage. For instance, let's say you have 1 tape drive and 1 B2D store. The B2D store allows 4 read/write operations at a time. Now let's say you have 5 jobs queued up to copy data from the tape drive to the B2D store. You'd think that since you have just 1 tape drive, you'd only use up 1 of your read/write operations, allowing Backup Exec to run other jobs against the B2D store in the meantime. You'd be wrong. Instead, Backup Exec eats up all 4 of the read/write operations for the B2D store, claiming each for one of the pending tape-to-B2D jobs, even though only 1 can run at a time because you only have 1 tape drive. This bottlenecks all your other jobs. Admittedly this is probably a rare circumstance, but in my environment I have to copy data from tapes to a B2D store, and the read/write limits combined with Backup Exec's poor way of reserving the operations mean that all my other jobs just sit there waiting until the tape drive finishes its jobs first.

Backup Exec only allows you to create one B2D storage device per drive letter. This is a ridiculous limitation.

LICENSING

When inputting new license keys into the software, the license count doesn't appear next to the product name. Instead, you must proceed to the next screen of the wizard, then use the Back button to return to the first page. Only then can you verify that the new license keys incorporated the correct counts. This is just sloppy. If the wizard can immediately detect the new license types based on the key, it should be able to detect the count as well. Instead, it makes you think your keys are bad unless you happen to go back in the wizard after you've proceeded to the next step.

License counts can easily get screwed up, even when you're adding the proper keys. Backup servers can 'lose' licenses for no good reason.
This is more common during upgrades, but it should never happen at all.

BACKUP AGENTS

Remote agents will occasionally become unresponsive, resulting in failed backup jobs.

When backing up Exchange databases directly to LTO tapes, jobs report as failed more than 90% of the time, resulting from a disconnect between the Backup Exec server and the remote agent. However, the job itself seems fine and the data is restorable. Backing up Exchange databases to non-LTO media doesn't result in these errors.

VMware based incrementals not working
Hi,

I'm having issues getting incremental backups of VMware virtual machines working using the AVVI agent:

V-79-57344-38324 - Incremental\Differential backup methods are not supported for this VM. The backup job failed, since the option 'to fall back to a full backup' was not selected.
V-79-57344-38366

Backup Exec 2014, vSphere 5.5. I've confirmed that the VM hardware version is appropriate and that CBT is enabled and working. Full backups have completed successfully. What could cause the above message and make incrementals impossible?

Thanks

Moving Backup Exec from SQL 2008 R2 to local SQLExpress
Hello,

System overview:

BKP01 - backup server running WS2008 R2 Enterprise SP1 x64 with Backup Exec 2014 Version 14.1 Rev. 1786 (64-bit)
SQL01 - SQL server running WS2008 R2 Standard SP1 x64 with MS SQL 2008 R2 (fully patched)

Current SQL Server instance: SQL01\MSSQLSERVER

I would very much like to move the BEDB to BKP01. Currently SQL Express is not installed on BKP01. I am also planning to upgrade BKP01 to BE 15 if that will give me a path to move the DB. Any advice welcome.

Backup Exec Upgrade 2012 to 2014 - Deduplication option error
Just an FYI for anyone running into this issue during an upgrade to BE 2014: I saw this exact same problem when upgrading our previous edition of Backup Exec to BE 2012, and now again when upgrading 2012 to 2014, and the fix is the same.

When running the installation for BE 2014, the wizard came to the point of selecting the installation type (Full / Trial) - the screens after entering licensing - and a message window appeared stating the following:

"Deduplication Option is no longer licensed in this upgrade. You may not remove the license during the upgrade. To continue, either add a serial number for Deduplication Option, upgrade as trial, or remove the option in the previous version, then continue the upgrade."

(Also outlined in this KB article: http://www.symantec.com/business/support/index?page=content&id=TECH185257 )

The fix was also the same. Open regedit and navigate to HKLM\SOFTWARE\Symantec\... Delete the \Puredisk\ registry key and its child \Puredisk\Agent\. (In our case, since we never installed, trialed, used, or licensed dedupe at all, these keys were completely empty of values.) Quit regedit and re-run the installation. The wizard should continue past this point and straight to the options screen.

This should allow you to install/upgrade BE 2014 with licenses the first time, without having to go through the trial install first.

Happy Upgrading.
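For the registry step described above, it is safer to export the key before deleting it. A minimal sketch that only generates the `reg.exe` command lines; note the post elides the full path under HKLM\SOFTWARE\Symantec, so the parent path here is a placeholder you must confirm in regedit first:

```python
# Generate (but do not execute) reg.exe commands to back up and then delete
# the Puredisk key described above. PARENT is a placeholder: the post elides
# the full subtree under HKLM\SOFTWARE\Symantec, so confirm it in regedit.
PARENT = r"HKLM\SOFTWARE\Symantec"  # placeholder; verify the real subtree

def puredisk_cleanup_commands(parent, backup_file=r"C:\Temp\puredisk-backup.reg"):
    key = parent + r"\Puredisk"  # deleting this key also removes \Puredisk\Agent
    return [
        f'reg.exe export "{key}" "{backup_file}" /y',  # back up first (/y overwrites)
        f'reg.exe delete "{key}" /f',                  # /f deletes without prompting
    ]

for cmd in puredisk_cleanup_commands(PARENT):
    print(cmd)
```

If the upgrade still fails after the deletion, the exported .reg file lets you restore the keys exactly as they were.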