Jobs stuck indefinitely in Queued Status
We have had an ongoing issue for about two months now, across three clean builds of Backup Exec 2012. We have opened numerous cases with Symantec to resolve it, and they initially claimed that HotFix 209149 (see http://www.symantec.com/business/support/index?page=content&id=TECH204116 ) corrects the issue. The issue is accompanied by Robotic Element errors stating that OST media is full, which take the "virtual" slot for the PureDisk or OST storage device offline. Restarting the services in BE only makes the problem worse and causes a snowball effect in which jobs constantly error in the ADAMM log files. Essentially, the jobs can never get a concurrency/virtual slot and they stay Queued forever.

I have seen others on this forum with this problem, and while the forum administrator seems to mark the threads as "Solved", they are not - I see the threads drop off with no resolution identified. Are other people having this problem? If so, how are you overcoming it? Once it starts, the environment is essentially dead in the water because the jobs never start (they sit Queued forever) - save for one concurrency slot, which for an environment our size is only a quarter of what we need.

Our environment: a CAS and 4 MMS servers on 2008 R2 with all patches applied, a 32 TB PureDisk volume on each MMS, a Data Domain DD-670 OST connection with OST plug-in 2.62 (28 TB), replicated catalogs, and duplicate jobs for optimized deduplication between MMSs. We run BE 2012 SP3 clean - we reinstalled with SP3 slipstreamed because Symantec said this problem could be fixed either by a manual database repair on their end or by a reinstall. Even though they did offer to "fix" the issue with a database repair, we chose the reinstall to validate whether SP3 truly fixes the issue. It is clear to us it does not. We are looking for anyone else who has had this problem to report in to this forum.
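The impact of losing those virtual slots can be illustrated with a toy queue model. This is a conceptual sketch only, not Backup Exec code - the `drain_time` function and its job list are invented for illustration - but it shows why dropping from four concurrency slots to one (or zero) makes the queue effectively never drain:

```python
def drain_time(job_durations, concurrency):
    """Toy scheduler: with N concurrency slots, how long until the
    queue drains? Returns total elapsed time in the same units as
    the job durations."""
    if concurrency < 1:
        return float("inf")  # no usable slots: jobs sit Queued forever
    slots = [0] * concurrency          # time at which each slot frees up
    for duration in job_durations:
        i = slots.index(min(slots))    # job takes the next free virtual slot
        slots[i] += duration           # and occupies it for its duration
    return max(slots)

jobs = [60] * 8  # eight one-hour jobs (minutes)
print(drain_time(jobs, 4))  # healthy device, 4 slots: 120 minutes
print(drain_time(jobs, 1))  # degraded to one slot: 480 minutes
print(drain_time(jobs, 0))  # all virtual slots offline: inf
```

With the device degraded to a single slot, the same nightly workload takes four times as long - which matches the "1/4 the need we have" observation above.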
Thank you, Dana

BE2014 - Loading Media for hours during any restore (deduplication)
Hi there,

Whenever I try to do a restore from my deduplication device (locally attached) from a recent backup, it takes forever. For example, today I tried to restore a 70 MB file, and it took over 1.5 hours. Most of the time the job was just stuck on 'Loading Media'. I logged a ticket with support and got nowhere. I logged a couple of tickets with support for a similar issue with backups, and got nowhere with that either (I was told it was normal behaviour).

Has anyone here tried to restore from a dedupe store? If so, are you seeing these issues? The server I'm trying to restore from is also a CASO.

Thanks in advance for any comments

The queried backup sets contain two backup sets with the same ID
Hi,

After a serious disaster on BE15 (latest patch), I managed to get the server running again. However, I am now facing a different issue. When I select the deduplication storage and open the Backup Sets tab, I see "Load Failed... The queried backup sets contain two backup sets with the same ID." This happened after I removed this member server from the CASO environment. I checked the catalog files and they are present. I also tried the "Catrebuildindex -r" method mentioned in the post https://www-secure.symantec.com/connect/forums/duplicate-backup-sets-same-id. This had happened a couple of times while the server was a managed server in the CASO environment, but after a couple of F5 refreshes I was able to see the Backup Sets and even start restores.

Any ideas?

Duplicating backup sets from Managed Server deduplication storage to CASO deduplication storage
Hi all,

After my Backup Exec 15 (latest patch and SP) server failed and returned to life, I installed another BE15 server and enabled CASO (Central Administration Server Option). Now I need to duplicate some of the backup sets that reside on the old server (now a managed server) to the CAS. Both servers have the deduplication option enabled and have dedupe storage configured, and both can perform backups. When I browse the old server's dedupe storage, select the needed backup set, and choose the duplicate option, I can see the new server's dedupe storage. I select the new server's storage and start the process, but the frustrating outcome is that the job gets stuck on "Ready; Backup Exec Server not available" on the Job Monitor tab. Out of desperation I enabled private cloud on both servers, and of course it did no good.

Anyone have any idea?

Duplication of dedup folders between 2 servers
Hello,

I want to use optimized deduplication between two servers. The documentation says: "For client-side deduplication and Backup Exec server-side deduplication, you must configure one deduplication disk storage on the Backup Exec server from which you want to copy the deduplicated data. You must also configure one deduplication disk storage on the Backup Exec server to which you want to copy the deduplicated data."

So I have Server A and Server B. My question is: can the dedupe disk storage on the server I copy the deduplicated data to (Server B) also be used by Server B for its own backups? Or must that dedupe disk storage be dedicated only to the replica backups of Server A?

Thanks in advance

Backup 2015 DLM not running
I have a CASO setup with one VM running the CASO role and three media servers in different locations. The media servers are physical with big D: and E: drives (local disk only, no removable disks or similar), holding disk storage plus dedupe storage. Catalogs are stored centrally, all servers run the 2012 R2 OS, and Backup Exec is fully patched with the latest patches as of today.

Our normal procedure is to run the backup jobs to disk and then duplicate to dedupe storage; we also do some optimized duplication from one media server to another, duplicating from dedupe to dedupe storage. We installed all servers in early September and everything is running fine, except that the disk storage backup sets will not be removed by DLM even when they are expired (or manually expired by right-clicking and selecting "Expire"). I installed another server to test earlier today and noticed in its audit log that DLM logs an event for every backup set it removes, and it works just fine on that setup. That is a completely independent Backup Exec instance and only runs one test backup job. In the audit log of our "real" backup server there is not a single entry logged by DLM from day one, which is really strange.

I logged a case yesterday with Veritas/Symantec support. The first test we agreed on was to run an inventory of the disk storage; when that didn't help, we decided to do an inventory plus catalog as well. Since we have 39.5 TB of data to process, that will take a while, so I tried it on one of the other media servers instead (which is not fully used yet), and it didn't help either. Rather than just waiting until Monday, I decided to write here to see if anyone else has experienced the same issue. I have tried with and without the "Allow Backup Exec to delete all expired backup sets" option enabled, with no change in the result.

Anyone who has information that can help - it would be very appreciated!
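For anyone comparing behaviour: my understanding of the expected DLM sweep, including the "Allow Backup Exec to delete all expired backup sets" option mentioned above, is sketched below. This is a conceptual model only (the `dlm_sweep` function and set names are invented, not Veritas code); the point is that by default the newest set is retained even if expired, so an expired-but-kept newest set is normal, while *nothing* ever being deleted is not:

```python
from datetime import datetime

def dlm_sweep(backup_sets, now, delete_all_expired=False):
    """Conceptual DLM-style expiry sweep for one resource.
    backup_sets: list of (set_id, expiry_datetime) tuples.
    Unless delete_all_expired is set, the most recent set is
    retained even when it has expired."""
    expired = [s for s in backup_sets if s[1] <= now]
    if not delete_all_expired and expired:
        newest = max(backup_sets, key=lambda s: s[1])
        expired = [s for s in expired if s != newest]
    return [set_id for set_id, _ in expired]  # IDs the sweep would remove

sets = [("full-w1", datetime(2015, 9, 7)),
        ("full-w2", datetime(2015, 9, 14)),
        ("full-w3", datetime(2015, 9, 21))]
now = datetime(2015, 10, 1)
print(dlm_sweep(sets, now))                           # newest kept back
print(dlm_sweep(sets, now, delete_all_expired=True))  # all three removed
```

In my case neither mode removes anything on the "real" server, and the audit log shows no DLM activity at all, which is what makes me suspect DLM is simply not running rather than retention logic holding the sets.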
Thanks!

How to do deduplication replication and duplication to tape
I have two Backup Exec 2014 servers in different locations, connected by a private link (40 Mbps). At the main site there is a CAS with a deduplication disk. At the second site there is an MMS with a deduplication disk and a tape library. I tried to create a backup job on the CAS but cannot select the tape as the duplication destination. Do I need to share the library across the servers for this?

Deduplication & Optimized duplication - multiple jobs for one server?
Hi there,

I have a number of servers that I want to back up with BE2014 using the deduplication add-on. I will also be adding a stage to duplicate certain backups to an offsite, shared dedupe store. I have a query about using multiple jobs for the same backup server.

Say I create job A (consisting of fulls and incrementals), and this job backs up one server to the primary dedupe store and is duplicated to another dedupe store offsite. Then I create job B with different retention settings, also backing up the same server to the same dedupe store and also duplicated to the offsite dedupe store. Will job B create a completely new batch of files (e.g. full backups) on the primary dedupe store, or will everything be deduplicated with no duplicate files created, because the store can 'see' all of the full backups previously created via job A? If new backups will be created, will the same thing happen in the offsite dedupe store, with the duplicate jobs also creating new backups there?

Basically, I will be creating one job for daily/weekly, with the weekly duplicated offsite. I will be creating a separate monthly job, with suitable retention, which will also have a duplicate stage to send it offsite. I will also be creating separate quarterly and annual jobs with suitable retention. Is this a good approach, and will it all work together and deduplicate well?

Thanks in advance for your advice

BE2014 Deduplication - how many concurrent jobs do you run, what spec is your backup server?
Hi,

I have a couple of quick questions about the dedupe store's concurrent jobs setting (for a standard dedupe store running on the media server, with no external devices and no OSTs). What do people normally leave this setting at? It looks like I'm going to be running quite a few jobs at once, but I'm hesitant to run more than 3 jobs simultaneously in case it slows all the jobs down too much.

I have BE running as a VM with 2 x dual-core processors and 32 GB of RAM. If I double this to 8 cores in total, will it help me run more concurrent jobs efficiently?

Thanks in advance for your opinions

BE2014 Optimized Duplication - 'Loading Media - Duplicate' for hours
Hi there,

I have BE2014 on the latest version, and my architecture is as follows:

- Production environment, with CASO, the Deduplication add-on, and a local dedupe store on this Windows 2012 R2 server (no extra devices connected).
- DR environment, with a BE managed server and the Deduplication add-on (no extra devices involved there either). I have a dedupe store shared out on this managed server (shared with the CASO server in production).
- At present, there is a 1 Gb LAN connection between the two servers/sites.

I have my backup jobs defined on the CASO server in production, and the data is being saved to the dedupe store in production just fine. I have a stage added to duplicate the data over to the shared store in DR, but the jobs are taking forever (too long for the current backup window). Every time I check on it, it's almost constantly stuck on 'Loading Media - Duplicate'.

Anyone got any ideas?

Thanks in advance
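As a back-of-envelope check before blaming the link: optimized duplication should only send the unique (post-dedupe) data, so the raw transfer time over 1 Gb should be small. This sketch is purely illustrative - the dataset size, daily change rate, and 70% efficiency factor are made-up assumptions, not measurements from this environment:

```python
def duplication_hours(dataset_gb, change_rate, link_mbps, efficiency=0.7):
    """Rough time to replicate one night's unique data over a link.
    efficiency discounts protocol and handshake overhead (assumed 70%)."""
    unique_gb = dataset_gb * change_rate           # post-dedupe data to send
    link_gb_per_hr = link_mbps / 8 * 3600 / 1024   # Mb/s -> GB per hour
    return unique_gb / (link_gb_per_hr * efficiency)

# 2 TB protected, 5% daily change, 1 Gbps LAN: roughly 20 minutes
print(round(duplication_hours(2048, 0.05, 1000), 2))
```

If the observed duplication time is hours while a calculation like this predicts minutes, the bottleneck is probably not bandwidth but something in the device handshake or media mount, which would fit jobs sitting in 'Loading Media - Duplicate'.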