BE 2012 and Data Domain w/ DDBoost
Background: We have Backup Exec 2012 and two Data Domain DD620s, with all the required licensing, including the Data Domain DD Boost and replication licenses. I have installed the OpenStorage Technology (OST) plugin 2.5.0.3, as suggested for this version of BE, and both DD boxes are visible and connected to my BE server. I have found some documentation and a video that explain part of what I am looking for, but not all of it, and most of it is for BE 2010; the BE 2012 documentation from Data Domain is not up to date yet. I am trying to set up disk backups because my tape drive has had enough and is no longer working properly. Everyone I have talked to just says "you back up to it," and then I keep finding little things that need to be done: no verify, no pre-compression, some said no deduplication in the job, others said server-side dedupe because DD Boost manages it with the OST plugin. Too much information and not enough people who know what to tell me. The DD weekend support guys don't know the software titles; they just know the hardware. Unfortunately, I need help now.

Question: What settings do I use for backups? I have the Active Directory, Exchange, SharePoint, and SQL agents.

Media Server / SLP Duplication Question

I'm curious about how NetBackup decides which media servers should be used for SLP duplications. We have two Data Domains that we back up to, and a total of four media servers; these servers are grouped in load-balanced pairs. Generally our policies are set up so that one pair writes backups to one Data Domain while the other pair handles the backups for the other Data Domain. However, all four media servers can access both Data Domains.

Looking at many of our SLP duplication jobs, the "Media server" listed in the detailed status is something along the lines of "exampleserver1 -> exampleserver2" (with each server being from a different pair, as described above). However, I have noticed that when the environment is busy overnight, running SLP duplications often use the same server to read and write the image (e.g. "exampleserver2 -> exampleserver2"). Is this simply because the other servers are busier, so NetBackup doesn't assign two different media servers to the duplication job?
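
One way to compare what the duplication jobs have to work with is to look at the storage unit and disk pool definitions on the master server. The lines below are a minimal sketch, assuming the NetBackup admin commands are run from the master's command prompt; output fields vary by NetBackup version, and no names are assumed beyond the "DataDomain" server type.

:: List the storage units defined on the master and their settings
:: (including how media server selection is configured for each one).
bpstulist -U

:: List the Data Domain disk pools and their current state.
nbdevquery -listdp -stype DataDomain -U

:: List the Data Domain storage servers and confirm they show as UP
:: from the master's point of view.
nbdevquery -liststs -stype DataDomain -U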

Cannot connect BackupExec to Data Domain with OST Plug-In

Trying to connect via creation of a disk device. When clicking "Add OpenStorage" device under Backup Exec disk devices (Backup Exec 2010 R3 SP2), I am able to get to the point shown in the attachment. I can get all the fields to work, and it sees the box as a "Data Domain" in the selection box, but the Logical Storage Unit field doesn't populate. The login credentials are definitely right, and the OST 2.5 plugin for Data Domain is installed on the media server. If I manually type LSU1, or any free text, into the Logical Storage Unit field, it doesn't make any difference. The error I get when I click OK says: "Cannot connect to the NDMP server, or to the remote computer that is configured as a Remote Media Agent, or to the remote computer that is configured for deduplication."

I do not have the DD Boost license enabled, and I have one MTree. I was told, and have confirmed, that I do not need the DD Boost option to create an OST device against a plain Data Domain. I am running DD OS 5.0.1 on the DD580.

Invalid signature

OK, I have seen a lot of articles and discussions on this topic, but I cannot seem to make headway. I have had this issue since BE 2010 R3; I recently upgraded to 2014 and still have the same issue, just a different error code. I have an EMC Isilon cluster with IQ 6000x nodes running OneFS 7. We recently got new VNX storage, and now I would like to make the old cluster the backup location. When building the new VM server (2012 R2), I created a new service account in the domain, added it to my super users group, and installed all software under that account on the server. I set that account as the BESA while setting up BE 2014. I went into the NTFS permissions and confirmed the super users group had full access, and went into OneFS and made sure the super users group also had full write abilities in the SMB permissions. The super users group is also a local administrator of the server.

Here is what I am getting. Sometimes, when trying to create the disk storage in BE, it tells me my UNC path is incorrect. I go to Windows Explorer, copy the path, paste it in, and still get the error. If I close out of BE and come back in, it may accept the path, but once I get to 'Finish' and attempt to finalize, I get an "invalid signature" error and cannot complete the setup. Checking in Event Viewer, the error isn't really too helpful: Windows Logs -> Application -> Event ID 33808. If I run the b2dTest.exe tool, I get some parsing errors:

08/22/16 11:03:25 System Information:
08/22/16 11:03:25 Memory Load : 35%
08/22/16 11:03:25 Total Physical Memory: 4,095 MB
08/22/16 11:03:25 Free Physical Memory : 2,629 MB
08/22/16 11:03:25 Total Page File : 6,143 MB
08/22/16 11:03:25 Free Page File : 4,496 MB
08/22/16 11:03:25 Total Virtual Memory : 134,217,727 MB
08/22/16 11:03:25 Free Virtual Memory : 134,217,664 MB
08/22/16 11:03:25
08/22/16 11:03:25 Test Parameters:
08/22/16 11:03:25 Role : Backup To Disk
08/22/16 11:03:25 Make : UNKNOWN
08/22/16 11:03:25 Model : UNKNOWN
08/22/16 11:03:25 Firmware : UNKNOWN
08/22/16 11:03:25 Location : \\isilon\dropbox\it\B2DTestDir
08/22/16 11:03:25 Username: Current User (domain\svc_backupexec)
08/22/16 11:03:25
08/22/16 11:03:25 File Count : 10,000 files
08/22/16 11:03:25 Buffer size : 65,536 bytes
08/22/16 11:03:25 Mapped file size: 1,048,576 bytes
08/22/16 11:03:25 IO File Size : 4,296,015,872 bytes
08/22/16 11:03:25 Required Space : 4,339,073,024 bytes
08/22/16 11:03:25
08/22/16 11:03:25 Path Validation
08/22/16 11:03:25 TRACE: DRIVE_REMOTE
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Create Directory
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Disk Space
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Volume Information
08/22/16 11:03:25 TRACE: Volume Name: DropBox
08/22/16 11:03:25 TRACE: File System Name: NTFS
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Traverse Volume Info
08/22/16 11:03:25 TRACE: Volume: \\?\Volume{c27a4090-8de4-11e3-80b3-806e6f6e6963}\
08/22/16 11:03:25 TRACE: Device: \Device\HarddiskVolume1
08/22/16 11:03:25 ===> WARNING - FindFirstVolumeMountPoint() failed: (0x5) Access is denied.
08/22/16 11:03:25
08/22/16 11:03:25 Memory Mapped Files
08/22/16 11:03:25 TRACE: Initialize memory map
08/22/16 11:03:25 TRACE: Verify memory map
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Reparse Points
08/22/16 11:03:25 ===> WARNING - Reparse points not supported on appliance: (0x32) The request is not supported.
08/22/16 11:03:25
08/22/16 11:03:25 B2D Allocation (pre v14)
08/22/16 11:03:25 TRACE: Positioning file pointer
08/22/16 11:03:25 TRACE: Writing a single buffer to extend file
08/22/16 11:03:25 TRACE: Setting end of file
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 B2D Incremental preallocation (v14.0 and up)
08/22/16 11:03:25 TRACE: Determining sizes
08/22/16 11:03:25 TRACE: Creating hybrid allocation file
08/22/16 11:03:25 TRACE: Positioning file pointer
08/22/16 11:03:25 TRACE: Setting end of file
08/22/16 11:03:25 TRACE: Positioning file pointer
08/22/16 11:03:25 TRACE: Writing a single buffer before end of file
08/22/16 11:03:25 TRACE: Positioning file pointer for trimming
08/22/16 11:03:25 TRACE: Setting end of file for trimming
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Random IO
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Writing 67108864 bytes to file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 67108864 bytes from file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 1048576
08/22/16 11:03:25 TRACE: Reading 1048576 bytes from file
08/22/16 11:03:25 TRACE: Writing 1048576 bytes to file
08/22/16 11:03:25 TRACE: Seeking to offset 67108864
08/22/16 11:03:25 TRACE: Writing 655360 bytes to file
08/22/16 11:03:25 TRACE: Seeking to offset 67108864
08/22/16 11:03:25 TRACE: Reading 655360 bytes from file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 1048576
08/22/16 11:03:25 TRACE: Setting end of file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 1048576 bytes from file
08/22/16 11:03:25 TRACE: Writing 1048576 bytes to file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 2097152 bytes from file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Seeking to offset 2031616
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 64KB Unbuffered Writes and Buffered Reads File I/O
08/22/16 11:03:25 TRACE: Writing to file

Any help would be greatly appreciated.

Edit: The photo from Event Viewer won't upload. I can find a way to get it here if it is pertinent.

Accelerator for a VMWare Backup Policy with AIR

Hello experts, I am using NBU 7.7.3. I have a VMware policy that queries a cluster for an annotation to select the clients to back up. This policy has four schedules, and each schedule has "Override policy storage selection" checked, with a corresponding AIR (Auto Image Replication) SLP as the selection. The storage selected in each SLP is an EMC Data Domain storage unit (using the OST plugin) set up as a source for AIR. Some screenshots are attached. With this setup, should the 'Use Accelerator' option be available for selection on the policy? Currently I see it grayed out.
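
In case it helps anyone checking the same thing: whether the Accelerator option is selectable depends in part on what the underlying OpenStorage server advertises to NetBackup, so it is worth reviewing the capability flags and the policy attributes. The commands below are only a sketch; the storage server type is taken from the post, the policy name is a placeholder, and the exact flag names in the output vary by plugin and NetBackup version.

:: List the Data Domain storage servers in verbose format and review the
:: flag lines, which show the capabilities the OST plugin advertises.
nbdevquery -liststs -stype DataDomain -U

:: Dump the policy attributes (placeholder policy name) to confirm how the
:: accelerator-related settings are currently recorded for the policy.
bppllist VMware_Annotation_Policy -U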

Need official Symantec clarification on cryptic instructions to backup Vault 8.0

We recently upgraded our Vault environment from 2007 to 8.0 SP5. I am trying to get the backup procedure to work properly, but I need clarification from Symantec. We use NetBackup 7.x without the Vault agent for NetBackup, so backups rely on the bpstart and bpend batch files. The main areas where I need clarification are:

1. After running the PowerShell script that creates the list of EV backup mode commands for my environment, I am confused about which of the commands need to be called. The file is divided into Site, VaultStoreGroup, VaultStore, and SiteIndexLocations sections. Currently we have only one Vault server, for email, but soon we will have others for files and possibly SharePoint. Which commands do I need to call, and in what order, to start backup mode and to end backup mode? (A sketch of one possible bpstart/bpend arrangement is at the end of this post.)

2. Our Vault partitions are on a back-end EMC Celerra (not to be confused with Centera). We back those up by different means and thus need to use the IgnoreArchiveBitTrigger.txt file. Symantec recommends calling a simple "echo > {drive}:\vaultstores\vaultpartition\IgnoreArchiveBitTrigger.txt" during our backup routine to create the trigger file every time. Where should these commands go? In the bpstart file? In the bpend file? Where in the file, before or after the Vault backup mode PowerShell calls?

3. Since we aren't using the NetBackup Vault agent, and we call our SQL backups from the bpstart entry via a third-party app (SQLSafe), are there any special commands I need to call in the database portion of the backup so that, when it completes, Vault somehow "knows" the SQL backup is done? We run weekly fulls and daily diffs on the SQL databases, and I get constant nags from Vault in the event logs that the databases haven't been backed up in X days.

I am open to all clarifications.

Thanks,
A
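
For what it's worth, here is a minimal sketch of how the pieces could be arranged in the NetBackup client notify batch files. It only illustrates the ordering being asked about, not an official procedure: the PowerShell snap-in name, cmdlet parameters, vault store name, EV server name, and partition path are all assumptions for this example, and the authoritative command lines are the ones the EV-supplied script generated for your own site.

:: ---- bpstart_notify.bat (runs before NetBackup starts the backup) ----
:: Put the vault store and the index locations into backup mode.
:: 'MailVaultStore' and EVSERVER01 are placeholders for this sketch.
powershell -command "Add-PSSnapin Symantec.EnterpriseVault.PowerShell.Snapin; Set-VaultStoreBackupMode -Name 'MailVaultStore' -EVServerName EVSERVER01 -EVObjectType VaultStore"
powershell -command "Add-PSSnapin Symantec.EnterpriseVault.PowerShell.Snapin; Set-IndexLocationBackupMode -EVServerName EVSERVER01"

:: Refresh the trigger file for partitions that are backed up by other means
:: (path is an example only). Whether this belongs in bpstart or bpend is
:: exactly the ordering question above; it just has to be re-created each run.
echo. > E:\VaultStores\MailPartition1\IgnoreArchiveBitTrigger.txt

:: ---- bpend_notify.bat (runs after the backup finishes) ----
:: Take the vault store and index locations back out of backup mode.
powershell -command "Add-PSSnapin Symantec.EnterpriseVault.PowerShell.Snapin; Clear-VaultStoreBackupMode -Name 'MailVaultStore' -EVServerName EVSERVER01 -EVObjectType VaultStore"
powershell -command "Add-PSSnapin Symantec.EnterpriseVault.PowerShell.Snapin; Clear-IndexLocationBackupMode -EVServerName EVSERVER01"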

Duplication Jobs running wild

After rebooting my NBU servers, I have over a thousand duplication jobs trying to run. A little info: we have two DD670s. Half of the backups go to one, half to the other, and each backup duplicates to the other DD670 after it completes. All that shows up for each job is this:

On the Job Overview tab:
Job Type: Duplication
Master Server: <server name>
Job Policy: SLP_LCP_DD02_Weekly
Job Schedule: Dup
Priority: 0

On the Detailed Status tab (nothing at the top; all fields blank), in Status:
2/2/2013 1:46:29 AM - requesting resource LCM_dd01-su
2/2/2013 1:36:35 AM - Info nbrb (pid=3248) Limit has been reached for the logical resource LCM_dd01-su

I have over 1,500 jobs sitting in the queue like this. How do I keep these from launching? If I cancel them and clean them all up, 5-10 minutes later they all kick in again. It seems that even though they run and complete, they just queue up again and run again. Baffled...

Thanks, John
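
For anyone hitting the same flood: the SLP work list can be inspected and paused from the master server while you work out why the images keep re-queuing. The lines below are a minimal sketch using the SLP name from the job details above; nbstlutil options can differ slightly between NetBackup versions, so check the command reference for your release before relying on them.

:: Show the images the SLP engine still considers incomplete (the backlog
:: that keeps generating duplication jobs).
nbstlutil stlilist -image_incomplete -U

:: Temporarily stop further processing for this lifecycle so the queued
:: duplications stop relaunching while you investigate.
nbstlutil inactive -lifecycle SLP_LCP_DD02_Weekly

:: Resume processing once the storage issue is resolved...
nbstlutil active -lifecycle SLP_LCP_DD02_Weekly

:: ...or, if the pending copies are genuinely not wanted, cancel the
:: outstanding copy work for the lifecycle (use with care).
nbstlutil cancel -lifecycle SLP_LCP_DD02_Weekly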

Data centre migration - NetBackup BMR for Solaris clients

I need some advice on a migration strategy for moving Solaris-based physical clients from one data centre to another. Constraints and assumptions:

- The target DC is a different NetBackup domain.
- The client data is hosted on EMC and NetApp storage.
- There is no network connectivity between the source and target data centres, so the cutover options are likely to be limited to physical transfer and restore of data using tape media.
- The target DC server hardware and firmware will be "near" identical to the source environment.
- I am planning to use NetBackup BMR (NBU is currently used in the source DC for client backup, with no BMR), and I understand that a BMR master server and a separate boot server need to be introduced, along with the SRT process.
- The target DC will have NBU master/media servers and a compatible storage library.
- Image import (two-phase import) from tape media will be required.
- I need to use the network boot option, as transfer of boot media (CD/DVD) will not be allowed.

Questions:
- Can I transfer the network boot image on tape to the target DC and restore it on the target client?
- Do I need a BMR master and boot server in the target DC?
- What else do I need in the source and target DCs?

NetBackup 7.5.0.4 with DataDomain and Status 2106

Environment:
- (1) Master/media server, Windows 2008 R2 Standard
- (3) Media servers, Windows 2008 R2 Standard
- (2) Data Domain DD670s

On Monday I disabled all backup policies, then shut down NetBackup on all four servers. Then I shut down one Data Domain so that I could physically move the unit to make room for a new Data Domain DD890. After moving the 670 (I had to replace four CAT6 cables with new ones for length), I powered up the DAEs first and let them settle to a ready state, then powered up the head unit. Initially, the head unit locked up halfway into the boot. I called Data Domain, and they had me force power off the unit, pull it out, and reseat all the RAM, the PCI riser cards, and all the hard drives. After replacing and repowering everything, it all came up fine. I then started NetBackup services on the master and then on all the media servers. Once everything was up, I reactivated all the policies.

This ran fine until 1:00 AM on Thursday. Then the Data Domain unit I moved started failing every job that ran on it with a status of 2106 (Disk storage server is down). I had Symantec look at it (I sent in the logs they requested), and they said it had to be a DD Boost problem, as they could not see an issue. I then called Data Domain; they put their people on it and said it was a network problem because they couldn't ping the server from the unit. So tonight I ran a network tester on the cables between the unit and the core switch: the cables tested fine and I have normal flashing link lights. I then did a shutdown/restart on the DD head unit. After it came back up, I could ping the server from the unit. I kicked off a failed backup and it failed again with 2106, or with a media write error (84).

When looking up the 2106 error this morning, I found a Symantec article that said I needed to reauthorize the servers to the DD unit. How do I do that? Does anyone else have an idea what could be causing all this? I now have duplications (LCP) kicking off by the hundreds every 15 minutes or so, and I have to clean those all up too because they are failing with a status 84, and image cleanups are failing with a status 83. I want to pull my hair out, but I don't have any left to pull out! Hope someone can help.

John
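
On the "how do I reauthorize" question, the NetBackup-side steps are normally about refreshing the OpenStorage credentials and configuration that the media servers hold for the storage server. The commands below are a minimal sketch with placeholder host and user names (dd670-02, ddboost_user, mediaserver01 are assumptions); exact options can differ between NetBackup 7.5 releases, so verify against the command reference and the article you found.

:: Check how NetBackup currently sees the Data Domain storage servers
:: (state UP/DOWN and which media servers are configured for them).
nbdevquery -liststs -stype DataDomain -U

:: Re-enter (update) the DD Boost credentials a media server uses for the
:: storage server.
tpconfig -update -storage_server dd670-02 -stype DataDomain -sts_user_id ddboost_user -password <password>

:: Refresh the stored configuration for that storage server / media server
:: pairing in the device database.
nbdevconfig -updatests -storage_server dd670-02 -stype DataDomain -media_server mediaserver01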

How to manage the Enterprise Vault data content when it grows big and you are running out of disk space?

Hi people,

Can anyone suggest the best way to manage and sustain Enterprise Vault data that grows larger every month? My situation is that whenever my Exchange Server 2007 database drive runs low on disk space, I lower the EV archiving age from 6 months to 5 months, and now down to 3 months, just to create some breathing room in the Exchange Server mailboxes. I understand this doesn't really resolve the problem, since we are just shifting the disk space usage from Exchange onto Enterprise Vault. So what are my options for managing and sustaining the growing data in EV? Can I offload the Vault data disk to tape and then delete it manually? And what happens to indexing and search when the data is archived to tape and I want to retrieve it later for some reason?

Any kind of assistance would be greatly appreciated. Thanks.