BACKUP EXEC 2014
Hello, we have Backup Exec 2014 installed on a Windows 2012 R2 server, and we are backing up several physical servers, including an Exchange server and SQL servers. The backup server first writes the backup to disk storage and then duplicates it to the HP LTO-7 tape drive. We have created a media set and added a tape to it. The problem is that after 2 or 3 days the Backup Exec server picks a random tape from the tape drive and starts backing up to that tape instead of the tape we have defined. I am also attaching a screenshot for reference.
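When Backup Exec overwrites a tape you did not expect, the usual suspects are the media set's overwrite protection period and a tape still sitting in the Scratch set. As a starting point, media sets and tape membership can be inspected from PowerShell; this is a minimal sketch assuming the BEMCLI module that ships with Backup Exec 2014, and the property names shown are assumptions to be checked against Get-Member, not verified output.

```powershell
# Minimal sketch, assuming the BEMCLI PowerShell module from BE 2014;
# the property names below are assumptions.
Import-Module BEMCLI

# Overwrite/append protection on each media set.
Get-BEMediaSet | Format-List Name, OverwriteProtectionPeriod, AppendPeriod

# Which set each tape belongs to; anything in the Scratch set is a
# legitimate overwrite target for any job.
Get-BEMedia | Format-Table Name, MediaSet, OverwriteProtectedUntil
```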
BE2014 on Server 2012 R2 - OS version check failed, unable to install
Hello, I'm having issues installing BE2014 on Server 2012 R2 Standard, build 6.3.9600. I've tried with the big KB2919355 update plus all updates current as of April 2018, and without the "big update", with no luck. Environment Check 1.0 just fails on the Operating System Version Check and says BE is not compatible with the installed OS version. Any ideas?
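For what it's worth, Windows 8.1 / Server 2012 R2 reports version 6.3 only to applications that declare 8.1 compatibility in their manifest; unmanifested code sees 6.2, which can confuse installer version checks. A quick way to confirm what the OS itself reports:

```powershell
# Confirm the OS version string; Server 2012 R2 should show 6.3.9600.
Get-CimInstance Win32_OperatingSystem |
    Select-Object Caption, Version, BuildNumber
```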
Sharepoint backup fails majority of time BE 2014
Hi guys, this is my first post on here, so please be gentle :) I'm getting a lot of backups failing with the following. The odd one does succeed, though, which confuses things a bit. The job itself runs on a web box but also backs up a DB box, which is what it's referring to below (I'm new to Backup Exec, and to backups in general, to be honest). It mentions the remote agent on the DB box, and I have checked that it's installed, which it is. However, I've seen a lot of instances of this around the time the job fails: . Do you think the crashes might be the cause or a symptom of another issue? Can anyone help get these backups working again, please? Any help would be hugely appreciated. Cheers :)
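Since the failures line up with crashes on the DB box, correlating Application-log errors with the job window is a reasonable first step. A sketch in plain PowerShell; the server name and the 12-hour window are placeholders, and remote event-log access must be allowed through the firewall:

```powershell
# Pull recent Application-log errors from the DB box around the job window.
Get-WinEvent -ComputerName 'DBSERVER' -FilterHashtable @{
    LogName   = 'Application'
    Level     = 2                      # Error
    StartTime = (Get-Date).AddHours(-12)
} |
    # Agent crashes usually surface as 'Application Error' events for
    # beremote.exe or as Backup Exec provider events.
    Where-Object { $_.ProviderName -match 'Backup Exec|Application Error' } |
    Select-Object TimeCreated, Id, ProviderName, Message
```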
Backup Exec 2014 - Duplicate Full Backup Job - Missing Partition
Hi, good morning. I am a user of Backup Exec 2014. We installed it on our backup server. Our backup runs as Server > Backup Drive > Duplicate to Cartridge Tape. Currently our company is facing a problem: after Server > Backup Drive succeeds, the duplicate job shows successful, but only one partition is duplicated. Attached is a picture of the backup drive. There are two partition backups; here we can see that the Partition D drive from PS02 is missing. Here are the settings of the duplicate job. Regards, OTH
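One way to see whether the duplicate stage is picking up only one of the two backup sets is to compare the byte counts of the backup run and the duplicate run. A sketch assuming the BEMCLI module from BE 2014; the property names (JobType, JobStatus, TotalDataSizeBytes) are assumptions to verify with Get-Member:

```powershell
# Sketch assuming BEMCLI; adjust property names against
# Get-BEJobHistory | Get-Member.
Import-Module BEMCLI
Get-BEJobHistory |
    Sort-Object StartTime -Descending |
    Select-Object -First 10 |
    Format-Table Name, JobType, JobStatus, TotalDataSizeBytes
# If the duplicate moved roughly half the bytes of the full backup, it
# likely selected only one of the two partition backup sets.
```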
Error E000E020 on Backup Exec 2014.
Hello all, I would like to ask a question regarding error E000E020. The error message I've been getting at a remote site is: "e000e020 - The duration that this job is allowed to remain scheduled has passed. It will be rescheduled. Verify that storage is available and check the job's schedule settings. Ensure that there is enough time for it to begin running before it is considered missed." I've already checked the following article, and I think I can discard options A and C: for A, I don't think those options exist in Backup Exec 2014 (or maybe I am not finding them properly), and for C, we didn't have a time change here. That leaves option B to check. Normally we run the jobs as follows: Backup to Disk and then Duplicate to Tape. The disk we use for backup has plenty of free space left; as for the tape, I wouldn't be surprised if it is not being changed daily, since it is a remote site where we don't have much control (but then the error would only be on the duplicate job, not on the first backup). However, the issue happens on the backup job. I will attach some screenshots in case they help. What I find odd is that the job is set to start at 9:00:00 PM, yet it sometimes starts at 9:00:07 and has an end date of 9:00:04 (!?).
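That 9:00:00 start with a 9:00:04 "end date" suggests the window in which the job may remain scheduled is only a few seconds wide, so any queuing delay marks it missed. A toy illustration of the arithmetic; the four-second window and the 9:00:07 dispatch time are taken from the post, not from a live system:

```powershell
# Toy illustration of the "allowed to remain scheduled" window.
$scheduledStart = Get-Date '21:00:00'
$keepScheduled  = New-TimeSpan -Seconds 4
$actualDispatch = Get-Date '21:00:07'

if ($actualDispatch -gt $scheduledStart + $keepScheduled) {
    'Window already closed: job is flagged missed and rescheduled (e000e020).'
}
```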
Invalid signature
OK, I have seen a lot of articles and discussions on the topic, but I cannot seem to make headway. I have had this issue since BE 2010 R3, but I recently upgraded to 2014 and still have the same issue with a different error code. I have an EMC Isilon cluster with IQ 6000x nodes running OneFS 7. We recently got new VNX storage, and now I would like to make the old cluster the backup location. Upon creating the new VM server (2012 R2), I created a new service account in the domain, added it to my super users group, and installed all software under that account on the server. I set that account as the BESA while setting up BE 2014. I went into the NTFS permissions and confirmed the super users had full access, and went into OneFS and made sure the super users also had full write abilities in the SMB permissions. The super users group is also a local administrator of the server. Here is what I am getting: sometimes upon trying to create the disk in BE, it tells me my UNC path is incorrect. I go to Windows Explorer, copy the path, and paste it, and still get the error. If I close out of BE and come back in, it may accept the path, but once I get to 'finish' and attempt to finalize, I get an "invalid signature" error and cannot finish the setup. Checking in Event Viewer, the error isn't really too helpful: Windows Logs -> Application -> Event ID: 33808. If I run the b2dTest.exe tool, I get some parsing errors:

08/22/16 11:03:25 System Information:
08/22/16 11:03:25 Memory Load : 35%
08/22/16 11:03:25 Total Physical Memory: 4,095 MB
08/22/16 11:03:25 Free Physical Memory : 2,629 MB
08/22/16 11:03:25 Total Page File : 6,143 MB
08/22/16 11:03:25 Free Page File : 4,496 MB
08/22/16 11:03:25 Total Virtual Memory : 134,217,727 MB
08/22/16 11:03:25 Free Virtual Memory : 134,217,664 MB
08/22/16 11:03:25
08/22/16 11:03:25 Test Parameters:
08/22/16 11:03:25 Role : Backup To Disk
08/22/16 11:03:25 Make : UNKNOWN
08/22/16 11:03:25 Model : UNKNOWN
08/22/16 11:03:25 Firmware : UNKNOWN
08/22/16 11:03:25 Location : \\isilon\dropbox\it\B2DTestDir
08/22/16 11:03:25 Username: Current User (domain\svc_backupexec)
08/22/16 11:03:25
08/22/16 11:03:25 File Count : 10,000 files
08/22/16 11:03:25 Buffer size : 65,536 bytes
08/22/16 11:03:25 Mapped file size: 1,048,576 bytes
08/22/16 11:03:25 IO File Size : 4,296,015,872 bytes
08/22/16 11:03:25 Required Space : 4,339,073,024 bytes
08/22/16 11:03:25
08/22/16 11:03:25 Path Validation
08/22/16 11:03:25 TRACE: DRIVE_REMOTE
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Create Directory
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Disk Space
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Volume Information
08/22/16 11:03:25 TRACE: Volume Name: DropBox
08/22/16 11:03:25 TRACE: File System Name: NTFS
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Traverse Volume Info
08/22/16 11:03:25 TRACE: Volume: \\?\Volume{c27a4090-8de4-11e3-80b3-806e6f6e6963}\
08/22/16 11:03:25 TRACE: Device: \Device\HarddiskVolume1
08/22/16 11:03:25 ===> WARNING - FindFirstVolumeMountPoint() failed: (0x5) Access is denied.
08/22/16 11:03:25
08/22/16 11:03:25 Memory Mapped Files
08/22/16 11:03:25 TRACE: Initialize memory map
08/22/16 11:03:25 TRACE: Verify memory map
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Reparse Points
08/22/16 11:03:25 ===> WARNING - Reparse points not supported on appliance: (0x32) The request is not supported.
08/22/16 11:03:25
08/22/16 11:03:25 B2D Allocation (pre v14)
08/22/16 11:03:25 TRACE: Positioning file pointer
08/22/16 11:03:25 TRACE: Writing a single buffer to extend file
08/22/16 11:03:25 TRACE: Setting end of file
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 B2D Incremental preallocation (v14.0 and up)
08/22/16 11:03:25 TRACE: Determining sizes
08/22/16 11:03:25 TRACE: Creating hybrid allocation file
08/22/16 11:03:25 TRACE: Positioning file pointer
08/22/16 11:03:25 TRACE: Setting end of file
08/22/16 11:03:25 TRACE: Positioning file pointer
08/22/16 11:03:25 TRACE: Writing a single buffer before end of file
08/22/16 11:03:25 TRACE: Positioning file pointer for trimming
08/22/16 11:03:25 TRACE: Setting end of file for trimming
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 Random IO
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Writing 67108864 bytes to file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 67108864 bytes from file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 1048576
08/22/16 11:03:25 TRACE: Reading 1048576 bytes from file
08/22/16 11:03:25 TRACE: Writing 1048576 bytes to file
08/22/16 11:03:25 TRACE: Seeking to offset 67108864
08/22/16 11:03:25 TRACE: Writing 655360 bytes to file
08/22/16 11:03:25 TRACE: Seeking to offset 67108864
08/22/16 11:03:25 TRACE: Reading 655360 bytes from file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 1048576
08/22/16 11:03:25 TRACE: Setting end of file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 1048576 bytes from file
08/22/16 11:03:25 TRACE: Writing 1048576 bytes to file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 2097152 bytes from file
08/22/16 11:03:25 TRACE: Testing end of file
08/22/16 11:03:25 TRACE: Seeking to offset 0
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping 131072 bytes
08/22/16 11:03:25 TRACE: Seeking to offset 2031616
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 TRACE: Reading 65536 bytes from file
08/22/16 11:03:25 TRACE: Skipping -131072 bytes
08/22/16 11:03:25 ===> PASSED
08/22/16 11:03:25 64KB Unbuffered Writes and Buffered Reads File I/O
08/22/16 11:03:25 TRACE: Writing to file

Any help would be greatly appreciated.

Edit: The photo from Event Viewer won't upload; I can find a way to get it here if it is pertinent.
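The b2dtest run itself mostly passes; the access-denied warning on FindFirstVolumeMountPoint() and the "invalid signature" at finalize both point at the service account's effective rights on the share. A quick write probe under that account can rule out the basics; plain PowerShell, with the path taken from the test log:

```powershell
# Run these in a PowerShell session started as domain\svc_backupexec.
# Test-Path only proves read/traverse access...
Test-Path '\\isilon\dropbox\it\B2DTestDir'

# ...so also attempt an actual create/delete to confirm write and delete
# rights through both the SMB share and the NTFS ACLs.
New-Item '\\isilon\dropbox\it\B2DTestDir\be_write_probe.txt' -ItemType File -Force |
    Remove-Item
```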
Deduplication stream size vs disk cluster size
In a few older applications that rely on file-system-based databases, like old Usenet news spool servers, it was a good idea to match the storage volume's cluster size to the smallest file size if possible. For that old Usenet server, 1 KB clusters were good for those less-than-1 KB posts. This made the best use of the space, though it made the file system hellacious to manage when things went wrong. Now fast-forward to today. I have a new 20 TB volume to put a dedupe store on, and NTFS on Server 2008 R2 has a minimum cluster size of 8 KB for a volume that size. I can make the clusters larger, though. Is there any benefit to matching the dedupe store volume's cluster size to the data stream size to maximize data deduplication and minimize storage waste? For instance, choosing a cluster size of 32 KB and a data stream size of 32 KB? (By the way, there's no OS selection for Server 2008 or 2008 R2; it says "2008 (not applicable)".)
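The per-file slack argument only bites if each deduplication segment lands in its own file; as far as I understand, the BE dedupe folder writes large container files rather than one file per segment, so cluster slack is amortized across containers. The arithmetic behind the news-spool intuition is simple enough to sanity-check; illustrative numbers only:

```powershell
# Worst-case slack if a 32 KB stream were stored as its own file on
# various cluster sizes (illustrative; the dedupe store's container
# files make this mostly moot).
$streamKB = 32
foreach ($clusterKB in 8, 16, 32, 64) {
    $clusters = [math]::Ceiling($streamKB / $clusterKB)
    $wasteKB  = $clusters * $clusterKB - $streamKB
    '{0,2} KB clusters -> {1,2} KB wasted per {2} KB stream' -f $clusterKB, $wasteKB, $streamKB
}
```

On that arithmetic, any cluster size that evenly divides the stream size wastes nothing for the last cluster, so 8 KB, 16 KB, and 32 KB clusters are equivalent for a 32 KB stream; only clusters larger than the stream add slack.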