Data Lifecycle Management in BE 2012

Let me start with some background, since this is my very first blog. I have been with the Symantec Backup Exec team for more than 12 years, working on the development of both the NetBackup and Backup Exec products. My area of focus has been the design and development of the shared catalog components, and in recent years I have been heavily involved in designing many different Backup Exec features. In 2010, after more than 10 years of service, I was recognized as a Distinguished Engineer for my overall contributions to the company. During the development of BE 2012, I led the design and development of the Data Lifecycle Management, Metadata Web Service, and Simplified Restore Workflow features, and was also heavily involved in the design and development of many other new features.


Many new features have been introduced in BE 2012, and it seemed like a good idea to blog about some of them. Before I dive into the highlights of one of these new features, Data Lifecycle Management (DLM), let me start with some insight into why we created it. Before Backup Exec 2012, Backup Exec had always managed the lifecycle of backup data using a tape-centric method, no matter what the backup data type was. This included data that was backed up to .BKF container files (tape-emulated disk container files) on disk or to tapes. The media set was designed to manage removable media like tape, and this design has worked well with removable media for years. In recent years, however, disk storage has been adopted as the primary target storage for backup because it enables advanced recovery capabilities (e.g., GRT), offers fast performance, and continues to decrease in cost per megabyte.


There are two key issues with media-centric management using media sets, especially for disk-based backup:

1.     No guarantee of backup data integrity for recovery. The media set lacks knowledge of data dependency (incremental backup data is not very useful if its preceding incremental or full backup data is gone). Therefore, users have to know how to configure two media sets with the correct overwrite and append periods for their full and incremental backup jobs, to avoid the scenario where the media containing full backup data is overwritten before the media containing its associated incremental backup data.

2.     Lazy disk storage reclamation. The disk storage used by expired backup data is not reclaimed in a timely manner, because the lifecycle of backup data is managed by media, not by the backup data itself. The media will not be overwritten until all backup sets on it have expired. The append period of the media set controls this gap (e.g., if the overwrite period is 4 weeks and the append period is 1 week, then the effective retention of the media is 5 weeks). With limited disk storage space, expired backup data should be deleted immediately to free up disk space for new backup jobs; this poses a real issue for customers today. Plus, you cannot simply add additional disk storage as easily as adding another tape. Therefore, media-centric management of backup data on disk storage is no longer a desirable solution. Data-centric management is the key concept of our new feature, Data Lifecycle Management, introduced in BE 2012.
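To make the overwrite/append arithmetic concrete, here is a minimal sketch (illustrative only, not Backup Exec code) of how the two media-set periods combine into the effective retention of a piece of media:

```python
from datetime import timedelta

def effective_media_retention(overwrite_period: timedelta,
                              append_period: timedelta) -> timedelta:
    """A media can keep receiving appended backup sets for the whole
    append period, and the media is then protected for the overwrite
    period, so the media as a whole may live this long."""
    return overwrite_period + append_period

# Example from the text: 4-week overwrite period, 1-week append period.
retention = effective_media_retention(timedelta(weeks=4), timedelta(weeks=1))
print(retention.days)  # 35 days, i.e. 5 weeks
```

The point of the sketch is that the earliest data appended to the media can sit on disk well past its nominal overwrite period before any space is reclaimed.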



Therefore, we introduced DLM in BE 2012 to address these issues. The new Data Lifecycle Management (DLM) feature is designed to manage all backup data stored on all types of disk storage except removable disk cartridges. Here are some important points you should be aware of:

·      Data retention:

o   The retention of backup data is associated with the backup set instead of the media.

o   The retention of backup data is a property of the job definition and is configured using a single retention value (e.g., 4 weeks). It is not a property of a media set using both an overwrite period and an append period.

o   Single backup set BKF:

§  A single BKF file can contain only one backup set, or part of one.

§  A backup set can span more than one BKF file.

·      DLM grooming process:

o   The DLM process proactively checks for expired backup sets and grooms those backup sets and their associated backup data to free the disk space occupied by expired backup data.

o   The DLM process is kicked off every 4 hours, or whenever a low-disk-storage event is received.

o   The dependency between full and associated incremental backup sets is checked to prevent breaking a full/incremental chain. In other words, DLM will not groom expired full and incremental backup sets if a dependent incremental backup set has not yet expired.

o   DLM won’t delete the last copy of the latest recovery point chain.

§  The definition of the recovery point chain:

·      Associated full and incremental backup sets that are generated from the same job for the same resource using the same selection list.  (e.g. \\Server\C:, \\Server\MSSQL\BEDB)

·      Delete operation from Backup set view:

o   Deleting a backup set from the backup set view deletes the backup set and its associated backup data from disk storage.

o   The dependency of full/incremental backup sets is checked on deletion. Any dependent incremental backup sets will be shown, and the user will be prompted with the option to either cancel the delete operation or delete the selected backup set and its associated backup data along with all of the shown dependent backup sets and their associated backup data.

·      Retain operation from Backup set view:

o   A user can manually retain a backup set and assign it a reason code and description.

o   The DLM grooming process will not groom a retained backup set/data, nor the backup sets/data it depends on.

o   The delete operation in the backup set view is disabled for retained backup sets/data.
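The grooming rules above can be sketched as a simplified model (the set names and fields here are hypothetical; this is not the actual Backup Exec implementation):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class BackupSet:
    name: str
    expired: bool
    retained: bool = False                     # manually retained by the user
    depends_on: Optional["BackupSet"] = None   # incremental -> its base set

def can_groom(s: BackupSet, all_sets: List[BackupSet],
              latest_chain: set) -> bool:
    """A set is groomable only if it is expired, not retained, not part of
    the last copy of the latest recovery point chain, and every set that
    depends on it is itself groomable (no broken full/incremental chain)."""
    if not s.expired or s.retained or s.name in latest_chain:
        return False
    dependents = [d for d in all_sets if d.depends_on is s]
    return all(can_groom(d, all_sets, latest_chain) for d in dependents)

# An expired full cannot be groomed while an unexpired incremental needs it.
full = BackupSet("Full00", expired=True)
incr = BackupSet("Incr01", expired=False, depends_on=full)
print(can_groom(full, [full, incr], latest_chain=set()))   # False
print(can_groom(incr, [full, incr], latest_chain=set()))   # False: not expired
```

Once `Incr01` also expires, both sets become groomable together, which is exactly the chain-integrity guarantee the bullet points describe.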


There are other details related to the DLM grooming process and its design philosophy that I can blog about if there is enough interest. Hopefully, today's blog has been helpful.


Anker Tsaur

Distinguished Engineer

Backup Exec Team

Symantec Corporation


What you have written is fine for backup to disk, but what about tapes?

1) Can I append to a tape?

2) When I append to a tape, the retention period is a function of the media and not the backup set.

3) If I can append to a tape, can I specify a period after which it cannot be appended to, like the append period?

Yes to your questions 1 and 3. Tape backups are still managed the old way, using media sets. There is no change in media lifecycle management for removable media (e.g., tape).

Your blog states that DLM is intended for everything disk-based except removable devices, like what I have, an RDX device.

Since BE 2012 no longer shows the B2Dxxxx.bkf files inside the designated media set, and the TECH178524 document also tells me to use retention policies based on media sets, I'm quite unsettled about whether I should downgrade the licenses I'm about to buy for our 10 branch offices (SBS with RDX), as it was much easier to keep track of media retention in earlier versions.

Or is something for this case already in the works?

Sorry for replying to your question late; I am still in the middle of my vacation.

There is no change to the retention management of removable media, including removable disk cartridges (e.g., RDX). They are managed using media retention with media sets.

You will still be able to see all removable tape media or disk cartridges listed under the media set view on the Storage tab in the BE 2012 UI. The only difference is that you won't see BKF files listed under a removable disk cartridge; instead, you will see backup sets listed under the disk cartridge. Since all backup sets (or BKF files) on a removable disk cartridge share the same data retention (in fact, media retention), we decided to hide the BKF (backup set container) files from users. The retention management of removable media (disk or tape) is the same as in previous versions, using media retention.


What is the logic used to decide the "last copy of the latest recovery point chain"?

For example, will DLM groom the expired last copy in the backup job chain if the following modifications are made to the backup job?

The GRT option is enabled/disabled.

Backup name is changed.

Backup device is changed (e.g. from disk to tape).

The logic used to determine the last copy of the latest recovery point chain for each resource is: the set of backup sets that are required to successfully recover the selected resource to the latest point in time.

For example, suppose you back up your C: drive with a weekly full and daily incrementals to storage #1 with one-week retention, and duplicate them to storage #2 with two-week retention. After the first two weeks you have the following backup sets, and you then decide to delete the C: backup job:

On storage#1:

Full00, Incr01, Incr02, ...., Incr06, F10, Incr11, Incr12, ..., Incr16

On Storage#2:

Full00c, Incr01c, Incr02c, ...., Incr06c, F10c, Incr11c, Incr12c, ..., Incr16c

On the first day of the third week, DLM will delete the following backup sets:

Full00, Incr01, Incr02, ...., Incr06,

On the first day of the fourth week, DLM will delete the following backup sets:

F10, Incr11, Incr12, ..., Incr16 + Full00c, Incr01c, Incr02c, ...., Incr06c

The following backup sets will never be deleted, since they are the last copy of the latest recovery point chain of the C: backup.

F10c, Incr11c, Incr12c, ..., Incr16c
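A small sketch can reproduce this timeline (a simplified day-based model of the example above; the chain names are hypothetical). Each full+incremental chain becomes groomable only once the retention of its last set has elapsed, so no unexpired incremental loses its full, and the last copy of the latest chain is always kept:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Chain:
    name: str
    last_set_day: int      # day the final incremental of the chain was written
    retention_days: int

def groomable(chains: List[Chain], today: int,
              latest_chain_name: str) -> List[str]:
    """A whole full+incremental chain is groomed only once every set in it
    has expired, and it is never the last copy of the latest chain."""
    return [c.name for c in chains
            if today >= c.last_set_day + c.retention_days
            and c.name != latest_chain_name]

chains = [
    Chain("week1-originals", last_set_day=7, retention_days=7),    # storage #1
    Chain("week2-originals", last_set_day=14, retention_days=7),
    Chain("week1-copies", last_set_day=7, retention_days=14),      # storage #2
    Chain("week2-copies", last_set_day=14, retention_days=14),
]

# Day 15 = first day of the third week; day 22 = first day of the fourth.
print(groomable(chains, today=15, latest_chain_name="week2-copies"))
# -> ['week1-originals']
print(groomable(chains, today=22, latest_chain_name="week2-copies"))
# -> ['week1-originals', 'week2-originals', 'week1-copies']
```

The `week2-copies` chain (F10c through Incr16c in the example) never appears in the output, no matter how far `today` advances, mirroring the "last copy of the latest recovery point chain" rule.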


Thanks for your blog article!  Keep the technical information coming - How about the following:

How does DLM affect the management of disk-based storage space vs. 2010 and earlier?

In BE 2010 and before, the goal was to have the disk-based storage be 100% full. In order to do that, we set the maximum size of .bkf files at 15 GB and manually pruned smaller ones until all of the files migrated toward the 15 GB size. As the .bkf files expired, they would be reused in their previous space, leaving the drive storage only slightly fragmented.

Our space is currently 98.6% full with 2,900 files and 550 folders (img). The problem is that only 8% of the .bkf files are at the maximum size, most are much smaller. 

The file sizes break down as: 8% at 15 GB, 29% from 1 to 14.9 GB, 19% from 80 MB to 0.9 GB, and 30% under 80 MB; the remainder are IMG files.

The drive has ONLY been used for BE2012 Disk Storage and there are over 10,000 file fragments after running for about a month.

How should we manage this space in BE 2012? Are the .bkf files being reused? When .bkf files expire, is their space released, fragmented, and then reclaimed by BE 2012?

How do we select a reasonable size for .bkf files, keep the drive reasonably defragmented, and maintain disk based performance?

When we designed the data lifecycle and storage management features for BE 2012, we wanted to treat disk storage as disk (a random-access device) instead of tape.

BKF is a tape-emulated backup container file format. It was designed for sequential-access devices like tape.

We have already stored each GRT-enabled backup (e.g., Exchange) in its own image folder since BE 11. Therefore, starting with BE 2012, a non-GRT backup set is stored in one or more BKF files (more than one if the backup is bigger than the BKF file size setting); we call this a "Single Set BKF". We do not append backup sets to a BKF file anymore, and we do not overwrite/reuse BKF files anymore. DLM will proactively delete any expired backup set (BKF files/image folder) to reclaim the disk space. This is unlike BE 2010 and previous versions of BE, where a BKF file would not be overwritten until all backup sets on it had expired.

The drawback of this design is potential disk performance degradation due to disk fragmentation. We did discuss this drawback during the design phase, but we thought there would be more benefit in going with the "Single Set BKF" design. Plus, the disk fragmentation issue can be mitigated by using a disk defragmentation tool.

Could you share your observations of the performance degradation due to disk fragmentation in your environment? I would like to understand how bad it is when using the default BKF file size setting.








We use a set of USB hard drives as backup destinations with daily or weekly rotation of drives on and off-site.

The ideal behavior is that every drive has a full backup on it plus a chain of incrementals, or a single differential, matching the full backup on the same drive, so that a complete restore will work from every USB drive, independent of any other drive. This is how Windows Server's native backup tool works, and it is an attractive design.

With Backup Exec 2010 R3, the recommended way to use USB drives was to create a media pool of the drives and back up to the pool (as per ), which had the desirable behavior that whichever drive was connected at the time of the backup would work as a backup target (such as if the site support staff forgot to swap out the drive). However, this method appears to have the problem of not ensuring that any given disk will actually work to restore data; a disk may have only incrementals on it, for example, or a full backup but a broken chain of incrementals.

I am testing out the trial of Backup Exec 2012 and wondering what the recommended guidance is on using a group of USB drives that are rotated offsite and ensuring full system restore functionality on every drive. The lifecycle functionality described above appears useful in this regard, and suggests the device pool should not be used but rather a series of backup jobs be created that specifically target specific USB drives, with the scheduling arranged so that the backup schedule matches the physical drive rotation schedule. This is somewhat complex to schedule and loses the nice flexibility of a pool, however.

So, I'm hoping for guidance as to the way to best use a set of USB drives with offsite rotation with Backup Exec 2012.


You can achieve the USB swapping scenario in BE 2012 using a storage pool.

First, you need to configure each USB disk as disk storage. During this step, only the USB disk currently being configured needs to be plugged in and powered on.

Second, create a storage pool that includes all the USB disk storage devices you just configured. During this step, the devices you just configured do not need to be plugged in or powered on.

Third, set up your full/incremental backup job with the full backup frequency matching your USB storage swapping frequency, and set the disk storage pool you just created as the target device. (E.g., if you want to swap your USB storage weekly, you need to set up a weekly full and daily incrementals.)

Finally, plug in and power on only one of the USB disk storage devices you configured and included in the storage pool at a time. Every week, after the backup job completes, unplug one and plug the next one in.

One additional setting you need to change (see screenshot below) is what we call "virtual write protection".

When a USB disk storage device has been unplugged for 14 days (by default), we set that disk storage to read-only.

You can change the 14 days to 9999 days (the maximum) to effectively disable the "virtual write protection" behavior.


Hello Anker, thanks for your response!

>Third, set up your full/incremental backup job with the full backup frequency matching your USB storage swapping frequency, and set the disk storage pool you just created as the target device. (E.g., if you want to swap your USB storage weekly, you need to set up a weekly full and daily incrementals.)

Could you describe, as an example, the configuration to use for two USB disk devices that are rotated daily? Your comments above suggest that the only allowed configuration would be to do a full backup each day and not use incremental or differential backups at all.

The ideal scenario would be to have full backups happen on the weekend, where the backup can run for many hours and not impact performance, and then either incrementals or differentials each night, where the backup window is smaller. However, this requires Backup Exec to match incrementals or differentials to the disk that holds the matching full backup.

For example:

Starting point: Manual full backup done using Disk 1 and Disk 1 rotated off site; Manual full backup done using Disk 2

Monday: Differential backup to Disk 2 (at night)

Tuesday: Disk 2 rotated offsite; Disk 1 inserted; differential backup done to Disk 1 (but differential to the full backup done on Disk 1)

Wednesday: Disk 1 rotated offsite; Disk 2 inserted; differential backup done to Disk 2 (but differential to the full backup done on Disk 2)

Thursday: Disk 2 rotated offsite; Disk 1 inserted; differential backup done to Disk 1 (but differential to the full backup done on Disk 1)

Friday/Saturday/Sunday: Disk 1 rotated offsite; Disk 2 inserted; new full backup done to Disk 2

Monday: Disk 2 rotated offsite; Disk 1 inserted; differential backup done to Disk 1 (but differential to the full backup done on Disk 1, which was from the previous weekend)


So, basically each disk has a full backup at most two weeks old, plus differential backups matching that full backup. Backup Exec would need to detect changed data based on the date of the full backup job on the _currently connected_ USB device, not the most recent full backup of the system to any device.

Just brainstorming, but maybe a synthetic backup to the USB drive could be used as an alternative, though the incrementals would still need to be incremental relative to the synthetic baseline on the currently connected disk.

Thanks for your input in how best to do daily backups with daily offsite rotation with BE 2012.






thanks for the blog.

We've talked to some local Symantec guys and searched the internet, but no one has a useful solution for swapping USB hard disks.

Is it that difficult to develop a piece of software which can back up the entire server to a set of rotating USB disks?

I'm sure Backup Exec is a good product for enterprise environments, but for our small customers we're going back to the competitor with an A in its name.

The solution with the pool works in theory, but I've noticed it has some disadvantages in practice. Maybe you guys can tell me a better story.

- When the USB disk goes into standby (it takes 2 seconds to come back when I access it via Windows Explorer), BE switches the storage to offline and the backup fails.
- When the USB disk is unplugged, an error message appears even though there is another disk from the pool plugged in. This results in an ugly log overview.






Sorry for the late reply. Your question has triggered lots of discussion among BE engineers. We are considering the design and implementation of a real solution for USB swapping use cases. Until that solution comes out, you could set up a D2D (backup to disk, then copy to a different disk) job to serve your needs.

For example,  

1. Set up your weekly full on Friday night and daily incremental backups each night from Monday to Thursday, and target them at a USB disk storage device (name it disk#1), or create a disk pool that contains disk#1 (name it pool#1) and target them at pool#1.

2. Add a second stage that duplicates your weekly full to a different USB disk storage device (name it disk#2) immediately after the backup, or create a disk pool that contains disk#2 (name it pool#2) and target it at pool#2.

3. Add another second stage that duplicates your daily incrementals at 10:00 AM (during the day) with "All backup sets" to disk#2 or pool#2.

If you set it up this way, you can always bring disk#2 home every night.

Is this setup going to work for you?

Sorry for the late reply, and thanks for your feedback. We understand your pain and are considering the design and implementation of a real solution for USB swapping use cases.

Hi Anker, it is very helpful to have you and other BE engineers looking at the daily USB drive rotation use case -- thank you!

This is a common backup scenario among my clients so I am quite interested in getting a good strategy for this.

I'd like to check I understand the idea above:

1. There are two backup destination media. The first ("device1/pool1" etc.) is always connected to the BackupExec media server 24x7. It could also be a locally attached normal SATA disk for that matter, not a USB disk, as it is always connected. The second backup media is a USB drive ("device2/pool2") that is connected by staff at the start of each day Tuesday-Friday and taken home at the end of these days. On Friday at the end of day, the portable device is left connected from Friday PM through to Monday PM for the full backup cycle.

2. Full plus incremental or differential backups are done as usual to the first storage device, and each job is then duplicated to the second storage device. The full backup happens over the weekend, and the incremental/differential duplication jobs need to happen during the day Tuesday-Friday so they complete by the time the second device needs to be disconnected and taken home. (The primary incremental/differential backup job could happen Monday-Thursday night and then be duplicated the next day.)

One would want to have a set of at least two USB disks in the pool2 set as one of the devices is left onsite over the weekend. The devices would be rotated as of each Friday at end of day, so if the building burns down over the weekend, there is a device offsite that has a full backup from the previous weekend plus weekly backups through Thursday night. In this disaster case, just one day of data, the Friday work, would be lost.

There is a staff disadvantage in that the USB drive needs to be handled twice each day instead of once each day, and the media rotation strategy is a bit more complicated in that the staffperson bringing drives offsite needs to keep track of which drive is offsite for most of the week and which one is just offsite for the night.

The gains are that incremental/differential backups are possible, thus making the weekday backups much shorter in time and having them take much less backup disk space, allowing for a longer history than if only full backups were kept.





I think DLM does not work at all. Are there any logs or events where I can check?

Can I run it manually? I deleted a lot of backup sets manually but still haven't gotten any free space back, and now the manual delete is stuck as well; I cannot delete anything anymore.

Attached png.

Regards, Jyrki


Jyrki -

My company is right in the midst of trying to work through the 'new features' of BE 2012 with regard to deleting backup sets (it doesn't seem to matter how they were created: DLM, manual, or scheduled backups). There is already a hotfix out to prevent some of the deletion errors, and a script to fix your database if you have had errors previously. But even these two things do not yet fix ALL of the issues with deleting backup sets; there are places in BE 2012 that create jobs temporarily and then try to delete them that still have problems, and we are not currently sure that our database is still in good shape after the GUI crashes when deleting those temp jobs...

So, your problems with setting up and using DLM may be the result of other things in BE 2012 that are not working and simply affect DLM.

I'm just waiting for advanced support to get back to me on my open cases and solve these deletion issues so we can move forward...

The second storage device is used to store your copies. You can bring it home every night (Monday to Friday) except the very first Friday night (the night of your very first full backup job and its duplicate job).

The first duplicate job will only duplicate your weekly full backup to the second storage device. In fact, on second thought, you don't really need this duplicate job if you are OK with your Friday backup not being duplicated until the following Monday.

The second duplicate job will duplicate all backup sets, full and incremental (those that have not yet been duplicated at the time the duplicate job runs), to the second storage device.

In this setup, you can always bring your second storage device home every night, including weekends.





I got the manual delete working again, but it is a very slow job. The dedup disk storage was offline; giving it just 1% more disk space over the limit made it active again.

I've installed all the hotfixes via LiveUpdate.

After the manual delete, can I get more disk space with this solution:

Regards, Jyrki

As of May 26, 2012, there are actually 3 BE 2012 hotfixes that are all important (and apparently a 4th that is close to release): 182395, ODBC catalog alerts on upgrade, which prevents proper restores; 182237, issues with VM backups and restores; and 180962, the "Query for Job View failed" fix. 180962 fixes many of the deletion problems, but not all, as we are still having problems when selecting backup sets to delete from storage media (the delete works okay from servers).

Thanks again for your suggestions, Anker!

We'll take this approach until such time as Backup Exec can automatically pair matching full and incremental backups among separate individual USB disks in a device pool.


Anker, I'd like to get back to your original posting in this thread, as our recent work with Tech Support has surfaced information in the audit log that appears to contradict what you stated in your original posting here. We are now running 2012 SP1a.

I don't know where/how (other than here) to go to improve my understanding of how BE 2012 really works with DLM. Can you please explain?

Your statements from above: " DLM process will proactively check for expired backup sets and groom those backup sets and associated backup data to free disk space occupied by expired backup data.

o DLM process will be kicked off every 4 hours or whenever receiving low disk storage event." and " Plus, we don't overwrite/reuse BKF file anymore. DLM will proactively delete any expired backup set(BKFs/Image folder) to reclaim disk space back."

Here is but 1 example from our single B2D folder:

From the ServerA full backup log:

All Media Used



From the Audit Log (all occurrences of 3575, and 1089):

Backup Exec Audit Log 6/25/2012 2:18:39 PM

7954 entries

Date/Time User Name Category Message
--------------------- ---------------------- ---------------- ------------------------------------------------------------------------------------------------------
6/23/2012 6:24:04 AM Device and Media Media : B2D003575 has been overwritten by job MFGENGR1003.heli-cal.local Backup 00138-Full.
6/23/2012 6:40:01 AM ADAMM Device and Media Media : B2D001089 has been quick erased by job MFGENGR1003.heli-cal.local Backup 00138-Full.
6/23/2012 6:40:01 AM Device and Media Media : B2D001089 has been overwritten by job MFGENGR1003.heli-cal.local Backup 00138-Full.
6/23/2012 6:39:31 AM HELI-CAL\administrator Device and Media Media : B2D001089 has been deleted.
6/23/2012 6:39:31 AM HELI-CAL\administrator Device and Media Media : B2D001089 has been moved from media set Internal Disk Images to media set Retired Media.
6/9/2012 5:12:36 AM Device and Media Media : B2D001089 has been overwritten by job AUTOPROGRAM1101.heli-cal.local Backup 00105-Full.


From the system properties of the .bkf files:


Created: Saturday, June 23, 2012, 6:24:00 AM

Modified Saturday, June 23, 2012, 6:39:57 AM


Created: Saturday, June 09, 2012, 5:12:33 AM

Modified: Saturday, June 23, 2012, 6:50:21 AM

Our retention period for the June 09 backup was 2 weeks. There are over 6 TB (terabytes) of FREE, AVAILABLE space on the B2D drive being used.

Can you explain why it looks like the 1089.bkf IS being reused by a subsequent job, when I read from your description that it will NOT be reused? Did I misunderstand?

If the .bkf files ARE being reused, then the level of fragmentation over time will decrease as files are reused (like before 2012), and the file sizes will increase toward the defined maximum. This is a good thing.

The use of the term "overwritten" in the audit log is certainly confusing, as it also means "created" when the file is first created by BE 2012 for its use. BTW, there are ZERO occurrences of the word "create" in the audit log's 7954 entries.

I was working with a tech support person on duplicate jobs failing with error 0xE0009444, and he was completely thrown off by the contents of the audit log, as it looked as if the errors were being generated because the .bkf files were being deleted prior to the duplicate (this case is still open)...

Thank you for any help/clarification that you can give.


There is a slim window between when expired media is put into the scratch pool and when the real deletion operation happens. If a media request for a backup happens in this window, the expired media could be picked up and overwritten.

This is a bug. I will have the responsible engineer submit a defect.



How do you remove the virtual write protection from these external disks once they become write-protected after being disconnected for 14 days?

I've since changed the setting to 9999 days, but I have yet to find a way of restoring my ability to write to the disk short of deleting it.

Google yields no results.

The setting that changes the "virtual write protect" state of a particular disk storage device is called 'Limit Backup Exec to read-only operations'; it is found in the detail properties of the device on the Storage tab and can be changed by the user at any time (see below).

The Backup Exec global setting called 'Limit Backup Exec to read-only operations on disk-based storage if it has been detached for at least' configures the time period after which Backup Exec will automatically set 'Limit Backup Exec to read-only operations' to Yes for a disk storage device.

Yes, thank you! It was verbosely written, and I scanned over it when looking for how to un-write-protect the media.

Hi Anker,

Can you help me with a question? I want to back up virtual machines to disk, but if the job detects that a certain machine has already been backed up without changes, I would like it to overwrite the existing backup.

What is the recommended configuration for the job?


I want to make sure I understand your question correctly. Are you trying to back up a virtual machine to disk storage, or to convert a physical machine, or a backup of a physical machine, to a virtual machine (P2V/B2V)? If you are talking about backing up a virtual machine, every backup job creates a new backup set and there is no overwriting. If you are trying to do P2V/B2V, it will always overwrite with new changes; if there are no changes, nothing will be overwritten. I hope I answered your question.

Is there any setting by which I can avoid deleting expired .bkf files and instead just overwrite the existing .bkf files?


Hi Anker,

Thanks for the blog.

How will DLM treat backup sets from the same server backed up under both its FQDN and its NetBIOS name?

Suppose I run a full backup on Monday using the NetBIOS name of the server, with 2 hours of retention.

On Tuesday, I run a separate backup of the same resources using the FQDN, with 2 hours of retention.

And on Wednesday, I run a backup using the IP address of the server, with 2 hours of retention.

Also, the Monday backup goes to disk storage 1, the Tuesday backup to disk storage 2, and the Wednesday backup to disk storage 3.

Will DLM detect that these backup sets are for the same server, and keep only the Wednesday set, and groom the other two?



The answer is no. DLM always keeps the last copy of the latest recovery point chain per job definition, per resource. The way you have set this up results in a different job definition for each backup.

Hi Anker,

I keep coming back to this very informative post to understand how DLM grooming works.

A question regarding:

>The DLM process will proactively check for expired backup sets and groom those backup sets and associated backup data to free the disk space occupied by expired backup data.

>The DLM process will be kicked off every 4 hours, or whenever a low-disk-storage event is received.

Does this mean that backup sets that have expired will be groomed (that is, deleted from the backup media) within 4 hours of their expiry time, even if there is no low-disk-storage event?

Thus, if backup sets are configured for a 7 day retention period, they will be removed potentially on day 8 (assuming there is another full backup, they are not in a dependent chain, etc.) even if there is still lots of free space on the disk? This seems to match observed experience with Backup Exec 2012 with SP1a.

I was hoping that DLM would not groom until there was a low disk space event on that device, thus allowing as many old backups to be kept as disk space allows, even if they are past their retention date.

Can you confirm how backup sets that are expired are handled by DLM grooming when there is no lack of disk space on the device?


Tim Miller Dyck



Yes, DLM will groom expired backup sets (backup data on disk storage) even if there is still lots of free disk space.   Your suggestion/request will be noted and taken back to the development team for consideration as a future improvement.
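To make the timing concrete, the grooming rule discussed above can be sketched as follows. The 4-hour cycle and the "groom regardless of free space" behaviour come from the post and the reply; the dates and the helper function are purely illustrative, not any actual Backup Exec API:

```python
from datetime import datetime, timedelta

GROOM_INTERVAL = timedelta(hours=4)  # DLM wakes on this cycle (or on a low-disk event)

def is_expired(completed_at, retention, now):
    """A backup set becomes a grooming candidate once its retention window has elapsed."""
    return now >= completed_at + retention

# Illustrative dates: a set that finished Monday 04:00 with 7-day retention is
# eligible from the following Monday 04:00, and the worst case for its actual
# removal is one groom cycle later, regardless of free space on the device.
completed = datetime(2012, 5, 7, 4, 0)
retention = timedelta(days=7)
eligible_from = completed + retention            # 2012-05-14 04:00
latest_removal = eligible_from + GROOM_INTERVAL  # 2012-05-14 08:00
```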


Is there an update to this? This post was from May - and here we are almost 6 months later. I was hoping for this to be resolved/fixed by now. I have quite a few clients with B2D drives that rotate (USB drives)... now that we are on BE 2012, we're getting all sorts of errors - pool or no pool - when the drives are rotated! The logs and error reports look terrible! Please help, guys - get a solution out ASAP! Not in the next version - but something added in this version, PLEASE?

Hi, are you referring to the alert "Media Error. The disk is offline." message that occurs when physically disconnecting a USB drive?

I looked into trying to mark the device offline through Backup Exec scripting before disconnecting the device to avoid the alert error, but this does not appear to be possible. Here is the post:

Thanks for considering that change, Anker. In my view, it is helpful to default to keeping as much backup depth as possible given disk space constraints.

I'm running into an issue with DLM across multiple customers running Backup Exec 2012, using Backup to Disk. Specifically, if the USB disk is not attached to the server when the grooming process occurs, Backup Exec will remove the job history from the server but not the B2D files or IMG folders associated with the backup, leaving orphaned files behind.

For example, one client has five 1 TB USB 3.0 backup disks which rotate on business days, Monday through Friday, and then need to be overwritten the next week. The client has instructed me to configure Backup Exec to overwrite the previous night's backup if they forget to swap the disk; they would rather have a current backup than day-old history. So the retention period on the job is set to only 12 hours so that it can be overwritten the next night. The job runs and completes in eight hours, including verify.

Normally they swap the disk out early in the morning, by 9AM.

The next week, when that backup drive is connected again, the disk is listed as 800/1000 GB full, but the Backup Sets screen shows no backups on the disk. Running an Inventory and Catalog operation reveals the backups that were run the week before; however, the expiration date is set to one year from the time the catalog operation was run. Backup Exec sees this as an imported backup, not as the backup it ran the previous week.

Looking at the information alerts for the previous week, every night includes a line stating that one job history was deleted.

What I surmise is happening is that when Backup Exec grooms expired media, it removes the catalog and job history, including the IMG and B2D files, from the media. However, when the media is offline it removes only the catalog, and there is no logic to find those orphaned B2D files at a later date.

I've confirmed my suspicion by examining the behaviour at other clients with different media usage. For a client with an iSCSI disk backup target that remains online constantly, the B2D files are groomed properly when they expire and the disk stays clean. At another client I found orphaned B2D files dating back to June. At every client I've examined, if the USB disk is offline and unavailable at grooming time, the process removes the catalog and orphans the B2D files and IMG folders, and there does not seem to be any follow-up process to find and remove orphaned B2D files.

To work around this issue I've had to run pre-commands that delete the contents of the \BEData folder on the USB disk; otherwise the backups fail to run properly because the disk is out of space and Backup Exec doesn't know it should be able to overwrite the old data.
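For reference, the pre-command workaround described above could look something like the following sketch. The \BEData folder name comes from the comment; the drive letter and function name are assumptions, and you would want to be very sure of the path before wiring a delete like this into a job:

```python
import shutil
from pathlib import Path

# Hypothetical mount point for the rotated USB disk; adjust to your environment.
BEDATA = Path(r"E:\BEData")

def purge_bedata(root):
    """Delete leftover B2D container files and IMG folders so the next backup
    job starts with a clean, empty target. Returns the number of items removed."""
    removed = 0
    if not root.exists():
        return removed  # disk not attached today; nothing to do
    for entry in root.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)  # orphaned IMG folders
        else:
            entry.unlink()        # orphaned .bkf files
        removed += 1
    return removed
```

Running `purge_bedata(BEDATA)` as a job pre-command would clear the orphaned files, at the cost of also deleting any sets that were still catalogued, so it is very much a blunt workaround rather than a fix.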

We have found this issue too, and we are working on a fix.

Yes - and I believe there should be. Symantec changed the behavior of external swappable media in this new build. I'm asking them to change it back, or at least put some sort of conditional handling into it. Tim - I read your other thread; that advisor is correct that, based on the current setup, it is not possible. But I believe Symantec can (and should) change it. We should be able to tag USB external drives as swappable media for rotation, thus NOT producing that error. It should be allowed both ways - not just the one way. OR - at the very least - there should be a way to suppress that error in the logs when you know it isn't a true error condition.

A couple of suggestions in regards to DLM in Backup Exec.

1. Retention period should be broken out into two values.
Overwrite Period: Specifies the minimum length of time Backup Exec will retain media before allowing it to be overwritten by another backup job.
Retention Period: Specifies the maximum length of time Backup Exec will retain media before it is automatically removed through the Data Lifecycle Management grooming process.

example: Overwrite period of 1 week, Retention period of 2 weeks. New backups would be able to overwrite old backups within a week, and old backups would be removed regardless after 2 weeks.

2. The grooming process should write to the database which backup files and IMG folders can be groomed, and then during the grooming process it should scan all media to find these files/folders. When a file has finally been removed, the database is updated to reflect this. In this way backup files do not get orphaned. Additionally, reporting or alert options should exist to better inform the user which files will be removed, when they will be removed, and which files have not been removed because the media was not found. The administrator should have an interface that allows them to clear files/folders that have been manually purged or that exist on media that has failed.

3. The retention/overwrite period on backups should be calculated from the beginning of the backup job, not from the time it completes. This is actually quite annoying.
As an example, if I have media that I use every Monday with a backup job that starts at 11PM and completes at 4AM (Tuesday), then with a one-week retention period the backup media cannot be overwritten until 4AM the next Tuesday. So at a number of clients with one-week retention periods, backups take several hours longer than they should, because Backup Exec doesn't let the media be overwritten at the time the job is supposed to start.
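A minimal sketch of the reconciliation idea in suggestion 2. All the names here are hypothetical (Backup Exec exposes no such API); it just illustrates comparing the on-disk contents of a media folder against what the catalog still tracks, each time the media comes online:

```python
from pathlib import Path

def find_orphans(media_root, cataloged_names):
    """Return on-disk B2D files and IMG folders that the catalog no longer
    knows about. `cataloged_names` is a set of entry names the database still
    tracks for this media; anything else on the disk is a grooming leftover."""
    orphans = []
    for entry in media_root.iterdir():
        if entry.name not in cataloged_names:
            orphans.append(entry)
    return orphans
```

A follow-up pass could then delete the returned entries and update the database, so files groomed while the media was offline are cleaned up on the next reconnect instead of being orphaned forever.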

Something I'd like to add in terms of behaviour when grooming backup sets on removable drives: usually I have to do as stated here, that is, manually delete the .bkf files.  Today, however, there was one backup set still showing in the list that hadn't been groomed, and when I deleted it, Backup Exec removed *all* of the .bkf files that had expired.

This is a scary concept. I've had people ask this in the past ("Can't I just buy a cheap USB drive and back up to that?") and I've told them they can, if they set it up themselves and sign a waiver saying they are responsible for their own backup failures.

This is just a bad idea.

RDX is pretty much the same thing, but supported much better, and easier for the client.

We are experiencing the exact same problem with one of our customers. On 11/06/2012 Anker Tsaur wrote that Symantec had also found this issue and that a solution is in the making.

Is this patch available?

JaccoM -

As another customer experiencing this, we have been told that there is a hotfix for a number (but not all) of the DLM issues in BE 2012 SP1 that is close (but not ready yet)... I can attest that there are quite a number of issues that can block DLM from operating on files we believe should be pruned, and that the DLM fixes are NOT simply a matter of patching one section of code... Stay tuned - I'm sure hoping for a solution to some of my DLM problems very soon...


We are seeing these same type of problems with our BE2012 SP1 environment.  Previously on BE2010 and earlier versions, we had 2 B2D drives.  One drive would contain all of our Differentials and the other drive contained all of our Full backups.  Differentials run each night during the week and the Full backups run on the weekends.  The drives are iSCSI connected drives on a LeftHand SAN.  They are 5TB each in size.  Using this method we were able to use Media Set retentions to keep as many backups on hand as possible.  It would fill up the drive and overwrite the oldest backups.

The problem we are having with BE2012 is that the Differentials are never getting overwritten.  I have to manually delete the BKF files to free up space.  It seems that since all of the Differential backups are on the same B2D drive, they are all dependent on each other and can never be overwritten based on DLM.  Is this correct?

If so, what is the best practice for using B2D drives?  Should we just use one large B2D drive or should we split up the backups so that each part of a server's backups are on the same B2D drive (This means we would have some servers backing up to one B2D drive and others backing up to the other B2D drive)?

We would like it to behave similar to the way it was before BE2012: the B2D drive fills up and new backups will be written over the oldest backups.

Thanks in advance,


First, an update on our status: <hope this doesn't jinx us>

For the last couple of months we have been running the following configuration of BE2012:

Service Pack 1

Hotfix 189571

Hotfix 180964

Hotfix 194470

Hotfix 199866

Hotfix 200433

Hotfix 201596

And things (including DLM) are working well - we haven't needed to manually prune anything since January 2013 - very good...

Our current procedure is to completely re-create the server's backup if multiple days fail after it ran successfully for days/weeks prior.  This does orphan some files (at least 1 copy of the dependency tree per server that has had its backups recreated), but it seems a small price to pay so far for keeping DLM running smoothly and not having to manually prune sets/files...

Next for Philip -

We have NOT segregated our backups by type to different B2D devices, so I can't speak to specific bugs concerning spreading ONE server's backups by type across multiple devices.

Having said that, your description sounds like there may be other things that are controlling the dependency tree for the differentials that have nothing to do with the multiple B2D devices (try the following if you haven't already) -

First, make sure that the differentials and fulls for each server are in the SAME backup definition for that server - your situation would surely occur if the differentials and the fulls were in separate backup definitions for a single server. Each server should have its own stages for fulls, differentials (and duplicates, if you use them). Dependencies are only maintained within a single server's backup definition - there are NO dependencies from one backup definition to another.

Second, we found the easiest way to determine dependencies is to attempt to delete the backup set (ALWAYS USING THE BE2012 GUI) that you feel should have already been deleted.  When a backup set is chosen for deletion, BE reviews the dependency chain and gives you a window listing all of the files that depend on the chosen backup set.  Studying these lists enabled us to work out when and how individual backup sets were getting orphaned, and then to modify/update the related backups to produce the results we were hoping for.  With the present setup we have almost 150 servers under BE2012 control and need to rebuild/recreate backups for maybe 1 or 2 servers per week.  We back up about 4 TB of data in each 24-hour period, with 1/3 going to disk storage (18 TB) and 2/3 to tape - critical servers directly to tape and everything else duplicated from the B2D device.

Our retention times are: tape (infinite, 6 weeks, or 4 weeks) and B2D (13 days for fulls, 9 days for incrementals).



As I mentioned in another post of mine, I don't see DLM working on my BE installation, which uses a Data Domain 620.

In fact, take a look at the attached file.

I even tried to delete the older incremental sets stored on my DD directly from the BE GUI, and for some reason all (ALL) of the BKF files were deleted from my appliance. That was strange because I didn't do anything from Windows; everything was done from the GUI console, and the only things I deleted were some incremental backups left behind by BE that, in theory, should already have been expired (1 week retention).

Right now I'm dealing with this by sending all my backups to tape, because I have only 1 TB free of 7 TB on the DD. I will have to wait a couple of weeks to see if there's some cleanup on the DD.

Is anyone facing a similar issue?