
Incremental backups not appending to full backup

Andrew__Thompso
Level 4

I'm using BE 2010 R3 to back up some remote servers to B2D.

There is about 2TB of data that grows by about 3GB every day.

I have done a one-off full backup of the whole 2TB in one job, and it created a 2TB bkf file.

Then I created a new recurring job to back up any changes each day and append to the original backup set (using incremental with archive bit - reset).
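For context on the mechanism: "archive bit - reset" means Windows flags a file whenever it is created or modified, the incremental job copies the flagged files, and then clears the flag so the next run only sees fresh changes. A minimal Python sketch of that selection logic (an illustration only, not anything Backup Exec actually runs):

```python
# Illustration only: how an "archive bit - reset" incremental decides
# what to copy. NTFS sets FILE_ATTRIBUTE_ARCHIVE on any file that has
# been created or modified; the incremental copies flagged files and
# clears the bit so the next run sees only new changes. Windows-only.
import ctypes
import os
import stat

def incremental_candidates(root):
    """Yield files whose Archive attribute is set, i.e. changed
    since the last backup that reset the bit."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_file_attributes & stat.FILE_ATTRIBUTE_ARCHIVE:
                yield path

def reset_archive_bit(path):
    """Clear the Archive attribute after the file has been backed up."""
    attrs = ctypes.windll.kernel32.GetFileAttributesW(path)
    ctypes.windll.kernel32.SetFileAttributesW(path, attrs & ~stat.FILE_ATTRIBUTE_ARCHIVE)
```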

However, when the first incremental job runs, it creates a new bkf file rather than appending to the already existing 2TB bkf file.

Then, on the second run of the incremental job, it completely overwrites the 2TB bkf and creates a bkf file with just the incremental changes from the night before. So all I end up with is the last two nights' incremental changes, and not the original full backup data any more.

The incremental job is set up to append to media and terminate if none available.

I just want to be able to create one full backup and then append to it every day for the rest of the job's life. I thought the setup above should achieve this?

Does anybody know where it's going wrong and how I can resolve it?

Thanks


Colin_Weaver
Moderator
Employee Accredited Certified

Firstly, the overwrite protection on your full backup is not long enough (or you have media protection set to none), so you need to fix that. (Or upgrade to a newer version of Backup Exec: newer versions block the deletion of disk-based backup sets if they relate to incremental chains where the complete chain has not expired.)

Secondly, best practice (for older BE versions) is to not use appends with backup-to-disk and to always start disk jobs in a way that will create new BKF files (so start as Overwrite, and the job will either delete and recreate an existing file or create a new one). Note: we decided this best practice was so important, because of how a sequence of ongoing appends affects media families and catalog operations, that newer versions of Backup Exec never append to BKF files; it is one set per bkf (and a set is something like a drive letter or a system state, meaning a complete server backup is usually more than one set).

Hi Colin,

Thank you for the speedy reply.

So will this scenario work as per your explanation above?

One off Full Backup Job - overwrite protection until 31/12/2099

Create a new Job for the incremental changes each night but set that job to also Overwrite, not append?

The data is a mixture of Word, PDF and XML files only; nothing else bar these three formats.

Just to confirm: the full backup will only ever be run once, and then it's just the appended data backed up each day until the job becomes redundant somewhere down the line.

Thank you

Colin_Weaver
Moderator
Employee Accredited Certified

Hmm, not a good idea; in fact, very bad.

 

If you try for an incremental-forever strategy and one or more of the backup sets is lost or gets corrupted, any complete disaster recovery will be very limited (and complicated), depending on which parts of the chain were lost, with the worst-case scenario being that it is impossible to recover the system at all. You should be running regular full backups, and have enough storage to at least maintain a GFS strategy against a full backup.

Also, even if you try for an incremental-forever strategy and have all your backup sets, if (as an example) it is 300 days of backups and you then need to recover from a server disaster, your restore to the state at the time of the most recent incremental could take days to achieve, as the backup admin would have to create 300 restore jobs (more or less one at a time).
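To see why the chain length bites at restore time, here is a toy Python model (my illustration, not BE code): the state at day N is only reachable by laying down the full and then replaying every incremental in order, and in BE 2010 each of those sets is a separate restore job.

```python
# Toy model of restoring from an incremental-forever chain. Each set is
# a dict of {file_path: contents}; later sets win for files that
# changed more than once, so the order of application matters.
def restore_point_in_time(full_set, incrementals, day):
    state = dict(full_set)          # the full is a complete snapshot
    for inc in incrementals[:day]:  # in BE 2010: one restore job per set
        state.update(inc)
    return state

# 300 incrementals means 301 restore jobs, run more or less one at a time.
full = {"report.docx": "v0"}
incs = [{"report.docx": "v%d" % i} for i in range(1, 301)]
assert restore_point_in_time(full, incs, 300)["report.docx"] == "v300"
```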

You could possibly look at synthetic backups, but as BE 2010 R3 is no longer supported, if you don't already own the correct licenses for this option then you can't use it. And even if you do have the correct licenses, this still needs enough storage for at least two full backups and a week's worth of incrementals.

Not sure what to do here then...

The full backup took 2 days to complete - there are over 1 million files - and it backs up to a NAS over a gigabit network.

So this kind of rules out doing regular full backups, as the data will just never be up to date?

I have eight jobs in total that I need to run. Three of them are in the above format, with far too much data for consistent fulls; the other five are a full backup every Sunday and then incrementals every other day until the next Sunday, when the full overwrites. Rinse and repeat.

The version of 2010 in use is an NFR licence, and I believe we have licences granted for every agent possible.

Any suggestions Colin?

Thanks

Colin_Weaver
Moderator
Employee Accredited Certified

I guess if you are not interested in DR of the complete server and are only looking at documents, then the issue of losing part of the chain (unless you lose the initial full) is less critical, although the 300-day scenario would still apply if you lost the volume containing the documents (with or without a missing set in the chain).

Traditionally, backup products that do file-level backup operations (such as Backup Exec) do experience performance issues when a very large number of smaller files is involved. The solution is usually to look at image/block-level products (such as Veritas System Recovery) instead (or possibly to virtualize the server containing the data, as Backup Exec does do image-level backup of VMs).

 

Mind you, writing data to a backup device over CIFS is also a performance limiter, due to the protocol being used; backup to locally connected (or SAN-connected) disks could potentially be quicker, although I could not guarantee how much quicker, and the only way to find out is to invest in such storage.

 

I take system images using a different product for DR of the OS partition and server restore to the same or bare-metal servers, writing to external hard drives that are rotated and taken offsite.

Backup Exec takes care of the data only, which is generally on separate RAID arrays in the remote server(s), away from the OS partition, because all of this data has to go to an offsite remote location.

So in terms of BE, I literally just need to be able to back up at file level.

If I had, say, 300 days of incrementals on top of the original full and I needed to restore all the data on the 301st day, wouldn't one restore job by resource restore all the data automatically, working through the full and every incremental in the backup chain?

Colin_Weaver
Moderator
Employee Accredited Certified

One restore job would work, BUT ONLY IF you were on a newer version of Backup Exec (BE 15 and 16 have a point-in-time restore which assembles the whole incremental chain for you, with you making selections against the date of the last incremental and BE dealing with all the required media, etc.).

For BE 2010 you have to restore the full, then the first incremental, then the second, etc., through to the 300th (and you would not be able to let your users access data during this work: if they edit one document, not realising a newer version is in a later set that still has to be restored, it will cause issues).

Got you, Colin. Going to have to think up some other method of getting the three huge jobs offsite now then :\

With regards to the overwrite settings from the original post, is this correct for the media sets for the job?

Want to achieve:

Full backup Sunday -> incrementals Mon-Sat appending to Sunday's full -> full again Sunday (overwriting everything from the previous six days)

Job 1 - Full Backup. Overwrite Period = 6 days. Append Period = 6 days

Job 2 - Incremental Backup using archive bit. Overwrite Period = 5 days. Append Period = 5 days

This will create one backup set with the full backup, then a backup set for each incremental, and then the subsequent Sunday's job will overwrite the lot and start again?

Or am I not understanding your original explanation correctly?

Thanks

Colin_Weaver
Moderator
Employee Accredited Certified

Those media set settings would work, but the full won't overwrite the lot: it will overwrite one bkf file each time it needs a new bkf, and then each subsequent incremental will overwrite one bkf file as required too (as long as 'overwrite recyclable media before scratch' is set).
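To make those mechanics concrete, here is a rough Python model of the behaviour described above (my reading of it, not actual BE internals): a job claims one recyclable bkf at a time, exactly when it needs media, and nothing clears the whole B2D folder up front.

```python
from datetime import date, timedelta

def overwritable(media, today):
    """A bkf is recyclable once its overwrite protection has lapsed."""
    return today >= media["allocated"] + media["protection"]

def claim_media(all_media, today):
    """Return ONE recyclable bkf to delete and recreate, mirroring the
    'overwrite recyclable media before scratch' behaviour."""
    for m in all_media:
        if overwritable(m, today):
            return m
    return None  # nothing recyclable: allocate new media, or terminate

media = [
    {"name": "full_sun1.bkf", "allocated": date(2017, 1, 1), "protection": timedelta(days=6)},
    {"name": "inc_mon1.bkf",  "allocated": date(2017, 1, 2), "protection": timedelta(days=5)},
]
# The following Sunday (Jan 8) the full's 6-day protection has lapsed,
# so the new full reuses that single bkf and leaves the rest alone.
print(claim_media(media, date(2017, 1, 8))["name"])  # full_sun1.bkf
```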

Again, newer BE versions such as 15/16 do reclaims (backup set deletions) without needing jobs to run, so the answers would be different for a newer version of BE.

I worry that if you overwrite the previous full before the next full completes, and you then have a power cut in the middle of the new full's job, you would have zero full sets (and if that power cut causes a disaster for the server or volume owning the user data, potentially zero ways to restore, unless your image-level product has you covered).

I'm stuck with BE 2010 for the time being, so I just need to make it work until the time comes to upgrade.

I understand the point about power cuts. However, the servers are in a datacentre with very high power redundancy, and the chances of a complete power outage are extremely slim; not impossible, granted, but not enough that it's going to keep me up at night worrying.

Thanks Colin, you have been a great help today :)

 

Sorry Colin, one last question...

What media set settings will allow Sunday's job to clear down everything before it starts writing its data?

Once the Sunday job starts overwriting the previous Sunday's full bkf file, the previous incrementals are useless to me anyway without the previous full, no? So there's no point having them there?

jurgen_barbieur
Level 6
Partner    VIP    Accredited

First of all, my opinion is that you are misusing Backup Exec.

The definition of backup is to revert your production environment to the most appropriate working state in case of a failure. If you want some retention on your (user) files, you have to use an archiving product.

If you implement an archiving solution (e.g. Enterprise Vault), you will have fewer (active) files to take in the daily (or weekly) backup, and all other (historical) files will be handled by the archive. This will also decrease your backup time.

 

Colin_Weaver
Moderator
Employee Accredited Certified

The only way to clear down all BKF files after a set time period requires a newer version of Backup Exec, which uses DLM instead of media sets to handle when disk-based sets are deleted.

 

In older versions of Backup Exec, media is only overwritten when a backup job needs overwritable media: this means either at the very start of the job, or when a job fills a first piece of media and needs a second (or third, etc.). When this overwrite request occurs, we only delete and then recreate one piece of overwritable media; we do not erase all the available overwritable media in the storage.

 

What customers sometimes do is keep weekly full backups for at least two weeks (13-day overwrite protection), set daily incremental backups to one week (6-day overwrite protection), and then let Backup Exec overwrite media as it needs to. However, you do need enough disk space for at least two full backups plus one week of incrementals for this strategy.
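To illustrate those windows with example dates (just date arithmetic on my part, nothing BE-specific):

```python
from datetime import date, timedelta

sunday1 = date(2017, 1, 1)              # example dates only
sunday3 = sunday1 + timedelta(days=14)

# 13-day protection: Sunday 1's full becomes recyclable the day before
# Sunday 3's job runs, so two fulls always coexist on disk.
full_recyclable = sunday1 + timedelta(days=13)
assert full_recyclable < sunday3

# 6-day protection: Monday week 1's incremental is reusable by the
# time Monday week 2's job needs media, keeping one week on disk.
inc_monday1 = sunday1 + timedelta(days=1)
inc_recyclable = inc_monday1 + timedelta(days=6)
assert inc_monday1 + timedelta(days=7) >= inc_recyclable
```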

"What customers sometimes do is keep weekly full backups for at least 2 weeks (13 day Overwrite Proetction) set daily incremental backups to 1 week (6 day overwrite protection) and then let Backup Exec overwrite media as it needs to. However you do need enough disk space for at least 2 full backups + 1 week of incrementals for this strategy"

I do have enough space for this scenario, Colin, and on reflection I think it's what I'm going to do; at least then I have a degree of failsafe.

So if I have, for example, in one job:

Sunday 1 - Full Backup 100GB

Week 1 - Incremental 30GB each night (5 x 30 = 150GB)

Sunday 2 - Full Backup 100GB

Week 2 - Incremental 30GB each night (5 x 30 = 150GB)

Sunday 3 - Full Backup 100GB

Sunday 3 will overwrite Sunday 1's bkf file

Week 2 incrementals will overwrite Week 1 incrementals day by day

Meaning for the above scenario I would need 350GB of space?
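Working that through as a quick sanity check (a sketch using just the example figures above):

```python
# Peak disk usage for the rotation described: two fulls coexist
# (Sunday 1's bkf is not overwritten until Sunday 3), plus one
# week of incrementals still under protection.
full_gb = 100
inc_gb = 30
incs_per_week = 5

peak_gb = 2 * full_gb + incs_per_week * inc_gb
print(peak_gb)  # 350
```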

Is this correct Colin?

Thank you

Colin_Weaver
Moderator
Moderator
Employee Accredited Certified

That looks OK. The minor flaw might be if the full backup overwrites an incremental set (which would mean not as much space being reclaimed); I guess you need to try it.

I could negate this potential issue by just splitting the full and incremental jobs onto separate B2D folders, though?

So Job 1 will write its fulls to, say, d:\fulls and its incrementals to e:\incrementals.

That way the full should only ever overwrite the full bkfs?

jurgen_barbieur
Level 6
Partner    VIP    Accredited

Correct.