01-03-2014 07:19 AM
01-03-2014 07:41 AM
The bpsynth log can give us more details.
Please review the link below and try the backup:
http://www.symantec.com/business/support/index?page=content&id=TECH67048
Execute a traditional FULL backup and then an incremental backup before attempting to synthesize another FULL backup.
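The suggested sequence can be kicked off manually from the master server. This is only a sketch; the policy, schedule, and client names below are placeholders for your own.

```
# Hedged sketch: run the three backups in order with bpbackup -i
# (manual/immediate backup). -w waits for each job to finish.
# MY_POLICY, full_sched, incr_sched, synth_full_sched and myclient
# are placeholders.

# 1. Traditional full
/usr/openv/netbackup/bin/bpbackup -i -p MY_POLICY -s full_sched -h myclient -w

# 2. Incremental
/usr/openv/netbackup/bin/bpbackup -i -p MY_POLICY -s incr_sched -h myclient -w

# 3. Then attempt the synthetic full from its own schedule
/usr/openv/netbackup/bin/bpbackup -i -p MY_POLICY -s synth_full_sched -h myclient -w
```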
01-03-2014 08:09 AM
I know I can run a full backup, then an incremental, and then the synth (hopefully), but that's just a workaround.
I want to know the root cause here. Having to regularly run fulls again defeats the purpose of synthetics.
Activity Log shows no useful info either...
01-03-2014 08:24 AM
This error sequence is noted when a traditional FULL backup no longer exists in the catalog and a synthetic backup job is initiated. The FULL backup has most likely expired.
Check the retention period for the full backup.
The cause might be that no traditional full backup was available when the synthetic backup was triggered.
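One way to confirm whether a usable traditional full still exists in the catalog is to list the client's images with their expiration dates. A hedged sketch; the client name and date range are placeholders:

```
# List images for the client over a recent window. The -U output
# includes the schedule type and the expiration date for each image.
/usr/openv/netbackup/bin/admincmd/bpimagelist -client myclient \
    -d 12/01/2013 -e 01/03/2014 -U
```

If the most recent traditional full is missing from the listing, or shows as expired, the synthetic has no base image to build from.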
01-03-2014 08:49 AM
01-03-2014 09:07 AM
Please review this tech note: http://www.symantec.com/docs/TECH68187 and verify if you follow the practice outlined.
One way of getting around the classic full before the synthetic full takes over is to configure a (traditional) full schedule with a frequency of 3000 weeks. That schedule will only run the first time a client is added to the policy. The incremental and weekly synthetic schedules of course also need to be located in the same policy.
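If you prefer the command line to the Admin Console, a schedule like that could be sketched roughly as below. The policy and schedule names are placeholders, and bpplsched takes the frequency in seconds (3000 weeks = 3000 * 604800 = 1814400000):

```
# Hedged sketch: add a "run once, effectively never again" traditional
# full schedule to the policy. MY_POLICY and first_full are placeholders.
/usr/openv/netbackup/bin/admincmd/bpplsched MY_POLICY -add first_full \
    -st FULL -freq 1814400000
```

The same thing can of course be done in the GUI by setting the schedule frequency to 3000 weeks.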
Best Regards
Nicolai
PS: Please post the policy configuration and attach as a file.
01-03-2014 09:09 AM
Did you check the bpsynth log? It may give us some clue.
01-03-2014 09:46 AM
Sri Vani,
What do you mean by "verify the bpsynth log"? Here is the error section from the bpsynth log:
01-03-2014 10:03 AM
Oh, Nicolai, the full backup has run before, and its schedule is still present. Both the synth and full have no days set; I've been running them manually, as this setup is semi-new and I wanted to make sure it all went smoothly before leaving it to the scheduler.
01-03-2014 03:19 PM
There is usually some clue somewhere for issues like this; finding it, however ...
Ideas ...
cat_export for the client will 'recreate' the header files for this client's backups; these will be located under /usr/openv/netbackup/db.export
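A rough sketch of that export step, for a single client. I'm going from memory on the -client option here, so verify it against the command reference for your NetBackup version before running:

```
# Hedged sketch: export the catalog header files for one client.
# "myclient" is a placeholder; -client is assumed, not verified.
/usr/openv/netbackup/bin/cat_export -client myclient

# The exported headers should land under /usr/openv/netbackup/db.export
ls /usr/openv/netbackup/db.export
```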
If we consider :
client_full
client_inc
client_full synth
client_inc
client_inc
The last client inc backup header file will show the previous inc backup, and that header file will show the previous backup etc ... so you can follow back to the last full or synth full.
I would then make sure that the backups all exist, in particular, I would check that the .f files for each of the backups are there, and readable (in /usr/openv/netbackup/db/images/client/<ctime> )
This can be done with:
cat_convert -dump <name of .f file>
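To check every .f file for the client in one pass, a small loop works. This is a hedged sketch; the client directory name is a placeholder, and it only checks that cat_convert can read each file:

```
# Hedged sketch: walk every image directory for the client and flag
# any .f file that cat_convert cannot dump. "myclient" is a placeholder.
cd /usr/openv/netbackup/db/images/myclient
for f in */*.f; do
    /usr/openv/netbackup/bin/cat_convert -dump "$f" > /dev/null \
        || echo "problem reading $f"
done
```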
This may or may not all check out. Clearly if it does, the issue is elsewhere.
Could be worth running bpdbm -consistency 2
Does this show any issues with any DB entries for this client ?
The other log that might show something would be bpdbm (needs to be at VERBOSE = 5 ) - hopefully this will show a reference to an image.
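One common gotcha worth checking: the legacy bpdbm log is only written if its log directory exists on the master. A hedged sketch of setting that up (paths are the standard UNIX defaults):

```
# bpdbm only writes a legacy log if this directory exists:
mkdir -p /usr/openv/netbackup/logs/bpdbm

# Set the logging level in bp.conf on the master, if not already set:
grep -q "^VERBOSE" /usr/openv/netbackup/bp.conf \
    || echo "VERBOSE = 5" >> /usr/openv/netbackup/bp.conf

# Rerun the synthetic, then inspect the new log file under
# /usr/openv/netbackup/logs/bpdbm/
```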
Regards,
Martin
01-04-2014 01:34 AM
I was going through that kinda thing last week. I've run the image cleanup and -consistency (without the 2), which didn't really show anything.
I already have VERBOSE = 5 set, and there's no bpdbm log.
I'll run that bpdbm -consistency 2 now and see what that brings up.
Thanks for your time, much appreciated.
01-05-2014 02:42 PM
My two cents...
I have seen this happen under the following scenarios:
- When paths have been added to the policy when allow multiple datastreams is enabled.
**** Doesn't look like that's the case as it isn't enabled per your policy config.
- When corruption occurs on the client due to disk related issues.
- When the network is not 100% stable - dbm may get corrupted information.
- When the STREAMS file is corrupt.
The only way to recover from this event is to rerun the 'regular' full.
The bpdbm log at VERBOSE 5 will tell you which image is problematic. The bpsynth log is useful too (bpdbm gets huge; bpsynth lets us narrow down the bpdbm log and quickly find the applicable section(s)).
Something seems not quite right with the information you pasted in about the last backups.
Taking out the columns that aren't needed - the output looks like:
Backed Up         Files     KB            Sched Type
01/02/2014 22:09  144016    195312342     Differential
01/01/2014 22:35  110664    39154387      Differential
12/31/2013 22:34  149299    284390969     Differential
12/30/2013 22:41  619389    2075683546    Differential
12/27/2013 10:49  1838465   11849412426   Full Backup
Looking at the above, there seems to be a much larger amount of data backed up for the diff on 12/30 than would be expected. *** Even considering that most of the full's data is likely from before the 27th, if it was a synth full. Did something happen between the full and the diff on the 30th? Filesystem issues? Network outages?
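A quick way to spot that outlier in a listing like the one above is to filter on the KB column. A minimal sketch using the pasted rows, with an arbitrary threshold of 1e9 KB (roughly 1 TB):

```shell
# Flag differentials whose KB column ($4) looks abnormally large.
# The here-doc is the listing pasted above; the 1e9 threshold is arbitrary.
flagged=$(awk '$NF == "Differential" && $4 + 0 > 1e9 { print $1, $4 }' <<'EOF'
01/02/2014 22:09 144016 195312342 Differential
01/01/2014 22:35 110664 39154387 Differential
12/31/2013 22:34 149299 284390969 Differential
12/30/2013 22:41 619389 2075683546 Differential
12/27/2013 10:49 1838465 11849412426 Full Backup
EOF
)
echo "$flagged"   # only the 12/30 differential crosses the threshold
```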
The short story is that I have more questions than answers...
01-13-2014 06:56 AM
Apologies for the delay; it never rains but it pours :) Had other NetBackup stuff to work around.
Anyway, I haven't had a chance to run the bpdbm -consistency 2 yet... been busy, and the server is too. Will try and get a chance to do this sometime this week.
Deb,
Firstly thanks for your time :) On to your questions...
You're right, no multistreaming is on. We are adding paths to this policy a fair amount, and there are a lot of includes *sigh*.
It's highly unlikely there is disk corruption; this data is being actively worked on, and when it isn't, it's archived away.
I've attached my bpsynth log and below is the entirety of my bpdbm log in case it helps. I purged the logs before starting the synth so it should all be just for this one attempt. Thanks for your time folks!
01-13-2014 06:57 AM
whoops double post