06-05-2012 12:28 PM
I have some rather large backups, which I feel should be much smaller, and need to identify what is being backed up. Is there a way to generate a report of the files that are backed up on the client?
Any help is sure appreciated!
Solved! Go to Solution.
06-05-2012 12:50 PM
bplist should be your friend here.
See pages 206-213 to determine which command line switches would be most useful to you:
You'll definitely want "-C" and "-s" at the very least.
Good luck!
06-05-2012 02:35 PM
Thanks very much for the reply and reference.
So I ran the following command "bplist -C clientname -s 06/04/2012" and was returned a message stating "EXIT STATUS 227: no entity was found".
Can you please tell me if my syntax is incorrect, or if I am missing another command?
(I am using NetBackup 6.5.6 on a Windows Server and Windows Client)
Thank you!!!
06-05-2012 03:06 PM
Here you are ...
http://www.symantec.com/docs/TECH145113
bplist/bpflist, as you are finding, are difficult to get to work, as there are many options.
Here is the easy way ...
Take backup id ...
womble_1337635216
cd to ...
/usr/openv/netbackup/db/images/<client name>/<first 4 digits of ctime and 6 zeros>/
Eg.
/usr/openv/netbackup/db/images/womble/1337000000
Find the files with the ctime in their names
ls -al *1337635216*
Run cat_convert on the filename ending with .f (if you are at 7.5, all the files will be .f)
/usr/openv/netbackup/bin/cat_convert -dump robot_0_1337635216_FULL.f
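The directory arithmetic above can be sketched as a few lines of shell; the backup ID and image-database path are the example values from this post, and the script only derives the path (it does not touch NetBackup):

```shell
# Derive the image-database directory bucket for a backup ID, per the steps
# above: client name is everything before the last underscore, ctime is
# everything after it, and the bucket is the first 4 digits of the ctime
# followed by six zeros.
backupid="womble_1337635216"
client="${backupid%_*}"
ctime="${backupid##*_}"
bucket="$(echo "$ctime" | cut -c1-4)000000"
imagedir="/usr/openv/netbackup/db/images/${client}/${bucket}"
echo "$imagedir"
```

Running it prints /usr/openv/netbackup/db/images/womble/1337000000, matching the example directory above; from there you would `ls -al *1337635216*` and run cat_convert as described.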
06-05-2012 03:56 PM
Joe,
Use bpflist as :
bpflist -d 02/26/2012 -client <> -backupid <> -rl 10 | grep <filename you want to confirm>
Thanks,
Giri.
06-05-2012 04:25 PM
Thanks very much, Chris, Martin, and Giri.
I am testing out these different approaches and look forward to finding one that works best, and to learning something new.
I have been working a little with bplist, with guidance from Chris, and found a set of commands that looks like it may serve my need. I would appreciate anyone's input as to whether my method below is flawed, whether it might work, or whether there is a better way.
bplist -C clientname -l -R -s 06/04/2012 18:00:00 -e 06/05/2012 06:00:00 /C > D:\log_clientname_C.txt
I will continue testing the other options and see how easy they may be.
Much thanks experts!
06-06-2012 11:14 AM
I ran the report and got the file details but they do not match to the backup size.
Is there an overhead amount that should be expected?
I am running a cumulative backup, selecting of ALL_LOCAL_DRIVES, and the reported detailed file size totals about 500MB, but the backup size is listed at 4.5GB.
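For reference, the ~500MB total came from summing the size column of the bplist -l report. A rough sketch of that, assuming the long listing is ls -l-style with the size in the fourth column (the three sample lines here are made-up placeholders, not real report lines):

```shell
# Sum the size column of ls -l-style listing lines, such as those produced
# by `bplist -l`. The heredoc lines below are fabricated sample data.
total=$(awk '{ sum += $4 } END { print sum }' <<'EOF'
-rwx------ root root 1024 Jun 04 18:05 /C/app/config.ini
-rwx------ root root 2048 Jun 04 18:05 /C/app/data.db
-rwx------ root root 4096 Jun 05 01:12 /C/app/logs/app.log
EOF
)
echo "total bytes: $total"
```

On a real report you would feed the saved log file to awk instead of the heredoc.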
Any thoughts?
Thanks very much!
06-06-2012 11:25 AM
Hmm, interesting ...
Just an idea - how big is a Full backup ?
I am wondering if, for whatever reason, the cumulative is running as a Full?
This doesn't provide you a solution, but at least it would narrow things down ...
I have an idea of one thing this could be, but let's first get the answer to that question.
Thanks,
Martin
06-06-2012 11:33 AM
Thanks Martin.
Our Full backup runs weekly and totals about 6.5GB. Each cumulative backup this week has totaled about 4.5GB.
I appreciate your assistance!
06-06-2012 02:06 PM
Thank you for your help here!
Here's something... I just ran a cumulative backup and observed the activity monitor. It looks like it is performing a system state backup before it gets to the files, even though the policy does not call for a system state backup, only the ALL_LOCAL_DRIVES. Does the ALL_LOCAL_DRIVES setting automatically run a system state backup? I think that may be what is taking up so much space. Maybe?
- Joe
06-06-2012 02:44 PM
Yep, pretty sure that ALL_LOCAL_DRIVES will include the system state backup.
This will not be that large, so I doubt it is causing the problem ...
06-06-2012 02:59 PM
Thanks Martin,
I'll double check the items you listed.
How big should the cumulative backup be, based on previous ones that have been normal?
Have the previous (cumulative) backups been of a consistent size?
Is it possible some files have been added, or the area is holding some large files that are constantly changing?
- The cumulative backup size is consistent and always about 4GB larger than the changed files. The server is just hosting a small application, and the data inside it does not change very much.
When I ran a backup of just the C: and D: drives, as opposed to ALL_LOCAL_DRIVES, the job completed in a minute or two and totaled about 50MB in size.
I am grateful for your helping me. Have a good night!
- Joe
06-06-2012 03:05 PM
This is useful for info about the archive bit/ timestamp.
http://www.symantec.com/docs/HOWTO34692
I've checked the DB; there are plenty of previous cases/technotes about cumulative backups running as a full, but I can't find anything (yet) caused by a fault where it backed up 'more than it is meant to' but 'not as much as a full'.
Of course many people use NBU on windows, and I'm not recalling any cases where this is happening frequently.
As a matter of interest, what happens if you run a differential backup - do we see the same issue?
Martin
06-06-2012 03:14 PM
This is a bit more background info ...
http://www.symantec.com/docs/TECH15682
It's given me an idea (sorry ...)
cumulative backups do not reset the archive bit (which is how NBU knows what to back up).
The full backup (or diff incr) does. ...
So, after the full, the archive bits are reset on all files ..
"The archive bit is used by the Windows file system to identify which files have been created or changed since a backup. Whenever a file is created, or changes in the file system, the archive bit is turned on."
I am wondering if it's possible that 'something' is setting the archive bit (incorrectly) on files that have not changed, thus causing them to be backed up.
If this is the case, it is likely to be something outside of NBU, and at this exact second I have no idea how to check for this (hence just an idea).
NBU uses the archive bit method by default, which I suspect is what you are using.
How about changing it to use the timestamp? If the problem disappears, we have some clues.
Martin
06-06-2012 03:21 PM
- The cumulative backup size is consistent and always about 4GB larger than the changed files. The server is just hosting a small application, and the data inside it does not change very much.
OK, it's consistent, which is good - we're not chasing a moving target.
When I ran a backup of just the C: and D: drives, as opposed to ALL_LOCAL_DRIVES, the job completed in a minute or two and totaled about 50MB in size.
The system state stuff will be on C:, so this is ruled out ...
I wonder if you have any anti-virus running. For whatever reason, we see various anti-virus software causing all sorts of issues that make no sense; it could be worth disabling this briefly for a test.
Martin
06-07-2012 12:00 AM
Ok, not that then, which is a shame, 'cos at least we'd know what direction we are heading in ...
OK, so ... get ready for a few daft questions .... I'm sure you'll understand that this is a bit tricky when we can't see your system. Hopefully Marianne will be along shortly, to give some ideas.
So, first question :
How big should the cumulative backup be, based on previous ones that have been normal?
Have the previous (cumulative) backups been of a consistent size?
Is it possible some files have been added, or the area is holding some large files that are constantly changing?
I was going to ask ..
Are the windows client increment backups running using the timestamp or the archive bit ?
I'm heading towards the thought that you'll have to compare the two lists (full and the cumulative) so we know exactly what is backed up, and can then look/ see for any clues ...
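Comparing the two lists can be sketched with comm over two sorted file lists; the paths below are invented placeholders standing in for real bplist output, and the script writes two small scratch files in the current directory:

```shell
# Show paths present in the cumulative list but absent from the full list.
# In practice both lists would come from saved `bplist` reports; here they
# are inline sample data. comm requires sorted input, hence the sort.
sort > full.txt <<'EOF'
/C/app/config.ini
/C/app/data.db
EOF
sort > cumulative.txt <<'EOF'
/C/app/config.ini
/System State/_SharedHardlinkData_/0001
EOF
# comm -13 suppresses lines unique to file 1 and lines common to both,
# leaving only lines unique to the second (cumulative) list.
extra=$(comm -13 full.txt cumulative.txt)
echo "$extra"
```

Anything printed is in the cumulative backup but not the full, which is exactly the list of clues to look through.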
Martin
06-07-2012 12:47 AM
Another thought:
Does the client perhaps have DFSR in place?
These filesystems get backed up as part of Shadow Copy Components.
If I am not mistaken, SCC is always backed up in full? (not 100% sure...)
To know for sure if DFSR on client, ask system owner and create bpbkar log.
To pin-point where the problem is, enable Multiple Data Streams, but ONLY when Full backup is due.
If done before Incremental, all backups WILL run as full.
06-07-2012 07:07 AM
My suggestion of using the timestamp would appear correct. Marianne confirmed for me by email that this issue can be caused by Windows (because of a Client Service account with insufficient permissions).
A timestamp change will fix it without having to fiddle with permissions.
Could be worth a go ...
Martin
06-07-2012 03:42 PM
Hello,
Thanks for your suggestions.
I tried a backup with the timestamp option enabled, but unfortunately it did not change the backup size. Something other than files is getting backed up as part of the cumulative backup job.
I know we figured that the system state backup was not an issue, but when I look at the activity during a backup it shows the "current file" is /System State/ for about 4GB and then it backs up the files on the drives.
In the bplist report there are several thousand lines of "System State:\_SharedHardlinkData_\" items, which seems to be the culprit here.
Do you know what this _SharedHardlinkData_ is, or how to exclude it? Do I want to back it up?
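For what it's worth, a quick count over the saved bplist report shows how many of those entries there are; the sample lines below stand in for the real report file from the earlier command:

```shell
# Count "System State" hardlink entries in a saved bplist report.
# The heredoc lines are placeholders for the real saved report
# (D:\log_clientname_C.txt in the earlier post); the quoted 'EOF'
# delimiter keeps the backslashes literal.
count=$(grep -c '_SharedHardlinkData_' <<'EOF'
System State:\_SharedHardlinkData_\0001
System State:\_SharedHardlinkData_\0002
/C/app/config.ini
EOF
)
echo "matching lines: $count"
```

Against the real report you would point grep at the log file instead of the heredoc.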
I appreciate all your advice!
- Joe
06-07-2012 04:03 PM
Wow, I wasn't expecting that ...
Back to the drawing board ...
It's real late here (again) so I'll have to dash - try this ...
Empty the recycle bin - I think SharedHardlinkData is connected to it ...
Martin