
How to generate a detailed report of a client backup?

joe-j
Level 4

I have some rather large backups, which I feel should be much smaller, and need to identify what is being backed up.  Is there a way to generate a report of the files that are backed up on the client?

Any help is sure appreciated!

1 ACCEPTED SOLUTION


CRZ
Level 6
Employee Accredited Certified

bplist should be your friend here.

See pages 206-213 to determine which command line switches would be most useful to you:

Symantec NetBackup 7.5 Commands Reference Guide
 http://symantec.com/docs/DOC5182

You'll definitely want "-C" and "-s" at the very least.

Good luck!
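As a concrete sketch of how those switches combine (the client name and dates below are made-up placeholders, and -l/-R are additional switches from the same guide, for a long recursive listing), the command is only echoed here as a dry run:

```shell
# Dry-run sketch only: the client name and dates are hypothetical placeholders.
CLIENT="acme-client"
START="06/04/2012"
END="06/05/2012"
# -C names the client, -s/-e bound the backup window,
# -l asks for a long listing, -R recurses into directories.
CMD="bplist -C $CLIENT -l -R -s $START -e $END /"
echo "$CMD" | tee /tmp/bplist_cmd.txt
```

Running it for real of course requires the NetBackup binaries on the master server's PATH.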



joe-j
Level 4

Thanks very much for the reply and reference.

So I ran the following command "bplist -C clientname -s 06/04/2012" and was returned a message stating "EXIT STATUS 227: no entity was found".

Can you tell me please whether my syntax is incorrect, or whether I am missing another command?

(I am using NetBackup 6.5.6 on a Windows Server and Windows Client)

Thank you!!!

mph999
Level 6
Employee Accredited

Here you are ...

http://www.symantec.com/docs/TECH145113

bplist/bpflist, as you are finding, can be difficult to get working as there are many options.

Here is the easy way ...

Take backup id ...

womble_1337635216

cd to ...

/usr/openv/netbackup/db/images/<client name>/<first 4 digits of ctime and 6 zeros>/

Eg.

/usr/openv/netbackup/db/images/womble/1337000000

Find the files with the ctime in their names:

ls -al *1337635216*

Run cat_convert on the filename ending with .f (if you are at 7.5, all the files will be .f)

/usr/openv/netbackup/bin/cat_convert -dump robot_0_1337635216_FULL.f

 

num     len     plen    dlen    blknum  ii      raw_sz  GB      dev_num path    data
 
0       0       1       50      0       0       0       0       32      /       16877 root root 0 1337615094 1337611733 1337611733
0       0       11      50      1       0       0       0       257     /netbackup/     16877 root root 0 1337590723 1336552185 1336552185
1       0       20      50      2       1       0       0       257     /netbackup/testdata/    16877 root root 0 1337634790 1336991750 1336991750
2       0       25      52      3       1       0       0       257     /netbackup/testdata/file1       33188 root root 231 1335963094 1334651331 1337634790
3       0       25      52      5       1       0       0       257     /netbackup/testdata/file2       33188 root root 231 1335963094 1334651348 1337634790
4       0       25      52      7       1       0       0       257     /netbackup/testdata/file3       33188 root root 231 1336122964 1334651351 1337634790
5       0       25      52      9       1       0       0       257     /netbackup/testdata/file4       33188 root root 231 1336991750 1334651353 1337634790
6       0       25      52      11      1       0       0       257     /netbackup/testdata/file5       33188 root root 231 1336991750 1336991750 1337634790
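For what it's worth, a dump like the one above can be totalled with a little awk. This is only a sketch: the sample input is two rows from the listing above, and the assumption that the path is field 10 and the size (in bytes) is field 14 should be checked against your own cat_convert output.

```shell
# Save two rows of the cat_convert dump above as sample input; with a real
# catalog you would pipe `cat_convert -dump <image>.f` in instead.
cat > /tmp/catdump.txt <<'EOF'
2       0       25      52      3       1       0       0       257     /netbackup/testdata/file1       33188 root root 231 1335963094 1334651331 1337634790
3       0       25      52      5       1       0       0       257     /netbackup/testdata/file2       33188 root root 231 1335963094 1334651348 1337634790
EOF
# Assumed layout: field 10 = path, field 14 = size in bytes.
# Skip directory entries (paths ending in "/"); their size column is 0 anyway.
awk '$10 !~ /\/$/ { total += $14 } END { print total " bytes" }' /tmp/catdump.txt
```

For the two sample rows this prints `462 bytes`.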
 
Martin

Possible
Level 6
Accredited Certified

Joe,

Use bpflist as:

bpflist -d 02/26/2012 -client <client> -backupid <backupid> -rl 10 | grep <filename you want to confirm>

Thanks,

Giri.

joe-j
Level 4

Thanks very much, Chris, Martin, and Giri.

I am testing out these different approaches and look forward to finding one that works best, and to learning something new.

I have been working a little with bplist, with guidance from Chris, and found a set of options that looks like it may serve my need.  I'd appreciate anyone's input on whether my method below is flawed, whether it might work, or whether there is a better way.

bplist -C clientname -l -R -s 06/04/2012 18:00:00 -e 06/05/2012 06:00:00 /C > D:\log_clientname_C.txt

I will continue testing the other options and see how easy they may be.
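As an aside, the listing a command like the one above redirects to disk can be totalled and compared with what the Activity Monitor reports. This is only a sketch: the two sample lines, the file name, and the guess that the size sits in column 4 of the long listing are all assumptions to verify against a real `bplist -l` report.

```shell
# Fake a two-line long listing; substitute your real redirected report file.
cat > /tmp/bplist_sizes.txt <<'EOF'
-rwx------ root root     1048576 Jun 04 19:12 /C/app/data.mdb
-rwx------ root root      524288 Jun 04 20:30 /C/app/config.xml
EOF
# Assumed layout: column 4 holds the size in bytes.
awk '{ total += $4 } END { printf "%.1f MB\n", total / 1048576 }' /tmp/bplist_sizes.txt
```

If the total is far below the backup size the job reports, something other than plain files is inflating the image.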

Much thanks experts!

joe-j
Level 4

I ran the report and got the file details but they do not match to the backup size.

Is there an overhead amount that should be expected?

I am running a cumulative backup, selecting of ALL_LOCAL_DRIVES, and the reported detailed file size totals about 500MB, but the backup size is listed at 4.5GB.

Any thoughts?

Thanks very much!

mph999
Level 6
Employee Accredited

Hmm, interesting ...

Just an idea - how big is a Full backup ?

I am wondering if, for whatever reason, the cumulative is running as a Full?

This doesn't provide you a solution, but at least it would narrow things down ...

I have an idea of one thing this could be, but let's first get the answer to that question.

Thanks,

 

Martin

 

joe-j
Level 4

Thanks Martin.

Our Full backup runs weekly and totals about 6.5GB.  Each cumulative backup this week has totaled about 4.5GB.

I appreciate your assistance!

joe-j
Level 4

Thank you for your help here!

Here's something...  I just ran a cumulative backup and observed the activity monitor.  It looks like it is performing a system state backup before it gets to the files, even though the policy does not call for a system state backup, only the ALL_LOCAL_DRIVES.  Does the ALL_LOCAL_DRIVES setting automatically run a system state backup?  I think that may be what is taking up so much space.  Maybe?

- Joe

mph999
Level 6
Employee Accredited

Yep, pretty sure that ALL_LOCAL_DRIVES will include the system state backup.

This will not be that large, so I doubt it is causing the problem ...

 

The Volume Shadow Copy components include the following:
 
System State writers, which can include:
System Files
COM+ Class Registration Database
SYSVOL
Active Directory
Cluster Quorum
Certificate Services
Registry
Internet Information Services
 
Also, if nothing has been changed in the NBU config, then NBU has not been set to back up anything more than it was last week.

Now, normally (as in most of the time) NBU won't suddenly break and start backing up more than it did previously, unless ...

1.  Something has changed in the config
2.  More files are added to the client, or change
3.  A very slight change has happened that has caused 'a bug' to become evident (though for this case, I have no idea what it could be)

The slightly bad news is that, on average, (3) is the least likely ...

It's getting real late here (UK) - I'll have a look in the DB and see if we have any known issues.
 
Martin 

joe-j
Level 4

Thanks Martin,

I'll double check the items you listed.

How big should the cumulative backup be, based on previous ones that have been normal?

Have the previous (cumulative) backups been of a consistent size?

Is it possible some files have been added, or the area is holding some large files that are constantly changing?

- The cumulative backup size is consistent and always about 4GB larger than the changed files.  The server is just hosting a small application, and the data inside it does not change very much.

When I ran a backup of just the C: and D: drives, as opposed to ALL_LOCAL_DRIVES, the job completed in a minute or two and totaled about 50MB.

I am grateful for your helping me.  Have a good night!

- Joe

mph999
Level 6
Employee Accredited

This is useful for info about the archive bit/ timestamp.

http://www.symantec.com/docs/HOWTO34692

I've checked the DB; there are plenty of previous cases / technotes about cumulative backups running as a full, but I can't find anything (yet) caused by a fault where it backs up 'more than it is meant to' but 'not as much as a full'.

Of course many people use NBU on Windows, and I'm not recalling any cases where this is happening frequently.

As a matter of interest, what happens if you run a differential backup - do we see the same issue?

Martin

mph999
Level 6
Employee Accredited

This is a bit more background info ...

http://www.symantec.com/docs/TECH15682

It's given me an idea (sorry ...)

Cumulative backups do not reset the archive bit (which is how NBU knows what to back up).

The full backup (or diff incr) does ...

So, after the full, the archive bits are reset on all files ...

"The archive bit is used by the Windows file system to identify which files have been created or changed since a backup. Whenever a file is created, or changes in the file system, the archive bit is turned on."

I'm wondering if it's possible that 'something' is setting the archive bit (incorrectly) on files that have not changed, thus causing them to be backed up.

If this is the case, it is likely to be something outside of NBU, and at this exact second I have no idea how to check for this (hence just an idea).

NBU uses the archive bit method by default, which I suspect is what you are using.

How about changing it to use the timestamp?  If the problem disappears, we have some clues.
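As a rough illustration of the difference (not NBU's actual implementation - the real selection logic lives in bpbkar): the timestamp method amounts to picking files modified after the last backup time, which is the same idea as `find -newer`.

```shell
# Simulate a client filesystem with one unchanged and one changed file.
WORK=/tmp/tsdemo
rm -rf "$WORK" && mkdir -p "$WORK"
touch "$WORK/old.txt"
touch "$WORK/marker"           # stands in for the time of the last backup
sleep 1                        # ensure a later mtime on the next file
touch "$WORK/changed.txt"      # "modified" after the backup
# Timestamp-style selection: only files newer than the marker qualify.
find "$WORK" -type f -newer "$WORK/marker" > /tmp/tsdemo_out.txt
cat /tmp/tsdemo_out.txt        # lists only changed.txt
```

An archive-bit method would instead pick up anything whose bit is set, however the bit got there - which is why a stray bit-setter inflates the incremental.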

Martin

mph999
Level 6
Employee Accredited

 

- The cumulative backup size is consistent and always about 4GB larger than the changed files.  The server is just hosting a small application, and the data inside it does not change very much.

OK, it's consistent, which is good - we're not chasing a moving target.

 

When I ran a backup  of just the C: and D: drive, as opposed to ALL_LOCAL_DRIVES, the job complete in a minute or two and totals in size about 50MB.

The system state stuff will be on C:, so this is ruled out ...

 

I wonder if you have any anti-virus running.  For whatever reason, we see various anti-virus software causing all sorts of issues that make no sense; it could be worth disabling it briefly for a test.

Martin

mph999
Level 6
Employee Accredited

Ok, not that then, which is a shame, 'cos at least we'd know what direction we are heading in ...

OK, so ... get ready for a few daft questions .... I'm sure you'll understand that this is a bit tricky when we can't see your system.  Hopefully Marianne will be along shortly, to give some ideas.

So, first question :

How big should the cumulative backup be, based on previous ones that have been normal?

Have the previous (cumulative) backups been of a consistent size?

Is it possible some files have been added, or the area is holding some large files that are constantly changing?

I was going to ask ..

Are the windows client increment backups running using the timestamp or the archive bit ?

I'm heading towards the thought that you'll have to compare the two lists (full and the cumulative) so we know exactly what is backed up, and can then look/ see for any clues ...

Martin

 

Marianne
Level 6
Partner    VIP    Accredited Certified

Another thought:

Does the client perhaps have DFSR in place?
These filesystems get backed up as part of Shadow Copy Components.
If I am not mistaken, SCC is always backed up in full? (not 100% sure...)

To know for sure if DFSR on client, ask system owner and create bpbkar log.

To pin-point where the problem is, enable Multiple Data Streams, but ONLY when Full backup is due.
If done before Incremental, all backups WILL run as full.

mph999
Level 6
Employee Accredited

My suggestion of using the timestamp would appear correct.  Marianne confirmed for me by email that this issue can be caused by Windows (because of a Client Service account with insufficient permissions; a timestamp change will fix it without having to fiddle with the permissions).

Could be worth a go ...

 

Martin

joe-j
Level 4

Hello,

Thanks for your suggestions.

I tried a backup with the timestamp option enabled, but unfortunately it did not change the backup size.  Something other than files is getting backed up as part of the cumulative backup job.

I know we figured that the system state backup was not an issue, but when I look at the activity during a backup it shows the "current file" is /System State/ for about 4GB and then it backs up the files on the drives.

In the bplist report there are several thousand lines of "System State:\_SharedHardlinkData_\" items, which seems to be the culprit here.

Do you know what this _SharedHardlinkData_ is, or how to exclude it?  Do I want to back it up?
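One way to confirm that suspicion is to count and size those entries in the saved report. A sketch only - the sample lines, the file name, and the assumption that the size is in column 4 are made up, so check them against the real report file:

```shell
# Fake a three-line report; substitute the real redirected bplist output.
cat > /tmp/bplist_report.txt <<'EOF'
-rwx------ root root  2097152 Jun 04 19:12 /System State/_SharedHardlinkData_/0001
-rwx------ root root  2097152 Jun 04 19:12 /System State/_SharedHardlinkData_/0002
-rwx------ root root     1024 Jun 04 19:12 /C/app/readme.txt
EOF
# How many entries mention _SharedHardlinkData_?
grep -c '_SharedHardlinkData_' /tmp/bplist_report.txt
# How much space do they claim (column 4 assumed to be the size in bytes)?
grep '_SharedHardlinkData_' /tmp/bplist_report.txt \
  | awk '{ total += $4 } END { printf "%.0f MB\n", total / 1048576 }'
```

If the filtered total roughly matches the missing ~4GB, the System State entries are indeed the culprit.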

I appreciate all your advice!

- Joe

mph999
Level 6
Employee Accredited

Wow, I wasn't expecting that ...

Back to the drawing board ...

It's real late here (again) so I'll have to dash  - try this ...

Empty the recycle bin - I think SharedHardlinkData is connected to it ...

 

Martin