Backup a folder that is constantly written to

jdhenry5
Level 0

Hello,

I haven't been able to find this issue addressed in my searches and readings, but:

I've recently set up NB 8.2. One of my policies is for an audit folder that needs to be backed up; but since it handles our internal audit logs, it is constantly being written to.

The job for the first "Full" backup of the audit folder has been running for ~2 days, so it looks like it never actually finishes and just keeps backing up all the newly written logs.

Does anyone know of a way to have NB back up the audit folder as it exists at the point the backup begins, based on timestamps or such? Or do I need to pursue other methods of backing up the audit folder, because NB will just keep picking up all the new audit entries?

Thank you,

3 REPLIES

quebek
Moderator
VIP Certified

Hello

Where does this audit folder exist? On a Windows server or Unix? If Windows, are you using a snapshot for this job?

Can you provide more details about the client and the policy type used (MS-Windows vs. Standard)?

jnardello
Moderator
VIP Certified

Umm... no, that's not your issue. =) NetBackup generates the backup file list when the job first triggers, and those are the only files that will get backed up, no matter how much additional data gets written to the location after that point in time.

If the job is still running days later, my guess is that you've got a gazillion audit files in that directory. Either the OS is struggling with the directory traversals (if it's one sub-directory per server times 4,000 servers, each with thousands of logs), or there are 12 million 8 KB files in a single directory and the constant file-open/file-close operations every half second are really slowing things down. You may need to look at alternate backup methods, and I suspect you should move to something like the classic grandfather-father-son schedule setup to keep your backup times in check; also check out FlashBackup to see if it does a better job.

Another possibility, since you said all your servers write to this one location: you may simply be slamming the involved hardware (underlying audit server SAN space, audit server NIC, fiber cards, switch, firewall, etc.) harder than it can handle. Your first step in troubleshooting is figuring out where the bottleneck currently lives. Fix it, then see where the new bottleneck is.
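As a quick first data point on the client itself, you can count the files and time a bare directory traversal to see whether the sheer file count is the problem. A minimal sketch for a Unix client, assuming a hypothetical /audit/logs path (substitute your real folder):

# Count how many files NetBackup would have to process:
find /audit/logs -type f | wc -l

# Time a bare traversal - if this alone takes hours, the directory
# layout is the bottleneck before NetBackup even gets involved:
time find /audit/logs -type f > /dev/null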

Repeat until you're happy with the backup speed or you run out of money. 

Nicolai
Moderator
Partner VIP

Hi @jdhenry5 

To figure out how long NetBackup will take to back up the files in a best-case scenario, you can call bpbkar directly with the -nocont option. This option is good for testing backup speed on the client: bpbkar reads all the files but throws away the data it reads.
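A minimal sketch of such a test on a Unix client, assuming the default install path and a placeholder audit folder path (check the linked article for the exact syntax on your platform and version):

# Read every file under the folder, throw away the data, and time it:
time /usr/openv/netbackup/bin/bpbkar -nocont /path/to/audit/folder > /dev/null 2> /tmp/bpbkar_test.log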

See more at: https://www.veritas.com/support/en_US/article.100006447

The time measured is the "best ever time", with no time spent actually transferring files over the network. So if bpbkar takes 3 hours just to read the files, there is no point optimizing the NetBackup backend; the time and effort have to go into the client.