Linux client with millions of files (NFS mount points)
We have a Linux client (a physical server) with millions of files on its NFS mount points, running NetBackup 7.5. We are trying to back up these NFS mount points with the Follow NFS option enabled in the policy attributes. We have already increased the client read timeout to 1200 seconds, but the backup runs very slowly and eventually fails with a file read error after some time. The total size across all mount points is around 6 TB, with millions of files in each mount point.
Is there a better way to back this up?
You can raise the timeout further and try again, and enable the Accelerator option so that subsequent runs go much faster once you manage to get that first full backup through.
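For reference, CLIENT_READ_TIMEOUT is set on the media server(s) running the backup, not on the client itself. A minimal sketch of both ways to set it, assuming a default install path; the timeout value and the host name mediaserver01 are placeholders to adapt to your environment:

    # Append to bp.conf on each media server handling this client
    # (3600 seconds is only an illustrative value)
    echo "CLIENT_READ_TIMEOUT = 3600" >> /usr/openv/netbackup/bp.conf

    # Or push the setting remotely with bpsetconfig, which reads
    # entries from stdin until Ctrl-D; "mediaserver01" is hypothetical
    /usr/openv/netbackup/bin/admincmd/bpsetconfig -h mediaserver01
    CLIENT_READ_TIMEOUT = 3600

Accelerator itself is turned on with the "Use Accelerator" checkbox in the policy attributes and needs a supported disk storage unit (e.g. MSDP) behind it.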
You could also split the backup into multiple data streams based on its subdirectories so that they run in parallel, reducing the overall backup window; again, you can combine Accelerator with this method. An example selections list is sketched below.
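To split the streams, enable "Allow multiple data streams" in the policy attributes and use the NEW_STREAM directive in the Backup Selections list. A sketch with made-up mount point and directory names:

    NEW_STREAM
    /nfs_mount1/dir_a
    /nfs_mount1/dir_b
    NEW_STREAM
    /nfs_mount1/dir_c
    NEW_STREAM
    /nfs_mount2

Each NEW_STREAM block becomes its own job, so the streams can run concurrently (subject to "Limit jobs per policy" and the maximum concurrent jobs on your storage unit).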
Hope this helps