Decrease backup time for a large folder
Hi guys,
Here again with a problem I need help with.
First I want to thank everyone who helped me decommission my phantom media server.
Here is my new problem.
We have NetBackup Enterprise 7.0 and a media server connected to a VTL over fibre optic, which is supposed to guarantee the best backup times.
We also have a policy that backs up all Windows profiles from the whole company. It is a big folder of more than 2 TB that I have to back up at once.
NetBackup handles it very well, but it takes too long: a day or more.
That is not acceptable for us right now, and I have to find a solution.
Does anyone have an idea how to decrease this backup time?
A large volume backed up via NDMP was taking just too long. We could not break it down, as there were too many sub-folders; as you say, NDMP does not allow wildcards, and there was no possibility of redefining the structure of the volume to allow a more "granular" backup. Nor was there anything we could do resource-wise to improve performance.
We had to go the same route you are already considering and Marianne is suggesting: back up via an alternate route (in our case an NFS mount on our Solaris master) so that we could use multiple data streams AND wildcards. If you go this route, our backup selections looked like this:
NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[a-dA-D]*
NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[e-lE-L]*
NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[m-rM-R]*
NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[s-zS-Z]*

Taken individually, each stream appears to take longer, but because all the streams are written at the same time, the overall backup time was reduced compared with the single-stream NDMP backup.
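One thing worth checking before relying on alphabet-range globs like these: top-level entries that start with a digit or punctuation (not unusual among profile folders) fall outside the [a-z]/[A-Z] ranges and would silently never be backed up by any of the four streams. A quick shell sketch of that sanity check — the path and sample folder names here are made up for the demo, not from the original selections:

```shell
#!/bin/bash
# Sketch: check that the four character-class globs from the backup
# selections together match every top-level entry in a folder.
set -e
demo=$(mktemp -d)
cd "$demo"
# Hypothetical sample profile folders; 0_templates starts with a digit
touch Alpha echo_dir Mango sierra zulu 0_templates

shopt -s nullglob
covered=$(printf '%s\n' [a-dA-D]* [e-lE-L]* [m-rM-R]* [s-zS-Z]* | sort)
all=$(printf '%s\n' * | sort)

if [ "$covered" = "$all" ]; then
    echo "all entries covered by the four streams"
else
    # comm -13 prints lines present in "all" but missing from "covered"
    echo "WARNING: these entries are missed by the four streams:"
    comm -13 <(printf '%s\n' "$covered") <(printf '%s\n' "$all")
fi
```

Note that whether a range like [a-d] also matches uppercase letters depends on the shell's locale collation, and NetBackup's own wildcard matching may behave differently again, so treat this only as a rough check of naming coverage, not as a simulation of what the policy will select.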

