Decrease backup time for a large folder

Raco
Level 4

Hi guys,

Here I am again with a problem I need help with.

First I want to thank everyone who helped me decommission my phantom media server.

Here is my new problem.

We have NetBackup Enterprise 7.0 with a media server connected to a VTL over fibre optic, which is supposed to guarantee the best backup times.

We also have a policy to back up all Windows profiles for the whole company. It is a big folder, more than 2 TB, that I have to back up at once.

NetBackup manages it very well, but it takes too long: a day or more.

Right now it is not acceptable for us and I have to find a solution.

Does anyone have an idea to decrease this backup time?

12 REPLIES

Marianne
Level 6
Partner    VIP    Accredited Certified

Break the Backup Selection up into more streams, e.g.:

NEW_STREAM
Z:\folder1
Z:\folder2
NEW_STREAM
Z:\folder3
Z:\folder4
NEW_STREAM
Z:\folder5
Z:\folder6
NEW_STREAM
Z:\folder7
Z:\folder8

Remember to select 'Allow multiple data streams' in the policy attributes, and increase Max Jobs per Client in the Master's global attributes (a minimum of 4 to allow the simultaneous backups in the example above).

If MPX is not enabled in the schedule and STU config, the above will need one drive per stream. Depending on throughput, you might want to enable MPX as well.
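
If you prefer the command line, the global attribute can also be raised with bpconfig on the master. A rough sketch, assuming a default UNIX install path (the 'Allow multiple data streams' setting itself is the policy checkbox mentioned above):

# Raise the global 'Maximum jobs per client' to 4 on the master server
/usr/openv/netbackup/bin/admincmd/bpconfig -mj 4

# List the global attributes to confirm the new value
/usr/openv/netbackup/bin/admincmd/bpconfig -L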

Raco
Level 4

Thanks Marianne, but that does not help me.

The number and names of this folder's subfolders change dynamically.

Every day new subfolders are created and deleted, so I cannot use specific folder names.

I have been thinking of using wildcards, but as far as I know wildcards work on Windows clients only, and we are using an NDMP client because all this data is on the SAN.

If no other solution is possible I'll have to think about moving to a Windows client, but I hope to find another solution.

Any other ideas, please!

Marianne
Level 6
Partner    VIP    Accredited Certified

"using NDMP client because all this data are in SAN" ??? I don't understand? NDMP is NAS, not SAN.

Andy_Welburn
Level 6
ACCEPTED SOLUTION

A large volume backed up via NDMP was taking just too long. We could not break it down as there were too many sub-folders &, as you say, NDMP does not allow wildcards, and there was no possibility of redefining the structure of the volume to facilitate a more "granular" backup. Nor was there anything we could achieve resource-wise to improve performance.

We had to go the same route you are already thinking of & Marianne is suggesting: back up via an alternate route (in our case an NFS mount on our Solaris master) so that we could implement multiple data streams AND wildcards. If you go this route, our backup selections were as follows:

NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[a-dA-D]*
NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[e-lE-L]*
NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[m-rM-R]*
NEW_STREAM
/nfs_mount_to_NAS_volume/path_to_some_folder/[s-zS-Z]*

Taken individually, each stream would appear to take longer, but as all the streams are written at the same time, the actual time taken to back up was reduced compared with the single-stream NDMP backup.
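
For completeness, the mount itself on our Solaris master looked roughly like this (the filer name and export path here are placeholders, not our real ones):

# Create the mount point and NFS-mount the filer volume (Solaris mount syntax)
mkdir -p /nfs_mount_to_NAS_volume
mount -F nfs nas_filer:/vol/NAS_volume /nfs_mount_to_NAS_volume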

Raco
Level 4

I'm sorry, you are right.

We have this data on volumes in the NAS.

Searching, I found that I can build an exclude list by using the SET keyword.

Now I'm studying this topic, but any of your suggestions would be great.

Raco
Level 4

Thanks Andy,

I will study your suggestion and comment later.

Thanks guys.

Raco
Level 4

Well guys,

This is what I did.

I included a path in the NDMP policy:

NEW_STREAM
/vol/VOL_M1_PERFILES_W2K3/[a-dA-D]*

and ran a manual backup as a test.

I got a status code 99 error:

14/02/2011 17:37:57 - begin writing
14/02/2011 17:37:59 - Error ndmpagent(pid=19944) ndmp_data_start_backup failed, status = 9 (NDMP_ILLEGAL_ARGS_ERR)       
14/02/2011 17:37:59 - Error ndmpagent(pid=19944) NDMP backup failed, path = /vol/VOL_M1_PERFILES_W2K3/[a-dA-D]*       
14/02/2011 17:38:00 - Error bptm(pid=19816) none of the NDMP backups for client adcentral00 completed successfully   
14/02/2011 17:38:00 - end writing; write time: 00:00:03
NDMP backup failure(99)

I keep looking for a solution to this status 99 error.

Marianne
Level 6
Partner    VIP    Accredited Certified

I think you might have misunderstood Andy's post... He changed his config from NDMP to a Standard policy backing up NFS-mounted NAS volumes.

Extract from NBU NDMP Admin Guide:

The following Backup Selections capabilities are NOT supported for an NDMP policy:
■ Wildcards in pathnames. For example, /home/* is an invalid entry.
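
In other words, the wildcard entry that failed for you would only be legal in a Standard-type policy reading the same data over an NFS mount, e.g. (the mount point here is hypothetical):

NEW_STREAM
/nfs_mount/VOL_M1_PERFILES_W2K3/[a-dA-D]*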

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

Hi,

 

Are you using remote NDMP, or are the VTL drives attached/zoned to the NDMP host? Since you have a VTL, I would hope it's the latter; if not, change it.

 

What speed are you getting on the job?

Stumpr2
Level 6

I have a client for which I set up 10 policies, Policy_0 through Policy_9, to back up the files. I used the telephone keypad as a model to break up the wildcards: a telephone key has both numbers and letters. I set up the file list to follow the telephone keypad.

Policy_0
   D:\dept\0*

Policy_1
   D:\dept\1*

Policy_2
   NEW_STREAM
   D:\dept\2*
   NEW_STREAM
   D:\dept\A*
   NEW_STREAM
   D:\dept\B*
   NEW_STREAM
   D:\dept\C*

Policy_3
   NEW_STREAM
   D:\dept\3*
   NEW_STREAM
   D:\dept\D*
   NEW_STREAM
   D:\dept\E*
   NEW_STREAM
   D:\dept\F*

and so on.....

I needed to break them up into separate policies so that I could split up the full backups throughout the week.

Otherwise the 2.5 TB of data would take too long to back up at one time.
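
As a rough illustration (the day assignments here are made up, not my actual schedule), the full backups could be staggered something like this:

Policy_0, Policy_1: full backup Monday
Policy_2, Policy_3: full backup Tuesday
Policy_4, Policy_5: full backup Wednesday
Policy_6, Policy_7: full backup Thursday
Policy_8, Policy_9: full backup Friday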

Andy_Welburn
Level 6

:angel:

Raco
Level 4

Ok guys,

I still want to keep using the NDMP policy type.

Then I read that, using SET EXCLUDE, it is possible to exclude files and directories from a backup.

I included this directive in my policy for a particular extension, as follows:

SET EXCLUDE= *.pst

/vol1/myprofilesroot/

First, it took too long to start writing, and the speed was too low: about 550 KB/sec instead of the regular 23,000 KB/sec we usually get.

Does anyone know the repercussions of using SET EXCLUDE on NDMP backup performance?