
Are too many backups and too many catalog entries a concern?

kkhoo
Level 5

Hi,

Customer has a backup requirement.

The frequency is every 15 minutes, every day. The sizes are roughly as follows:

Host     # Files   Size    Backup Sched
Server1  62        175MB   FULL
Server2  19        50MB    FULL

According to our vendor, he foresees the following:

"I'm asking because I have a lot of concerns. For example, this will have a huge impact on your backup environment, especially on the master server. The catalog will grow, because you're creating 4 × 24h = 96 jobs a day, 672 jobs a week, and 3360 jobs for a 5-week retention per client, and every job creates another entry in the catalog. So each backup adds to the NetBackup catalog, let alone the high load that will be created on the SLP and duplication queues."
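The vendor's job-count arithmetic can be sanity-checked with a quick sketch. This assumes one backup job every 15 minutes per client, which matches the 96-jobs-a-day figure he quotes:

```python
# Back-of-the-envelope check of the vendor's job-count claim.
# Assumption: one backup job every 15 minutes (4 per hour), per client.
backups_per_hour = 4
jobs_per_day = backups_per_hour * 24               # 96 jobs a day
jobs_per_week = jobs_per_day * 7                   # 672 jobs a week
retention_weeks = 5
jobs_in_catalog = jobs_per_week * retention_weeks  # 3360 catalog entries per client

print(jobs_per_day, jobs_per_week, jobs_in_catalog)  # 96 672 3360
```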

 

Please advise if this is a concern.

5 REPLIES

RiaanBadenhorst
Level 6
Partner    VIP    Accredited Certified

Hi,

 

The sizes are fairly small, so it should not be too much of an issue. However, when working with the catalog, the number of files in the backup matters more than the size of the backup.

Regarding SLP/duplications: since the backups are small, they will be queued up with other duplications, so the number of duplication jobs should not be affected.

Lowell_Palecek
Level 6
Employee

Why all full backups? Why not one full per day with incrementals for the rest?

Thiago_Ribeiro
Moderator
Moderator
Partner    VIP    Accredited

Hi,

Take a look at this information about catalog space; maybe it can help you.

Estimating catalog space requirements

NetBackup requires disk space to store its error logs and information about the files it backs up. The disk space that NetBackup needs varies according to the following factors:

■ Number of files to be backed up
■ Frequency of full and incremental backups
■ Number of user backups and archives
■ Retention period of backups
■ Average length of full path of files
■ File information (such as owner permissions)
■ Average amount of error log information existing at any given time
■ Whether you have enabled the database compression option.

To estimate the disk space that is required for a catalog backup:

1 Estimate the maximum number of files that each schedule for each policy backs up during a single backup of all its clients.
2 Determine the frequency and the retention period of the full and the incremental backups for each policy.
3 Use the information from steps 1 and 2 to calculate the maximum number of files that exist at any given time.

For example:
Assume that you schedule full backups to occur every seven days. The full backups have a retention period of four weeks. Differential incremental backups are scheduled to run daily and have a retention period of one week.

The number of file paths you must allow space for is four times the number of files in a full backup. Add to that number one week’s worth of incremental backups.

The following formula expresses the maximum number of files that can exist for each type of backup (daily or weekly, for example): Files per Backup × Backups per Retention Period = Max Files

For example:
A daily differential incremental schedule backs up 1200 files and the retention period for the backup is seven days. Given this information, the maximum number of files that can exist at one time are the following:
1200 × 7 days = 8400
A weekly full backup schedule backs up 3000 files. The retention period is four weeks. The maximum number of files that can exist at one time are the following:
3000 × 4 weeks = 12,000

Obtain the total for a server by adding the maximum files for all the schedules together. Add the separate totals to get the maximum number of files that can exist at one time. For example, 20,400.

For the policies that collect true image restore information, an incremental backup collects catalog information on all files (as if it were a full backup). This changes the calculation in the example: the incremental changes from 1200 × 7 = 8400 to 3000 × 7 = 21,000. After 12,000 is added for the full backups, the total for the two schedules is 33,000 rather than 20,400.
4 Obtain the number of bytes by multiplying the number of files by the average number of bytes per file record.
If you are unsure of the average number of bytes per file record, use 132. The results from the examples in step 3 yield: (8400 × 132) + (12,000 × 132) = 2,692,800 bytes (about 2630 kilobytes)
5 Add between 10 megabytes to 15 megabytes to the total sum that was calculated in step 4. The additional megabytes account for the average space that is required for the error logs. Increase the value if you anticipate problems.
6 Allocate space so all the data remains in a single partition.
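The steps above can be sketched as a quick calculation. The 132 bytes-per-file-record default and the 10-15 MB error-log allowance come from the guide; the helper function name is my own for illustration:

```python
# Rough sketch of the catalog-space estimate using the worked example above.
BYTES_PER_FILE_RECORD = 132  # guide's default average bytes per file record

def max_files(files_per_backup, backups_per_retention):
    # Step 3: Files per Backup x Backups per Retention Period = Max Files
    return files_per_backup * backups_per_retention

incr_files = max_files(1200, 7)        # daily incrementals, 7-day retention -> 8400
full_files = max_files(3000, 4)        # weekly fulls, 4-week retention -> 12,000
total_files = incr_files + full_files  # 20,400 files at any one time

catalog_bytes = total_files * BYTES_PER_FILE_RECORD  # step 4: 2,692,800 bytes
error_log_mb = 15                      # step 5: allow 10-15 MB for error logs
total_mb = catalog_bytes / (1024 * 1024) + error_log_mb

print(total_files, catalog_bytes, round(total_mb, 1))
```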

Regards,

Thiago

@RiaanBadenhorst: thanks. Can I conclude that this setup should not affect the workload of the backup server?

@Lowell_Palecek: currently we are triggering this backup from the client. Client-triggered backups are always FULL backups; please correct me if this is wrong. I have limited access to the master server. I need the admin to set the frequency to 15 minutes (by command line) and schedule an INCR, but the admin is refusing this request due to his concerns about high load, among other things.

@Thiago_Ribeiro: wow, give me time to digest and understand your advice. Interesting.

RiaanBadenhorst
Level 6
Partner    VIP    Accredited Certified

I don't see how it could affect it. Yes, it is a lot of extra work in terms of streams, but because it is spread over 24 hours it's really not that bad.