
How big should my catalog be?

PDragon
Level 4

We currently have a need to back up 1,385,315,337 files (roughly 1.4 billion) across an environment. How big should my catalog disks be?
 


11 REPLIES

GulzarShaikhAUS
Level 6
Partner Accredited Certified

The number of files as well as the retention determines the catalog size.

Here are some pointers for you.

Image DB:

Method 1 - image database size = (132 bytes * number of files in all backups) / 1 GB

Method 2 - multiplying percentage = (132 bytes * number of files that are held in backups / average file size) * 100%

NBDB :

160 MB + (2 KB * number of volumes that are configured for EMM) + (number of images in disk storage other than BasicDisk * 5 KB) + (number of disk volumes * number of media servers * 5 KB)
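To put rough numbers on those two formulas, here is a minimal Python sketch; every input value below is a made-up placeholder, not something taken from this thread:

    # Rough sizing sketch based on the image DB and NBDB formulas above.
    # All input values are hypothetical placeholders.
    num_file_copies_in_backups = 1_500_000_000   # each file counted once per retained copy
    bytes_per_catalog_entry = 132                # average full path length assumed by the guide

    image_db_gb = bytes_per_catalog_entry * num_file_copies_in_backups / 1024**3
    print(f"Estimated image database: {image_db_gb:,.0f} GB")

    # NBDB / EMM estimate, converted to MB (the formula mixes MB and KB)
    emm_volumes = 500
    non_basicdisk_images = 200_000
    disk_volumes = 50
    media_servers = 4
    nbdb_mb = 160 + (2 * emm_volumes
                     + 5 * non_basicdisk_images
                     + 5 * disk_volumes * media_servers) / 1024
    print(f"Estimated NBDB: {nbdb_mb:,.0f} MB")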

Please refer to the Perf Tuning Guide for more details 

https://support.symantec.com/en_US/article.DOC7449.html

 

 

GulzarShaikhAUS
Level 6
Partner Accredited Certified

Some other best practices for the NetBackup catalog:

https://support.symantec.com/en_US/article.TECH144969.html

sdo
Moderator
Partner    VIP    Certified

@SymGuy - Method 2 - "average file size" - should that be... "average size of the full file path specification" (i.e. including directory, sub-directories, and file name) ?

.

@PDragon - and by "number of files in backups" we mean... for example, if a server has a full backup once a week and you have a four week retention on the backup - then count that file four times.  If you have weekly full retention 4 weeks, and monthly full with retention for 12 months, then count that file 16 times.  And don't forget to add the number of files for daily incremental and differential too.
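A tiny Python sketch of that counting rule, using a hypothetical schedule (the ~10% change rate for incrementals is the tuning guide's rule of thumb, not a measured figure):

    # How many catalog entries a single file generates under a given schedule.
    # Schedule numbers below are hypothetical examples.
    retained_weekly_fulls = 4      # weekly full, 4-week retention
    retained_monthly_fulls = 12    # monthly full, 12-month retention
    retained_incrementals = 20     # daily incr/diff copies still under retention

    copies = retained_weekly_fulls + retained_monthly_fulls   # the file is in every full
    copies += retained_incrementals * 0.10                    # ~10% change-rate rule of thumb
    print(copies)                                             # -> 18.0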

.

@All - Top Tip: if you plan to use NetBackup BMR, it requires the "TIR" (True Image Restore) feature, which can significantly increase the size of the catalog if BMR is widely deployed/utilised/selected on many backup policies.

 

PDragon
Level 4

And that is where my problem comes in: if I grab a count from one of our current NBU environments (not the one I am speccing for) and tally up the total number of files in OpsCenter...

 

I get 15,521,278,772 files counted (about 15.5 billion). When I do the math, it tells me I should have a 1.9 TB catalog. However, the current size is 353 GB, nowhere close to the amount calculated by the formula.

And I assume the count from OpsCenter includes backup copies per retention. So maybe I don't fully understand the formula of

 

132 * 15,521,278,772 / 1 GB...
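For what it is worth, a quick check of that arithmetic (using the file count quoted above) does land at roughly 1.9 TB:

    files_counted = 15_521_278_772
    catalog_bytes = 132 * files_counted       # 2,048,808,797,904 bytes
    print(catalog_bytes / 1024**3)            # ~1,908 GiB, i.e. roughly 1.9 TB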
 

GulzarShaikhAUS
Level 6
Partner Accredited Certified

My suggestion would be to go through the Perf Tuning Guide, which describes this in detail.

The second method mentioned in the guide refers to the file size.

To use this method, you must determine the approximate number of copies of each file that are held in backups and the typical file size.

The number of copies can usually be estimated as follows: Number of copies of each file that is held in backups = number of full backups + 10% of the number of incremental backups held

The multiplying percentage can be calculated as follows:

Multiplying percentage = (132 * number of files that are held in backups / average file size) * 100%

Then, the size of the image database can be estimated as:

Size of the image database = total disk space used * multiplying percentage

The following is an example of how to calculate the size of your NetBackup image database with the second method.

This example makes the following assumptions: 

Number of full backups per month: 4

Retention period for full backups: 6 months

Total number of full backups retained: 24

Number of incremental backups per month: 25

Average file size: 70 KB

Total disk space that is used on all servers in the domain: 1.4 TB

Solution:

Number of copies of each file retained: 24 + (25 * 10%) = 26.5

NetBackup image database size for each file retained: 132 * 26.5 copies = 3,498 bytes

Multiplying percentage: (3,498 / 70,000) * 100% ≈ 5%

Total image database space required: 1,400 GB * 5% ≈ 70 GB
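A short Python sketch that walks through that worked example, using only the inputs listed above (the 10% incremental factor is the guide's rule of thumb):

    full_backups_retained = 24
    incremental_backups_retained = 25
    avg_file_size_bytes = 70_000
    total_disk_space_gb = 1_400

    copies_per_file = full_backups_retained + 0.10 * incremental_backups_retained  # 26.5
    bytes_per_file = 132 * copies_per_file                                         # 3,498 bytes
    multiplying_pct = bytes_per_file / avg_file_size_bytes * 100                   # ~5%
    image_db_gb = total_disk_space_gb * multiplying_pct / 100                      # ~70 GB
    print(round(multiplying_pct, 1), round(image_db_gb))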

Nicolai
Moderator
Partner    VIP   

You can extend the file system space for the NetBackup image catalog as time goes by :)

Make sure you use Storage Foundation with VxFS for this. EXT4 is not going to do it.

 

PDragon
Level 4

SymGuy-IT - I did look at the tuning guide, and again the numbers were way off.

31 fulls
3 month retention
----
93

93 * 10% = 9.3 --> 132 * 9.3 = 1,227.6 bytes --> (1,227.6 / 70,000) * 100% ≈ 1.75% --> 1,400 GB * 1.75% ≈ 24.5 GB

 

And my image DB is much larger than that... maybe I just really suck at math here... My end goal is just to see how much space I need, but so far I cannot seem to get accurate numbers.

 

GulzarShaikhAUS
Level 6
Partner Accredited Certified

Normally I would assign a 100 GB minimum for the catalog in a small to medium sized environment. For larger environments I assign a 300 GB minimum. This is just a rough estimation which has worked well for me.

As suggested by Nicolai, we can extend the partition.

If you have a Windows master server you can make use of diskpart.

There are a few requirements in order to extend the partition (a sample diskpart sequence is shown after the list).


1. The volume must be formatted with the NTFS file system.
2. For Basic volumes, the unallocated space for the extension must be the next contiguous space on the same disk.
3. For Dynamic Volumes, the unallocated space can be any empty area on any Dynamic disk on the system.
4. Only the extension of data volumes is supported. System or boot volumes may be blocked from being extended, and you may receive the following error:
    Diskpart failed to extend the volume. Please make sure the volume is valid for extending
5. You cannot extend the partition if the system page file is located on the partition. Move the page file to a partition that you do not want to extend.
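For illustration, a typical diskpart session to extend a data volume looks roughly like this; the volume number is only an example here, so check the "list volume" output first (extend also accepts an optional size=<MB> argument if you do not want to use all contiguous free space):

    C:\> diskpart
    DISKPART> list volume
    DISKPART> select volume 3
    DISKPART> extend
    DISKPART> exit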

 

 

sdo
Moderator
Partner    VIP    Certified

Attached is a simple calculation.

It assumes that the average length of all file path specifications is 132 bytes. If your folder structures have very long names, and/or are very deep, then you may need to change this.
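If you want to sanity-check that 132-byte assumption against one of your own file systems, a quick Python sketch along these lines could help (the starting path is a placeholder):

    import os

    root = r"D:\data"            # placeholder: any representative directory tree
    total_chars = 0
    file_count = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total_chars += len(os.path.join(dirpath, name))
            file_count += 1
    if file_count:
        avg = total_chars / file_count
        print(f"average full path length: {avg:.0f} characters over {file_count} files")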

 

Genericus
Moderator
   VIP   

Check to see if compression is active.

My catalog and images are several TB uncompressed, but I compress after 14 days; at this time my entire /usr/openv is 869 GB.

Please review the information posted by users above. If your catalog images are compressed and someone doing a restore is not careful, they can uncompress ALL the backups for a specific server. If it happens to be the one with the large number of files, your images directory can EXPLODE until the next auto compress.

I have had 'dueling' processes: automatic compression trying to compress, and someone's restore uncompressing.

Nothing like running "ps -ef | grep compress" and seeing 6 processes. Can you say CPU overload?

I have the same situation with my NDMP filer; I have had to issue strict instructions on restores.

It does not help that NetBackup sometimes will select EVERY backup to expand, even after you specifically select one.

NetBackup 9.1.0.1 on Solaris 11, writing to Data Domain 9800 7.7.4.0
duplicating via SLP to LTO5 & LTO8 in SL8500 via ACSLS

sdo
Moderator
Moderator
Partner    VIP    Certified

FYI, for clarity: I need to add that this behaviour will not be seen with Windows-based catalogs, because the 'compressed' catalog entries are never fully 'uncompressed'. NTFS folder and file "compaction" is used, so the OS and the Windows NTFS file system driver never actually de-compact anything on disk; any activity that reads the files results in de-compaction on the fly as the files are read.