
Catalog compression and recovery of the DB

KISHORE_SADI_GE
Level 3
Can someone help me compress my catalog? It is occupying a lot of space on my disk.
Any suggestion is appreciated.
2 REPLIES

sdo
Moderator
Some points:
 
1) No matter the size of the catalog, compression should be implemented piecemeal, in small-ish increments. (N.B. this item is first because it is important!)
 
2) Catalog compression achieves roughly a 2:1 ratio, e.g. my catalog is 160GB of data occupying 83GB on a 100GB volume.
 
3) Catalog recovery is faithful to compression, i.e. a catalog recovery will bring the catalog back compressed.
 
4) Compression is implemented using standard Windows NTFS file compression, native to the file system and therefore transparent to DOS, Windows Explorer and NetBackup (the typical "blue" files and folders in Windows Explorer), much like using the COMPACT command in DOS (there is a COMPACT sketch below, after point 10).
 
5) Compressed catalogs take longer to back up.  We achieve 11.4MB/s in our catalog backup, whereas the SAN based disk storage system we use is able to sustain about 18 to 19 MB/s.
 
6) Compressed catalogs are supported in an MSCS cluster (we use DRM replication on our SAN attached catalog volume).
 
7) Catalog compression is not forwarded to the tape drive, which does its own compression, i.e. bpbackupdb will uncompress from disk, send plain data via the SAN, and the tape drive re-compresses, so you will not fit any more on your catalog backup media.
 
8) Catalogs like to be defragged.  Our average catalog backup crept up to over 6 hrs, but is now back down to 04:17 on average (over the whole year).  We use Raxco PerfectDisk v8.0 to defrag, scheduled to run whilst NetBackup is quiet, i.e. defrag runs during the day Mon to Fri, and NetBackup runs at night Mon to Fri and at weekends.  That said, we have run backups during defrags, and many times image DB cleanup has been running when defrags have kicked in - never a problem.  It takes PerfectDisk about 1:30 to 2:00 hours to pass over 83GB on our 100GB volume, but then our SAN attached drives are quite old and slow by modern standards.
 
9) Compressed catalogs will slow down the restore GUI a bit. In fact all things catalog slow down, even image DB cleanup/expiration.
 
10) Back to point one: how/when to implement.  NetBackup will run compression during image cleanup.  My recommendation is to start with a high cutoff value; what you start with depends on your longest retentions.  What you want to avoid is NetBackup spending many hours/days compressing the catalog on the first or any subsequent pass over the database (i.e. the database is actually a set of files on disk, not SQL or Oracle etc).  There is a rough sketch of this below.  Does this make sense, or do you need more explanation?
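 
On point 4, a minimal sketch of the piecemeal approach: since the catalog is just NTFS files, you can compress one client's image folder at a time from a command prompt with COMPACT.  The path and client name below are assumptions based on a default Windows install - adjust them to your own layout:
 
REM Show the current compression state of one client's image folder
compact /s:"C:\Program Files\VERITAS\NetBackup\db\images\clientA"
 
REM Compress that folder and its subfolders; /i carries on past any
REM errors, /q keeps the output short
compact /c /s:"C:\Program Files\VERITAS\NetBackup\db\images\clientA" /i /q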
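 
And on point 10, the cutoff in practice: the compress-catalog interval (in days) is set in the Admin Console, I believe under Host Properties > Master Server > Global Attributes, and compression then runs on the next image cleanup.  If memory serves it can also be set with bpconfig, but verify both the flag and the value against the commands reference for your version - 365 here is only an example starting point, not a recommendation:
 
REM Assumption: bpconfig -cd sets the catalog compression cutoff in days
bpconfig -cd 365
 
REM Kick off an image cleanup by hand so the first compression pass runs
REM while you can watch it (lives in <install>\NetBackup\bin\admincmd)
bpimage -cleanup -allclients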
 
HTH, Dave.

Stumpr2
Level 6
If you have a lot of old images then another option is to use catalog archiving.  Here is a technote on archiving the catalog.
 
DOCUMENTATION: Overview of Catalog Archiving process
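 
In short, archiving is driven by three commands piped together.  Roughly like this - check the technote for the exact syntax on your version, and note that the date is only an example:
 
REM List catalog images older than the given date, archive their .f files
REM via the catalog archiving policy, then remove them from disk
bpcatlist -client all -before Jan 1 2007 | bpcatarc | bpcatrm
 
REM To browse or restore from an archived part of the catalog later:
bpcatlist -before Jan 1 2007 | bpcatres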