04-29-2014 10:41 AM
Hello,
Is there a way to confirm that catalog compression is working?
We are using a lot of disk space, and looking at the image cleanup job, it reports finding >8000 records to compress each time it runs (every 12 hours).
Here is a sample job:
04/29/2014 04:45:53 - Info bpdbm (pid=12659) image catalog cleanup
04/29/2014 04:45:53 - Info bpdbm (pid=12659) Cleaning up tables in the relational database
04/29/2014 04:45:53 - Info bpdbm (pid=12659) deleting images which expire before Tue Apr 29 04:45:53 2014 (1398771953)
04/29/2014 04:51:32 - Info bpdbm (pid=12659) deleted 37 expired records, compressed 8380, tir removed 0, deleted 37 expired copies
the requested operation was successfully completed (0)
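One quick sanity check (a sketch, assuming the default Unix catalog location; your path may differ) is to count compressed versus uncompressed image files files in the catalog:

```shell
# /usr/openv/netbackup/db/images is the default image catalog location
# on a Unix master server. Compressed catalog files get a .f.Z suffix.
echo "compressed:   $(find /usr/openv/netbackup/db/images -name '*.f.Z' | wc -l)"
echo "uncompressed: $(find /usr/openv/netbackup/db/images -name '*.f'   | wc -l)"
```

If compression is really happening, images older than the compress interval should show up as `.f.Z` files.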
04-29-2014 11:01 AM
Go to Host Properties of the NetBackup Master Server, then Global Attributes Properties, and look at "Compress catalog interval".
This property specifies how long NetBackup waits after a backup before it compresses the image catalog file.
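You can also check the global attributes from the command line (path below assumes a default Unix install; the exact output wording varies between NetBackup versions):

```shell
# Print the master server's global attributes in readable form,
# including the catalog compression interval.
/usr/openv/netbackup/bin/admincmd/bpconfig -U
```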
04-29-2014 11:25 AM
Yes, it is enabled (see attached).
I'm just not sure that it is working properly, as we don't write 8000 images in a day (let alone 12 hours).
04-29-2014 11:49 PM
Your screenshot shows that images older than 30 days should be compressed.
So you probably have >8000 images that are older than 30 days, right?
If you REALLY want to trace exactly what is happening during catalog compression, you need to enable the bpdbm log, but note that the log itself will increase disk usage!
04-30-2014 03:03 AM
If you want more of the catalog compressed:
Make sure no backups are running, then decrease the number of days in the Compress catalog interval (for example, use 15 in place of 30).
Restart the NetBackup services :)
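For the restart step, a common approach on a Unix master server (a sketch assuming a default install; note that stopping NetBackup interrupts any running jobs) is:

```shell
# Stop all NetBackup daemons, then start them again.
/usr/openv/netbackup/bin/bp.kill_all
/usr/openv/netbackup/bin/bp.start_all
```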
04-30-2014 03:03 AM
Please "Mark Solution" if it helped :)
05-02-2014 11:25 AM
I've done that (the server team was able to add another disk, so I'm not in danger of running out of space right now).
It looks like something is wrong:
16:52:34.397 [13073] <2> lock_file_by_path: Acquired db_Imglock(0x10ced8a0) in /usr/openv/netbackup/db/images/bkpfiler/1386000000/prodfiler_bkpvol_1386836430_FULL.lck
16:52:34.425 [13073] <2> compress_dbfile: /usr/bin/compress -cf /usr/openv/netbackup/db/images/bkpfiler/1386000000/prodfiler_bkpvol_1386836430_FULL.f > /usr/openv/netbackup/db/images/bkpfiler/1386000000/prodfiler_bkpvol_1386836430_FULL.f.Z
16:52:34.430 [13073] <2> compress_dbfile: took 0 seconds
16:52:34.431 [13073] <4> db_error_add_to_file: system() returned 32512 (127)
16:52:34.431 [13073] <16> compress_dbfile: system() returned 32512 (127)
16:52:34.431 [13073] <4> db_error_add_to_file: stat of /usr/openv/netbackup/db/images/bkpfiler/1386000000/prodfiler_bkpvol_1386836430_FULL.f Failed
16:52:34.431 [13073] <2> CatImg_FilesFileSize: stat of /usr/openv/netbackup/db/images/bkpfiler/1386000000/prodfiler_bkpvol_1386836430_FULL.f Failed
This image was written in December. I have a ticket open with Symantec...
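A side note on reading that log: system() on Linux returns the child's wait status, with the actual exit code in the high byte. 32512 decodes to exit status 127, which is the shell's "command not found" code, suggesting /usr/bin/compress itself could not be run:

```shell
# system() returned 32512; the real exit code is in bits 8-15.
echo $((32512 >> 8))   # prints 127 -> "command not found"
```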
05-02-2014 12:31 PM
Based on the text, it's referring to these files in /usr/openv/netbackup/db/images/bkpfiler/1386000000:
-rw------- 1 root root 72 Dec 12 02:59 prodfiler_bkpvol_1386836430_FULL.f
-rw-rw-rw- 1 root root 0 Dec 12 00:21 prodfiler_bkpvol_1386836430_FULL.lck
05-02-2014 09:41 PM
Compression of this particular image is probably failing because of the orphaned .lck file.
I found the following about .lck image files:
http://www.symantec.com/docs/HOWTO66960
I would say that this lock file can be removed; your support engineer will confirm.
One more thing - your OS is listed as Linux.
Verify that the ncompress RPM package is installed.
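To check (and install) it on a RHEL-style system, something like the following should work (the install command is an assumption; use your distro's package manager):

```shell
# ncompress provides /usr/bin/compress; install it if the query finds nothing.
rpm -q ncompress || yum install -y ncompress
```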
05-12-2014 05:59 PM
OK, I figured out what the issue was. The compress utility wasn't installed:
[root@bkpsvr ~]# rpm -qa |grep compress
[root@bkpsvr ~]#
Installed the package and now it's working.