Forum Discussion

Tekkali
Level 4
11 years ago

Compress Catalog Interval

Hi

NetBackup 7.5 environment on a Solaris master server (clustered). /opt/openv/netbackup/db is linked to another mount point (/opt/VRTSnbu).

The DB images keep growing and the mount point has reached 97%. In this situation I checked the Compress Catalog Interval attribute on the master server; it was disabled.

I enabled it and set it to compress after 30 days. It asked whether all cluster nodes also require this; I selected yes, and it then warned that some daemons were still running.

A few backups are running now. Does this require restarting the NetBackup services?

I used bprdreq -rereadconfig. How long will it take for the mount point usage to decrease? Is there any other solution available? Please suggest.
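
For reference, a minimal sketch of checking the mount point and re-reading the configuration (paths assume a default install; on this system /opt/openv is a linked location, so adjust accordingly):

    df -h /opt/VRTSnbu                         # how full is the catalog mount point
    du -sh /opt/openv/netbackup/db/images      # size of the image catalog itself
    /usr/openv/netbackup/bin/admincmd/bprdreq -rereadconfig   # have bprd re-read the configuration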


  • You will need to use the cluster tools to restart the NBU resource to apply the change.

    Catalog compression will happen the next time image cleanup runs after the restart.
    All images older than 30 days will be compressed.

    It is best to assign more space to the catalog volume.
    Catalog compression is a temporary workaround: each time you need to restore a backup older than 30 days, the images for that client need to be uncompressed first.
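
    If you do need such a restore, a minimal sketch of decompressing one client's images up front (this assumes the bpimage -decompress option available on 7.5; <client_name> is a placeholder - check the commands reference for your release):

    # decompress the catalog images for one client before browsing/restoring
    /usr/openv/netbackup/bin/admincmd/bpimage -decompress -client <client_name>

    NetBackup should also decompress the needed images automatically during a restore, but doing it up front avoids that delay.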

  • A NetBackup services restart, or a restart via the cluster tools? How do I restart using the cluster tools?...


  • If you stop/start NBU the normal way, the cluster will see that as a resource failure and either restart NBU or fail over to the other node.

    You should always use the cluster tools to restart NBU.

    The tools depend on the cluster software.
    If it is a Veritas cluster (VCS), you can use the hares command:

    hares -offline nbu_res -sys <system>
    (This will take time - use bpps -a to ensure all processes/daemons are stopped)
    hares -online nbu_res -sys <system>
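
    A minimal sketch of the full sequence with state checks (nbu_res and <system> are placeholders for your actual resource and node names):

    hares -state nbu_res -sys <system>     # confirm the current state of the NBU resource
    hares -offline nbu_res -sys <system>
    bpps -a                                # repeat until no NBU processes/daemons remain
    hares -online nbu_res -sys <system>
    bpps -a                                # confirm the daemons are back up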

  • Consider extending the disk space in /opt/VRTSnbu too. Compressing the catalog may only be short-term relief.
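
    If the catalog sits on a VxVM volume, a minimal sketch of growing it online (nbudg and nbuvol are hypothetical disk group and volume names; the disk group must have free space):

    # grow the volume and the VxFS file system on it by 10 GB in one step
    /etc/vx/bin/vxresize -g nbudg nbuvol +10g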

  • Hi Marianne

    Thanks for the details.

    I did the same, but the image cleanup hung and eventually failed with status 50. I am looking at the bpdbm log.


  • Hi Nicolai

    Thanks for the response.

    Yes, you are right and I agree with you. But in my case there is no free disk space available right now and no other options, so I am following this temporary solution...


  • Hi Marianne

    The logs are not available in the default location. How can I find which location they are linked to?

    Kindly suggest on this.
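
    A minimal sketch of what I am checking so far (assuming the default /usr/openv layout; on this system it is linked through /opt/openv):

    ls -l /usr/openv                          # resolve the link to the real install path
    ls -ld /usr/openv/netbackup/logs/bpdbm    # does the bpdbm legacy log directory exist?
    mkdir /usr/openv/netbackup/logs/bpdbm     # create it if missing; bpdbm logs here once it exists (a daemon restart may be needed)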


  • Hi Marianne / All

    There are no bpdbm logs covering the failed image cleanup job's timestamp (08/08/2014 03:28:06).

    -rw-r--r--   1 root     root     52428794 Aug  8 05:13 080814_00040.log
    -rw-r--r--   1 root     root     52428787 Aug  8 05:26 080814_00041.log
    -rw-r--r--   1 root     root     14021354 Aug  8 05:29 080814_00042.log

    So I am planning to run it manually (sketch below), but the mount point usage is still decreasing. If the cleanup job failed, how is the image disk usage decreasing now?

    bpimage -cleanup -allclients (the cleanup job is running now)...

    By default the cleanup job runs every 12 hours. Can I decrease it from 12 to 6 hours due to the space concerns?

    Is there any impact if I use 6 hours?
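
    A minimal sketch of the manual run, watching the mount point while the cleanup works (paths assume a default install; adjust for the /opt/openv link):

    /usr/openv/netbackup/bin/admincmd/bpimage -cleanup -allclients &
    while pgrep bpimage > /dev/null; do
        df -h /opt/VRTSnbu      # watch the catalog mount point usage drop
        sleep 300
    done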
     
    Thanks
    Tekkali


  • If the cleanup job failed, how is the image disk usage decreasing now?

    It means that cleanup/compression of some images failed, while other images were cleaned up and compressed successfully.
    What do you see in the Job Details for the failed job?

    What is the logging level in your environment? Those bpdbm logs look quite big.
    Check Host Properties or bp.conf on the active node.
    Unless you have a support call open, there is no need for any logging to be higher than level 0.
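
    A minimal sketch of checking and lowering the global legacy logging level on the active node (VERBOSE is the standard bp.conf entry; restart NBU via the cluster tools afterwards so all daemons pick it up):

    grep VERBOSE /usr/openv/netbackup/bp.conf   # current global logging level
    # then edit bp.conf so that the entry reads:
    # VERBOSE = 0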