Forum Discussion

t_jadliwala
10 years ago

Rehydration is slow from MSDP (5230) to SL500 RL

Hello Friends, I need your help to fix this issue. I am running NetBackup in a GUC cluster. On the main site there are 3 media servers: 1) Main media1 - Windows VMware host 2) Main media 2 - 5220 ap...
  • Marianne
    10 years ago

    Duplication is going over the network:  dr-media-3 to dr-media-2-hcart-robot-tld-0

    Zone the drives to dr-media-3 as well and configure the drives as shared.

    Local duplication should give better performance than network duplication.

    PS:
    Please always share NBU/Appliance versions in new discussions.
    Various improvements/enhancements are introduced with recent versions.

  • deduplikador
    10 years ago

    - That job did around 34 MB/sec, which is really not bad... definitely seen worse.

    - How does a backup directly written to tape perform? If slow, troubleshoot that first.

    - That said, it does appear there are waits for full buffers, meaning the tape drive was waiting on the dedup read side.
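To see which side is stalling, you can count the buffer-wait messages in the bptm log. A minimal sketch; "waited for full buffer" / "waited for empty buffer" are the standard bptm log messages, and the default legacy log path and log.MMDDYY naming are assumed below:

```shell
# Sketch: count bptm buffer-wait messages for a duplication run.
count_buffer_waits() {
  # Default to today's bptm legacy log; pass a path to override.
  log="${1:-/usr/openv/netbackup/logs/bptm/log.$(date +%m%d%y)}"
  [ -f "$log" ] || { echo "no bptm log at $log"; return 0; }
  # Full-buffer waits: the writer (tape) sat idle waiting for data from the source.
  printf 'full-buffer waits (tape starved by source): %s\n' \
    "$(grep -ci 'waited for full buffer' "$log")"
  # Empty-buffer waits: the reader sat idle because the writer could not keep up.
  printf 'empty-buffer waits (source faster than tape): %s\n' \
    "$(grep -ci 'waited for empty buffer' "$log")"
}

count_buffer_waits
```

Many full-buffer waits point at the dedup read as the bottleneck; many empty-buffer waits point at the tape side.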

    - Try:

    echo 256 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
    echo 1048576 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK
    echo 512 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_DISK
    echo 0 > /usr/openv/netbackup/NET_BUFFER_SZ
    echo 0 > /usr/openv/netbackup/NET_BUFFER_SZ_REST
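After setting the touch files above, it is worth reading them back to confirm the tuning is in place. A small sketch; NBU_BASE is overridable for non-default installs, and a missing file simply means NetBackup falls back to its built-in default:

```shell
# Sketch: print the current values of the NetBackup buffer touch files.
show_nbu_buffers() {
  base="${NBU_BASE:-/usr/openv/netbackup}"
  for f in db/config/NUMBER_DATA_BUFFERS \
           db/config/SIZE_DATA_BUFFERS_DISK \
           db/config/NUMBER_DATA_BUFFERS_DISK \
           NET_BUFFER_SZ \
           NET_BUFFER_SZ_REST; do
    if [ -f "$base/$f" ]; then
      printf '%s = %s\n' "$f" "$(cat "$base/$f")"
    else
      printf '%s = (not set; NetBackup default applies)\n' "$f"
    fi
  done
}

show_nbu_buffers
```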

    And try:

    - these three settings in /usr/openv/pdde/pdcr/etc/contentrouter.cfg:
    PrefetchThreadNum=8 ...to speed up prefetching
    MaxNumCaches=1024 ...increases the number of data container files kept open, avoiding frequent open/close cycles
    ReadBufferSize=262144

    If the above doesn't help enough, you can try:
    PrefetchThreadNum=16 in /usr/openv/pdde/pdcr/etc/contentrouter.cfg
    PREFETCH_SIZE = 67108864 in /usr/openv/lib/ost-plugins/pd.conf
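These config edits can be scripted. The set_cfg helper below is a hypothetical sketch (the file paths and key names come from the post); keep a backup before editing, and note that MSDP services typically need a restart before contentrouter.cfg changes take effect:

```shell
# Sketch: idempotently set a Key=Value line in a config file, keeping a backup.
set_cfg() {
  cfg="$1"; key="$2"; val="$3"
  cp "$cfg" "$cfg.bak"                     # keep a backup before editing
  if grep -q "^$key=" "$cfg"; then
    sed -i "s|^$key=.*|$key=$val|" "$cfg"  # update an existing entry
  else
    echo "$key=$val" >> "$cfg"             # append if the key is absent
  fi
}

# First round, then the more aggressive second round, e.g.:
# set_cfg /usr/openv/pdde/pdcr/etc/contentrouter.cfg PrefetchThreadNum 8
# set_cfg /usr/openv/pdde/pdcr/etc/contentrouter.cfg PrefetchThreadNum 16
# (pd.conf entries such as "PREFETCH_SIZE = 67108864" use spaces around '=',
#  so edit that file by hand rather than with this helper.)
```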



    Another item that can help is reducing the total number of concurrent tape jobs by lowering the Maximum write drives setting on the destination tape storage unit. Reduce it to 6 and test; then try 4 and test; and so on, all the way down to two.

    Realize that if the backup image in question has poor segment locality (its segments are spread across very many containers in the dedup storage pool), then the read takes much longer.

    If the above does not help, open a support case to have Support assist with determining why certain dedup images duplicate to tape slowly.