RMAN very large database backup tuning

Tape_Archived
Moderator
   VIP   

The full backup of a very large RAC database (~1 TB) using the standard RMAN backup process is taking between 35 and 40 hours to complete. Could you please suggest tuning options that we can set up on the NetBackup side? Any other alternative options are welcome (like a snapshot at the storage level). Please share your experience if you have done tuning in such cases.

4 REPLIES

Michael_G_Ander
Level 6
Certified

I would start by checking whether it is the NetBackup end or the RMAN end that is the bottleneck, or whether it is some of the underlying infrastructure.

Things to look at:

disk read performance (for example, RMAN BACKUP VALIDATE)

actually achieved network throughput (for example, iperf)
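As a rough sketch of how to test each leg separately (the hostname is a placeholder, and rman and iperf3 are assumed to be installed on the client and media server):

```
# Disk read side: BACKUP VALIDATE reads the datafiles the same way a real
# backup would, but writes nothing, so it isolates read performance.
rman target / <<'EOF'
BACKUP VALIDATE DATABASE;
EOF

# Network side: raw TCP throughput between client and media server
# (start "iperf3 -s" on the media server first).
iperf3 -c media-server.example.com -P 4 -t 30
```

If BACKUP VALIDATE is already slow, the problem is on the read side; if it is fast but the real backup is slow, look at the network and the NetBackup write path.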

 

The usual ways to improve RMAN backup speed are to increase the number of channels and FILESPERSET.
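A hedged sketch of what that looks like in an RMAN run block (the channel count and FILESPERSET value are illustrative, not recommendations, and SBT_TAPE assumes the NetBackup media management library is linked in):

```
RUN {
  # More channels = more parallel streams into NetBackup.
  ALLOCATE CHANNEL ch1 DEVICE TYPE SBT_TAPE;
  ALLOCATE CHANNEL ch2 DEVICE TYPE SBT_TAPE;
  ALLOCATE CHANNEL ch3 DEVICE TYPE SBT_TAPE;
  ALLOCATE CHANNEL ch4 DEVICE TYPE SBT_TAPE;
  # Lower FILESPERSET spreads datafiles across more backup sets,
  # which helps keep all channels busy.
  BACKUP DATABASE FILESPERSET 4;
}
```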

 

Usually the way to improve the NetBackup side is to make sure you are using the recommended buffer size for your storage unit type and to increase the number of buffers (SIZE_DATA_BUFFERS .. and NUMBER_DATA_BUFFERS)
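On the media server these are one-line touch files. A minimal sketch, writing into a scratch directory so it can run anywhere; on a real media server the directory is /usr/openv/netbackup/db/config, and disk storage units read the _DISK variants of the files:

```shell
#!/bin/sh
# Scratch directory stands in for /usr/openv/netbackup/db/config on a media server.
CONF_DIR="${CONF_DIR:-./nbconf}"
mkdir -p "$CONF_DIR"

# Buffer size in bytes (256 KB) and buffer count; values must match what
# the storage unit type supports -- these numbers are illustrative, not tuned.
echo 262144 > "$CONF_DIR/SIZE_DATA_BUFFERS"
echo 128    > "$CONF_DIR/NUMBER_DATA_BUFFERS"

cat "$CONF_DIR/SIZE_DATA_BUFFERS" "$CONF_DIR/NUMBER_DATA_BUFFERS"
```

The changes take effect for the next backup job; check the bptm log for the buffer values a job actually used.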

 

Inside Oracle there are things in the SGA that can be tuned to improve backup speed
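For example (a hedged sketch; the exact parameters depend on the Oracle version and whether automatic memory management is in use, and 256M is illustrative), RMAN I/O buffers can fall back to the shared pool when the large pool is too small, so a DBA might check and raise it:

```
-- Check the current large pool allocation
SHOW PARAMETER large_pool_size;

-- A DBA might raise it so RMAN buffers come from the large pool
ALTER SYSTEM SET large_pool_size = 256M SCOPE=BOTH;
```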

 

In general, tuning Oracle/RMAN backups should be done together with the Oracle DBA.

 

The standard questions: Have you checked 1) what has changed, 2) the manual, and 3) whether there are any tech notes or VOX posts regarding the issue?

Genericus
Moderator
   VIP   

I back up my 40 TB Oracle database using 6 streams to a Data Domain over Fibre Channel; it takes around 14 hours. I could add streams if I needed more throughput.

NUMBER_DATA_BUFFERS:   128
SIZE_DATA_BUFFERS:  262144

IMHO speed is a function of the storage, the host server, the media server and the infrastructure, not just the buffer sizes.

 

RMAN variables like "verify on" can also impact throughput...


 

NetBackup 9.1.0.1 on Solaris 11, writing to Data Domain 9800 7.7.4.0
duplicating via SLP to LTO5 & LTO8 in SL8500 via ACSLS

Tape_Archived
Moderator
   VIP   

Thanks Michael and Genericus for sharing your input.

We are using a 262144 data buffer size and 32 data buffers.

We have verified that the Advanced Disk where the backups run does not show any contention.

The backups are compressed at the Oracle end and then backed up by RMAN to NetBackup Advanced Disk. This adds overhead to the throughput, but increasing the number of buffers may not show an immediate change, as I have seen backups run with similar throughput even when no other backups are running on the same client and the same disk storage unit.

I will check the other configuration from the RMAN side and see if I can provide more input to get an answer from you guys. Thanks.

Genericus
Moderator
   VIP   

How is everything connected? If your data is going over a 1 Gb LAN, that is your chokepoint. IP-based backups use CPU to encapsulate the TCP/IP packets, so check the CPU on the source system.

 

I had the luxury of running a 1 TB backup with multiple buffer sizes and counts, and found the best result was 128 buffers of 256 KB. The maximum buffer size can be a function of the destination: you can likely use 1024 KB if you write to disk, but never to tape.

NetBackup 9.1.0.1 on Solaris 11, writing to Data Domain 9800 7.7.4.0
duplicating via SLP to LTO5 & LTO8 in SL8500 via ACSLS