
NetBackup Best Practices for SLP

The network architecture of my NetBackup environment is shown in the attached diagram: Architecture(2).png

 

My backup jobs run at good speed, but duplication runs very slowly. My queries are:

  • What are the best practices? Are there any documents or KB articles on this? Please share.
  • We are running duplication using SLP; what are the best practices for SLP in this case?

 

 


Thanks
Puneet Dixit

Accepted Solutions

Re: NetBackup Best Practices for SLP

1. Backup to data domain

2. Duplicate two copies to tape

So: both copies at the same time? Same destination tape library? Same retention? Same path?

There is a feature that allows you to make both copies at once. OR, you can make one copy immediately, then another later. This lets you use either the DD or tape as the source for the second copy - are you doing that? IMHO, I actually get better throughput from my DD than from tape. Plus, tape-to-tape requires allocating two drives. Writing to both tapes at once means they HAVE to stay synchronized, so if one goes slow, they BOTH go slow.
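Sketched out, the two shapes for a three-copy SLP look roughly like this (the SLP and storage unit names are placeholders, not anything from a real environment; variant B uses the first tape copy as the source of the third):

```
Variant A - both tape copies read from the DD:
  1. Backup       -> stu_dd      (copy 1, primary)
  2. Duplication  -> stu_tape    (copy 2, source: copy 1)
  3. Duplication  -> stu_tape    (copy 3, source: copy 1)

Variant B - cascaded, tape-to-tape for the third copy:
  1. Backup       -> stu_dd      (copy 1, primary)
  2. Duplication  -> stu_tape    (copy 2, source: copy 1)
  3. Duplication  -> stu_tape    (copy 3, source: copy 2)
```

Variant A reads the DD twice but never ties up two tape drives for one image; variant B reads the DD once but needs a drive for reading and a drive for writing during the third copy.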

Do you have continuous blocks of idle time with no backups? If so, schedule the SLP duplications then.

We specifically overpowered our DD due to contention from reading and writing at the same time. DDs are optimally designed for ingesting and deduplicating data. Reading back out to tape - not so much, although it CAN be done, because I do it.

I had three DDs (two 990s and a 7200), now combined into one dual-head 9800, and it works great. I write to 18 LTO5 drives and get reasonable throughput. I have 10G and FC connections from media servers to the DD, and FC from media server to tape.

You may want to double-check your networking if using 10G - I had some fun getting good speed.

I have multiple /usr/openv/var/global/nbcl.conf.XXX files, like nbcl.conf.032 and nbcl.conf.256, and have set up aliases and crontab jobs to automatically run /usr/openv/netbackup/bin/admincmd/bpsetconfig, like:

/usr/openv/netbackup/bin/admincmd/bpsetconfig /usr/openv/var/global/nbcl.conf.032

This updates the SLP parameters; I have files that adjust the scan interval and the MIN and MAX sizes per duplication job.

So:

# cat /usr/openv/var/global/nbcl.conf.032
SLP.MIN_SIZE_PER_DUPLICATION_JOB = 16 GB
SLP.MAX_SIZE_PER_DUPLICATION_JOB = 32 GB

# cat /usr/openv/var/global/nbcl.conf.short
SLP.MIN_SIZE_PER_DUPLICATION_JOB = 32 GB
SLP.MAX_SIZE_PER_DUPLICATION_JOB = 64 GB
SLP.JOB_SUBMISSION_INTERVAL = 5 minutes
SLP.IMAGE_PROCESSING_INTERVAL = 5 minutes
SLP.IMAGE_EXTENDED_RETRY_PERIOD = 10 minutes
SLP.MAX_TIME_TIL_FORCE_SMALL_DUPLICATION_JOB = 2 minutes

# cat /usr/openv/var/global/nbcl.conf.big
SLP.MIN_SIZE_PER_DUPLICATION_JOB = 64 GB
SLP.MAX_SIZE_PER_DUPLICATION_JOB = 512 GB
SLP.JOB_SUBMISSION_INTERVAL = 60 minutes
SLP.IMAGE_PROCESSING_INTERVAL = 60 minutes
SLP.IMAGE_EXTENDED_RETRY_PERIOD = 1 hour
SLP.MAX_TIME_TIL_FORCE_SMALL_DUPLICATION_JOB = 40 minutes
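The alias/cron setup described above can be sketched as a small shell helper - the profile file names are the ones from this post, but the hour cut-offs and cron times below are purely illustrative assumptions, not a recommendation:

```shell
#!/bin/sh
# Sketch of the crontab/alias approach: pick an SLP tuning profile
# by hour of day, then push it with bpsetconfig.
# NOTE: the hour ranges here are illustrative assumptions.

select_profile() {
    hour=$1    # hour of day, 0-23
    if [ "$hour" -ge 22 ] || [ "$hour" -lt 6 ]; then
        # overnight backup window: batch up large duplication jobs
        echo /usr/openv/var/global/nbcl.conf.big
    else
        # daytime: small, frequent duplication jobs
        echo /usr/openv/var/global/nbcl.conf.short
    fi
}

# Example crontab entries (illustrative times):
#   0 22 * * * /usr/openv/netbackup/bin/admincmd/bpsetconfig /usr/openv/var/global/nbcl.conf.big
#   0 6  * * * /usr/openv/netbackup/bin/admincmd/bpsetconfig /usr/openv/var/global/nbcl.conf.short
```

Called directly, `select_profile 23` prints the path of the overnight profile, which you would then hand to bpsetconfig.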

 

So I can micromanage my SLPs. I like to balance SLP duplications to tape across media servers so I do not overload one.

 

NetBackup 8.1.2 on Solaris 11, writing to DataDomain 9800
duplicating via SLP to LTO5 in SL8500 via ACSLS

6 Replies

Re: NetBackup Best Practices for SLP

It's kind of difficult to make sense of what you're saying. What are the source and the destination of the duplication - the tape drives or the Data Domains? Are duplications slow across media servers, or even when kept within a single media server? Are you multiplexing the backups? If so, are you keeping the MPX in the duplication? Are you getting similar speed rates on all tape drives?

As for best practices for SLP:
https://www.veritas.com/support/en_US/article.000018455

Re: NetBackup Best Practices for SLP

How are your data domains and tape drives connected?

I have both 10G and Fibre Channel from the media servers to the Data Domains, and Fibre Channel from the media server to tape.

I get excellent throughput. My Data Domain is a 9800, so I should!

I can drive 18 LTO5 tapes as fast as they can go.

I back up Oracle databases over Fibre Channel and 10G to the Data Domain, 14 channels at 200,000 KB/sec each.
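As a quick sanity check on those numbers (the channel count and per-channel rate are the ones quoted above; the conversion is just arithmetic):

```shell
# 14 Oracle channels at 200,000 KB/s each
kb_per_sec=$((14 * 200000))
echo "${kb_per_sec} KB/s"   # prints: 2800000 KB/s (roughly 2.67 GB/s aggregate)
```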

When I back up my NDMP server with tons of small files I get nowhere near this throughput.

If I use BOOST on some systems it looks like I am backing up at over 1,000,000 KB/Sec!

So, it depends on what you are backing up, and how.

NetBackup 8.1.2 on Solaris 11, writing to DataDomain 9800
duplicating via SLP to LTO5 in SL8500 via ACSLS

Re: NetBackup Best Practices for SLP

Dear Puneet,

Can you please elaborate on the environment details? Is the duplicate backup job going to tape, or to a second Data Domain?

Re: NetBackup Best Practices for SLP

My backup jobs go to the Data Domain, and duplication goes to tape. My tape library has 8 drives. We duplicate data using SLP. Each backup creates three copies: the primary copy is on the DD, and the 2nd and 3rd copies go to tape. In that case, how should I configure my backup and duplication jobs to get the best performance for my environment? If there is any article or document on best practices for NetBackup architecture, please share it with me.


Thanks
Puneet Dixit

Re: NetBackup Best Practices for SLP

Did you check the link I provided above?

Also, is this the same platform you described in https://vox.veritas.com/t5/NetBackup/configure-SLP-for-duplication-on-tape/m-p/857430#M236615 ?
Because in that case this thread would be duplicated.
