efficient Volume Pools for vaulting

I'm working on trimming down the number of Volume Pools in my NBDC 4.5 environment. The goal is to speed up jobs by writing multiple Policies to a tape simultaneously and by cutting down tape load-and-unload times.

Most of my Policies have their own separate Vaults.

I'm concerned that if policies share volume pools, two different Duplication jobs may want to read the same tape at the same time.

I'm also concerned that a Duplication job may want to read from a tape that is already mounted by a Backup job from another policy.

What will happen in either of these cases?
Will the Duplication read from a tape already mounted for another job?
Will the Duplication wait in the Queue until the tape is available?
Or will the Duplication fail?

Re: efficient Volume Pools for vaulting

Remember that media servers DO NOT share tapes, regardless of whether the tapes are in the same volume pool.

Will the Duplication read from a tape already mounted for another job? No. Another mount request will be generated.

Will the Duplication wait in the Queue until the tape is available? Yes. It will remain queued for a time; then it may time out.

Message was edited by:
Bob Stump
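The queueing behavior described above can be sketched as a toy model. This is illustrative only: the tick-based timeout, job names, and media IDs are invented for the sketch, not NetBackup internals.

```python
# Toy model of media contention between a Backup and a Duplication job
# (illustrative only: the tick-based timeout and job names are invented,
# not NetBackup internals).

class Tape:
    def __init__(self, media_id):
        self.media_id = media_id
        self.mounted_by = None  # job currently holding the tape, if any

def run_duplication(tape, release_tick, timeout):
    """A Duplication job never reads a tape mounted by another job: it stays
    queued, re-requesting the mount each tick, and fails if the tape is not
    released before the timeout."""
    for tick in range(timeout):
        if tick >= release_tick:
            tape.mounted_by = None           # the Backup job unmounts the tape
        if tape.mounted_by is None:
            tape.mounted_by = "duplication"  # new mount request succeeds
            return ("started", tick)
    return ("timed out", timeout)

busy = Tape("A00001")
busy.mounted_by = "backup"
print(run_duplication(busy, release_tick=2, timeout=5))   # ('started', 2)

stuck = Tape("A00002")
stuck.mounted_by = "backup"
print(run_duplication(stuck, release_tick=9, timeout=5))  # ('timed out', 5)
```

The second call models the failure case Bob mentions: the Backup never releases the tape within the window, so the queued Duplication eventually times out.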

Re: efficient Volume Pools for vaulting

Thanks for the concise answers, Bob. That's exactly what I needed to know.

Good point about the Media Servers owning tapes. I'll have to think through how that affects my goals. The majority of my Policies are configured to use "any" Storage Unit, and I have two Storage Units that are not set to On Demand Only.

Re: efficient Volume Pools for vaulting

Mike Werger,

I know you marked this thread as answered but I was wondering what have you decided for your best solution?

Re: efficient Volume Pools for vaulting

For now, I've assigned all of my unvaulted Policies to a few generic Volume Pools. The Policies I do Vault are assigned to individual Pools, or to Pools shared among a few of them.

I was getting much better results on my backup times when I had some of the Vaulted Policies writing to generic Pools, but some admins on my team reported vault failures and partial failures.

I didn't investigate the individual errors thoroughly (there's too much else to do), but I'm guessing they might have been related to tape availability. My backups are definitely writing a lot slower with these major Policies separated, but they still finish within my nighttime window deadlines.

It's difficult with LTO2 drives and 100 Mb/s connections from clients, because without writing multiple Policies to a tape simultaneously I can't combine enough streams to hit the drives' native speed (30 MB/s).
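The stream math behind that: a 100 Mb/s client link tops out at roughly 12.5 MB/s, so (ignoring protocol overhead and compression) it takes about three concurrent client streams multiplexed to one drive to keep a 30 MB/s drive streaming. A quick back-of-the-envelope check:

```python
import math

CLIENT_LINK_MBIT = 100    # Fast Ethernet client connection, Mb/s
DRIVE_NATIVE_MBYTE = 30   # LTO2 native write speed cited above, MB/s

# 100 Mb/s is about 12.5 MB/s per client, ignoring overhead and compression
client_mbyte = CLIENT_LINK_MBIT / 8
streams_needed = math.ceil(DRIVE_NATIVE_MBYTE / client_mbyte)

print(client_mbyte)    # 12.5
print(streams_needed)  # 3
```

With only one Policy per tape, any Policy with fewer than three simultaneous streams leaves the drive shoe-shining below native speed.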

My concept of engineering a NetBackup environment, and probably any kind of backup environment, is that it's a juggling act balancing optimal backup, vault, and restore speeds. At least in my data center, restore speeds are still a very low priority. I mainly have to worry about backup and vault legal requirements: get everything on tape and out the door.

Re: efficient Volume Pools for vaulting

As for Media Servers not sharing tapes, I've pointed the Policies I vault to designated Storage Units. This has definitely sped up vault times, but it has also built up traffic in the queue during nighttime backups. I get more error 134 re-queues, but I'll be able to reduce them over time as I decide how to juggle Storage Units.

My other policies all still go to "any" Storage Unit, just like they go to generic Volume Pools.

Re: efficient Volume Pools for vaulting

Have you seen this technote?
How to optimally performance-tune systems with IBM Linear Tape-Open 2 drives (Linear Tape-Open generation 2 drives, IBM 3580 Ultrium-2)
http://seer.support.veritas.com/docs/262724.htm

Also, inline tape copy may be a good solution to help with the overall backup imaging and vaulting. It does have drawbacks: you need a good number of tape drives available, and there is about a 30% increase in the time needed to make the images. But then you are done with both the original and the copy.
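The trade-off can be put in rough numbers. This sketch assumes, as a simplification, that a separate duplication pass takes about as long as the original backup; the eight-hour window is a made-up figure.

```python
# Rough comparison of inline tape copy vs. backup-then-duplicate.
# Assumptions (not from NetBackup docs): the backup window is 8 hours,
# and a separate duplication pass takes about as long as the backup itself.

backup_hours = 8.0

# Inline tape copy: one pass writes original + copy, ~30% slower (per above)
inline_hours = backup_hours * 1.30

# Backup then duplicate: two full passes
two_pass_hours = backup_hours * 2

print(inline_hours)    # ~10.4
print(two_pass_hours)  # 16.0
```

Under those assumptions the inline copy finishes well ahead of a separate duplication run, at the cost of tying up twice the drives during the backup itself.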

I also discovered that inline tape copy makes restores much, much faster for Oracle DB threads.