01-20-2016 09:03 AM
Hello!
Does anyone know of a way to use NetBackup Accelerator with "user archive" schedule types?
Of course, my "user archive" policies are of type Standard, but Accelerator seems to work only with full or incremental backup schedules.
01-20-2016 10:26 AM
Hi,
Your request, if you think about it, doesn't really make sense. Accelerator works by determining which files have changed since the previous day and then quickly selecting only the changed segments while constructing the full/incremental image on the storage server.
A user archive is designed to remove files, whereas Accelerator works on the premise that files remain on the file system and should not need to be backed up again.
Just seems illogical to me, what do you think?
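To make the mechanism concrete, the change-detection idea behind Accelerator can be sketched roughly like this (a toy illustration only, not NetBackup's actual implementation; the track-log structure and function names are invented for the example):

```python
import os

def scan(root):
    """Collect (mtime, size) metadata for every file under root."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            p = os.path.join(dirpath, name)
            st = os.stat(p)
            state[p] = (st.st_mtime, st.st_size)
    return state

def accelerated_backup(root, track_log):
    """Return (files_to_read, new_track_log).

    Only files whose metadata changed since the last run are read
    and sent; unchanged files are merely referenced from the
    previous backup image, which is how a 'full' can be
    synthesized cheaply on the storage server.
    """
    current = scan(root)
    changed = [p for p, meta in current.items() if track_log.get(p) != meta]
    return changed, current
```

The key point, echoed later in this thread, is that this only speeds up *discovery*: the changed bytes themselves still cross the wire at normal speed. It also shows why a user archive defeats the model: archiving deletes the files after backup, so the premise that most of the tree is unchanged from yesterday no longer holds.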
01-20-2016 12:13 PM
Hey Riaan, you are right, I agree with you.
Actually, my request was perhaps not well formulated.
I want to know if there is any feature that can provide the same functionality as Accelerator but for "user archive" types.
Or is client-side dedupe the only option I have?
@Marianne, the attribute was not greyed out, but in the Activity Monitor, when the job starts, bpbrm says "disabling accelerator ..."
01-20-2016 12:34 PM
Why do you think you need accelerator for user archive type jobs? What is your "use case" scenario?
01-20-2016 02:10 PM
OK, got it.
Actually my issue is that we are doing "user archive" type backups daily for a large Oracle DB, and the time it takes to complete exceeds the backup window.
I wanted to reduce that time significantly. The setup was done before I joined the organisation, but I think the purpose of using the "user archive" type was the possibility of having the backup filesystem cleared before the following batch process cycle, during which a new Oracle RMAN dump of the DB is written.
01-20-2016 04:58 PM
So, the intention was to clear up space after known successful backups. In which case I would have expected the number of files involved to be only a few. Which then makes me ask, what kind of performance issues are, or were, you experiencing to make you ask your original question? Back to my question, what is your "use case"? Or, put another way, why is this question important to you? I don't mean to press you. I'm purely curious. Maybe you have one of those scenarios that fosters new thinking.
"The time it takes exceeds the backup window." you say. How many files are we talking about? How big is the daily data set? Why do you think "accelerator" will help?
01-20-2016 05:03 PM
Remember, "accelerator" doesn't move data bits any faster than normal backups. It doesn't make backup data transfer/transport faster. What it does is make the locating of small amounts of updated, dispersed data amongst huge sets of static data faster. Nothing can make a "bit" of data move faster than the speed of an electrical or optical signal across the incumbent physical carrier. Clever software can make the discovery a shorter elapsed duration, and that's what "accelerator" does.
01-20-2016 08:11 PM
You should be looking at Oracle Co-Pilot available for appliances in version 2.7.1. That will accelerate it for you :)
01-20-2016 10:33 PM
Hi, I am aware that Accelerator will not move data bits any faster than normal backups. But to make the transfer faster, we can also just send less data over the wire.
I have 180 files (for a size of 2 TB) to push every night, and I am also sure that within those 180 files, only 10% change from one day to the next.
Hence, I was looking for a kind of "accelerator feature" that would let me send just the modified blocks to the MSDP and then reconstruct the whole image over there.
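What "send just the modified blocks and reconstruct the image over there" amounts to is deduplication. It can be sketched like this (a simplified, fixed-size-chunk illustration; MSDP actually uses variable-length segmentation, and all names here are invented for the example):

```python
import hashlib

CHUNK = 128 * 1024  # fixed 128 KiB chunks, purely for the sketch

def chunk_hashes(data):
    """Split data into chunks and fingerprint each one."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    return [(hashlib.sha256(c).hexdigest(), c) for c in chunks]

def dedupe_send(data, server_store):
    """Send only chunks the server hasn't seen; return a recipe
    (ordered list of fingerprints) from which the server can
    rebuild the file, plus the bytes actually transferred."""
    recipe = []
    sent_bytes = 0
    for h, c in chunk_hashes(data):
        if h not in server_store:
            server_store[h] = c       # only new data crosses the wire
            sent_bytes += len(c)
        recipe.append(h)
    return recipe, sent_bytes

def server_rebuild(recipe, server_store):
    """Reconstruct the whole image on the server side."""
    return b"".join(server_store[h] for h in recipe)
```

With the numbers in this post (2 TB nightly, roughly 10% changed), a scheme like this would cut the nightly transfer to somewhere around 200 GB, since the unchanged 90% of chunks are already in the MSDP pool and only their fingerprints travel.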
@Riaan, I am running 7.6.0.1 - I will try to plan an upgrade to 7.7 to make use of Oracle Copilot.
Regards.
01-20-2016 11:04 PM
Please also consider using the Oracle agent to back up directly to NetBackup instead of database dumps to disk followed by NBU backup/archive of the dumps.
Client-side dedupe works well with Oracle agent backups - if the guidelines in the Performance Tuning manual are followed for Oracle backups.
01-21-2016 01:13 AM
Hello Marianne.
Thanks, that's a very good idea. Let me read about it, test it, and revert.
regards.
01-21-2016 02:47 AM
Hi Kwakou,
1) When you say "I am aware that accelerator will not move data bits any faster than normal backups. But to make the transfer faster, we can also just send less data over the wire."
...to me, "move data bits" and "transfer" are the same thing. So, accelerator does NOT transfer the data faster. Accelerator speeds up the discovery of data. It does NOT make the move or transfer faster.
...it is "client side dedupe" which will "send less data over the wire".
2) When you say "Hence , I was looking for a kind of "accelerator feature" to allow me send just the modified blocks to the MSDP and then over there reconstruct the whole image."
...what you are talking about is exactly what "client side dedupe" will do.
3) I think you may have been confusing the definitions and features of two different things, i.e. "accelerator" and "client side dedupe".
HTH.
01-21-2016 11:06 AM
@sdo, Thank you for the clarifications.