SAN Client & PDDO

D_Peppas
Level 3
Partner Accredited Certified

Hi to all.

Is there any point (and is it even possible) in using a PDDO storage unit as the target for a SAN FT client? I am investigating a case with about 30-40 TB of data that the end user wants to send to a dedupe storage unit and then tape out, using target-side dedupe only.

A gigabit network will be saturated by this amount of data, so one idea is to go for source-side dedupe (but how well will that work on a 1-2 TB Oracle DB?); the other is to use a SAN FT client.
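
To put rough numbers on it: a gigabit link sustains perhaps 110-120 MB/s of payload, so 30-40 TB of fulls works out to roughly 75-100 hours of continuous wire time, i.e. 3-4 days for the network transfer alone.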

Any thoughts?

3 REPLIES

AAlmroth
Level 6
Partner Accredited

You can certainly send your backups over the SAN to an FT media server, which in turn is connected to an MSDP pool, or even a PDDO pool for that matter.

However, to gain better performance, I usually put a "non-dedup" disk storage unit first in the SLP, so that the actual backup window is minimized, and then add a duplication to PDDO/MSDP; see the sketch below. For 30-40 TB full backups this would obviously not be the preferred approach, though...
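
A minimal CLI sketch of such an SLP. The storage unit names are hypothetical and nbstl option syntax differs between NetBackup versions, so verify against the Commands Reference for your release:

    # Sketch: an SLP whose first operation is a backup to a fast non-dedup
    # staging STU and whose second is a duplication to the MSDP/PDDO STU.
    #   -uf 0,1    -> operation 1 = backup, operation 2 = duplication
    #   -residence -> one storage unit per operation (names are hypothetical)
    #   -rl        -> retention level per operation (short stage, longer dedup)
    /usr/openv/netbackup/bin/admincmd/nbstl Stage_then_Dedup -add \
        -uf 0,1 \
        -residence STU_STAGE_DISK,STU_MSDP \
        -rl 3,9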

For the 30-40 TB scenario you could instead consider virtual synthetics, so that you only run incremental backups on the client and then build new full backups synthetically on the storage server. With MSDP I have seen a significant decrease in the build time of the synthetic fulls: from 40-50 MB/s to 230-260 MB/s.

Oracle is another thing. Although the de-dup stream readers have been improved a lot, I still believe the best approach for optimum de-dup of Oracle databases is to use snapshot backup (RMAN proxy), so that the actual dbf files are backed up rather than a stream generated by RMAN. You can also alter the RMAN script to use FILESPERSET=1, which also improves the de-dup; see the sketch below. There is a best-practices paper here: http://www.symantec.com/business/support/resources/sites/BUSINESS/content/live/DOCUMENTATION/3000/DOC3534/en_US/NetBackup_OracleDedupe_BestPractices(rev2).pdf
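
As a minimal sketch of what I mean (the NetBackup policy name in PARMS is hypothetical, and channel/catalog details are site-specific):

    # Sketch: contents of a hypothetical script dedup_full.rman, run with
    #   rman target / @dedup_full.rman
    RUN {
      ALLOCATE CHANNEL ch1 DEVICE TYPE SBT_TAPE
        PARMS 'ENV=(NB_ORA_POLICY=ora_dedup)';
      # FILESPERSET 1 keeps each datafile in its own backup set, so the
      # streams stay repeatable from run to run. For the snapshot approach,
      # use BACKUP PROXY DATABASE; instead of a stream generated by RMAN.
      BACKUP FILESPERSET 1 DATABASE;
      RELEASE CHANNEL ch1;
    }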

/A

Yogesh9881
Level 6
Accredited

challenging & Interesting .....smiley

Dear AAlmroth, thanks for the reply.

I am a little curious to know...

Can we maintain performance if we implement MSDP and the FT media server on a single host? And again, we are talking about an almost 40 TB Oracle database (not files): can we get a good de-dupe ratio for a database?

AAlmroth
Level 6
Partner Accredited

First, there is no single solution...

De-dup

Databases are tricky to get as good a de-dup ratio on as we see for normal unstructured data (i.e. files). For a normal Oracle database backup under high change load, you would possibly get the usual 2:1 ratio at best: RMAN removes the empty blocks from the stream, and hence no stream is the same from one backup run to the next.

Using RMAN proxy and snapshot backups, we can de-dup the dbf files themselves, and all the empty blocks will de-dup at an extremely good ratio, but we then need to read and back up more data.

Performance

Using de-dup storage for a database of that size and expecting fast restores is asking for trouble. If, on the other hand, the backup is only seen as the last resort, that is, for when all the replicated copies in Data Guard etc. are no good, then de-dup storage has the best TCO...

But I wouldn't consider de-dup storage for an Oracle database of that size. Oracle backups stream well, so tape is the prime candidate: predictable timings for backup and, most importantly, for restore. You could however use normal staging disk, split the backup by tablespaces and/or partitions, and then duplicate to tape or de-dup disk; see the sketch below. This would still give you a very good RTO, as the primary copy stays on the staging disk for some time.
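
A rough sketch of the duplication step from the CLI (the tape STU label is hypothetical; verify the bpduplicate options against the Commands Reference for your version):

    # Sketch: duplicate everything backed up in the last 24 hours from the
    # staging disk to a tape storage unit.
    /usr/openv/netbackup/bin/admincmd/bpduplicate \
        -dstunit STU_LTO_TAPE \
        -hoursago 24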

Depending on your RTO/RPO: tape is most likely much better for a very strict RTO, whereas restoring from disk gives better RPO flexibility.

And to answer the OP on tape-out performance: forget full streaming speed from de-dup to tape, since every image has to be rehydrated on read. LTO5 is most likely a waste of hardware... Better to re-use LTO3, or something whose speed matching can handle anything from 10 MB/s up to 80 MB/s.

If you do a PoC on tape-out and test a single duplication, you will probably see very good streaming. But that is a single session...

Now imagine you are in full production, running 200+ backup sessions plus some optimized duplication jobs to DR de-dup storage, and on top of this a few tape-out sessions... well, the story is different...

/A