05-29-2017 01:53 PM
I am using LTO6 tapes with the HPE MSL2024 tape library.
I am sure Hardware Compression has been enabled at the tape library level, but the data written to the tapes does not seem to be compressed at all. Each tape shows Full when usage reaches about 2.8TB, which is just the raw capacity.
BTW, my NetBackup system is a NetBackup Appliance 5230. It looks like the hardware compression is disabled from the Appliance somewhere.
Any idea would be appreciated.
Ross
05-29-2017 11:58 PM
Hello
Well, it all depends on the data you back up - some data does not compress well, e.g. movies, pictures, etc., as it is already compressed. Regarding LTO6: native capacity is 2.5 TB. Per the LTO roadmap, LTO6 compressed capacity is 6.25 TB with an assumed compression ratio of 2.5:1, so 6.25/2.5 = 2.5 TB native. So it looks like some tiny compression is there after all, since your tapes do show 2.8 TB, right?
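To put numbers on that, here is a quick sketch of the arithmetic (the 6.25 TB figure and 2.5:1 ratio are the LTO roadmap's marketing assumptions, not guarantees; 2.8 TB is what the tapes in this thread report when marked Full):

```python
# LTO6 capacity sanity check.
# Assumed inputs: LTO roadmap's "compressed" capacity and ratio.
compressed_capacity_tb = 6.25
assumed_ratio = 2.5

# Native (uncompressed) capacity implied by the roadmap figures.
native_capacity_tb = compressed_capacity_tb / assumed_ratio
print(native_capacity_tb)  # 2.5

# Effective compression ratio actually achieved on these tapes.
written_tb = 2.8
effective_ratio = written_tb / native_capacity_tb
print(round(effective_ratio, 2))  # 1.12
```

An effective ratio of roughly 1.12:1 means the drive is only squeezing out about 12% compression, which is consistent with data that is already largely compressed (VM disk images, RMAN backup pieces).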
05-30-2017 12:18 AM - edited 05-30-2017 12:19 AM
Please ensure the compression option in the policy is unchecked.
Compression in the policy is NetBackup's built-in software compression; it is very slow and will greatly reduce the amount of data you can fit on tape (compressed data can't be compressed a second time).
An LTO tape drive will compress data by default; you don't have to do anything extra.
05-30-2017 06:49 AM
Thank you, quebek.
Yes, the tapes marked Full show about 2.8TB. Most of the backup data are Hyper-V VMs, Oracle RMAN backups, and some regular document files.
I have two almost identical NetBackup sites/systems. The other one shows about 5.0TB when Full. Comparing with the other one, I thought this site's data was not being compressed. Maybe it is because this site has more VM backups and Oracle RMAN data, which can't achieve high compression rates?
Thanks again,
Ross
05-30-2017 06:54 AM
Thank you, Nicolai.
Yes, the compression option in the policy is unchecked.
I was wondering if compression was disabled at the Appliance/Linux OS level. But as quebek calculated, there does seem to be a tiny bit of compression. It's kind of interesting.
Thanks again,
Ross
05-31-2017 08:13 AM - edited 05-31-2017 08:14 AM
@rossxu - Please see this tech note: http://www.veritas.com/docs/000091135
Since a Veritas appliance is running Linux, you can use the GEN_DATA directive to generate random data. My idea here is to generate a 50% compressible data stream and then fill up a tape. If you still only get native capacity, something is definitely not right; otherwise you should see a tape with a lot more data on it.
Writing an entire tape may take some time ...
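As a sketch, the backup selection list of such a test policy might look something like the fragment below. Treat the directive names and values as assumptions to verify against the tech note linked above before use (GEN_PERCENT_RANDOM controls how compressible the generated stream is; 50 should yield roughly 50% compressible data):

```
# Backup selection list for a throwaway test policy (sketch only --
# verify directive names and syntax against the Veritas tech note).
GEN_DATA
GEN_KBSIZE=1000000
GEN_MAXFILES=1000
GEN_PERCENT_RANDOM=50
```

Then run the policy until the tape fills and compare the reported Full capacity against the 2.5 TB native figure.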
06-01-2017 06:22 AM
That's a good idea, and good to know.
I'll give a try.
Thank you, Nicolai.
Ross
06-01-2017 07:26 AM
You could also try sending the data to another media server/drive and see if you get the same issue.
06-01-2017 07:34 AM
Are you backing up directly to tape or to Appliance followed by duplication to tape?
If duplication from MSDP - what kind of dedupe rates are you seeing?
Are you following all the best practices explained in the NetBackup 52xx and 5330 Appliance Capacity Planning and Performance Tuning Guide? e.g. turning off Oracle compression while performing backups?
Do any of the systems that you backup contain a high amount of media files (images, videos)?
06-02-2017 06:37 AM
Hi marianne,
I am backing up from MSDP to tapes, and the dedupe rate is about 65%.
The systems don't have media files. They are mostly the Hyper-V VMs, and some files dumped from RMAN.
I'll look into the Tuning Guide.
Thanks a lot,
Ross