09-14-2020 03:15 PM
Hello
We are using NetBackup 8.1.1 in a Windows 2012 R2 environment with a Quantum i500 tape library consisting of 6 LTO-6 drives. The Windows master server is the media server as well...
How do I set my backup policy to write to multiple tape drives at the same time? I have a policy that is backing up a Linux client, and the Backup Selection looks like this: /share/* With the "*" wildcard it spawns 4 separate job streams. How can I have this policy write these job streams to separate tape drives at the same time?
Looking to have this job finish faster. Or what other possible solutions are there when going to tape?
We currently only have 6 tape drives, but we will be adding another i500 library to the mix soon, so I want to take advantage of making this job finish faster if possible.
thanks
BC
09-15-2020 02:49 AM
Curious to know what the current result is - how many streams go active?
What is the current throughput that you are seeing?
Getting 4 tape drives to write simultaneously is the easy part.
Can you guarantee that the Linux client can generate and push 4 streams in excess of 100MBytes/sec each?
Can the network between the client and master/media server push this amount of data in a constant stream?
Can the master/media server resources stream the amount of drives simultaneously to prevent shoe-shining?
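(Just to put rough numbers on those questions, here is an illustrative Python sketch. The 100MBytes/sec per-stream target is from the point above; the ~110MB/s usable on a 1GbE link is a typical figure, not a measurement from this site.)

```python
# Rough bandwidth check for driving 4 tape streams at once.
# Assumed figures: 100 MB/s per stream (from the question above),
# ~110 MB/s usable on a 1 GbE link (typical line rate, not measured here).
STREAMS = 4
PER_STREAM_MB_S = 100

aggregate_mb_s = STREAMS * PER_STREAM_MB_S        # 400 MB/s total
aggregate_gbit_s = aggregate_mb_s * 8 / 1000      # 3.2 Gbit/s on the wire

GBE_USABLE_MB_S = 110
links_1gbe_needed = -(-aggregate_mb_s // GBE_USABLE_MB_S)  # ceiling division

# A single 1 GbE link cannot even carry one such stream;
# 10 GbE is the realistic floor for this workload.
print(aggregate_mb_s, aggregate_gbit_s, links_1gbe_needed)  # 400 3.2 4
```

In other words, before worrying about drive counts, check whether the network path can even carry 3.2 Gbit/s sustained.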
To get 4 drives to write a single stream each, ensure that the STU is configured for 6 drives with MPX set to 1.
The policy must have 'Allow multiple data streams' enabled and Max Jobs per policy set to 4 (or left unlimited), with MPX in the schedules set to 1.
Global Attributes on the master must have 'Max Jobs Per Client' changed to 4 (or more).
Most users battle to stream tape drives above 100MB/sec, because the data stream cannot be generated quickly enough by clients or because of limited network resources.
Here we increase backup speed by increasing number of simultaneous streams to a single tape drive,
e.g. MPX in STU = 4, MPX in schedule = 4, Max Jobs per Client = 4.
There is an old Performance Tuning Guide that has not been updated in many years, but the basic principles for understanding environment bottlenecks remain the same.
There is no quick solution: you need to examine each component in the backup chain, and then make changes.
Document each change, then monitor and document the results before making another change.
Good luck!
09-15-2020 08:46 AM
Hi Marianne - thanks for your fast response and for taking my question.
When I run the policy with the selection path /share/*, only 4 streams start, as there are only 4 main subdirectories under the /share filesystem.
The current throughput in the job details was as follows (all of these streams were active at the same time but writing to only one tape):
stream #1 - 31,747 KB/sec - finished
stream #2 - 20,825 KB/sec - finished
stream #3 - 15,972 KB/sec - finished
stream #4 - 26,760 KB/sec - still running
OpsCenter is showing this master/media server with the following throughput: 22,351 KB/sec
OpsCenter is also showing this Linux client with the following throughput: 9,378,463 KB/sec
-------------------------
Basically, stream #4 is the subdirectory that is taking forever to back up. I figured if I could get the other streams to write to their own tape drives, I could at least cut some time off...
To answer your questions about pushing 4 streams in excess of 100MB/sec (and the others), I will have to figure that out. I understand what you are getting at now.
I will Google the NetBackup Performance Tuning Guide before I change anything. This particular backup, a full at about 12TB, takes pretty much all week to run with 1 stream and 1 drive.
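(Some quick arithmetic on those numbers, with Python used purely as a calculator; the ~93MB/s combined figure is just the sum of the four stream rates above, and assumes all four streams could be kept busy end to end, which the lopsided stream #4 makes unlikely in practice.)

```python
# Back-of-the-envelope timing for the 12 TB full backup.
TB = 10**12
size_bytes = 12 * TB

# "Takes pretty much all week" with 1 stream / 1 drive implies roughly:
week_secs = 7 * 24 * 3600
implied_mb_s = size_bytes / week_secs / 1e6       # ~19.8 MB/s, consistent
                                                  # with the stream rates above

# If all four streams (~93 MB/s combined) could be kept busy end to end:
combined_mb_s = 93.0
est_days = size_bytes / (combined_mb_s * 1e6) / 86400

print(round(implied_mb_s, 1), round(est_days, 1))  # 19.8 1.5
```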
We have this policy scheduled to run once a week (full), so basically as soon as it finishes it pretty much starts a new backup. I inherited this backup server/environment, so this is the way it has been for some time, although the /share/ directory on this client is just getting bigger and bigger and therefore takes longer to run.
Thanks
BC
09-16-2020 04:34 AM - edited 09-16-2020 04:36 AM
With the speeds you are getting, you will not gain anything by sending each stream to a different drive. The drives will wear out because of shoe-shining (throughput too low).
All of those streams can run concurrently to the same drive.
STU setting: MPX = 4
Policy schedule: MPX = 4
Max Jobs per Client = 4 (or more)
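(To see why this beats one-drive-per-stream, here is an illustrative check using the rates reported earlier in the thread; Python is used purely as a calculator, and the 40MB/s figure is the commonly quoted LTO-6 native minimum streaming speed.)

```python
# Why MPX beats one-drive-per-stream here: check each option against the
# LTO-6 minimum streaming speed (~40 MB/s native, commonly quoted figure).
LTO6_MIN_NATIVE = 40.0   # MB/s

# The four stream rates reported earlier in the thread (KB/s):
stream_kb_s = [31747, 20825, 15972, 26760]
stream_mb_s = [kb / 1024 for kb in stream_kb_s]

# One drive per stream: every drive runs below the minimum -> shoe-shining.
split_ok = all(rate >= LTO6_MIN_NATIVE for rate in stream_mb_s)

# All four multiplexed onto one drive: the combined rate clears the minimum.
combined = sum(stream_mb_s)
mpx_ok = combined >= LTO6_MIN_NATIVE

print(split_ok, mpx_ok, round(combined, 1))  # False True 93.1
```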
09-17-2020 01:49 AM - edited 09-17-2020 02:01 AM
Marianne is correct ...
Multiplexing is only there to make sure the tape drive runs at or above its minimum streaming speed.
LTO-6 drives (with slight variation between brands) have a minimum streaming speed of:
40MB/s uncompressed
160MB/s compressed.
So if you are, for example, backing up incompressible files, e.g. .zip files or .jpegs, the drive only needs data at its native minimum of 40MB/s. I would suggest most data backed up is compressible to some extent, so realistically you need to be sending data to the drive at 160MB/s to ensure it avoids stop/start, which is extremely bad for both the tape and the drive.
MPX is not there to speed up client backups. In fact, consider two clients with compressible data, where disk speed limits each client to an output of 80MB/s; each client has the same amount of data and can back up in, say, 10 minutes. It makes no difference to the backup time whether the data streams go to a single drive (MPX) or to separate drives; it does, however, make a difference to how fast the drive runs.
Multiplexed
Each client sends data to the memory buffers at 80MB/s, so 160MB/s in total is coming in. The drive is able to write at 160MB/s, so the data arrives at a constant rate and the tape drive is happy.
Non-multiplexed, using two tape drives
Each client sends data to the memory buffers at 80MB/s, so the client 'data send' rate is the same, and thus the client backup time is the same.
However, each drive can empty its memory buffers twice as fast as they fill, so it has to stop, wait for the buffers to fill, write that data, then stop and wait again. The tape drive is sad.
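(The buffer arithmetic in that comparison can be sketched as a toy model; the 80 and 160 MB/s figures are the assumed rates from the example above.)

```python
# Toy model of shoe-shining: one client filling buffers at 80 MB/s,
# feeding a drive that empties them at 160 MB/s (LTO-6 compressed rate).
fill_mb_s = 80.0
drain_mb_s = 160.0

# Fraction of time the drive can actually be writing before it outruns
# the incoming data and must stop and reposition:
duty_cycle = min(1.0, fill_mb_s / drain_mb_s)          # stopped half the time

# With two such clients multiplexed onto the same drive, the input matches
# the drain rate and the drive never has to stop:
duty_cycle_mpx = min(1.0, 2 * fill_mb_s / drain_mb_s)

print(duty_cycle, duty_cycle_mpx)  # 0.5 1.0
```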
Looking at your figures
stream#1 - 31,747 KB/Sec - finished
stream#2 - 20,825 KB/Sec - finished
stream#3 - 15,972 KB/Sec - finished
stream#4 - 26,760 KB/sec - still running
A total of about 90MB/s. So even using MPX, we only have about 90MB/s coming in: not fast enough for the minimum streaming speed of an LTO-6 drive if the data is compressible, and there is no benefit to splitting the streams across separate drives. If the combined speed of the streams were faster than the maximum speed the drive can write at (160MB/s native / 400MB/s compressed for LTO-6), then there would be a benefit to using more drives, as the drive would then be the limiting factor.
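(As a rough illustration of that last point, with Python as a calculator and the drive maximum being the LTO-6 native figure quoted above:)

```python
# More drives only help once the combined stream rate exceeds what one
# drive can absorb. Stream rates (KB/s) are the four figures above.
stream_kb_s = [31747, 20825, 15972, 26760]
combined_mb_s = sum(stream_kb_s) / 1024          # ~93 MB/s

DRIVE_MAX_NATIVE_MB_S = 160                      # LTO-6 native maximum

# Ceiling division: how many drives this workload could actually keep busy.
drives_worth_using = max(1, -(-int(combined_mb_s) // DRIVE_MAX_NATIVE_MB_S))

print(round(combined_mb_s, 1), drives_worth_using)  # 93.1 1
```

At these rates the workload cannot keep even one drive at its native maximum, so a second drive would just make both shoe-shine.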
10-08-2020 01:23 PM
Sorry for the late response, but thank you so much for everyone's input. It makes sense. I'm currently in talks with the client to see which of the subdirectories under this share (4 subdirs) really need a full backup every week. Thanks again!