I have a couple of quick questions about the dedupe store concurrent jobs setting (for a standard dedupe store running on the media server, with no external devices and no OSTs).
What do people normally leave the setting at? It looks like I'm going to be running quite a few jobs at once, but I'm hesitant to run more than 3 at a time in case it slows all of the jobs down too much.
I have BE running as a VM with 2 x dual-core processors and 32GB of RAM. If I double this to 8 cores in total, will it help me run more concurrent jobs efficiently?
Thanks in advance for your opinions
Are you saying I should not use a VM for a media server because of performance?
I have plenty of resources allocated to the VM, and it has almost the full use of a host with 128GB of RAM.
Yes that is what pkh says. It is not recommended to use a virtual server for a media server with dedupe.
But if you are perfectly fine with the fact that it uses almost an entire host, and that it can slow down other VMs on that host, then it's OK. It's your own choice...
Thanks for the reply. Yes, the host is really dedicated to backups, and I am fine with it using up almost all of the resources on it, though I doubt it will. The only other VMs on it are non-production.
So in my instance, I can't see any reason whatsoever why you would recommend a physical server over a VM, as it is a supported configuration (no external devices). Having the hardware as an ESXi host opens up many other doors, avenues and functionality, so for my solution, a VM is a no-brainer.
Yeah, I have all of those bases fully covered.
I keep backups of the VM in another location (just as you would back up the OS files if it were physical). Because of my choice to go virtual, backup and recovery is much, much easier.
I also have duplicates of the backup sets elsewhere (I use CASO with optimized duplication).
So, running Backup Exec as a VM (in my case) is much better than running it on a physical machine. There are no benefits to choosing physical over virtual in my case.
We're going way off topic here; my original query was to find out how many concurrent jobs people are running with their server specifications. Nobody has shared their numbers just yet, but thanks for the suggestion in your initial reply.
It's important to remember that each time you fire a backup job, a full set of Virtual Tape Libraries is loaded, with a maximum of around 256. Each backup job represents 12 VTLs, if I remember correctly from the engineer I spoke with last year. I started to have performance issues with anything over 12, though, so we dialed back tremendously. We've finally settled on four concurrent operations, as inline deduplication keeps our backup window short. Things come to a head on weekends when we do full backups; I've gone to staggering full backup days, and the server really smoothed out after doing so.
The last engineer I spoke with suggested starting with three to four concurrent jobs, then moving up to seven, but recommended keeping concurrent jobs as low as the backup window would allow. You could, however, go up to 21 if you REALLY wanted to push the envelope.
It's important to remember that each time you fire a backup job, a full set of Virtual Tape Libraries is loaded, with a maximum of around 256. Each backup job represents 12 VTLs, if I remember correctly from the engineer I spoke with last year.
Can you add any more detail or clarity to that? I never heard that about 12 or 256 before. I know that the concurrency limit is 32 and that there are other scale limits.
I thought that the number of concurrent operations specified for the dedupe or OST device determined the size of the pseudo-VTL that was created. So if you allow 6 concurrent operations, you get one pseudo-VTL with 6 pseudo drives.
Actually Larry, you are correct. I was remembering the conversation incorrectly and thought 12 was the number of VTLs spawned, when in fact it is one for each concurrent operation you allow on the deduplication drive (I had 12 concurrent operations permitted).
There is a hard limit of 256 VTLs, though.
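To make the relationship above concrete, here is a minimal sketch of the arithmetic as described in this thread (the 256 hard limit and the one-pseudo-drive-per-concurrent-operation rule are taken from the posts above, not from official documentation, and the function names are my own):

```python
# Hypothetical sketch: one pseudo-VTL per dedupe/OST store, with one pseudo
# drive for each concurrent operation allowed on that store. The 256 cap is
# the hard limit mentioned in this thread.
VTL_HARD_LIMIT = 256

def pseudo_drives(concurrency_per_store, dedupe_stores=1):
    """Total pseudo drives created across all stores."""
    return concurrency_per_store * dedupe_stores

def within_limit(concurrency_per_store, dedupe_stores=1):
    """True while the total stays under the reported 256 hard limit."""
    return pseudo_drives(concurrency_per_store, dedupe_stores) <= VTL_HARD_LIMIT

print(pseudo_drives(6))    # 6 concurrent operations -> 6 pseudo drives
print(within_limit(6))     # comfortably under the 256 cap
```

So with a typical setting of 3 to 12 concurrent operations on a single dedupe store, you are nowhere near the 256 ceiling; the practical limit is I/O and CPU, not the VTL count.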
Apologies, I can't seem to edit my other statement, so I hope people read your response and mine!
We have a dedicated Dell R710 server with two quad-core X5670s and 60GB of memory. One PCIe 2.0 x8 slot is connected to a PERC H830 adapter with twelve 6TB 12Gb/s SAS disks in RAID 50 w/ 5 parity groups (using ~32TB of it for dedupe space). A second H700 at x8 is connected to 4 SSDs in RAID 10 for the C: drive and 4 SSDs in RAID 10 for BE. There's an LTO5 SAS tape drive on an x4 adapter, plus a few USB drives with legacy B2D jobs on them. I can run somewhere around 10 concurrent jobs at a time without hiccups (I forget exactly how many). It doesn't really max out the system while backing up network systems, no matter how many concurrent jobs we run; 4 gigabit NICs just can't saturate it, and we settled on 6 concurrent jobs due to the way our network is arranged.
From a performance standpoint, our server has no problem transferring at 300MB/s which is 2-3x the recommended throughput for dedup storage.
I don't think you'd benefit much from adding more processors to the BE VM. Only the DR disc creation uses more than 2 cores, and that can be done from a management workstation.
We do have a problem when we try to back up the dedupe storage to tape for offsite storage while other jobs are running. Too much I/O from running jobs, catalogs or verifies causes the tape job to fail; we have to pause the dedupe storage while the tape is running or the tape backup fails. Tape drives are amazingly fast writers. I have a bit more investigation to do on this issue.
As far as running BE in a VM goes, I'm sure it works, but I'll never do that again. A tape drive attached to the VM will inevitably stop functioning or go offline, and the entire server has to be restarted to get it back online. This kind of defeats the benefits of a VM and disrupts the other VMs on the server. I'm already having some IOPS issues; in a VM it would be worse.
Also, if you are running 2012 R2 and BE itself is running inside a VM, you can't use 2012 R2's block-level backup feature for incremental backups to deduplicated storage when backing up VMs directly on the host, so running BE in a VM is very inefficient in that scenario.