Backup Exec 15 Performance Slow
Hi, we're another shop struggling with Backup Exec's performance.
Brand new, fresh install of Windows 2012 R2 and Backup Exec 15 on a Dell PowerEdge R430 with internal RAID 5 (or 6, I don't recall what the PERC on it offered), a dual-port 6Gbps SAS card connected to a Dell TL2100 with two drives, and four 1Gb NICs LACP'ed into one team (we also tried a single un-teamed NIC by itself). All firmware, Windows updates, and BE updates/patches are current. We're trying to back up VMware 6.0u1 in the same rack; the VMware environment is a Dell M1000 blade stack with EQL 6110s, all 10Gb within the blade stack and storage infrastructure, uplinked at 10Gb to 1Gb Cisco 3750 switches, then 1Gb to our BE server. Jumbo frames are enabled throughout the entire infrastructure.
Same as other folks who have this issue, we can single-stream file copy at a full 1Gbps, or multi-stream file copy at nearly 4Gbps across our team. We've also tried stripping out the LACP team entirely, with just one single NIC connected, and it's the same thing: we can easily drive full 1Gbps throughput with OS-level file copies (either drag-and-drop in Windows or multiple instances of robocopy). Backup Exec will only pull data at ~300-500Mbps no matter what, either B2D or B2T, using the VMware agent backing up VMDKs (not the Windows agent). That's watching network utilization of 300-500Mbps, and a BE job rate of 2.5-3.5GB/min, which seems to jibe.
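For anyone who wants to verify that the job rate and the network utilization really do agree, here's a trivial conversion sketch in Python (assuming 1GB = 1000MB and steady throughput, and ignoring protocol overhead):

```python
# Rough conversion: link throughput in Mbps -> Backup Exec job rate in GB/min.
# Assumes 1 GB = 1000 MB and ignores protocol overhead.

def mbps_to_gb_per_min(mbps: float) -> float:
    return mbps / 8 / 1000 * 60   # Mb/s -> MB/s -> GB/s -> GB/min

for mbps in (300, 500, 1000):
    print(f"{mbps} Mbps ~ {mbps_to_gb_per_min(mbps):.2f} GB/min")
# 300 Mbps ~ 2.25 GB/min, 500 Mbps ~ 3.75 GB/min, 1000 Mbps ~ 7.50 GB/min
```

So the 2.5-3.5GB/min job rate and the 300-500Mbps utilization are consistent with each other, and even a fully saturated single 1Gbps link would top out around 7.5GB/min.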
Our argument is the same as others': the network speed is proven good since we can move files at full speed, so BE should be able to do the same. Everything runs very fast except BE. Troubleshooting steps taken:
-BE repair, despite it being a fresh install; no change
-Local disk defrag, despite it being a fresh install and the OS reporting 0% fragmentation (which also shouldn't matter, since B2T runs at the same slow speed); no change
-Trend Micro: excluded the BE install and BEData folders, then unloaded it entirely; no change
-NIC tuning: enabling/disabling flow control (on the NICs and on the switch), enabling/disabling offloading, and adjusting NIC buffers; confirmed 1Gbps full-duplex on each NIC and, as mentioned, we can push full bandwidth with file copies; no change
-File size and buffer tuning from within BE; no change
-Registry buffer tuning per article #85311; no change
-Numerous reboots, including a reboot after every troubleshooting attempt listed above; no change
-Ran Wireshark while backups were running; no trouble found: clean network connectivity, jumbo frames confirmed, good window size, etc.
Backing up a couple of small 20GB VMs runs at about 2.5GB/min / 300Mbps average. Backing up a single 460GB VM runs at about 3.5GB/min / 500Mbps average. Even the large VM at the faster speed is unusable when we have 13.7TB of critical data (30TB overall) that needs to be backed up in a 29-hour window (I calculate 90-95 hours for the 13.7TB!!). Please double-check my math just in case.
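Here's the back-of-the-envelope math for anyone checking it (a rough sketch; it uses the 13.7TB and 29-hour figures above and assumes 1TB = 1000GB):

```python
# Backup window check: hours needed at the observed Backup Exec job rates,
# plus the job rate that would be needed to fit the window.
# Assumes 1 TB = 1000 GB; with 1 TB = 1024 GB the hours come out ~2-3% higher.

CRITICAL_TB = 13.7      # critical data, TB
WINDOW_HOURS = 29       # available backup window, hours

def hours_needed(rate_gb_per_min: float, data_tb: float) -> float:
    return data_tb * 1000 / rate_gb_per_min / 60

for rate in (2.5, 3.5):   # observed job rates, GB/min
    print(f"{rate} GB/min -> {hours_needed(rate, CRITICAL_TB):.0f} hours")

required = CRITICAL_TB * 1000 / (WINDOW_HOURS * 60)
print(f"To fit {WINDOW_HOURS} hours: ~{required:.1f} GB/min (~{required * 8 / 60:.2f} Gbps)")
# 2.5 GB/min -> 91 hours, 3.5 GB/min -> 65 hours, needed ~7.9 GB/min (~1.05 Gbps)
```

So the 90-95 hour figure checks out at the 2.5GB/min rate, and even at 3.5GB/min the 13.7TB would take roughly 65 hours, well over double the 29-hour window.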
Opened support case #20190763; the overseas technician indicated our speed was "already better than average," or "better than most people get." They did include notes for the registry buffer tuning above, as well as article #90966 on disabling buffer reads. I'll try this next.
Searching this forum, I found several similar issues, but the threads were closed without resolution; I don't know why. I also found similar posts about the same issue on other forums, with no resolution. We've engaged our VAR about the backup system they sold us. Does anyone here have any ideas, please?
This sounds pretty normal for NBD. Is that the transport you're using, rather than SAN?
Why don't you also have 10GbE on the backup server? Among Backup Exec's many limitations are performance and the lack of key enterprise features like multiplexing and multistreaming, so I've found 10GbE to be almost a requirement, especially at your data totals.
Ultimately, for any customer with more than roughly 5TB of data, I require them to have more than one BE server, or to move to a real enterprise product. It just can't move that kind of data in a 12-hour window, or even 24 in many cases.