Network performance not as expected, and it IS a problem

LillF
Level 3
Hi....
 
I am running BE 10d, using Remote Agent for Windows Servers on a couple of customer servers.
 
When running backup jobs, the network performance is way below what it should be.
 
A test with one source server on Gigabit (full duplex) and one source server on 100 Mbit (full duplex), using large single files of 9-25 GB each, results in a peak throughput of about 280 Mbps and an average throughput of 150 Mbps.
 
However, when using the same servers and the same filesets, but transferring with FTP instead of BE, I get a peak throughput of 300 Mbps and an average throughput of about 270-275 Mbps.
 
Since the FTP test performs as expected, the hardware itself and the physical network (duplex mismatch, packet loss) can be ruled out.
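For anyone who wants to rule out the network without setting up FTP, a raw TCP throughput test does the same job. Below is a minimal Python sketch (not anything Backup Exec uses; the 64 KiB payload and 64 MiB transfer size are illustrative assumptions):

```python
import socket
import threading
import time

PAYLOAD = b"x" * 65536            # 64 KiB per send() call
TOTAL_BYTES = 64 * 1024 * 1024    # 64 MiB test transfer (illustrative size)

def _drain(server_sock):
    """Accept one connection and read until the sender closes it."""
    conn, _ = server_sock.accept()
    with conn:
        while conn.recv(1 << 20):
            pass

def measure_throughput_mbps(host="127.0.0.1"):
    """Push TOTAL_BYTES over a plain TCP connection; return megabits/second."""
    server = socket.socket()
    server.bind((host, 0))        # port 0 = let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    reader = threading.Thread(target=_drain, args=(server,))
    reader.start()

    start = time.perf_counter()
    with socket.create_connection((host, port)) as client:
        sent = 0
        while sent < TOTAL_BYTES:
            client.sendall(PAYLOAD)
            sent += len(PAYLOAD)
    reader.join()                 # wait until the receiver has drained everything
    elapsed = time.perf_counter() - start

    server.close()
    return (sent * 8) / elapsed / 1e6

if __name__ == "__main__":
    print(f"{measure_throughput_mbps():.0f} Mbps over loopback")
```

Run the receiving half on the BE server and the sending half on a source server (here both run on loopback for simplicity); if the raw number matches the FTP number, the wire is fine and the bottleneck is in the application.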
 
This is a problem for me.
 
Does anyone have any clues about this problem?
 
 
13 REPLIES

Ken_Putnam
Level 6
Do local backups crawl also?
 
or is it just network backups?

Hywel_Mallett
Level 6
Certified
To be honest I'd expect FTP to be faster too.
On 100Mb networks, a decent server should be able to push 600MB/min.
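Backup software tends to report MB/min while network tools report Mbps, so the two figures in this thread are easy to misread. A quick conversion (assuming 1 MB = 8 Mb):

```python
def mb_per_min_to_mbps(mb_per_min):
    """Megabytes per minute -> megabits per second (1 MB = 8 Mb, 60 s/min)."""
    return mb_per_min * 8 / 60

# 600 MB/min works out to 80 Mbps, i.e. about 80% utilisation of a 100 Mbit link:
print(mb_per_min_to_mbps(600))   # → 80.0
```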

LillF
Level 3
Ken: I've connected the source servers and the BE server through an HP Gigabit switch, bypassing the firewalls just to rule out the FWs as a cause (local LAN). The result is exactly the same as mentioned earlier.
 
Hywel: please explain why you would expect FTP to be faster? And really, THAT much faster?
 
I have also updated firmware and drivers for all involved hardware in case this is a software error (however, that seems illogical given the FTP speeds).
 
 
Any more ideas ????

Ken_Putnam
Level 6
Hmm
 
If you define a B2D Folder on the Media server, is the backup still slower than FTP?
 
If not, from the Devices tab, right-click the drive and choose Properties.
 
What are the settings for Block Size, Buffer Size, and number of Buffers and both Single Block settings?

LillF
Level 3
Sorry, it seems I was not very clear when describing my Backup Exec configuration.
 
All backup jobs are directed to B2D folders. This is where I have the performance problems.
All backup jobs are duplicated to an LTO3 autoloader. There are no problems there whatsoever.
 
Just to make sure the duplication jobs do not conflict with my tests in any way, I have eliminated all of them.
 
All B2D folders have the following settings (a Symantec Consulting Partner, or whatever the title is, set them up for me):
Maximum size: 5 GB
Maximum filesets: 100
Disk space reserve: 1 GB
Allow 2 concurrent operations
 
I must apologize for not mentioning this earlier :(
 
 
So, the question is still: why would it be that much slower than FTP?
 
Also, if disk fragmentation were a problem for Backup Exec, it would also be a problem when running FTP.
 
 
//Thomas

Jon_Ziminsky
Level 4
Do you have any Realtime AV running?
 
If so, make sure to exempt the beremote.exe process on the remote machines. It seemed to help my throughput a little bit.
 
 

LillF
Level 3
Jon, all source servers have live, active realtime AV scanning.
I will of course test again with beremote.exe excluded.
 
However, I think that if it were an AV problem, it would also show up when testing with FTP ?!?!?
 
 
LillF

Jon_Ziminsky
Level 4
I tend to agree with you, LillF. You would think a program that has "agents" that are supposed to optimize data movement would be able to equal, if not exceed, FTP.
 
My system really slows down in a couple of situations: the first is when it has to back up a lot of small files, the second is brick-level Exchange backups. I understand why it slows down on the Exchange backups, since it has to individually access each user's mailbox, but lots of small files shouldn't have that great an effect. I have just learned to live with slow speeds on certain areas of certain systems I back up.
 
 
 
JZ
 

Ken_Putnam
Level 6
but lots of small files shouldn't have that great an effect
 
Actually it has a large effect, and it has nothing to do with Backup Exec. Try copying a single large file, then several hundred smaller files totalling the same byte count, using XCOPY, and you'll notice a large difference in total time required.
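Ken's XCOPY experiment can be reproduced in a few lines. Here is a hypothetical Python sketch that times copying one 10 MiB file versus 1024 files of 10 KiB totalling the same byte count (the exact ratio depends on the filesystem and cache):

```python
import os
import shutil
import tempfile
import time

def make_files(root, sizes):
    """Create one file per entry in sizes (bytes) under root; return the paths."""
    paths = []
    for i, size in enumerate(sizes):
        path = os.path.join(root, f"f{i}.bin")
        with open(path, "wb") as fh:
            fh.write(b"\0" * size)
        paths.append(path)
    return paths

def time_copy(paths, dest):
    """Copy every file into dest; return elapsed wall-clock seconds."""
    start = time.perf_counter()
    for path in paths:
        shutil.copy(path, dest)
    return time.perf_counter() - start

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        big_dir = os.path.join(tmp, "big"); os.mkdir(big_dir)
        small_dir = os.path.join(tmp, "small"); os.mkdir(small_dir)
        dest = os.path.join(tmp, "dest"); os.mkdir(dest)

        # Same total byte count: 1 x 10 MiB vs 1024 x 10 KiB.
        big = make_files(big_dir, [10 * 1024 * 1024])
        small = make_files(small_dir, [10 * 1024] * 1024)

        print(f"one large file    : {time_copy(big, dest):.3f} s")
        print(f"1024 small files  : {time_copy(small, dest):.3f} s")
```

The per-file overhead (open, metadata, close) dominates the small-file run, which is the same effect a backup job hits when walking a tree of tiny files.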
 
 

Hywel_Mallett
Level 6
Certified
What Ken says!
To give you a real-world example, delete a user's roaming profile folder. The profile can be 100 MB, including 10,000 (small) shortcuts in the Recent folder. The shortcuts take longer to delete than the rest of the profile.

Jeremy_Kepf
Level 2
Hmmm, I'm not understanding the answers here.

Seems to me that LillF is asking why the network slows to a crawl whenever a backup is running... not that the backups take longer, or don't back up data as fast as FTP.

I have the same problem. I back up nightly to an LTO. Actually, my backup procedure seems just like LillF's.

I guess my question is the same as LillF's:

Why is network performance affected so dramatically by this backup, and is there anything that can be done to rectify this?

As it stands now, our backup begins around 8PM, and from that point until the backup finishes, the network basically comes down. No one is able to remote in... Terminal users get screen freezes and other latency/lag issues, etc.

Is there anything that can be done about this?

Jon_Ziminsky
Level 4
I reread the original question, and she was asking about throughput.
 
As for why the network would slow: backups move large amounts of data across the network, which is probably why things slow down. I would expect both the target machine and the BE server to show heavy network traffic during backups. I can't think of a way around it besides a dedicated NIC for backups, or Fibre Channel.
 
 
 
JZ

LillF
Level 3
It seems I still have to clarify some things...
 
1) Using FTP to transfer files from the source servers to my Backup Exec server works almost perfectly: an average throughput of 270-275 Mbps, with peaks at 300 Mbps. I am fine with this.
 
2) Using BE to transfer the exact same files from the exact same source servers to the exact same Backup Exec server, under the same conditions, generates an average of 150 Mbps (tests run earlier today show the average has decreased to about 85 Mbps) with one peak at 280 Mbps.
 
3) No tape drives/autoloaders are involved in the test.
 
4) FTP transfers are directed to the same RAID devices as when using BE.
 
 
I have updated NIC drivers on all involved servers.
I have double-checked the settings for speed and duplex.
The version of BE used is 10.1 Rev 5629.
All Remote Agents have been updated.
 
 
There are only two differences in this scenario that I am aware of:
a) FTP protocol vs. NDMP
b) NTFS vs. Backup Exec's backup-to-disk device
 
If disk fragmentation were the problem, it would also affect FTP transfers.
 
Is it possible to configure a backup-to-disk device's block and buffer sizes?
 
 
 
 
LillF