
Question about the actual data path through the servers in Backup Exec 2012

Mark532010
Level 2

I am a brand-new user, just running my first backup as I type.

The backup is running correctly, but I can't tell whether I have the optimal configuration or whether I'm generating a lot of network traffic I don't need to create.

 

My situation:

ServerA has the BackupExec console on it (our application server)

ServerB has the locally-attached drive array (our big disk resource)

ServerC has the Windows Agent installed and is being backed up (user server)

 

From ServerA I mapped to the share on ServerB for my backup device and created a job to back up ServerC.

 

The job is running, but I suspect I have something backwards: the data seems to flow from ServerC to ServerA and then over to ServerB, doubling the network traffic.

ServerB does not have a lot of resources left, so I don't think I would want to run the console on it. Is there a better way to configure this? Are there docs that help with these scenarios?

 

Mark


teiva-boy
Level 6

You are correct that the data flows from C to A then to B.

The disk should be locally attached to Server A for best performance.

Perhaps you have a second NIC on both A and B? You could use hosts files and a crossover cable to connect them, so the backup data moves over a private network.
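A minimal sketch of that hosts-file approach (every name and address below is invented for illustration, not from the thread): put the second NICs on A and B on their own private subnet, then add an entry on ServerA so traffic to ServerB's name resolves over the crossover link instead of the production LAN:

```
# C:\Windows\System32\drivers\etc\hosts on ServerA  (example only)
# 10.10.10.0/24 exists only on the crossover cable between A and B;
# ServerB's second NIC is assumed to be 10.10.10.2
10.10.10.2    ServerB
```

With an entry like this, the UNC path to the backup share (\\ServerB\share) would resolve to the private address, keeping that traffic off the link ServerC uses to send data to A.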

For a bit of coin, buy the Deduplication option and use client-side deduplication. However, the storage attached to A MUST be an NTFS-formatted volume that is directly attached. No UNC paths over CIFS.

Mark532010
Level 2

Thanks for the information. Could this be why the backups are so slow, or is this a normal speed?

Currently we have all 1 Gb connections and a good switch, but last night a full disk-to-disk backup timed out after more than 10 hours on 350 GB. Actually, the backup itself took 7 hours; then the verify started, which was halted by a rule I set to prevent interference with business hours.

I checked the server for fragmentation; it is somewhat fragmented (831,457 files, 21,546 fragmented). But at 350 GB this is our smallest server, so this isn't going to work for our multi-TB servers!


Main backup summary:

Backed up 831240 files in 79351 directories.
Processed 253303522530 bytes in  6 hours,  24 minutes, and  14 seconds.
Throughput rate: 629 MB/min
Compression Type: Software
Software compression ratio: 1:1


Snapshot Backup summary:
Backed up 31612 files in 3807 directories.
Processed 9,381,885,040 bytes in  53 minutes and  52 seconds.
Throughput rate: 166 MB/min
Compression Type: Software
Software compression ratio: 2:1


System State backup
Backed up 12 System State components
Processed 1,039,278,256 bytes in  8 minutes and  25 seconds.
Throughput rate: 118 MB/min
Compression Type: Software
Software compression ratio: 1.8:1

Verify Status
Verified 249439 files in 14508 directories.
Processed 159001741748 bytes in  2 hours,  15 minutes, and  56 seconds.
Throughput rate: 1116 MB/min
Job ended: Friday, June 15, 2012 at 5:00:03 AM
Completed status: Canceled, timed out
The job was automatically canceled because it exceeded the job's maximum configured run time.
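The log's figures are internally consistent. As a quick sanity check (pure arithmetic, nothing Backup Exec-specific assumed), the main backup's byte count and elapsed time reproduce the reported 629 MB/min, which works out to only about 10.5 MB/s:

```python
def mb_per_min(total_bytes: int, seconds: int) -> float:
    """Throughput in MB/min, where 1 MB = 2**20 bytes (the unit in the log)."""
    return total_bytes / 2**20 / (seconds / 60)

# Figures taken from the main backup summary above
elapsed = 6 * 3600 + 24 * 60 + 14           # 6 hours, 24 minutes, 14 seconds
rate = mb_per_min(253_303_522_530, elapsed)

print(round(rate))          # 629, matching the reported throughput
print(round(rate / 60, 1))  # ~10.5 MB/s
```

That 10.5 MB/s is roughly a tenth of what a single gigabit link can carry, which points at a bottleneck somewhere other than raw link speed.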

Mark

teiva-boy
Level 6

Well, you definitely have a network bottleneck to some degree. I would make sure that your Backup Exec server has trunked links or a dedicated NIC to move data to its disk storage system.

However, file backups are generally the slowest of the bunch. The agent has to crawl the file system, send the data over the wire almost sequentially, and have it written to disk/tape.

You've got a number of network hops in the way, you're using the most inefficient protocol (CIFS), your source disk could be limited in read speed, your target disk could be limited in write speed, and you may be over-saturating the single GbE link on the Backup Exec server... The list goes on and on.

Like I said: use a private network to isolate the traffic somewhat, or locally attached disk, and perhaps the dedupe option.
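To put rough numbers on the bottleneck point above (the 70% effective-throughput figure is an assumption for illustration, not from the thread): even a single GbE link used reasonably well should move 350 GB in around an hour, far from the ~6.4 hours observed:

```python
# Back-of-envelope estimate for one GbE link; the 0.70 factor is an
# assumed allowance for protocol and stack overhead, not a measured value.
line_rate = 125_000_000           # 1 Gb/s expressed in bytes per second
effective = 0.70 * line_rate      # assumed usable throughput
payload = 350 * 10**9             # roughly the size of the full backup

hours = payload / effective / 3600
print(f"{hours:.1f}")             # 1.1
```

The gap between that estimate and the observed runtime is what suggests the slowdown lives in the extra hop through ServerA, the CIFS target, or the disks, rather than in the link speed itself.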