Best practices - backup-to-disk files size?

Kevin_J
Level 3
I've been running nightly full backup-to-disk jobs, and my 20GB test job is taking an hour (pulling files from 5 servers). AOFM is enabled, as is software compression. This is taking way too long: once I fully deploy the system I'll be backing up upwards of 400GB, which translates to 20 hours.

Right now I have 1 backup folder and only one B2D file that is overwritten each night (duplicate job then runs and dumps the job to tape).
My backup-to-disk folder is on a SAN with 1.2TB free space, so no space problems there if I overwrite the B2D files every night for years.

My backup-to-disk folder is set as follows:
Max size for Backup-to-disk files: 800GB
Max number of backup sets per B2D file: 8000
Disk space reserve: 100GB
Allow 5 concurrent operations...

I need to speed things up. What changes do you recommend? What about using numerous B2D files? (I know the max B2D file size is 1GB by default; wouldn't that then create hundreds of B2D files?)
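For reference, a quick back-of-envelope sketch (my own arithmetic, not from the thread) of how the max B2D file size translates into a file count:

```python
import math

def b2d_file_count(job_size_gb: float, max_file_size_gb: float) -> int:
    """Rough count of B2D files a job will span: job size divided by
    the 'max size for backup-to-disk files' setting, rounded up."""
    return math.ceil(job_size_gb / max_file_size_gb)

# At the 1GB default, a 400GB job spans roughly 400 files;
# raising the max to 10GB cuts that to roughly 40.
print(b2d_file_count(400, 1))   # 400
print(b2d_file_count(400, 10))  # 40
```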

Like I said: a full backup job to disk each night, overwriting the B2D files every night, then a duplicate job dumps it off to a separate tape M-F (which is very fast).

Thanks.

Ken_Putnam
Level 6
What kind of throughput are you getting on your B2D jobs?

What kind of network: 100Mb or 1Gb?

What kind of speeds do you get if you just copy a large block of data to the media server?

Kevin_J
Level 3
Gigabit network, servers attached to the same gigabit switch, gigabit NICs, etc.

4 servers being backed up: the C: and D: drives plus a few system states.
Average throughput for all of these is around 380MB/min, so when I fully deploy to all of my servers, we're looking at 19 hours for 400GB.
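As a sanity check on those numbers (my own arithmetic, not from the thread), the backup window is just data divided by rate:

```python
def backup_window_hours(data_gb: float, rate_mb_per_min: float) -> float:
    """Estimated backup window in hours: time = data / throughput."""
    return (data_gb * 1024) / rate_mb_per_min / 60

# 400GB at ~380MB/min works out to roughly 18 hours, in line
# with the 19-20 hour figures quoted in this thread.
print(round(backup_window_hours(400, 380), 1))  # 18.0
```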

No compression set for the disk-to-disk job. Tonight's job will run software compression to disk.

I have not tested copy speeds of a large block of data to the media server.

I think I need to have more than one job running at the same time, but I also want each full nightly backup to end up on the same tape as a duplicate (so I can restore directly from tape).

There's a few tweaks and tests I will keep running but any input as we go along is much appreciated.

shweta_rege
Level 6
Hello,

Kindly select the options Buffered reads and Buffered writes instead of the option Auto detect settings in the properties of the backup-to-disk folder, under the Configuration tab.

Thank you,

Shweta

Kevin_J
Level 3
I have selected Buffered reads and writes. Also have the max B2D file set at 10GB and allow 3 concurrent operations. I'll see what happens tonight.

Ken_Putnam
Level 6
If the network is the bottleneck, multiple concurrent jobs will make things worse.

If you do want to run concurrent jobs, set up a policy to duplicate the backups to tape and make that job "Append, else Overwrite". Set the OPP (overwrite protection period) of the target media set so that the tape will have expired by the time it is due to be reused. That way, when the first B2D job finishes it will start writing to tape, and as each subsequent job finishes, it will append.

Kevin_J
Level 3
Thanks for the follow-ups.

Average throughput is around 400-500MB/min, with one server at 280MB/min (that one has minimal data backed up, so it's not really adding a whole lot to the backup window).

I just put a new Exchange 2003 server online and migrated a few users, and backing up the mailboxes was WAAAAAY too slow at 15MB/min, which tacked another 90 minutes onto my backup time. I understand brick-level backups will be much slower, but at 15MB/min I might as well forget it (12 users, all with mailboxes under 50MB, should not take 90+ minutes). Symantec is NOT running on the new Exchange server.

One thing I'm going to test is where the B2D folder is located. Technically it's the F: drive on the backup server, but that drive is actually a SAN virtual drive connected via iSCSI. I may run the next B2D job to the local D: drive of the backup server.

EDIT: I disabled virus scanning on the B2D folder; let 'er rip tonight.

Ken_Putnam
Level 6
15MB/min is on the slow side, but not abnormally so for brick-level backups. If you have not already seen it, take a look at http://mail.tekscan.com/nomailboxes.htm to see if you really want to do a full brick-level backup.

Kevin_J
Level 3
I'm now getting up to 2GB throughput on the brand new SAN (main file server) and the email server. The older "misc" systems that are also being backed up are a bit slower (slow CPUs), but since I'm not backing up all that much data from those, I can live with an extra hour or so of backup time. I'm also using a pre/post batch file to kill the Symantec AV services before backing up.
I've also ditched the plan for brick-level backups of Exchange. So I should be good to go.
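A minimal sketch of the pre/post command pair described above, assuming the AV runs as a Windows service. The service name "Symantec AntiVirus" is my assumption; check the actual name with `sc query` on the server before using it. This sketch only builds the command strings rather than executing them:

```python
# Hypothetical service name; verify with `sc query` on the actual server.
SERVICE = "Symantec AntiVirus"

def pre_command(service: str = SERVICE) -> str:
    """Pre-job command: stop the AV service before the backup runs."""
    return f'net stop "{service}"'

def post_command(service: str = SERVICE) -> str:
    """Post-job command: restart the AV service after the backup."""
    return f'net start "{service}"'

print(pre_command())   # net stop "Symantec AntiVirus"
print(post_command())  # net start "Symantec AntiVirus"
```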

Support_User
Level 4
Hi Kevin, I read your posts with interest. What was responsible for the increase in throughput - was it the change to using local disk for the B2D folders? If so, can you let me know your disk configuration, i.e. SCSI or SATA, what RAID level etc?

Regards,
Nathan.

Kevin_J
Level 3
Sorry for the delay Nathan...

The B2D folder is on a "virtual disk" on the backup server. The virtual disk is located on an EMC SAN and connected via iSCSI. The SAN drives are SATA, RAID 5.

I've pinned the long backup window on a few slow workstations that I back up. One system takes 1.5 hours to back up: it's a PII 400MHz with a gigabit NIC. It seems the slower the system, the longer the backup window. Also, my B2D verify job is running quite long, so I may just verify the duplicate-to-tape job and not the B2D. I have a separate thread asking about this: http://forums.symantec.com/discussions/thread.jspa?threadID=71521&tstart=0