
Backup to Disk

KevanD
Level 3

I am about to switch from using Backup to Tape to Backup to Disk.

Should I create a Backup to Disk file for each job?

Should I create one file and send all my Backup Jobs to it?

What is the recommended number of concurrent jobs to send to a backup file?

We are backing up a number of Domain Controllers, SQL servers, an Exchange server, a 3TB file server and a number of application servers. We are currently doing full backups to tape each night, and our backups are running for 18 hours a day. I am hoping that backing up to disk will improve this backup window?

Once the backup to disk has finished, we are then going to back up the backup-to-disk folders to tape.

Is there a way of automating this, so that as each backup-to-disk job finishes, the backup-to-tape job starts backing up that backup-to-disk file?

Any help or advice on this new setup appreciated.

 

We are running Backup Exec 2010 R2, installed on a Windows 2008 R2 server.

Regards

Kevan


9 REPLIES

AmolB
Moderator
Employee Accredited Certified

You can create one B2D folder for daily jobs and another one for the weekly jobs.

You can set any number of concurrent jobs on the B2D folder, depending on the server's configuration.

You can create duplicate jobs, which will duplicate the data from the disk to the tape.

Hywel_Mallett
Level 6
Certified

Should I create a Backup to Disk file for each job?

Do you mean a backup-to-disk folder? If so, I wouldn't.

Should I create one file and send all my Backup Jobs to it?

This is what I'd do. Create one B-2-D folder and let Backup Exec manage it.

What is the recommended number of concurrent jobs to send to a backup file?

Enough that you can get good throughput, but not so high that you get disk contention. It depends upon things like how well your disks perform, so only experimentation can give the right results. I only ever have 2 concurrent B-2-D jobs running, so it's not such an issue for me.

We are backing up a number of Domain Controllers, SQL servers, an Exchange server, a 3TB file server and a number of application servers. We are currently doing full backups to tape each night, and our backups are running for 18 hours a day. I am hoping that backing up to disk will improve this backup window?

Probably. At least it will allow you to run jobs concurrently.

Once the backup to disk has finished, we are then going to back up the backup-to-disk folders to tape.

Is there a way of automating this, so that as each backup-to-disk job finishes, the backup-to-tape job starts backing up that backup-to-disk file?

Use policies. You can add a template to a policy to back up your files, then add a second template which duplicates the data. Then just set the second template to run when the first one finishes.

KevanD
Level 3

Just trying to get to grips with this.

What would be the best practice for this?

Should I create a backup-to-disk file for each day of the week, or create a backup-to-disk file that is large enough that I can append each backup to it for 7 days and then set it to overwrite?

If the file begins to be overwritten, does that overwrite just the first job on the file, or are all the jobs on the file instantly lost once overwriting begins?

Thank you for your help.

 

Kevan

AmolB
Moderator
Employee Accredited Certified

You can just create one folder and then create media sets depending on how long you want to preserve data. The media sets need to be associated with the jobs.

Once the overwrite protection period is over for any .bkf file, Backup Exec will start overwriting it.
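
If it helps to see what that means for disk space, here is a rough back-of-the-envelope sketch. It assumes nightly full backups of roughly the volume described in this thread and a 7-day protection period, both of which are just illustrative figures:

```python
# Rough B2D sizing sketch: .bkf files can only be reused once their
# overwrite protection period (OPP) expires, so the folder has to hold
# every night's data that is still protected plus the night being written.
# The figures below are illustrative, not a recommendation.

daily_backup_tb = 3.5   # roughly the nightly full backup volume in this thread
opp_days = 7            # example media set: protect each backup for one week

disk_needed_tb = daily_backup_tb * (opp_days + 1)

print(f"B2D space needed for a {opp_days}-day OPP: ~{disk_needed_tb:.0f} TB")
```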

KevanD
Level 3

OK

When I create a backup-to-disk folder, what should I set the maximum size of the backup-to-disk file to be?

What are the advantages/disadvantages of changing it from the default 4GB?

Regards

 

Kevan

Kiran_Bandi
Level 6
Partner Accredited

You can set it according to your average backup size.

If it is set to 4GB and your full backup comes to around 22GB, then six .bkf files will be created.

Regards...

AmolB
Moderator
Employee Accredited Certified

When the size is set to 4GB, BE will create 4GB .bkf files; the data is stored in those .bkf files. If the backup is around 100GB then BE will create 25 .bkf files.

If you have 1TB of disk then set the size to around 30-40GB.
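
For example, a quick sketch of that arithmetic (the sizes are just the examples above):

```python
import math

def bkf_file_count(backup_size_gb: float, max_bkf_size_gb: float) -> int:
    """Number of .bkf files one backup is split into, assuming each
    file is filled up to the configured maximum size."""
    return math.ceil(backup_size_gb / max_bkf_size_gb)

print(bkf_file_count(100, 4))    # 25 files at the default 4GB file size
print(bkf_file_count(22, 4))     # 6 files for the 22GB example above
print(bkf_file_count(100, 40))   # 3 files if the maximum is raised to 40GB
```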

KevanD
Level 3

OK

 

I currently run about 12 backup jobs each night backing up various servers. Some jobs back up similar types of servers; other jobs back up a single server, like Exchange. I have one job that backs up my 5 SQL servers.

In total we are backing up about 3.5TB of data to tape each day (just). Each job differs in size: the main file server is about 2.5TB, and the other jobs vary from 50GB to 500GB.

If I create a single backup-to-disk folder, what would you recommend I set the backup-to-disk file size to be?

Or would you, in this scenario, create more than one backup-to-disk folder?

If I then set the overwrite protection period to be 1 week, the files in the backup folder will begin to be overwritten as needed each week. Do I need to make the files appendable?

I found all this easy to visualise when backing up to tape: we could append to our tapes for 24 hours to make sure the various backup jobs got sent to the tape, and then set overwrite protection for 28 days.

 

If we are creating lots of small .bkf files, I am not sure why we would need to append to them. Unless I am missing something?

 

Thank you for all your help and suggestions. 

 

Regards

 

Kevan

 

teiva-boy
Level 6
Accepted Solution

For your B2D, I've done a lot of testing and tuning and here are some highlights of my results.

Start with decently fast disk and use some sort of RAID. RAID 10 would be ideal, as you need fast writes and data protection, but understand that you will be losing a lot in terms of capacity.
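
As a rough illustration of that capacity trade-off (the disk count and sizes below are just examples):

```python
def usable_tb(disks: int, disk_tb: float, raid: str) -> float:
    """Rough usable capacity for common RAID levels (ignores formatting overhead)."""
    if raid == "RAID10":
        return disks / 2 * disk_tb      # half the disks hold mirror copies
    if raid == "RAID5":
        return (disks - 1) * disk_tb    # one disk's worth of parity
    if raid == "RAID6":
        return (disks - 2) * disk_tb    # two disks' worth of parity
    raise ValueError(f"unhandled RAID level: {raid}")

for level in ("RAID10", "RAID5", "RAID6"):
    print(f"{level}: {usable_tb(8, 2.0, level):.0f} TB usable from 8 x 2TB disks")
```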

Use the largest allocation unit (block) size NTFS has to offer, which is 64KB on Windows 2008 R2. NTFS performs much better than almost any NAS/CIFS share.

Start your concurrent stream count at n = (number of disks in the RAID set - parity disks); so a 3-drive RAID 5 would be 2, and a 4-drive RAID 6 would still be 2. Start there and watch Perfmon to see if there is disk contention; you can raise or lower the number as needed.
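
A tiny sketch of that starting point (the clamp to a minimum of one stream is my own addition):

```python
def starting_stream_count(disks_in_set: int, parity_disks: int) -> int:
    """Rule-of-thumb starting point for concurrent B2D jobs:
    count only the data-carrying spindles, but never go below one."""
    return max(1, disks_in_set - parity_disks)

print(starting_stream_count(3, 1))   # 3-drive RAID 5 -> 2 streams
print(starting_stream_count(4, 2))   # 4-drive RAID 6 -> 2 streams
# From here, watch disk queue length and latency in Perfmon and adjust.
```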

Defrag often, almost weekly.

A fixed size for the BKFs is ideal to reduce fragmentation; however, it will consume all of your space now, even if it is not used yet. It's rare for people to use a fixed size. Let BE manage the growth, and refer to my point above.

Look into the deduplication option for BE 2010. This will maximize your disk space usage, and for your file server you can use client-side dedupe to reduce the backup window and avoid transferring 3TB of data over the LAN.