Backup Job Concept/Best Practice One Big Job Vs Many Smaller Jobs

TBCAdmin
Level 2
Ok, I've been searching and reading forums trying to find a solution, but I think this is more of a conceptual question than a technical one. It relates specifically to job setup: a single job vs. many smaller jobs.

Platform:
Windows Server 2003 box backing up to a Quantum automated tape drive
Symantec Backup Exec 2010
15-ish servers with Hyper-V, Oracle, SQL and Exchange

Problem:
My biggest issue is that I get many exceptions after the job completes, with a range of issues from GRT being unable to perform functions to files that simply can't be opened. I have one big job that handles all the servers at my site.

Some of the issues are not correctable. For example, from what I understand, a 2000 server can't use GRT in Hyper-V. That's fine, but I don't think I can disable the GRT function for just one machine in the overall backup job.

I guess the frustrating thing is this: once you select an option for a backup job, it applies to the entire job. So I can't toggle GRT or VSS options per machine within a single job; if selected, it affects all devices in that job.

So my conceptual question is this: should I run one big job or many small jobs? I've always favored one big job because, for one, it's much easier to manage. It's also easier to track statuses. Once all the bugs have been worked out, it seems much easier at a glance to ensure that the backup completed successfully. I also like to track one big job for total backup size. I know it's a simple thing, but it's nice to look in one place and quickly be able to tell what my total backup size for the network is.

Am I wrong? If I need to adjust individual settings, do I have to have multiple backup jobs?

Thanks for your time,
Jason
The Blood Center
5 REPLIES

RahulG
Level 6
Employee
Well, one big job would definitely be a good idea, but ideally it's recommended to have two types of backup:
1. A normal file-level backup (system state, Shadow Copy Components), i.e. the backup job in which you need the AOFO options set.
2. The backup job for which you have agents, i.e. SQL, Exchange, Hyper-V. This may also include a GRT backup, or you can set up a separate backup for GRT.
But again, there are different ways you can get the jobs configured.

Ken_Putnam
Level 6
"So my conceptual question is this. Should I run one big job or many small jobs?"


YMMV, and it probably comes down to personal preference, but

I have always preferred multiple jobs chained together. (You can do this with policies to make management easier.) If your management tracks successful jobs vs. failures, you will report a lot more successes with smaller jobs (I have seldom had very many successes in a row with large multi-server jobs). Also, with smaller jobs you can zero in much faster on causes, rather than having to scan one huge log file.


Also, as you have discovered, options selected for the job apply to ALL the machines in the job. In addition, Veritas and now Symantec have always recommended separate jobs for flat-file and database backups, for ease of troubleshooting if/when a problem arises.

TBCAdmin
Level 2
I'll run the big job for another week, try to do as many tweaks as possible, and see if I end up with a usable job; otherwise I'll split it as you suggest. Thank you very much for taking the time to leave feedback; it was very helpful.

Jason

teiva-boy
Level 6
I always split jobs into multiple smaller ones!

For example, take a 100-server environment:
One big job with one server failure is a 100% job failure.
100 individual jobs with one failed job is a 99% success rate.
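The arithmetic above can be sketched as a quick model (not from the thread; the function names and the 1% per-server failure rate are my own illustrative assumptions). If each of N servers independently has a probability p of hitting an issue, one monolithic job only reports success when every server backs up cleanly, while per-server jobs report the plain per-server success rate:

```python
def one_big_job_success(n_servers: int, p_fail: float) -> float:
    """Probability a single all-in-one job succeeds: every server must be clean."""
    return (1 - p_fail) ** n_servers

def many_jobs_success_rate(n_servers: int, p_fail: float) -> float:
    """Expected success rate with one job per server: simply 1 - p."""
    return 1 - p_fail

n, p = 100, 0.01  # assumed: 100 servers, 1% chance any one has an issue
print(f"One big job succeeds:    {one_big_job_success(n, p):.1%}")   # ~36.6%
print(f"Per-server success rate: {many_jobs_success_rate(n, p):.1%}")  # 99.0%
```

Even a tiny per-server failure rate compounds quickly in a monolithic job, which is why the reported success metrics diverge so sharply.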

It's unrealistic to have to manage 100 jobs, though.

A better approach is grouping based on function and/or OS (e.g. SQL databases in one selection list, Exchange in another, then the server OS itself in another).

From a very high level, applications like SQL and Exchange should NOT have AOFO enabled, but you do need it for the filesystems. So right away you should have two different jobs: one for apps/databases, the other for the OS/System State.

If Win2K, you have to use VSP, since VSS is not available.

If VMware, it needs to be in its own policy; same with Hyper-V.

That said, multiple smaller jobs are the best way to go in all cases. If not for configuration reasons (AOFO, GRT, VSS, etc.), then for operational SLA metrics, and even performance, since you can send multiple jobs to disk simultaneously.

Colin_Weaver
Moderator
Employee Accredited Certified

I would split anything you know always generates an exception into a separate job, but then perhaps bundle other resources into a job per resource type - so Exchange information stores in one job, file server backups in another, etc.

Then when you do have to review job logs and research an error, it is a little easier to manage.

However, this is a personal preference, not any specific best practice.

That said, if you have to log a case with support, we have to look at each error condition separately; otherwise it is very confusing should you have one error that is a defect and another that is just a configuration problem in the same job. As such, we will often ask you to split jobs and may even require separate support cases. This is so we can debug/research/escalate (to appropriately skilled engineers) one issue at a time. It doesn't mean you can't merge back into fewer jobs once the issue is resolved, though. ;)