Will backup jobs span drives in a storage pool when one fills?

moniker131
Level 4

I have a problem with Backup Exec running multiple jobs on one disk even though I have about 10 disks in my storage pool. The disk fills up, and the jobs go into a queued state with no activity. I have read, albeit for older versions, that once a job is assigned to a specific disk in a storage pool, it will not use another disk if that disk fills up. Is that true for BE 16?

 

EDIT: I accepted Colin's answer as the solution because it is the best answer to my question. However, there is a very valuable bit of information below where Colin tested a setup with varying sizes of disk and concurrency = 1. FYI in case someone else comes across this.


11 Replies

pkh
Moderator
VIP Certified

Yes. BE does not span disks. You have to ensure that there is sufficient disk space for the entire backup.
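As a quick sanity check, plain PowerShell can show how much space is free on the volumes hosting the backup-to-disk folders before jobs run (a minimal sketch; the drive letters are placeholders for wherever your B2D folders live):

# List free space on the volumes hosting the B2D folders.
# D, E, F are placeholder drive letters - substitute your own.
Get-Volume -DriveLetter D, E, F |
    Select-Object DriveLetter,
        @{Name = 'FreeGB'; Expression = { [math]::Round($_.SizeRemaining / 1GB, 1) }}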

Colin_Weaver
Moderator
Employee Accredited Certified

By default, Backup Exec chooses which disk device to start a job on using a round-robin mechanism, not free space.

You can change this behaviour so that the device with the most free space is chosen, using a BEMCLI (PowerShell plugin) command similar to:

Get-BEStorageDevicePool -Name "<poolname>" | Set-BEStorageDevicePool -SelectionMethod MostFreeSpaceFirst

Whilst this will not help if you fill a disk mid-job, it should minimize how often that happens by making jobs use the disk in the pool with the most free space.
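Putting that together, a minimal sketch (assuming the BEMCLI module that ships with Backup Exec is available on the media server; "<poolname>" remains a placeholder for your pool's name):

# Load the Backup Exec management cmdlets.
Import-Module BEMCLI

# Review the pool's current settings before changing anything.
Get-BEStorageDevicePool -Name "<poolname>" | Format-List *

# Switch device selection from the round-robin default to most free space first.
Get-BEStorageDevicePool -Name "<poolname>" |
    Set-BEStorageDevicePool -SelectionMethod MostFreeSpaceFirst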

Note: historically we used to allow disk spanning mid-job; unfortunately, the GRT processes don't work with such spanning, hence the ability is no longer present.

 

I was thinking about doing this, and setting concurrency to 1 for all disks. Does the second job use the disk with the most free space that is available, or will jobs queue up waiting for the disk with the most space overall? For example, say I have a disk with 1TB free, a disk with 200GB free, and another with 500GB free. Two jobs start at the same time, and one starts on the 1TB disk. Will the second job run on the 500GB disk, or wait for the 1TB disk to free up?

Is disk spanning removed for all jobs, or just GRT jobs? I have an active case for this issue, and the support technician told me the job is supposed to move onto the next disk when one fills.

Colin_Weaver
Moderator
Employee Accredited Certified

I have not tested what happens with most-free-space selection and concurrency limited to 1. My gut feeling is that the device-selection decision is probably made independently of the concurrency, so the second job would queue for the same device, even if by the end of the first job a different device has the most free space.

The feature we had in much older versions of Backup Exec that allowed spanning was called a Cascaded Drive Pool (as far as I can remember). However, once we removed the feature we would have stopped any form of QA testing (GRT or non-GRT). As such, whilst spanning for non-GRT may have still operated briefly after we discontinued Cascaded Pools, there would have been no guarantee that it would continue to work, and I have also not tested the behaviour in any recent versions of Backup Exec.

Thanks for the information. Is there any documentation that mentions this feature being removed, or that this is not how Backup Exec is intended to work? It is very frustrating to have support tell me one thing and an employee on the Vox forum contradict it.

Colin_Weaver
Moderator
Employee Accredited Certified

As cascaded drive pools were removed over 8 years (and probably over 5 major versions) ago, I doubt any documentation remains to cover the change of behaviour in this area. Can you send me a private message with your case number so I can review it?

 

Colin_Weaver
Moderator
Employee Accredited Certified

I am going to see if I can set up a test of what happens in the latest BE version. It might take me a while, as I need to get some smaller-than-usual disks onto a test server in order to do it.

Colin_Weaver
Moderator
Employee Accredited Certified

1st test completed (a rough BEMCLI sketch of the setup follows at the end of this post):

- 3 drives of different sizes set up as B2D - all with concurrency = 1

- Pool containing the 3 drives of different sizes (Large, Medium, Small)

- Set pool to use MostFreeSpaceFirst

- 2 jobs created to use the pool

- Started job 1 - which used the Large drive

- Started job 2 whilst job 1 was still running, and the Medium drive was used straight away - so no job queuing, and I am pleasantly surprised by that result (which is very useful to know and could help you and others)

I am now intending to fill one of the drives with a non-GRT job - will update when I have that result.
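For reference, the configuration side of this test would look roughly like the sketch below. The device and pool names are placeholders, and the concurrency parameter name is an assumption on my part - check Get-Help Set-BEDiskStorageDevice -Full for the exact name in your version.

Import-Module BEMCLI

# Limit each B2D device in the pool to one job at a time.
# NOTE: -ConcurrentJobCount is an assumed parameter name - verify it
# with Get-Help Set-BEDiskStorageDevice -Full before running.
foreach ($name in 'B2D-Large', 'B2D-Medium', 'B2D-Small') {
    Get-BEDiskStorageDevice -Name $name |
        Set-BEDiskStorageDevice -ConcurrentJobCount 1
}

# Hand out the device with the most free space first.
Get-BEStorageDevicePool -Name 'TestPool' |
    Set-BEStorageDevicePool -SelectionMethod MostFreeSpaceFirst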

 

Colin_Weaver
Moderator
Employee Accredited Certified
(Accepted Solution)

OK, just for info: whilst we don't officially test spanning in disk pools (since discontinuing the cascaded pool type), my current tests show it works for standard file system selections (i.e. selections via the drive letter in Windows), which is one form of non-GRT. A VM backup with GRT disabled, which is another form of non-GRT, also worked.

As such, the basics of what you are trying to do would appear to work (even if we don't officially test it), and you should continue to work with your open support case.

Awesome, thanks so much for this. We may go that route. I found that our jobs were retrying after one VM snapshot failed, which would double their 1TB+ backup size. I disabled the retries and will handle failures with one-time backups. If that doesn't keep our disks from filling up, I may set up what you just tested.