As per the documentation: "When you submit a backup job to a storage device pool, by default the job is sent to the first available storage device in that pool."
We pick the next available device in the pool as a decision taken at the start of the job only, so we do meet that description: the single job is assigned to the next available device when it starts.
As per the documentation: "As other jobs are created and started, they can run concurrently on other storage devices in the storage device pool."
If you actually create multiple jobs and start them in parallel (say, with start times only 5 minutes apart), you should see as many jobs start as you have available drives to run them on, with any extra jobs above the number of drives being queued. Hence we meet that part of the description too.
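The behaviour described above can be sketched as a simple assignment model. This is a hypothetical illustration of the start-time logic, not Backup Exec code: each job grabs the next free drive when it starts, and jobs beyond the drive count simply wait in a queue.

```python
# Hypothetical sketch of device-pool assignment at job start time.
# Drive numbers and job names are made up for illustration.
from collections import deque

def assign_jobs(job_names, drive_count):
    free_drives = deque(range(1, drive_count + 1))
    running, queued = {}, []
    for job in job_names:
        if free_drives:
            # The assignment decision happens once, at job start.
            running[job] = free_drives.popleft()
        else:
            # More jobs than drives: these wait until a drive frees up.
            queued.append(job)
    return running, queued

running, queued = assign_jobs(["JobA", "JobB", "JobC"], drive_count=2)
# With 2 drives, JobA and JobB each get a drive and JobC queues.
```

Note the model deliberately has no step where a running job moves to a second drive mid-job; that matches the limitation described below.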
What we don't do is allow a single job to fill a tape in one drive and then move over to using a tape in the other drive. Once a tape is full in that first drive (in the middle of a single job) someone has to insert another tape into that drive for that job to continue.
Tape device pools are there to load balance (or availability balance) at the start of jobs only; they do not allow spanning (cascading) between devices mid-job.
So the best practice / recommended solution is to invest in a library if your daily jobs are likely to use more than one tape. If you must use multiple standalone drives, then the only option is to split the selections into separate jobs so that no single job fills a tape.
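One way to think about splitting selections is a simple bin-style grouping: keep adding selections to a job until the next one would push it past a single tape's capacity, then start a new job. This is a hypothetical sketch; the selection names, sizes, and capacity figure are made-up examples, and real planning should leave headroom for compression variance and overhead.

```python
# Hypothetical sketch: group backup selections into separate jobs so
# that no single job's estimated size exceeds one tape's capacity.
def split_into_jobs(selections, tape_capacity_gb):
    jobs, current, used = [], [], 0
    for name, size_gb in selections:
        if size_gb > tape_capacity_gb:
            # A single selection bigger than one tape can't be fixed
            # this way; it would still fill a tape mid-job.
            raise ValueError(f"{name} alone exceeds one tape")
        if used + size_gb > tape_capacity_gb:
            jobs.append(current)          # close off the current job
            current, used = [], 0
        current.append(name)
        used += size_gb
    if current:
        jobs.append(current)
    return jobs

# Example: three selections against an 800 GB native-capacity tape.
jobs = split_into_jobs(
    [("D:\\Data", 300), ("E:\\SQL", 500), ("F:\\Files", 400)],
    tape_capacity_gb=800,
)
# First job holds D:\Data and E:\SQL (800 GB), second job holds F:\Files.
```

Each resulting group would then be created as its own scheduled job, so the pool can spread them across the standalone drives at start time.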
Couple of extra points:
If you append to tapes instead of overwriting them, the media insert request for the full tape will happen earlier in the job, so the best way to minimize this effect is to overwrite tapes at the start of jobs (which you were doing anyway by erasing the tapes).
You can't really plan to use 100% of any tape's capacity without a library, unless you are willing to have someone onsite to insert replacement tapes at the required moments.
For info: I think we are working on an edge condition where, if one drive is asking for a media insert, other jobs that could use the other drives will not start until the request has been acknowledged. I believe we will have a fix for that in a future update (when released, the fix will be for current BE versions, not older ones). I am not sure if you experienced anything related to this. However, if you had multiple jobs where one was scheduled to start at a later time, and that start time fell after an earlier job got stuck asking for a media insert, then that later job probably won't start. It is therefore possible that an effect relating to this problem would stop you using all your devices.