It is a little difficult to explain all the possible combinations.
Generally, you need to look to see whether the delays are waiting for full buffers or waiting for empty buffers.
Think of the buffer as a bucket.
The rule is that a bucket can only be filled when it is empty, and only emptied when it is full.
If the data is slow from the client (e.g. a bad network), then the buckets fill up slowly, but when full they are emptied to the tape drives very quickly. Then there are no full buckets, so bptm has to wait.
Hence, waiting for full buffer.
If the issue is on the tape drive side, then the buckets fill up very quickly but take ages to empty, so the process that fills the buckets has to wait for one to be empty (and thus available to refill).
Hence, waiting for empty buffer.
The process that fills the buckets is different for a local backup (a media server backing itself up) and a remote client (over the network).
For a local backup, bpbkar fills the buffer directly, so the 'waiting for empty' lines would be seen in the bpbkar log.
For a remote client, bpbkar sends data to the TCP port, and a child bptm process takes it from there and sends it to the buffer. Hence, the bptm log would have the 'waiting for empty' lines.
There is a touch file, NOshm. It is a big misunderstanding that this turns off shared memory (buffers); it doesn't. It makes a local backup behave like a remote backup: bpbkar sends the data to a port instead of the buffer, and a child bptm process takes it from there.
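As a sketch, on a UNIX media server the touch file would be created like this (the path assumes a default NetBackup install location; check your environment):

```shell
# Hypothetical example: create the NOshm touch file on the media server.
# An empty file is enough; its presence is what bptm checks for.
touch /usr/openv/netbackup/db/config/NOshm
```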
As Nicolai says, more buffers shouldn't cause an issue; if they can't be filled (not enough data), they just sit there empty, and the only cost is that more memory is used.
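To put a rough number on that extra memory: the shared memory used is approximately the buffer size times the number of buffers, per active drive/stream. A quick sketch with the LTO values mentioned further down (262144-byte buffers, 256 of them):

```shell
# Rough shared-memory cost per active drive/stream:
# buffer size x number of buffers, converted to MB.
size=262144      # SIZE_DATA_BUFFERS (256 KB)
number=256       # NUMBER_DATA_BUFFERS
echo "$(( size * number / 1024 / 1024 )) MB per drive"
```

So even at the larger buffer count, the cost is modest on a modern media server.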
Ideally, you want the total delays for waiting for full and waiting for empty to both be 0. This is not likely to happen, but if it did, it would mean there was a perfect balance of buckets being filled and emptied: there are always some buckets that can be emptied, and always some that can be filled - a constant stream. In that case, adding more buffers may help increase performance, as there may be spare capacity in the client's ability to send data. If the number is increased and no performance gain is made, then you had already achieved the maximum possible.
Other points to consider:
How many delays are 'bad'? Each delay is 15 ms, so the total number of seconds can be worked out.
If the backup would take a few hours (when running well) and the total delays only add up to a few minutes, then there is probably no real issue. If, however, the backup should take say 1 hour, but the delays add up to 10 minutes, then that is quite a percentage of the total time.
Generally, a few thousand delays on a backup that takes a couple of hours or more could be considered acceptable, but 100,000 would be an issue - it's impossible to say really, without considering each backup separately.
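The arithmetic is simple enough to sketch. Taking a few-thousand count and a 100,000 count as illustrations (the numbers are examples, not thresholds):

```shell
# Convert a bptm/bpbkar delay count into wall-clock time (each delay ~15 ms).
delay_ms=15
for delays in 3000 100000; do
    echo "$delays delays = $(( delays * delay_ms / 1000 )) seconds"
done
```

So 3,000 delays is well under a minute of waiting, while 100,000 delays is 25 minutes - which on a one-hour backup would clearly be worth investigating.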
If the buffers are tuned well (for LTO drives that would be a size of 262144 and a number of 128 or 256), then generally the two common causes of 'waiting for full buffer' issues are the read speed of the client disks, or network problems.
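For reference, a sketch of how those values are set on a UNIX media server (the path assumes a default install location; each touch file simply contains the value):

```shell
# Hypothetical example: tune the data buffers for LTO drives.
# Adjust the config path for your environment.
CONF=/usr/openv/netbackup/db/config
echo 262144 > "$CONF/SIZE_DATA_BUFFERS"     # 256 KB per buffer
echo 256    > "$CONF/NUMBER_DATA_BUFFERS"   # number of shared-memory buffers
```

The new values take effect for subsequently started backup jobs, so re-check the delay counts in the logs after the change.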
'Waiting for empty buffer' issues are rarer, and in my experience have come down to faulty tape drives.
Hope this provides a little insight.
Martin