LTO9 tape performance poor - bottlenecks?
Hi all
We've recently completed a staged, total replacement of our environment: from NBU 8.2, writing first to HPE MSA2040 storage and then out to LTO5 drives, to NBU 9.1, writing to HPE MSA2060 storage and out to LTO9 drives.
Write performance on the 2040/LTO5 setup was approximately 50 GB/hr/drive * 4 drives (* 2 setups, with each library writing data from a single MSA). This jumped to approximately 200 GB/hr/drive when we moved to a 2060/LTO5 setup. The improvement was badly needed, as I had cut our tapeouts to a bare minimum to ensure they actually got written out; turning everything that should be written out back on brought us close to exhausting the setup's write-out capacity again.
Now that we have LTO9 drives, I am seeing approximately 250 GB/hr/drive * 3 drives at each site, which is less total throughput than we were getting from the four LTO5 drives. Some writes peak at 300 GB/hr, but this isn't common. This is causing backlog issues again.
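For scale, my back-of-envelope conversion puts these rates well below LTO9 native speed (roughly 300-400 MB/s depending on the drive), so the drives shouldn't be streaming-limited:

```shell
# Convert a GB/hr rate to MB/s (using 1 GB = 1024 MB) so it can be
# compared against the drives' rated native transfer speed.
gbhr_to_mbs() {
  awk -v g="$1" 'BEGIN { printf "%.0f\n", g * 1024 / 3600 }'
}
gbhr_to_mbs 250   # typical per-drive rate we see -> ~71 MB/s
gbhr_to_mbs 300   # occasional peak              -> ~85 MB/s
```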
This is a fraction of the rated read speed of the MSA2060, the rated write speed of the LTO9 drives, and the Fibre Channel throughput, with nothing showing saturation at any point. Back when we had the MSA2040s I would frequently see waits recorded in the duplication job logs, but that is no longer the case.
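In case anyone suggests the fabric: assuming Linux media servers, the negotiated FC link speeds can be read straight out of sysfs, which is one of the things I checked:

```shell
# List negotiated FC link speeds via sysfs on a Linux host.
# On a box with no visible FC HBAs this prints the fallback line.
fc_speeds=$(
  for h in /sys/class/fc_host/host*; do
    [ -e "$h/speed" ] && echo "$(basename "$h"): $(cat "$h/speed")"
  done
)
echo "${fc_speeds:-no fc_host entries found}"
```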
Is there something obvious I am missing, such as a rate limit set somewhere, or a cap on encryption speed? We use the drives' native encryption, handled via an ENCR_ media pool, and I can confirm that the LTO9 drives are encrypting.
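One thing I still plan to re-verify is the NetBackup data buffer tuning, since undersized SIZE_DATA_BUFFERS / NUMBER_DATA_BUFFERS are a classic cause of tape drives writing far below native speed. This is how I'm dumping what's currently in effect on a media server (paths assume a standard UNIX install):

```shell
# Show the NetBackup tape buffer tuning touch files, if present.
# Absent files mean NetBackup's built-in defaults are in effect.
CFG=/usr/openv/netbackup/db/config
buf_report=""
for f in SIZE_DATA_BUFFERS NUMBER_DATA_BUFFERS; do
  if [ -f "$CFG/$f" ]; then
    buf_report="$buf_report$f = $(cat "$CFG/$f")
"
  else
    buf_report="$buf_report$f not set (defaults apply)
"
  fi
done
printf '%s' "$buf_report"
```

Larger buffer sizes (262144 is a value I've seen suggested for newer LTO generations) may help, but I'd treat any specific value as something to test rather than a recommendation.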