Lots of things to cover there, but here are a few ideas that may help.
Firstly, on one shelf getting used and the other not: I am unsure why this would happen unless you added the second shelf at a later date and most data was already on the first shelf (being de-dupe, new backups may not be adding much to the capacity of the appliance, so it just carries on using the first shelf). One other thought is that the first shelf will also most likely hold the de-dupe database, which is doing most of the work: if you are getting a good de-dupe rate there shouldn't be much data being written, but the database should be getting a hammering!
Second - do check that the appliance itself is OK. Two things to look at here: 1. The disks - a lost disk will substantially slow the thing down. 2. The RAID battery - massive slow-down if this starts to go flat. Use:
/opt/MegaRAID/MegaCli/MegaCli64 -adpbbucmd -getbbustatus -a0
to check its state.
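If you want to check this regularly, a small sketch like the one below will flag a flat battery from that command's output. The "Battery Replacement required" field is what I have seen MegaCli print, but field names vary between MegaCli and firmware versions, so treat the pattern as an assumption and adjust it to what yours actually outputs.

```shell
# Flag a failing RAID battery from MegaCli BBU output (read on stdin).
# NOTE: the "Battery Replacement required" field name is an assumption --
# it varies by MegaCli/firmware version, so verify against your output.
check_bbu() {
  if grep -qi "Battery Replacement required *: *Yes"; then
    echo "RAID battery needs attention"
  else
    echo "RAID battery looks OK"
  fi
}

# Usage on the appliance:
# /opt/MegaRAID/MegaCli/MegaCli64 -adpbbucmd -getbbustatus -a0 | check_bbu
```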
If you are running a lot of jobs with a lot of data, then it may be that queue processing, rebasing and garbage collection are starting to slow things down.
Queue processing runs automatically (twice a day), but it never seems to clear enough down, so it is well worth running it manually more often to keep the queue lean (/usr/openv/pdde/pdcr/bin/crcontrol --processqueue); it may help speed things back up. It is also worth checking regularly how big your queue is. I feel it should run far more often than every 12 hours, and midnight is not usually a great time for it as backups are usually running then - it would be best to run it 4 or 5 times a day during the daytime, or at least when backups are not running.
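If you are comfortable adding cron jobs on the appliance, something along these lines would cover the "4 or 5 times a day" suggestion - the times below are only an example, so fit them around your own backup windows:

```shell
# Example root crontab entry: run MSDP queue processing at 09:00, 12:00,
# 15:00 and 18:00. These times are an assumption -- pick slots when your
# backups are NOT running.
0 9,12,15,18 * * * /usr/openv/pdde/pdcr/bin/crcontrol --processqueue

# To see how big the queue currently is, crcontrol can report it:
# /usr/openv/pdde/pdcr/bin/crcontrol --queueinfo
```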
Rebasing and garbage collection do not run as often (monthly), so when these kick in they can also slow it down for a day or two - you may want to run these manually more regularly too.
Running duplications at the same time as backups will have an adverse effect too - when 7.6 comes out we will be able to schedule the SLPs, but for now you may want to just restrict the I/O on the disk pool to prevent too many operations running on the pool at the same time.
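If you want to try restricting the I/O streams from the CLI rather than the admin console, I believe nbdevconfig's -maxiostreams option on -changedp is the one that sets it, but do verify that option exists on your NetBackup version first. The pool name "dd_pool" and the limit of 30 below are placeholders, not recommendations:

```shell
# Limit concurrent I/O streams on the de-dupe disk pool.
# "dd_pool" is a placeholder pool name and 30 an example limit --
# substitute your own, and confirm -maxiostreams is supported on
# your NetBackup version before relying on this.
/usr/openv/netbackup/bin/admincmd/nbdevconfig -changedp -stype PureDisk \
    -dp dd_pool -maxiostreams 30
```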
Reducing the fragment size of the de-dupe disk storage unit also seems to help - I try to keep them to around 5000 MB for best performance (this mainly helps with duplications / replications).
The current thinking is that one VM per datastore at a time is the optimum for best performance when backing up - so anything more than that will have an impact on performance.
Again, when 7.6 is here you will get the option of using Accelerator for VMware backups, which will change everything!
Hope some of this helps