Basically, the recommendations initially came from testing done on different hardware against a number of remote servers of varying specifications (some using client-side deduplication, some not). They were then adjusted for the BE 2010 R2 release based on feedback we received from customer environments running the pre-R2 version.
At a basic (very simplistic) level, there are two key processes going on with DeDup:
One is a form of compression (similar to ZIP), in that we try to identify identical chunks of data so each chunk is only backed up once. Most of this runs on the client side if so configured, but it still has to reference what is already stored on the media server, so journalling updates do take place (see the sketch after this list).
The other is storing and tracking those chunks of data (maintaining catalogs, journals, etc.), which is mostly a media server process.
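To make the two processes a bit more concrete, here is a minimal sketch of hash-based chunk deduplication. This is not Backup Exec's actual implementation; the fixed chunk size, the SHA-256 fingerprint, and the in-memory index standing in for the catalog/journal are all illustrative assumptions.

```python
import hashlib

CHUNK_SIZE = 64 * 1024  # assumed chunk size, for illustration only


class ChunkStore:
    """Stand-in for the media server side: chunk storage plus its catalog/journal."""

    def __init__(self):
        self.chunks = {}   # fingerprint -> chunk bytes, each unique chunk stored once
        self.journal = []  # append-only record of index updates

    def has(self, fingerprint):
        return fingerprint in self.chunks

    def add(self, fingerprint, data):
        self.chunks[fingerprint] = data
        self.journal.append(("add", fingerprint, len(data)))


def backup_stream(stream, store):
    """Split a data stream into chunks and send only the chunks the store lacks.

    This mirrors the client-side process described above: the client fingerprints
    each chunk and checks against what the media server already holds, so
    duplicate chunks are referenced rather than resent.
    """
    manifest = []  # ordered list of fingerprints needed to restore this backup
    sent = skipped = 0
    while True:
        chunk = stream.read(CHUNK_SIZE)
        if not chunk:
            break
        fp = hashlib.sha256(chunk).hexdigest()
        if store.has(fp):
            skipped += 1          # duplicate chunk: only a reference is recorded
        else:
            store.add(fp, chunk)  # new chunk: stored and journalled once
            sent += 1
        manifest.append(fp)
    return manifest, sent, skipped
```

Running something like this over two backups of largely identical data would show the second pass skipping most chunks, which is the whole point, but it also shows why the media server has to do real work on every backup: every chunk still has to be fingerprinted, looked up, and journalled even when nothing new is stored.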
As such, the minimum specification for what is a very intensive activity is there for a reason.
It is possible to see DeDup working on a lower spec, but scalability and performance could be a real problem. In fact, some of our internal test setups do run on less; however, many of them only back up 2-3 remote servers and only run test backups infrequently to meet specific test cases. We also do not do any kind of performance testing with these under-spec configurations, as we do not expect the tests to be particularly fast.