I agree with Ken: network throughput has nothing to do with compression ratio. As far as I'm aware, the tape drives will write data as quickly as the backup software supplies it (assuming the tapes are faster), and the backup software will only supply that data as fast as it can once it has performed any required compression.
In fact, in a related but not entirely connected test, I had a system a while ago with an LTO1 library backing up over a 100Mb connection. When I upgraded to a 1Gb network I saw a threefold increase in backup speed (so the network had clearly been the bottleneck), but I saw no change in the compression ratio between the two networks backing up the same data.
In addition to any driver or patch issues that Ken mentioned, which would affect the overall level of compression, the real issue is the type of data that your job is compressing. For instance, you say that you did a local backup of just over 4 gig on the backup server itself, which had a very low compression ratio. Was that backing up a load of data you had copied to the server, or backing up the server itself? If it was the latter, then remember that most of the files within the Windows installation and the Backup Exec installation are binary files that are effectively already compressed, and therefore unlikely to compress much further. In fact, if you try to compress a file which is already as compressed as it will go, you are likely to see an increase rather than a decrease in file size, due to the overhead of the compression information added to the file, which would fit with the results you are seeing.
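You can demonstrate that overhead effect with any general-purpose compressor. Here's a rough sketch using Python's zlib (not Backup Exec's hardware compression, so it only illustrates the principle): repetitive, text-like data shrinks dramatically, while random bytes (a stand-in for an already-compressed file) actually come out slightly larger.

```python
import os
import zlib

# Repetitive, text-like data compresses very well.
text = b"the quick brown fox jumps over the lazy dog " * 2000
compressed_text = zlib.compress(text)
print(len(text), "->", len(compressed_text))  # far smaller than the input

# Random bytes have no redundancy for the compressor to exploit,
# so the output is the input plus container/bookkeeping overhead,
# i.e. slightly LARGER than what went in.
random_data = os.urandom(100_000)
compressed_random = zlib.compress(random_data)
print(len(random_data), "->", len(compressed_random))  # slightly larger
```

The same thing happens at the tape drive: hardware compression can't squeeze data that's already at (or near) its entropy limit, so jobs full of already-compressed binaries report ratios at or below 1:1.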
In answer to your original question though, I personally see a compression ratio on my backups of between 1.1:1 and 1.2:1, depending on the type of server and the content stored on it (e.g. the web server and file server generally do a lot better than the Exchange server).