NetBackup DDBoost/OST behavior
My understanding of NetBackup using DDBoost/OST to a Data Domain appliance is that the client being backed up should be able to write directly to the Data Domain. What I'm seeing, however, is that the client passes its data through the media server, and the media server writes it to the Data Domain. This hurts performance, since most of my media servers have only a 1 Gb link and most run a few backups at once. Average throughput of a backup right now is ~8 MB/s when writing to that Data Domain DDBoost storage unit. I see a lot of data moving in and out of the media server during the backup process, which is what leads me to this conclusion. On the Data Domain admin web page, I also do NOT see a connection from the client itself on the DDBoost page, but I DO see connections from the media servers.
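For scale, the numbers above work out to roughly this (the concurrent-stream count is an assumption; all I know is "a few backups at once"):

```shell
# Theoretical ceiling of a 1 Gb/s media-server link, ignoring
# TCP/protocol overhead: 1000 Mb/s / 8 = 125 MB/s.
link_mb_s=$(( 1000 / 8 ))

# "A few backups at once" -- 4 concurrent streams is an assumed figure.
streams=4
per_stream=8                      # observed MB/s per backup
aggregate=$(( streams * per_stream ))

echo "${aggregate} MB/s aggregate on a ~${link_mb_s} MB/s link"
```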
My question is: is this behavior normal for NetBackup, or should the client be making a direct connection to the Data Domain appliance? If the client should connect directly, is there a specific plugin we have to install on the clients to enable this? If the client should NOT be making a direct connection, is it possible to activate a secondary gigabit link on the media server so that incoming data arrives on the existing link and outgoing data leaves on the secondary one?
It is normal. Clients cannot write data directly to the Data Domain. Only hosts with the media server role and the OST plugin installed can write to it. That includes a regular media server (covered by the Enterprise Server license) and a SAN media server (covered by the Enterprise Client license).
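One quick way to see which hosts can write to the Data Domain is to check for the OST plugin library; on UNIX media servers it conventionally lives under /usr/openv/lib/ost-plugins/ (the directory and library name here are typical values, not guaranteed for your install):

```shell
# Check whether this host has the Data Domain OST plugin installed.
# Path and library name are conventional, not guaranteed.
PLUGIN_DIR=/usr/openv/lib/ost-plugins

if ls "$PLUGIN_DIR"/libstspiDataDomain* >/dev/null 2>&1; then
    echo "DDBoost/OST plugin present -- this host can write to the DD"
else
    echo "no OST plugin here -- data from this host goes via a media server"
fi
```

Clients will normally fall into the second case, which matches what you see on the DDBoost connections page.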
is it possible to activate a secondary gigabit link on the media server so that incoming data arrives on the existing link and outgoing data leaves on the secondary one?
It is possible by configuring the second NIC on another subnet (say, a storage subnet) and moving the Data Domain into that subnet. But this requires substantial reconfiguration: network recabling, (sometimes) physical relocation, and name-resolution changes.
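As a rough sketch of that approach, assuming a Solaris 11 media server and placeholder interface names and addresses:

```shell
# Put the second NIC on a dedicated storage subnet
# (net1 and all addresses below are example values).
ipadm create-ip net1
ipadm create-addr -T static -a 10.10.10.5/24 net1/storage

# Then change name resolution so the Data Domain's name maps to its
# storage-subnet address, e.g. in /etc/hosts:
#   10.10.10.20   datadomain
```

With that in place, the media server's OST traffic to the Data Domain leaves via the second NIC, while client backup traffic still arrives on the original link.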
Before doing all that, why not configure active-active IPMP with the two NICs?
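A minimal active-active IPMP sketch, assuming Solaris 11 ipadm syntax; the interface names net0/net1 and the address are placeholders:

```shell
# Create the IP interfaces and group both NICs into one IPMP group.
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp -i net0 -i net1 ipmp0

# One data address on the group; with both interfaces active,
# outbound traffic can be spread across net0 and net1.
ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4
```

This avoids the recabling and renaming work, at the cost of sharing one subnet for both inbound and outbound traffic.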

