
Why is backup through DD Boost over Fibre Channel as slow as DD Boost over Ethernet?

NEBC_Admin
Level 3

We already have DD Boost configured for use with our NetBackup server (Master/Media) over Ethernet.

Recently, we configured DD Boost over Fibre Channel for use with the same NetBackup server.

I have run some tests comparing backup and restore transfer rates for DD Boost over Ethernet against DD Boost over Fibre Channel and, unfortunately, found no difference between them.

I was expecting a big improvement in transfer rate after moving to Fibre Channel, but that didn't happen.

What could be the reason for that?

Here is a table of the tests I ran on a thin-disk VM and a thick-disk VM:

 

TestVM (Thin)
  Fibre Channel   Backup     26 MB/sec
  Fibre Channel   Restore    15 MB/sec
  Ethernet        Backup     26 MB/sec
  Ethernet        Restore    15 MB/sec

TestVM (Thick)
  Fibre Channel   Backup    111 MB/sec
  Fibre Channel   Restore    17 MB/sec
  Ethernet        Backup     26 MB/sec
  Ethernet        Restore    15 MB/sec

Tape_Archived
Moderator
VIP

Looks like you will have to check how the data is passed from the ESX host to your media server.

Though you have Fibre Channel connectivity from the media server to the Data Domain, the rate at which backup data arrives from the ESX host at the media server is determined by your internal network and the processing capacity of the ESX host. DD Boost simply processes the data it receives, and it seems the Ethernet connectivity is sufficient to keep up with this backup.
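To put rough numbers on that, here is a back-of-the-envelope sketch. The link speeds are assumptions (8 Gb FC, plus 1 GbE and 10 GbE for comparison; substitute whatever your environment actually runs), and protocol overhead is ignored:

```python
# Compare nominal link capacities with the observed single-VM rate.
# Link speeds here are assumptions -- adjust to your actual hardware.

LINKS_GBPS = {
    "8 Gb Fibre Channel": 8,
    "10 GbE": 10,
    "1 GbE": 1,
}

OBSERVED_MB_PER_SEC = 26  # single-VM backup rate from the tests above

for name, gbps in LINKS_GBPS.items():
    capacity_mb = gbps * 125  # 1 Gbit/sec ~= 125 MB/sec, overhead ignored
    utilisation = 100 * OBSERVED_MB_PER_SEC / capacity_mb
    print(f"{name}: ~{capacity_mb} MB/sec capacity, "
          f"{utilisation:.0f}% used at {OBSERVED_MB_PER_SEC} MB/sec")
```

Even plain 1 GbE has around five times the capacity a 26 MB/sec stream needs, which is why swapping the transport changed nothing.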

How many such VMs are you backing up in your environment? Can you provide throughput figures for all of the VM data you are backing up, rather than for a single TestVM?

Mouse
Moderator
Partner  VIP  Accredited Certified

The test you have conducted just demonstrates that the physical backup transport layer is not the bottleneck. FC and Ethernet both work well, so you need to verify the other items in the data path: vCenter, ESXi, the media server, backend storage, etc.
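One quick way to test a single component in that chain, as a sketch: time a raw sequential read of a large file on the media server (or on storage presented from the same backend) and see whether plain reads can beat 26 MB/sec. The path below is a placeholder; point it at any multi-GB file:

```python
# Measure raw sequential read throughput, independent of the backup stack.
import time

PATH = "/path/to/large_test_file"  # placeholder -- use any multi-GB file
CHUNK = 4 * 1024 * 1024            # read in 4 MB chunks

total = 0
start = time.monotonic()
with open(PATH, "rb") as f:
    while chunk := f.read(CHUNK):
        total += len(chunk)
elapsed = time.monotonic() - start

mb = total / 2**20
print(f"Read {mb:.0f} MB in {elapsed:.1f} s ({mb / elapsed:.0f} MB/sec)")
```

If raw reads are also stuck around 26 MB/sec, the bottleneck is in the storage layer rather than anywhere near the Data Domain. (Beware of the OS page cache on repeat runs; use a file larger than RAM or a freshly created one.)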

NEBC_Admin
Level 3

When I back up 2 or 3 VMs at the same time, the total bandwidth used is much larger.

I have run 3 VM backups concurrently before, and the total bandwidth used was about 250 MB/sec.

So why isn't that much bandwidth being used when only 1 VM is backed up?

Mouse
Moderator
Partner  VIP  Accredited Certified

As I mentioned before, the protocol and network bandwidth are not the bottleneck, as your tests have shown. The fact that backing up more VMs simultaneously yields higher total throughput proves once again that the protocol and the available network bandwidth have nothing to do with the limit on your data path.

To find out what is limiting single-stream performance, you need to move your attention away from the Data Domain attachment method, which has been shown to be irrelevant, and toward the other components that sit on the data path or influence performance: vCenter, ESXi, the underlying storage of your datastore and VMFS performance in general, the media server, etc.
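To illustrate the single-stream cap (illustration only: the 26 MB/sec per-stream figure comes from the thin-VM tests above, per-stream rates clearly vary since the thick VM hit 111 MB/sec, and the 1000 MB/sec link capacity is an assumed 8 Gb FC line rate):

```python
# Minimal model: each stream is capped upstream of the link, so total
# throughput grows with stream count until the link itself saturates.

def aggregate_throughput(streams, per_stream_mb=26, link_capacity_mb=1000):
    """Total backup rate for N concurrent streams."""
    return min(streams * per_stream_mb, link_capacity_mb)

for n in (1, 3, 10, 40):
    print(f"{n:>2} concurrent streams -> ~{aggregate_throughput(n)} MB/sec")
```

The exact numbers in your environment differ (three VMs reached about 250 MB/sec, so those streams averaged roughly 83 MB/sec each), but the shape is the same: throughput grows with stream count long before either transport runs out of bandwidth.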