07-16-2018 12:16 AM
We already have DD Boost configured for use with our NetBackup server (Master/Media) over Ethernet.
Recently, we configured DD Boost over Fibre Channel for use with the same NetBackup server.
I have run some tests comparing backup and restore transfer rates for DD Boost over Ethernet against DD Boost over Fibre Channel and, unfortunately, found no difference between them.
I was expecting a large improvement in transfer rate after moving to fibre, but that didn't happen.
What could be the reason for that?
Here is a table of the tests I ran for a thin-disk VM and a thick-disk VM:
| VM             | Transport | Operation | Rate       |
|----------------|-----------|-----------|------------|
| TestVM (Thin)  | Fibre     | Backup    | 26 MB/sec  |
| TestVM (Thin)  | Fibre     | Restore   | 15 MB/sec  |
| TestVM (Thin)  | Ethernet  | Backup    | 26 MB/sec  |
| TestVM (Thin)  | Ethernet  | Restore   | 15 MB/sec  |
| TestVM (Thick) | Fibre     | Backup    | 111 MB/sec |
| TestVM (Thick) | Fibre     | Restore   | 17 MB/sec  |
| TestVM (Thick) | Ethernet  | Backup    | 26 MB/sec  |
| TestVM (Thick) | Ethernet  | Restore   | 15 MB/sec  |
07-16-2018 09:09 AM
Looks like you will have to check how the data is passed from the ESX host to your media server.
Though you have fibre connectivity from the media server to the Data Domain, the rate at which backup data arrives from the ESX host at the media server is determined by your internal network and the processing capacity of the ESX host. DD Boost simply processes the data it receives, and it seems your Ethernet connectivity is already sufficient for this backup.
How many such VMs are you backing up in your environment? Can you provide the throughput for all of the VM data you are backing up, instead of for a single TestVM?
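A quick way to produce the aggregate figure being asked for is to sum per-job throughput from the job records. The job sizes and durations below are hypothetical placeholders; in practice you would read the kilobytes-written and elapsed-time values off the NetBackup Activity Monitor, and the simple sum is only meaningful if the jobs actually run concurrently:

```python
# Hypothetical per-VM backup jobs: (kilobytes written, elapsed seconds),
# e.g. as read off the NetBackup Activity Monitor for one backup window.
jobs = [
    (10_000_000, 400),   # VM 1
    (20_000_000, 500),   # VM 2
    (15_000_000, 600),   # VM 3
]

# Per-job throughput in MB/sec, then a rough aggregate assuming the
# jobs overlap in time.
per_job = [kb / 1024 / secs for kb, secs in jobs]
total = sum(per_job)

for i, rate in enumerate(per_job, start=1):
    print(f"VM {i}: {rate:.1f} MB/sec")
print(f"aggregate: {total:.1f} MB/sec")
```

Comparing the aggregate across several windows (and across transports) gives a fairer picture than a single TestVM.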
07-16-2018 11:31 PM
The test you have conducted just demonstrates that the physical backup transport layer is not the bottleneck. FC and Ethernet both work well, so you need to verify the other items in the data path: vCenter, ESXi, the media server, backend storage, etc.
07-17-2018 02:13 AM
When I back up 2 or 3 VMs at the same time, the total bandwidth used is much larger.
I have run 3 VM backups concurrently before, and the total bandwidth used was about 250 MB/sec.
So why isn't that much bandwidth being used when only 1 VM is backed up?
07-17-2018 02:29 AM
As I mentioned before, your tests have shown that the protocol and network bandwidth are not the bottleneck. The fact that backing up more VMs simultaneously yields higher total throughput proves the same point: the available protocol and network bandwidth is not what limits the data path.
To find out what is limiting single-stream performance, you need to shift your attention away from the Data Domain attachment method, which has been shown to be irrelevant, to the other components that sit on the data path or influence performance: vCenter, ESXi, the underlying storage of your datastore and VMFS performance in general, the media server, etc.
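The arithmetic behind this can be sketched with a simple model: each backup stream hits a per-stream ceiling somewhere in the data path, while the link itself is nowhere near saturated. All numbers below are illustrative assumptions, not measurements from this environment: a per-stream cap of ~85 MB/sec inferred from the ~250 MB/sec observed across 3 streams, and nominal link capacities of roughly 1250 MB/sec for 10 GbE and 800 MB/sec for 8 Gb FC:

```python
def aggregate_throughput(n_streams, per_stream_cap, link_cap):
    """Total backup throughput in MB/sec: each stream is limited by a
    per-stream bottleneck (ESXi, datastore reads, media server, ...),
    and the sum can never exceed the physical link capacity."""
    return min(n_streams * per_stream_cap, link_cap)

PER_STREAM = 85   # MB/sec, assumed: ~250 MB/sec observed across 3 streams
GBE_10 = 1250     # MB/sec, nominal 10 GbE line rate
FC_8G = 800       # MB/sec, nominal 8 Gb FC line rate

# One stream is capped at ~85 MB/sec on either transport, so swapping
# Ethernet for FC changes nothing for a single-VM backup:
print(aggregate_throughput(1, PER_STREAM, GBE_10))   # 85
print(aggregate_throughput(1, PER_STREAM, FC_8G))    # 85

# Three concurrent streams: ~255 MB/sec, close to the ~250 observed.
print(aggregate_throughput(3, PER_STREAM, GBE_10))   # 255
```

In this model, either link would need many more concurrent streams before the transport itself became the limiting factor, which is why tuning should focus on the per-stream components instead.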