Forum Discussion

soadotjpa
Level 3
13 years ago

Slow VMware backups on NFS volumes using NBD - Seeking Help or Alternatives

Hello All,

I've installed a new NetApp 2240-4 and am in the process of migrating VMs from my old iSCSI SAN to NFS datastores on the new NetApp. Once I switched from SAN transport mode to NBD transport mode, I experienced fairly severe backup throughput degradation. I understand this is because traffic is now streamed through the ESX service console port instead of over the storage fabric. I'm trying to figure out whether my configuration can support HotAdd transport mode so that this traffic moves off the network my current service console port is on, but I'm unclear on how to implement a HotAdd backup host that can feed my tape backup device. If my configuration won't support HotAdd, I'm looking for ways to improve service console throughput and increase my backup speeds.

 

The Environment:

All network links are 1Gb

NetBackup 7.5.0.4 Master/Media Server

  • Physical Windows Server 2003
  • SAS direct-attached tape drive
  • All Backups to Tape
  • NIC1 - Production Network - 192.168.10.28
  • NIC2 - Storage Network - 192.168.12.28

NetApp 2240-4 Dual Controller

  • All Data Ports - Storage Network - 192.168.12.x
  • All NFS Flexvols
  • No advanced licensing features (no SnapMirror, FlexClone, SnapRestore, etc.)

VMware ESX 4.1 Update 3

  • Service Console - vswitch0 - vnic0 - Production Network - 192.168.10.22
  • VM Traffic - vswitch1 - vnic1, vnic2 - Production Network - 192.168.10.23, 192.168.10.24
  • VMKernel - vswitch2 - vnic3 - VMotion Network - 192.168.11.22
  • VMKernel - vswitch3 - vnic4, vnic5 - Storage Network - 192.168.12.23, 192.168.12.24

VMware vCenter Server v4.1.0

  • Virtualized
  • NIC1 - Production Network - 192.168.10.35
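
For reference, the vSwitch-to-NIC mapping and link speeds listed above can be double-checked from the ESX 4.1 service console with the standard esxcfg commands (run as root on the host):

  esxcfg-nics -l       # physical NICs, link state and speed
  esxcfg-vswitch -l    # vSwitches, port groups, and their uplink vmnics
  esxcfg-vmknic -l     # VMkernel interfaces and their IP addresses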

Thank you all for the assistance. Please let me know if there is any more data I can provide.

 

3 Replies

  • If you have used the SAN transport mode, HotAdd should be fairly straightforward for you.
    Essentially, they both do the same thing: they both mount VM snapshots as volumes on the backup host, and then you back them up from that backup host.

    The difference with HotAdd is that the mounting is done inside the ESX host itself. I would think this is very efficient, since the data doesn't even need to traverse the lower OSI layers.

    Basically you have one VM that you set up as a HotAdd backup host (mount host). You could either just install the Nbu client on it, or install a full-fledged media server.

    The thing about going the HotAdd route is that it is not very useful without Nbu deduplication (MSDP).
    You would want the HotAdd backup host to do the deduplication on the backups before sending them to storage.
    If your HotAdd backup host does not perform deduplication before sending them out, then the whole setup wouldn't be any quicker than what you are experiencing now with the NBD transport mode.

    Typically, you would have a physical media server that the deduplication pool is attached to, and have the VM HotAdd backup host send already-deduplicated data to that media server.
    When the data change rate is low, backups can sometimes finish even faster than they would with SAN mode.

    Since you are not currently using Nbu Dedupe, if you decide to go this route you will have to think about assigning disk storage space to the media server. You will find recommendations on this in the Nbu Dedupe admin guide (basically FC SAN is recommended; if iSCSI, it must be 10Gbps).
    Also, the Nbu deduplication pool is only supported on media servers running a 64-bit OS, so you would have to migrate your current 32-bit media/master server to, say, Win2008 R2, or simply deploy an additional 64-bit media server.
    Or you could go with one of the Nbu Appliances to host the deduplication pool.
    There will also be new Nbu licenses to take into consideration.

    I hope that helped give you some direction. Please refer to the VMware admin guide and the Deduplication admin guide for details; a rough sketch of the dedup storage-server commands follows below.
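
    To make the dedup-pool part of this a bit more concrete, here is a very rough sketch of how an MSDP storage server and disk pool are typically created from the command line of a 64-bit media server. The host name, user ID, and pool name below are placeholders, and the exact switches should be verified against the Deduplication admin guide before running anything:

      Create the storage server on the 64-bit media server:
        nbdevconfig -creatests -storage_server media64 -stype PureDisk -media_server media64 -st 9
      Register credentials for it:
        tpconfig -add -storage_server media64 -stype PureDisk -sts_user_id nbuadmin -password <password>
      Create the disk volume and the disk pool that your policies will point to:
        nbdevconfig -createdv -storage_server media64 -stype PureDisk -dv PureDiskVolume
        nbdevconfig -createdp -dp MSDP-Pool -stype PureDisk -dvlist PureDiskVolume -storage_servers media64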

  • Thanks for the reply RLeon.

    I created a VM and installed a NetBackup client. I then tested a backup of a different machine using HotAdd with the new VM as the backup host. This caused backup traffic to traverse the production network from the VM backup host to the Master/Media Server. Hooray! No service console traffic. The traffic was heading over the production VM network; however, I'm still getting terrible speeds of around 9000 KB/s.

    I would like to see this traffic go over the storage network instead of the production network to see if the speeds increase. Is it as simple as adding a network interface on the storage network to the VMware backup host VM? My master/media server already has a NIC on this network as well.

    Thanks!

     

  • I would like to see this traffic go over the storage network instead of the production network to see if the speeds increase.

    As I explained in my first reply, it probably won't get any faster, unless deduplication is involved.

    To route traffic through a specific network:
    Each NIC on your HotAdd backup host must be identifiable to the media server by a unique name.

    For example, if your backup host bu-host1 has two NICs:
      NIC1: 192.168.1.10/24 (production network)
      NIC2: 192.168.2.10/24 (storage network)

    And if you want Nbu traffic to use NIC2, then in your hosts files you should identify them using different names, e.g.:
      192.168.1.10   bu-host1
      192.168.2.10   bu-host1-nbu

    Then for every reference to this backup host in the Nbu environment (inside policies, restores and everything), you use the bu-host1-nbu name, and never use the bu-host1 name.
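
    As a minimal sketch using the example addresses above, the hosts file on the master/media server (C:\Windows\System32\drivers\etc\hosts on Windows) would carry both names, and bpclntcmd can confirm which name resolves to which address:

      # hosts file entries on the master/media server
      192.168.1.10    bu-host1        # production NIC, not used for Nbu traffic
      192.168.2.10    bu-host1-nbu    # storage NIC, the name used in policies and restores

      Quick resolution check from the media server:
        bpclntcmd -hn bu-host1-nbu
        bpclntcmd -ip 192.168.2.10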