
Fairly slow VMware VADP guest restore in NBU v7.0.1

sdo
Moderator
Partner    VIP    Certified

Hi Forum,

Env: Win 2003 Ent R2 SP2 x64, NBU v7.0.1 + a few EEBs, 2 x dual port Emulex FC HBA, twin fabric, single initiator zoning, persistent binding on tape HBA ports, all 4 Gb/s SAN, direct path through SAN switch blade ASIC, 10 Gb LAN (core), NetApp FAS 3170 SAN disk storage Data OnTAP v7.3.2P3, multiple VMware ESXi v4.1 hosts on UCS B440 M1 blades, EMC EDL 4206 as VTL.

I've tested a plain NTFS backup of a 10 GB file of random data at 150 MB/s, and a restore of the same at 100 MB/s (to the same storage array and same aggregate as below...).

Backups of a VM guest via nbd and san transport both run at 95 MB/s.

However, an nbd transport restore of a guest VM (via an ESX Restore Host) runs at 36 MB/s - sort of acceptable - but a san transport restore runs at 16 MB/s - not good.

Any tips for performance?

Thanks,

Dave.

7 Replies

Anonymous
Not applicable

There was a techdoc about this for VCB and Converter: restores used the LAN, while backups went over the SAN. But you are using the vStorage API now - right?

 

NetBackup for VMware: Do FullVM image-level restores use the SAN or the LAN?

teiva-boy
Level 6

If SAN transport and NBD run at the same speed, there is a bottleneck somewhere.  SAN should be faster in most cases.

If that bottleneck exists, it'll show up everywhere in your backup infrastructure, not to mention other aspects of your environment.

 

"NetBackup, showing the weakness of IT infrastructure since 1990."  ;)  

sdo
Moderator
Partner    VIP    Certified

...all these results...

- backups reading:  from same server, through same HBA mappings, from same storage array, and same “volume” (i.e. exactly the same disk spindles)...

- backups writing:   to same SAN based VTL on same (different to disk) HBA ports...

- and vice-versa for restores.

- all single initiator zones...

- i.e. the only difference between the following tests is the file system and transport (NTFS vs. VMFS) and the negotiating party (Windows vs. an “ESX Restore Host”, i.e. not a VC):

  • Plain NTFS backup    150 MB/s   (10 GB file containing 100% random data)
  • Plain NTFS restore   100 MB/s   (same file)
  • VMware backup         95 MB/s   (“san” transport)
  • VMware backup         35 MB/s   (“nbd” transport)
  • VMware restore        35 MB/s   (“nbd” transport)
  • VMware restore        16 MB/s   (“san” transport)

...I still can’t fathom why the “san” transport restore of guest VM is so slow.
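To put those rates in perspective, here's a quick back-of-envelope comparison of how long a restore would take at each observed speed (the 500 GB VM size is purely illustrative, not one of mine):

```python
# Rough restore-time comparison at the observed transport rates.
# The 500 GB VM size is an assumption for illustration only.
vm_size_gb = 500
rates_mb_s = {"san backup": 95, "nbd restore": 35, "san restore": 16}

for label, rate in rates_mb_s.items():
    hours = vm_size_gb * 1024 / rate / 3600
    print(f"{label:12s} {rate:3d} MB/s -> {hours:5.1f} h for {vm_size_gb} GB")
```

At 16 MB/s a half-terabyte guest takes most of a working day to come back, versus about an hour and a half at the san backup rate - which is why this matters.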

I've just installed EEB 2105102.7 (re ET2152576) re technote http://www.symantec.com/docs/TECH145126 and set the "HKLM\Software\VERITAS\NetBackup\CurrentVersion\Config\BACKUP\setbEagerlyScrub" registry key, but I still only get 16 MB/s restore for "san" transport.
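For anyone following along, this is roughly how the key was created (Windows-only sketch; the REG_DWORD type and value of 1 are my assumptions - check the technote for the exact type and data it requires):

```python
# Windows-only sketch: create the registry value described in TECH145126.
# The value type/data (REG_DWORD = 1) is an assumption -- follow the technote.
import winreg

path = r"Software\VERITAS\NetBackup\CurrentVersion\Config\BACKUP"
with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, path, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "setbEagerlyScrub", 0, winreg.REG_DWORD, 1)
```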

I've just tried another "nbd" restore, and it still achieves 30+ MB/s.

sdo
Moderator
Partner    VIP    Certified

Is anyone else willing to state the speeds that they achieve?  Please.

Just so that I know whether it should actually go faster.

Thanks,
Dave.

bruce79
Not applicable

We are experiencing the same issues!

Similar issues to yours - backups via SAN are brilliant, but restores are painful.

After we raised a call with Symantec, EEB2152576 was installed along with the registry key, and it made no difference.

Having spoken to Symantec support today, one thing that came out was that the above EEB only addresses restores of VMs with THICK disks - if your VMs use THIN disks, the EEB does nothing!

The support representative sent me the below VMware link...

http://kb.vmware.com/kb/1035096

The KB basically says that NBD should be the transport for VMs with THIN disks, due to how the vSphere SDK API handles them - and that matches what we were seeing in vCenter while a VM was restoring: "allocateBlock and ClearLazyZero" every second until the job completed!

I'm not sure if this is the same issue/scenario that you have, but I'm going to test this evening, as a restore of a VM using THIN disks was giving me 1.5 MB/s! The ESXi (4.0u2) host and NBU 7.0.1 VM proxy (NOT VCB) both have QLogic FC8 cards, and our EMC storage has two SPs, both FC8 - so there's no bottleneck there.
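If those per-block allocateBlock/ClearLazyZero round trips really are the culprit, the effect is easy to model: a fixed metadata latency per block caps throughput no matter how fast the link is. This is a purely illustrative model - the block size and overhead figures are assumptions, not measurements from our environment:

```python
# Illustrative model: restore throughput when every block written to a thin
# disk incurs a fixed allocation/zeroing round trip.
# All numbers below are assumptions, not measurements.
def effective_mb_s(link_mb_s, block_kb, per_block_overhead_ms):
    block_mb = block_kb / 1024
    transfer_s = block_mb / link_mb_s           # time to move the block's data
    overhead_s = per_block_overhead_ms / 1000   # allocateBlock + ClearLazyZero
    return block_mb / (transfer_s + overhead_s)

# 400 MB/s raw 4 Gb/s link, 64 KB blocks, 3 ms of metadata overhead per block:
print(round(effective_mb_s(400, 64, 3)))   # overhead-dominated: ~20 MB/s
print(round(effective_mb_s(400, 64, 0)))   # no overhead: the full 400 MB/s
```

Even a few milliseconds of per-block bookkeeping drags an 8 Gb or 4 Gb link down to the tens of MB/s we're all seeing - which would explain why eager zeroing (or NBD, where the host handles the zeroing) sidesteps the problem.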

Hope this helps, and will report my findings...

robbie8302
Level 2

sdo, we're experiencing the same speeds that you are.  SAN restores are slower than NBD restores, and even then the NBD restores seem to max out around that 30 MB/s mark.

robbie8302
Level 2

The slow NBD restores we're seeing are also going directly to an ESXi restore host, not through vCenter.