Slow VM restores

BirtyB
Level 4

Running NBU 8.1 on Server 2016, restoring a VM from MSDP direct to a Compellent SC4020 SAN, we are underwhelmed with the performance: we see 18713 KB/sec. We get similar performance when using LAN transport. Neither CPU, RAM, disk nor LAN is anywhere near maxed out on the restore host, and likewise on the SAN. The only thing we haven't tried is enabling flow control, so that's tomorrow's plan of attack.
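
For the flow control test, the plan is to check and change it on the restore host's NIC from PowerShell - a rough sketch only (the adapter name 'Ethernet' is a placeholder, and the accepted values can vary by NIC driver):

# Show the current flow-control setting on the restore host's NIC ('Ethernet' is a placeholder name)
Get-NetAdapterAdvancedProperty -Name 'Ethernet' -RegistryKeyword '*FlowControl'

# Enable Rx & Tx flow control (value 3 on most drivers); expect the NIC to reset briefly
Set-NetAdapterAdvancedProperty -Name 'Ethernet' -RegistryKeyword '*FlowControl' -RegistryValue 3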

We are happy with backup performance.

Any ideas?

Marianne
Moderator
Partner    VIP    Accredited Certified
Have a look at this doc:
https://www.veritas.com/support/en_US/article.TECH183072
Extract:
SAN transport is not always the best choice for restores. It offers the best performance on thick disks, but the worst performance on thin disks, because of the way vStorage APIs work. For thin disk restore, LAN(NBD) is faster.

Mike_Gavrilov
Moderator
Partner    VIP    Accredited Certified

What is your backup speed?

Have you tried playing with the buffer size and number of buffers for the restore?

Could you share your current buffer values?

Did you try restoring the VM to another location? You wrote that you got the same speed using LAN, and that's very slow even for LAN. Maybe your storage is experiencing a performance issue? Do you have another ESX host/datastore you could try?

Do you get the same restore speed for other VMs? It might be a problem with that particular VM. Try restoring another VM to the same datastore, but with a different name and without a NIC.

Thank you both for your replies.

@Marianne Indeed, SAN restore does appear to offer the poorest performance. We saw 35003 KB/sec when using HotAdd, which is an improvement but still not satisfactory. We did try a "thick" SAN restore but only achieved 31887 KB/sec.

@Mike The fastest backup rate we have achieved so far is 198041 KB/sec, and that was a SAN backup using NUMBER_DATA_BUFFERS = 1024. We have been playing around with buffers, and although they have a great effect on backup performance, we do not see the same gains with restores. We have SIZE_DATA_BUFFERS_DISK = 1048576.
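
For anyone reading along, those are just touch files in the NetBackup config directory on the media server. Roughly how we set them (assuming the default Windows install path - adjust for your environment):

# Each touch file holds a single integer; new jobs pick the values up, running jobs do not
$cfg = 'C:\Program Files\Veritas\NetBackup\db\config'
Set-Content -Path "$cfg\NUMBER_DATA_BUFFERS"    -Value '1024'
Set-Content -Path "$cfg\SIZE_DATA_BUFFERS_DISK" -Value '1048576'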

We have tried various combinations of restores to different environments / datastores / storage / ESX servers. We confirmed with the Steelbytes utility that we have a read speed of 1 GB/sec on the MSDP storage.

As said, we are reasonably happy with backup performance, just not restore. It seems the buffers have little or no effect on restore speeds. We also tried a restore with the NIC removed but saw no improvement.

Could I ask you both what sort of performance you have seen when restoring VMs?

I think I'll raise a ticket with Veritas support.

Thanks for your help,

Graham

Nicolai
Moderator
Partner    VIP   

If a network issue is suspected, I highly recommend iperf.

I have used iperf many times to find "something not right" with the network.

iperf has a receiver mode and a sender mode - IP traffic is generated between the two and the achieved bandwidth is reported. If that bandwidth is consistently low, forget about NetBackup pushing a higher volume of traffic.

An example: run the test for 60 seconds and report the bandwidth every 2 seconds.

# iperf -c duplo08 -p 5002 -t 60 -i 2
------------------------------------------------------------
Client connecting to acme.com, TCP port 5002
TCP window size: 512 KByte (default)
------------------------------------------------------------
[ 3] local x.x.x.x port 23471 connected with x.x.x.x port 5002
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 2.0 sec 1.32 GBytes 5.68 Gbits/sec
[ 3] 2.0- 4.0 sec 1.32 GBytes 5.67 Gbits/sec
[ 3] 4.0- 6.0 sec 1.32 GBytes 5.68 Gbits/sec
[ 3] 6.0- 8.0 sec 1.32 GBytes 5.67 Gbits/sec 

(shortened)
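
Remember to start iperf in server (receiver) mode on the other end first, listening on the same port - for example:

# iperf -s -p 5002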

Marianne
Moderator
Partner    VIP    Accredited Certified

Another TN with possible reasons for slow VMware restores:

https://www.veritas.com/support/en_US/article.TECH169860

According to this doc, the expected restore speed (because of the way the VMware APIs work) is about a third of the backup speed.
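
Against your best backup rate of 198041 KB/sec, that rule of thumb would put an expected restore at roughly 66000 KB/sec.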

So, best to log a call with VMware as well.

Update! We previously only tested "Lazy" thick provisioned restores. We tried an "Eager" thick restore and achieved 81390 KB/sec, which is much better - but still not fantastic! The more I read, the more I think it might actually be VMware throttling it. This is an interesting read, albeit for a rival product!!

Thanks for all your help guys.

Graham