
Hyper-V VM backups

rbkguy
Level 4

Windows NetBackup Master - 7.7.2

Windows Media servers - 7.7.2 MSDP and Tape

Windows 2012 R2 Hyper-V Cluster with 4 nodes (60 Clients)

 

I'm trying to figure out if there is a way to speed up some backups of VMs with large VHDX volumes (800 GB plus). Most take 2-4 hours, and I am using client-side deduplication. I have been reading about making the cluster nodes media servers and am wondering whether that will make this faster.

 

Would just installing the media server software on the nodes help, or would I need to create storage units on my media servers with the Hyper-V nodes acting as load balancers? Or are there other options worth my time, like alternate client backup? I can't just use SAN transport or SAN clients, as we are a Microsoft shop ;(

 

Thanks for your help!

.

11 REPLIES

sdo
Moderator
Partner    VIP    Certified

You might be better off using plain client inside the Hyper-V VMs.

At one site we have 550+ Hyper-V VMs and all are saved using plain client, i.e. no use of Hyper-V VM backups at all.

Some points to be aware of:

.

Hyper-V backups including deleted blocks...

https://www.veritas.com/community/forums/hyper-v-backups-including-deleted-blocks

.

Hyper-V backups with allow file recovery enabled, implications for catalog space?

https://www.veritas.com/community/forums/hyper-v-backups-allow-file-recovery-enabled-implications-catalog-space

.

Are Hyper-V incremental backups auto expired when dependent full is expired?

https://www.veritas.com/community/forums/are-hyper-v-incremental-backups-auto-expired-when-dependent-full-expired

.

Hyper-V individual file restore limit of 2GB?

https://www.veritas.com/community/forums/hyper-v-individual-file-restore-limit-2gb

.

Some questions re Hyper-V backups...

https://www.veritas.com/community/forums/some-questions-re-hyper-v-backups

.

Win2012 R2 Hyper-V and NetBackup v7.6.0.2

https://www.veritas.com/community/forums/win2012-r2-hyper-v-and-netbackup-v7602

.

Perhaps try a test of changing one Hyper-V VM style backup to:  plain client + Accelerator + NTFS change journal + client-side de-dupe + BMR + TIR 10 days + daily differential.   Works well at the big Hyper-V site.
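A rough sketch of what that test policy could look like (attribute and host-property names as I recall them from the Administration Console - verify against your 7.7.2 documentation before relying on them):

Policy type:                 MS-Windows (standard client policy, not Hyper-V)
Use Accelerator:             enabled
Collect true image restore:  enabled, with move detection
Collect BMR information:     enabled
Schedules:                   weekly full + daily differential incremental
Client host property:        "Use Change Journal" (Windows Client > Client Settings)
Client attribute (master):   "Always use client-side deduplication"
Master host property:        keep true image restoration (TIR) information for 10 days (Clean-up)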

Michael_G_Ander
Level 6
Certified

What kind of connection do you have between your Hyper-V hosts and the media servers?

Have you changed the buffer sizes and counts as recommended for MSDP/disk and tape storage units?

What are the theoretical, or even better the actual, speeds of your different components, like CSV disk, networks, and dedup disk? Your backup never gets better than the slowest component along the path.

I would be wary of installing a media server on a Hyper-V host, as they are both pretty memory intensive.

Off-host backup might improve the throughput, but only if the network or the Hyper-V hosts are the bottleneck.

Something completely outside NetBackup that could improve speed is jumbo frames, but it requires that all the components in the network support it.

Curious, does client-side deduplication give an improvement on the VHDX backups? I'm thinking the whole VHDX would be marked as changed when something changes inside it.

 

The standard questions: Have you checked: 1) What has changed. 2) The manual 3) If there are any tech notes or VOX posts regarding the issue

rbkguy
Level 4

I know that is not the ideal configuration, but we have four 1Gb NICs teamed on each node as well as on the media servers.

 

SIZE_DATA_BUFFERS with a value of 262144

NUMBER_DATA_BUFFERS with a value of 64

SIZE_DATA_BUFFERS_DISK with a value of 1048576

NUMBER_DATA_BUFFERS_DISK with a value of 256
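(For reference, these map to touch files in the NetBackup config directory on each media server; on a default Windows install the layout would look roughly like the sketch below, with each file containing just the single numeric value - the exact install path may differ on your servers.)

C:\Program Files\Veritas\NetBackup\db\config\SIZE_DATA_BUFFERS         -> 262144
C:\Program Files\Veritas\NetBackup\db\config\NUMBER_DATA_BUFFERS       -> 64
C:\Program Files\Veritas\NetBackup\db\config\SIZE_DATA_BUFFERS_DISK    -> 1048576
C:\Program Files\Veritas\NetBackup\db\config\NUMBER_DATA_BUFFERS_DISK  -> 256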

 

The logs show bpbkar waiting 246 times for an empty buffer, delayed 6641 times.

I have seen a few people talk about jumbo frames, so I will have to look into that, as well as the requirements for off-host backups.

 

 

sdo
Moderator
Partner    VIP    Certified

Those wait and delay counts are very low.  I don't think they are a problem.  There's not a lot wrong with 4x1Gb in a team, as long as you have an appropriate balancing algorithm set in the LAN switch to suit the 'src-' side of most client traffic.  Not many individual clients can push over 1Gb/s anyway.

Can you give some more specific numbers, i.e. exact backup job durations, job KB sizes, file counts, VHDX sizes, and actual occupancy/usage percentages?

sdo
Moderator
Partner    VIP    Certified

Jumbo frames are difficult to set up.  All, and I do mean *all*, devices in the subnet(s) using jumbo frames must be pre-configured AND must be able to support the largest frame size that you decide upon.  Not all NICs can handle 9KB frames; some can only do c. 6KB or 7KB, and some LAN switch ports cannot do the full 9KB either.  I'll say it again: *all* LAN cable end-points (NICs and switch ports) must be able to handle the same maximum frame size that you choose.  If you have even one NIC or port in the subnet that cannot handle it, or is misconfigured, then strange things will happen.
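If you do go down that route, a quick end-to-end sanity check is a do-not-fragment ping sized just under the chosen MTU. For a 9000-byte MTU that is 8972 bytes of ICMP payload (9000 minus 28 bytes of IP + ICMP headers), e.g. from a Hyper-V node to a media server on Windows:

ping -f -l 8972 <media_server_name>

If any NIC or switch port along the path cannot carry the frame, you will see "Packet needs to be fragmented but DF set" (or simply time-outs) instead of replies.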

sdo
Moderator
Partner    VIP    Certified

What is your NetBackup client-side buffer size set to?  The client usually defaults to 32, 64 or 128, depending upon which version was originally installed, but the recommended setting is now zero.  With a client buffer setting of zero, the client no longer instructs the TCP stack which size to use; the TCP stack will select the most appropriate size itself to achieve buffer scaling.  We let TCP stacks do this these days because the TCP driver stacks on modern operating systems are much better than they used to be.

sdo
Moderator
Partner    VIP    Certified

Re client buffer size:

Best practices for NET_BUFFER_SZ and Buffer_size, why can't NetBackup change the TCP send/receive space

http://www.veritas.com/docs/000026262
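If you want to experiment with that, the two places that tech note talks about are (as far as I remember - please confirm against the document itself):

Media server / client install dir:   <install_path>\NetBackup\NET_BUFFER_SZ   containing the single value 0
Windows client host properties:      Host Properties > Clients > Windows Client > Client Settings > "Communications buffer size" set to 0

The first is a plain touch file; the second corresponds to the Buffer_size value the tech note title refers to.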

Michael_G_Ander
Level 6
Certified

Have you checked that you get the expected throughput from the teamed NICs? I have experienced a teamed setup that ran on only one 1Gbit link because of the way the NICs and switch were configured, which roughly matches your reported time of 2 hours for 800GB.
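(The arithmetic backs that up: a single 1Gbit/s link tops out around 110-115 MB/s in practice, and 800 GB / ~110 MB/s is roughly 7,300 seconds, i.e. about 2 hours.) One way to check how the team is actually behaving on Server 2012 R2, assuming the in-box LBFO teaming, is to look at the per-member counters while a backup runs, for example:

Get-NetLbfoTeam                     # team name, teaming mode and load-balancing algorithm
Get-NetLbfoTeamMember               # member NICs and whether they are active
Get-NetAdapterStatistics | Format-Table Name, ReceivedBytes, SentBytes

Run the statistics command before and during a job and compare the SentBytes/ReceivedBytes deltas per NIC; if only one member's counters move, the traffic is not being spread across the team.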

Regarding NET_BUFFER_SZ and Buffer_Size, I can tell you that 0 does not always give the best results on Windows 2012 R2; as with all other performance tuning, it is a case of trial and error.

The standard questions: Have you checked: 1) What has changed. 2) The manual 3) If there are any tech notes or VOX posts regarding the issue

sdo
Moderator
Partner    VIP    Certified

Here's a brief intro to some of the different load balancing algorithms that may be available on your LAN switches:

25 Jan 2016

rbkguy
Level 4

I have been monitoring the servers with Quest Spotlight and the NIC teaming is working fine. I have set NET_BUFFER_SZ to 0 and the client communications buffer to 0 for the clients, as well as 1024 for the raw partition read buffer size.

 

If NET_BUFFER_SZ set to 0 is not a good value for 2012, I will have to add that to my research.

sdo
Moderator
Partner    VIP    Certified

Hi - I'm quite certain that "raw partition read buffer size" is used when doing FlashBackup.

And I'm certain that it was also used in the days of VCB Windows-based proxy backups for VADP SAN-based backups of VMware VMs - and so I'm fairly sure (but not 100%) that it is also used by Windows and Linux (including Appliance) VMware SAN-based backup hosts (be they Enterprise Client or Media Server).

But this next one I'm not sure about at all... i.e. I do not know, and I'm hoping someone else can confirm, whether the "raw partition read buffer size" is used by Hyper-V VSS-based VM backups.   And I question whether it is used for this style of backup... because a Hyper-V VSS-based VM backup will very likely be calling the MS APIs for VSS, and most probably not making private read IOs down to the LUN layer.   Which then leads me to ask... does anyone know whether, and where, there is a tunable (either NetBackup-based or, more likely, a registry key/value for VSS) for the VSS read IO block/chunk/fragment size?