Throughput Benchmarking

backup-botw
Level 6

We are going to be looking into doing some upgrades in my organization. One thing we want to switch out is our LTO-4 drives for LTO-6s, and we also want to get off an older VNX device that hosts our disk pools and onto something newer and faster. With that said, I am looking to do some benchmarking. Is there something I can do at the command line that basically does a test backup/restore of files of differing sizes? Basically I want to run a backup/restore of 1TB, 5TB and 10TB files, both to disk and to tape, and get our current stats as the environment sits right now, so that as we do demos with vendors in the coming weeks I know better what I am looking to beat, if you will.

Also if anyone has any experience with the Oracle ZFS Storage ZS4-4 appliance I would be interested in hearing about that as well.


19 REPLIES

revarooo
Level 6
Employee
Just configure a test policy with a very short retention, back those files up to disk and then to tape storage units, and take notes of the results. There's no other way to test. You can also test pure read speed by sending the backup to a null storage unit: https://www.veritas.com/support/en_US/article.TECH169602
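
For instance, a throwaway policy driven from the command line might look like this on a UNIX master (the policy, storage unit, client, schedule and path names are all placeholders; check each command's options against your NBU version):

   bppolicynew perf_test
   bpplinfo perf_test -set -pt Standard -residence stu_disk01
   bpplclients perf_test -add client01 Linux RedHat
   bpplinclude perf_test -add /testdata
   bpplsched perf_test -add full_sched -st FULL
   bpbackup -i -p perf_test -s full_sched -w

Point -residence at the disk STU, the tape STU, and the null STU in turn to get comparable runs.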

sdo
Moderator
Partner    VIP    Certified

I see a few test scenarios:

1) VM backup to tape tests.

2) VM restore from tape tests.

3) Test pure streaming to tape.


You should be able to:

1a) craft some pre-prepared backup policies

1b) craft some pre-prepared commands to execute those policies

1c) scrape the bperror logs for performance stats
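
For 1c, something like this should pull the figures once a test run completes (the grep pattern is an assumption; match it to whatever your bperror output actually shows):

   bperror -all -hoursago 24 | grep -i "Kbytes/sec"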


similar for set 2)


For set 3) If you have a Linux/Unix host (backup client or NetBackup Server) anywhere, or several, you can use GEN_DATA to really stress test your tape drives... assuming the clients and media servers can keep up with the tape drives:

Documentation: How to use the GEN_DATA file list directives with NetBackup for UNIX/Linux Clients for Performance Tuning

https://www.veritas.com/support/en_US/article.TECH75213

...preferably a Unix/Linux based NetBackup Media Server with access to the tape drives, because then the media server can generate oodles of data in RAM and seriously pump it down to the tape drives. Whereas if you only have Unix/Linux backup clients, you may need several of them in your GEN_DATA backup policies, multiplexed together, to really push the tape drives to their limits.

backup-botw
Level 6

When you say "those files", which files are you referring to? Can I just create some kind of dummy files in the size increments I described above?

Will_Restore
Level 6
Accepted Solution
GEN_DATA Concept:
A need was identified to provide a means of generating test data to process through NetBackup. This data should be:
  • Repeatable and controllable.
  • As 'light-weight' as possible during generation.
  • Indistinguishable from regular data, to allow for further processing, such as duplications, verifies, restores, etc.

 

Documentation: How to use the GEN_DATA file list directives with NetBackup for UNIX/Linux Clients for Performance Tuning

https://www.veritas.com/support/en_US/article.000091135
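
As a rough illustration, the Backup Selections of a Standard policy using these directives looks something like this (directive names and values quoted from memory of that technote, so verify them there before use):

   GEN_DATA
   GEN_KBSIZE=1048576
   GEN_MAXFILES=1024

which would generate 1024 files of roughly 1 GB each, about 1 TB in total, without reading anything from disk.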

 

 

revarooo
Level 6
Employee
You can use GEN_DATA to have bpbkar generate the data to send; note that this data is not read from a filesystem. Alternatively, you can create real files with the Unix 'dd' command, for example:

   dd if=/dev/zero of=/filename bs=1024 count=100000
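
If you do create files with dd for the sizes mentioned above, two caveats (paths here are placeholders): bs=1M is far faster than bs=1024 for files this large, and /dev/zero output compresses almost completely, so LTO hardware compression will exaggerate apparent tape throughput. For example:

   dd if=/dev/zero of=/diskpool/test_1T bs=1M count=1048576
   dd if=/dev/urandom of=/diskpool/test_100G bs=1M count=102400

/dev/urandom gives compression-resistant data but is itself slow to generate, which is exactly the problem GEN_DATA exists to avoid.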

sdo
Moderator
Partner    VIP    Certified

For the backups in set 1, I think a bpbackup command calling a backup policy to back up a VM, combined with revarooo's advice to use a null storage unit, sounds like a good 'VM backup' speed read test (of course you won't be able to use any of these backups as a restore test), because AFAIK one cannot do a "bpbkar test to null" for a VM backup.

1) pure read test - use CLI bpbackup a VM from storage to null storage unit

2) pure write test - use policy GEN_DATA to tape

3) backup test - use CLI bpbackup a VM from storage to tape

4) restore test - use CLI nbrestorevm a VM from tape to storage
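
For 4), a minimal sketch, assuming a VMware VM (the name testvm01 is hypothetical) restored to its original location from its most recent backup:

   nbrestorevm -vmw -C testvm01 -O

where -O allows overwrite of the existing VM; add -R with a rename file to redirect the restore to the target storage under test.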

 

sdo
Moderator
Partner    VIP    Certified

4) Using the nbrestorevm command to restore virtual machines into vSphere:

http://www.veritas.com/docs/000074167

 

sdo
Moderator
Partner    VIP    Certified

If you can get a LUN presented from both the old and new storage arrays to a Windows host, then I could share some tips on using an MS tool named "sqlio" to exercise a LUN. It looks like a pretty simple tool, but don't be fooled: used well it can do a lot for you, it lets you compare apples with apples, and it can easily be run repeatedly. In fact I might even have an old script kicking around which performs a range of IO tests using sqlio and collects all the stats together... I'll see if I can find it. The sqlio tool is quite similar to the NetBackup 'nbperfchk' command.
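
For what it's worth, the nbperfchk equivalent of a basic sqlio run would be something like this (syntax from memory, so check the command's own usage output first; the mount point is a placeholder):

   /usr/openv/netbackup/bin/support/nbperfchk -i zero: -o /mnt/newlun/testfile -s 16g -syncend
   /usr/openv/netbackup/bin/support/nbperfchk -i /mnt/newlun/testfile -o null:

The first measures write throughput to the LUN including the final sync; the second reads the file back and discards it.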

But I guess the real tool to use would be IOmeter, which is very good, a little tricky to set up the first time you use it, and LUN-destructive, so make sure you only run IOmeter on a system that cannot see any real production LUNs.

mph999
Level 6
Employee Accredited

1c - You can get the throughput from the bptm log ....
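
For example, on a UNIX media server with the bptm debug log enabled (default path shown):

   grep -i "Kbytes/sec" /usr/openv/netbackup/logs/bptm/log.*

and the "waited for full buffer" counts in the same log will tell you whether the data source, rather than the drive, is the bottleneck.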


Marianne
Moderator
Partner    VIP    Accredited Certified

My 2c:

Backup devices are hardly ever the bottleneck. 
If you are not currently maxing out the backup devices (e.g. sustained throughput in excess of 100MB/sec) then new, faster devices are not going to ensure faster backups.
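
For a sense of scale (using published native speeds, not figures from this thread): LTO-4 streams at roughly 120 MB/sec native and LTO-6 at roughly 160 MB/sec, so 10 TB of pure streaming works out to about 23 hours versus about 17 hours. The drive upgrade alone only buys you that ~33% if the rest of the data path can already saturate the LTO-4s.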

I would think twice about the Oracle ZFS Appliance.

If you look at 'Supported OpenStorage Functionality' in the HCL, you will see that its ONLY OST feature is Opt_Dup.

If you look at other Appliances, (including NetBackup Appliances) you will see these features: 
A.I.R., Accelerator, Accel_VMware, Granular_Recovery, GRT_VMware, IR_VMware, Opt_Dup, Opt_Synth, Targeted_A.I.R., Accel_NDMP, Direct_Copy, Direct_Copy_shared, etc.

Michael_G_Ander
Level 6
Certified

Another thing that might be interesting to test is backup/restore of many mixed-size files, to simulate a backup and restore of a file server. Here, tests with sqlio and a single big file do not always give the true picture.

Agree with Marianne that faster backup devices are not the only thing to look at to ensure faster backups and, probably more important, faster restores.

The only test that does not seem to have been mentioned is using bpbkar -nocont to read the file system, which should give you an idea of how fast the data can be delivered from the client(s).
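
A minimal sketch of that read test on a UNIX client (the path is a placeholder; the data is discarded, so 'time' gives you the raw read throughput):

   time /usr/openv/netbackup/bin/bpbkar -nocont /data > /dev/null 2> /tmp/bpbkar_readtest.log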

The standard questions: Have you checked: 1) What has changed. 2) The manual 3) If there are any tech notes or VOX posts regarding the issue

backup-botw
Level 6

Thank you very much for pointing that out! I had not looked at the latest HCL.

Is there a preferred storage device?

Marianne
Moderator
Partner    VIP    Accredited Certified
'Is there a preferred storage device?' Are you asking Veritas NetBackup fanatics? NetBackup Appliances! What else?

Jokes aside:
  • Look at OST features in the HCL. Decide which ones are required and/or nice to have in your environment.
  • Look for OST vendor devices that support the features you want/need.
  • Find out about vendor support in your city/town for the vendors/devices you have shortlisted.
  • Get an idea of sizing and price for your needs from the different vendors.
  • Find out if the different vendors are prepared to do a POC. Be fair and use the same test criteria for all vendors.

Do not be intimidated by the biggest vendors in the market with the best stats on paper. I have seen many disillusioned customers after a year or two: battling with devices filling up because maintenance/cleanup does not run often enough, devices/environments too busy to finish cleanup, leaving tons of orphaned images on the appliance that need to be cleaned up by hand, unresolved support calls with the vendor, and eventually throwing them out...

backup-botw
Level 6

I think we would honestly go the Veritas Appliance route if it wasn't for the licensing model. The cost just isn't effective, especially when you are looking to swap out an entire environment. We have a ton of licensing in place now; if we opted to switch to an appliance setup we would get pretty much nothing for the licensing we already have, and with the relationship we have with another vendor, our disk comes at a much better price than anything Veritas can offer.

In regards to the HCL, I did see the DD on there, but I don't see anything like VNX or VMAX. Does anyone have any information on those two devices? I was surprised not to see them mentioned in the HCL at all.

sdo
Moderator
Partner    VIP    Certified

The VNX and VMAX are NAS and block storage arrays. They're not "appliances" in the way that products from EMC Data Domain, Veritas, ExaGrid, Quantum DXi, HP StoreOnce and so on can be. Or, better expressed: EMC has not spent effort developing OST support for VNX and VMAX because they're both different types of products.

Marianne
Moderator
Partner    VIP    Accredited Certified
I feel we're going in the wrong direction now, no longer covering what you actually asked in the opening post. But I just NEED to tell you that you do NOT have to change to the full Capacity licensing model for NBU Appliances: an NBU Appliance simply needs a tier 2 Enterprise Server license and a Data Optimization Option license for front-end TB. You also need these licenses with media servers that use 3rd-party dedupe appliances.

Michael_G_Ander
Level 6
Certified

If you just want to replace the block storage disk system under your disk pools, you probably want to look at ease of migration from your current VNX to the new disk system, together with the performance stats. If the vendor talks about storage virtualization, make sure to go through the limitations/caveats list; not all of them can handle the typically large dedupe disk sizes.

The standard questions: Have you checked: 1) What has changed. 2) The manual 3) If there are any tech notes or VOX posts regarding the issue

backup-botw
Level 6

I think the discussion is right on target with my question: benchmarking for vendor device comparisons as we plan to migrate to new disk hardware or an appliance. All of the information is certainly helping me out, and I enjoy the various feedback. I definitely appreciate the different viewpoints and extra information; for me it's all tied together.