Forum Discussion

backup-botw
9 years ago

Throughput Benchmarking

We are going to be looking into doing some upgrades in my organization, and one thing we want to do is switch out our LTO-4 drives for LTO-6 drives, as well as get off of an older VNX device that hosts our disk pools and onto something newer and faster. With that said, I am looking to do some benchmarking. Is there something I can do at the command line that basically does a test backup/restore of files of differing sizes? Basically, I want to run a backup/restore of a 1TB, 5TB and 10TB file, both to disk and to tape, and get our current stats as the environment sits right now, so that as we do demos with vendors in the coming weeks I know what I am looking to beat, if you will.

Also if anyone has any experience with the Oracle ZFS Storage ZS4-4 appliance I would be interested in hearing about that as well.

  • GEN_DATA Concept:
    A need was identified to provide a means of generating test data to process through NetBackup. This data should be:
    • Repeatable and controllable.
    • As 'light-weight' as possible during generation.
    • Indistinguishable from regular data, to allow for further processing, such as duplications, verifies, restores, etc.

    Documentation: How to use the GEN_DATA file list directives with NetBackup for UNIX/Linux Clients for Performance Tuning

    https://www.veritas.com/support/en_US/article.000091135
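
    For illustration, a test policy's Backup Selections list using the directives the article describes might look something like this (the values here are just an example - roughly 1TB of half-compressible data - so verify the exact syntax and defaults for your NetBackup version against the article):

        GEN_DATA
        GEN_KBSIZE=1048576
        GEN_MAXFILES=1024
        GEN_PERCENT_RANDOM=50

    Since the data is generated on the client at backup time, nothing has to be staged on disk first, which is what keeps it 'light-weight'.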

  • My 2c:

    Backup devices are hardly ever the bottleneck.
    If you are not currently maxing out the backup devices (e.g. sustained throughput in excess of 100MB/sec), then new, faster devices are not going to ensure faster backups (see the quick arithmetic at the end of this post).

    I would think twice about the Oracle ZFS Appliance.

    If you look at 'Supported OpenStorage Functionality' in the HCL, you will see that its ONLY OST feature is Opt_Dup.

    If you look at other appliances (including NetBackup Appliances), you will see these features:
    A.I.R., Accelerator, Accel_VMware, Granular_Recovery, GRT_VMware, IR_VMware, Opt_Dup, Opt_Synth, Targeted_A.I.R., Accel_NDMP, Direct_Copy, Direct_Copy_shared, etc.
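
    To put rough numbers on the throughput point above: at a sustained 100MB/sec, a 1TB test file streams in about 10,000 seconds (just under 3 hours), and 10TB takes over a day. For reference, LTO-4 is rated at roughly 120MB/sec native and LTO-6 at roughly 160MB/sec, so unless the current drives are already being fed at close to their rated speed, faster drives alone will not change those times much.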

  • Another thing that might be interesting to test is a backup/restore of many mixed-size files, to simulate a backup and restore of a file server. Tests with sqlio and a single big file do not always give the true picture (see the sketch at the end of this post).

    Agree with Marianne that faster backup devices are not the only thing to look at to ensure faster backups and, probably more important, faster restores.

    The only test that does not seem to be mentioned is using bpbkar -nocont to read the file system, which should give you an idea of how fast the data can be delivered from the client(s).
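
    For the bpbkar test, something along these lines (the command is from the NetBackup Backup Planning and Performance Tuning Guide - double-check the flags against your NBU version) reads the file system the way a backup would but discards the data, so you can time the raw client read speed:

        time /usr/openv/netbackup/bin/bpbkar -nocont -dt 0 -nofileinfo -nokeepalives /your/filesystem > /dev/null 2> /tmp/bpbkar.log

    If this number is close to your backup throughput, the client (not the backup device) is the limit.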
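
    And for the mixed-size file test mentioned above: if you would rather work with real files than GEN_DATA, a quick-and-dirty sketch like this (hypothetical paths and counts - tune them to match your file server profile) builds a tree of files from 1KB to 100MB:

        # create 100 files each at 1KB, 10KB, 100KB, 1MB, 10MB and 100MB
        mkdir -p /testdata
        for size_kb in 1 10 100 1024 10240 102400; do
            for n in $(seq 1 100); do
                dd if=/dev/urandom of=/testdata/f_${size_kb}kb_$n bs=1024 count=$size_kb 2>/dev/null
            done
        done

    /dev/urandom keeps the data incompressible; use /dev/zero instead if you also want to see best-case compression/dedupe behaviour.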

  • Thank you very much for pointing that out! I had not looked at the latest HCL.

    Is there a preferred storage device?

  • 'Is there a preferred storage device?' Are you asking Veritas NetBackup fanatics? NetBackup Appliances! What else? Jokes aside:

    • Look at the OST features in the HCL and decide which ones are required and/or nice to have in your environment.
    • Look for OST vendor devices that support the features you want/need.
    • Find out about vendor support in your city/town for the vendors/devices that you have shortlisted.
    • Get an idea of sizing and price for your needs from the different vendors.
    • Find out if the different vendors are prepared to do a POC, and be fair: use the same test criteria for all vendors.

    Do not be intimidated by the biggest vendors in the market with the best stats on paper. I have seen many disillusioned customers after a year or two: battling with devices filling up because maintenance/cleanup does not run often enough, devices/environments too busy to finish cleanup, leaving tons of orphaned images on the appliance that need to be cleaned up by hand, unresolved support calls with the vendor... and eventually throwing the devices out.
  • I think we would honestly go the Veritas Appliance route if it wasn't for the licensing model. The cost just isn't effective at all, especially when you are looking to swap out an entire environment. We have a ton of licensing in place now; if we opted to switch to an appliance setup we pretty much get nothing for the licensing we already have, and with the relationship we have with another vendor our disk comes at a much better price than anything Veritas can come up with.

    In regards to the HCL, I did see the DD on there, but I don't see anything like VNX or VMAX. Does anyone have any information on those two devices? I was surprised not to see them mentioned in the HCL at all.

  • The VNX and VMAX are NAS and block storage arrays. They're not "appliances" in the way that some products from EMC DataDomain, Veritas Appliances, ExaGrid, Quantum DXi, HP StoreOnce... can be - or, better expressed: EMC has not spent effort developing "OST" for VNX and VMAX because they are different types of products.

  • I feel we're going in the wrong direction now, no longer covering what you actually asked in the opening post, but I just NEED to tell you that you do NOT have to change to the full Capacity licensing model for NBU Appliances. An NBU Appliance simply needs a tier 2 Enterprise Server license plus the Data Optimization Option license for front-end TB. You also need these licenses with media servers that use 3rd-party dedupe appliances.
  • If you just want to replace the block storage disk system under your disk pools, you probably want to look at ease of migration from your current VNX to the new disk system, together with the performance stats. If the vendor talks about storage virtualization, make sure to go through the limitation/caveat list - not all of them can handle the typically large dedupe disk sizes.

  • I think the discussion is right on target with my question: doing benchmarking to do vendor device comparisons as we plan to migrate to new disk hardware or an appliance. All of the information is certainly helping me out, and I enjoy the various feedback. Definitely appreciate the different viewpoints and extra information; for me it's all tied together.