02-16-2016 12:36 PM
We are going to be looking into doing some upgrades in my organization. One thing we want to swap out is our LTO-4 drives for LTO-6s, and we also want to get off an older VNX device that hosts our disk pools and onto something newer and faster. With that said, I am looking to do some benchmarking. Is there something I can do at the command line that basically does a test backup/restore of files of differing sizes? Basically I want to run a backup/restore of a 1TB, 5TB and 10TB file, both to disk and to tape, and get our current stats as the environment sits right now, so that as we do demos with vendors in the coming weeks I know what I am looking to beat, if you will.
Also if anyone has any experience with the Oracle ZFS Storage ZS4-4 appliance I would be interested in hearing about that as well.
02-16-2016 01:04 PM
I see a couple of test scenarios:
1) VM backup to tape tests.
2) VM restore from tape tests.
3) Test pure streaming to tape.
You should be able to:
1a) craft some pre-prepared backup policies
1b) craft some pre-prepared commands to execute those policies
1c) scrape the bperror logs for performance stats
similar for set 2)
For set 3): If you have a Linux/Unix host (backup client or NetBackup server) anywhere, or several, you can use GEN_DATA to really stress-test your tape drives, assuming the clients and media servers can keep up with them:
Documentation: How to use the GEN_DATA file list directives with NetBackup for UNIX/Linux Clients for Performance Tuning
https://www.veritas.com/support/en_US/article.TECH75213
...preferably a Unix/Linux-based NetBackup media server with access to the tape drives, because then the media server can generate oodles of data in RAM and seriously pump it down to the tape drives. Whereas if you only have Unix/Linux backup clients, you may need to use several of them in your GEN_DATA backup policies and multiplex them together, again to seriously push the tape drives to their limits.
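As a rough sketch of what a GEN_DATA policy's Backup Selections list can look like (the directive names and values below are from memory and purely illustrative; the authoritative directive list and syntax are in the TECH75213 article linked above, so verify against it):

```
# Backup Selections list of a Standard policy (illustrative values)
GEN_DATA
GEN_KBSIZE=1048576
GEN_MAXFILES=64
```

The point of GEN_DATA is that the client synthesizes data in memory instead of reading it from disk, which is why it isolates tape-drive and network speed from the disk subsystem.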
02-16-2016 01:39 PM
When you say "those files" which files are you referring to? Can I just create some kind of dummy files in the size increments I have described above?
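Dummy files of a given size can be knocked up at the command line. A minimal sketch (the directory path is a placeholder and the sizes are scaled way down for illustration; for the real test raise the count, e.g. bs=1M count=1048576 for a 1TB file):

```shell
#!/bin/sh
# Sketch: generate dummy files for backup benchmarking.
# /dev/urandom (rather than /dev/zero) is used so that compression or
# dedupe on the backup path does not flatter the throughput numbers.
TESTDIR=${TESTDIR:-/tmp/nbu_bench}
mkdir -p "$TESTDIR"
for size_mb in 1 5 10; do
    dd if=/dev/urandom of="$TESTDIR/test_${size_mb}MB.dat" \
       bs=1M count="$size_mb" 2>/dev/null
done
ls -l "$TESTDIR"
```

Back these files up through a normal policy to get like-for-like numbers on the old and new hardware.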
02-16-2016 01:59 PM
https://www.veritas.com/support/en_US/article.000091135
02-16-2016 03:49 PM
For the backups in set 1, I think a bpbackup command to run a backup policy for a VM, combined with revaroo's advice to use a null storage unit, sounds like a good 'VM backup' speed-read test (of course you won't be able to use any of these backups for a restore test), because AFAIK one cannot do a "bpbkar test to null" for a VM backup.
1) pure read test - use CLI bpbackup of a VM from storage to a null storage unit
2) pure write test - use a GEN_DATA policy to tape
3) backup test - use CLI bpbackup of a VM from storage to tape
4) restore test - use CLI nbrestorevm of a VM from tape to storage
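To drive those four tests repeatably, a simple timing wrapper can be scripted. A sketch follows; the policy names, schedule and VM client name in the commented examples are hypothetical placeholders, and the bpbackup/nbrestorevm options should be checked against your own environment before use:

```shell
#!/bin/sh
# Sketch: run a command and report its elapsed wall-clock time.
run_timed() {
    label=$1; shift
    start=$(date +%s)
    "$@"
    end=$(date +%s)
    echo "$label: $((end - start)) seconds"
}

# Illustrative NetBackup invocations (need a live environment, so they
# are left commented out; names below are placeholders):
# run_timed "read-to-null"  bpbackup -i -p VM_Null_Policy  -s Full -w
# run_timed "gen-data-tape" bpbackup -i -p GenData_Policy  -s Full -w
# run_timed "vm-to-tape"    bpbackup -i -p VM_Tape_Policy  -s Full -w
# run_timed "vm-restore"    nbrestorevm -vmw -C testvm01

run_timed "demo" sleep 1
```

Running each test several times and keeping the labelled timings makes it easy to compare the vendor demos against today's baseline.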
02-16-2016 04:03 PM
4) Using the nbrestorevm command to restore virtual machines into vSphere:
http://www.veritas.com/docs/000074167
02-16-2016 04:12 PM
If you can get a LUN presented from both the old and new storage arrays to a Windows host, then I could share some tips on using an MS tool named "sqlio" to exercise a LUN. It looks like a pretty simple tool, but don't be fooled: used well it can do a lot for you, it would let you compare apples with apples, and it can easily be run repeatedly. In fact I might even have an old script kicking around which would perform a range of IO tests using sqlio and collect all the stats together... I'll see if I can find it. The sqlio tool is quite similar to the NetBackup 'nbperfchk' command.
But I guess the real tool to use would be IOmeter, which is very good, though a little tricky to set up the first time you use it, and it is LUN-destructive, so make sure you only use IOmeter on a system that cannot see any real production LUNs.
02-16-2016 10:58 PM
1c - You can get the throughput from the bptm log.
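As a sketch of scraping that figure: bptm writes a summary along the lines of "<n> Kbytes at <rate> Kbytes/sec", but the exact wording varies between NetBackup versions, so the sample line below is illustrative only and the pattern should be adjusted to your actual logs (typically under /usr/openv/netbackup/logs/bptm/).

```shell
#!/bin/sh
# Sketch: convert the "Kbytes/sec" figure in a bptm log line to MB/sec.
extract_rate() {
    awk '{
        for (i = 1; i < NF; i++)
            if ($(i + 1) == "Kbytes/sec")
                printf "%.1f MB/sec\n", $i / 1024
    }'
}

# Illustrative sample line, not a real log entry:
echo "successfully wrote backup id host_1, copy 1, fragment 1," \
     "1048576 Kbytes at 146432.000 Kbytes/sec" | extract_rate
```

Feeding the whole log through extract_rate gives one MB/sec figure per matching line, handy for comparing runs.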
02-17-2016 12:19 AM
My 2c:
Backup devices are hardly ever the bottleneck.
If you are not currently maxing out the backup devices (e.g. sustained throughput in excess of 100MB/sec) then new, faster devices are not going to ensure faster backups.
I would think twice about Oracle ZFS Appliance.
If you look at 'Supported OpenStorage Functionality' in the HCL, you will see that its only OST feature is Opt_Dup.
If you look at other Appliances, (including NetBackup Appliances) you will see these features:
A.I.R., Accelerator, Accel_VMware, Granular_Recovery, GRT_VMware, IR_VMware, Opt_Dup, Opt_Synth, Targeted_A.I.R., Accel_NDMP, Direct_Copy, Direct_Copy_shared, etc.
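To put numbers on the "are you already maxing out the devices" point above: LTO-4 is nominally around 120 MB/sec native and LTO-6 around 160 MB/sec native (vendor figures, uncompressed). A quick back-of-the-envelope check along these lines shows whether faster drives can actually shorten the window; the 80 MB/sec "current observed" rate is a made-up example, so substitute your own measured throughput:

```python
# Back-of-the-envelope backup window estimate. Drive rates are nominal
# native (uncompressed) vendor figures; the "current observed" rate is
# a hypothetical example.
def backup_hours(data_tb: float, rate_mb_s: float) -> float:
    """Hours to move data_tb terabytes at a sustained rate in MB/sec."""
    return data_tb * 1024 * 1024 / rate_mb_s / 3600

for rate_mb_s, label in [(80, "current observed (example)"),
                         (120, "LTO-4 native"),
                         (160, "LTO-6 native")]:
    print(f"10 TB at {rate_mb_s} MB/sec ({label}): "
          f"{backup_hours(10, rate_mb_s):.1f} h")
```

If the measured rate is well below the LTO-4 native figure, the bottleneck is upstream of the drive, and swapping in LTO-6 alone will not shorten the window.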
02-17-2016 02:17 AM
Another thing that might be interesting to test is a backup/restore of many mixed-size files, to simulate a backup and restore of a file server. Here, tests with sqlio and a single big file do not always give the true picture.
Agree with Marianne that faster backup devices are not the only thing to look at to ensure faster backups and, probably more important, faster restores.
The only test that does not seem to be mentioned is using bpbkar -nocont to read the file system, which should give you an idea of how fast the data can be delivered from the client(s).
02-17-2016 09:31 AM
Thank you very much for pointing that out! I had not looked at the latest HCL.
Is there a preferred storage device?
02-17-2016 11:00 AM
I think we would honestly go the Veritas appliance route if it wasn't for the licensing model. The cost just isn't effective, especially when you are looking to swap out an entire environment. We have a ton of licensing in place now; if we opted to switch to an appliance setup we would pretty much get nothing for the licensing we already have, and with the relationship we have with another vendor, our disk comes at a much better price than anything Veritas can come up with.
In regards to the HCL, I did see the DD on there, but I don't see anything like VNX or VMAX. Does anyone have any information on those two devices? I was surprised not to see them mentioned in the HCL at all.
02-17-2016 11:51 AM
The VNX and VMAX are NAS and block storage arrays. They're not "appliances" in the way that some products from EMC Data Domain, Veritas, ExaGrid, Quantum DXi, HP StoreOnce... can be. Or, probably better expressed: EMC have not spent effort developing "OST" for VNX and VMAX because they are both different types of products.
02-18-2016 12:13 AM
If you just want to replace the block storage disk system under your disk pools, you probably want to look at ease of migration from your current VNX to the new disk system, together with the performance stats. If the vendor talks about storage virtualization, make sure to go through the limitation/caveat list; not all of them can handle the typical dedup disk sizes.
02-18-2016 06:39 AM
I think the discussion is right on target with my question: doing benchmarking for vendor device comparisons as we plan to migrate to new disk hardware or an appliance. All of the information is certainly helping me out, and I enjoy the various feedback. I definitely appreciate the different viewpoints and extra information, as for me it's all tied together.