Benchmark Results show VxFS faster than ZFS

charmer
Last week Symantec published some benchmark results comparing Storage Foundation and ZFS that suggest VxFS is around 3 times faster than ZFS for workloads typical of many commercial applications. These results contrast sharply with some benchmark results published by Sun, which suggest that VxFS is about 1/3 the speed of ZFS.

I'm sure this is going to leave a lot of people scratching their heads and asking, "How can the results be so different?" The complete answer to that question is quite long, but I can try to offer a summary. Unfortunately, that will leave out many important details. I hope to address those in another article.

The short answer is that Symantec's results are based on running benchmarks that are very similar to the industry-standard TPC-C and SPEC SFS benchmarks and cranking them up to the limits of the hardware.

The Sun results are based on running small micro-benchmarks on a very large system and measuring the performance. (Micro-benchmarks are tests designed to exercise one specific thing to the exclusion of others. They're great for optimizing the system a piece at a time, but not so good for predicting overall application performance, unless that's the only thing the application does.)

Sun's Createfiles Test

For example, the test labeled "createfiles" consists of creating 100,000 files and writing an average of 16 Kbyte to each one. Now, 100,000 sounds like a lot, but multiplying these two together shows that the total amount of file data involved is only about 1.5 Gbyte. Since the test doesn't involve any synchronous writes of the data and the box has 96 Gbyte of RAM, nothing about this test requires that any of that data be written to disk during the measurement period. Similarly, the box has 24 processors but only 16 threads are used to create and write to the files; presumably the other processors are either idle or running the background daemons started by the file system.
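
A quick back-of-the-envelope check makes the point; the file count, write size, and memory size below are taken directly from the test description above:

    # Back-of-the-envelope check: does the createfiles working set fit in RAM?
    files = 100_000              # files created by the test
    avg_write_kb = 16            # average data written per file, in Kbyte

    total_data_gb = files * avg_write_kb / (1024 * 1024)
    ram_gb = 96                  # memory on the test system

    print(f"total file data: {total_data_gb:.2f} GB")        # ~1.53 GB
    print(f"fraction of RAM:  {total_data_gb / ram_gb:.1%}")  # ~1.6%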

The interesting question is, of course, what happens when there's enough data that you run out of memory and something *does* need to be flushed to disk. If I had been designing the filebench runs, I would have made sure to create workloads that were larger than memory and/or consumed all available CPU; a rough sketch of what I mean follows below. (I'm hoping to get time to try that.) But until that happens, it's worth remembering that the results published by Symantec definitely required that files be read from and written to disk.
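
For what it's worth, here is a minimal Python sketch of the kind of run I have in mind. The mount point, file count, and file size are made up for illustration (a real test would be driven through filebench itself, with multiple threads); the idea is simply to write more data than the machine has memory and force it to disk:

    import os

    # Hypothetical parameters: ~195 GB of file data on a 96 GB machine, so the
    # file system cannot satisfy the whole run from the page cache.
    TARGET_DIR = "/mnt/testfs"        # assumed mount point of the file system under test
    FILE_COUNT = 200_000
    FILE_SIZE = 1024 * 1024           # 1 Mbyte per file

    def create_files():
        data = os.urandom(FILE_SIZE)
        for i in range(FILE_COUNT):
            path = os.path.join(TARGET_DIR, f"file{i:06d}")
            with open(path, "wb") as f:
                f.write(data)
                # fsync forces the data to disk before the call returns,
                # unlike the purely in-memory createfiles run above.
                os.fsync(f.fileno())

    if __name__ == "__main__":
        create_files()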

Even given the "memory only" nature of this micro-benchmark, VxFS shows up a good deal worse than I would have expected. When I get some time I'm going to run Filebench on a larger system and try to figure out exactly why VxFS looks so bad. I'll look at some of the other tests at the same time to get a better understanding of exactly what they do.

Why not SPEC SFS and TPC-C results?

I'm sure some readers are asking, "Why didn't Symantec post actual TPC Benchmark C and SPEC SFS results?" The answer is that it's very difficult.

For example, TPC-C results can only be published after they've been submitted to the TPC, which requires that they be run in a controlled and audited fashion. Further, if you want to run them using Oracle (and that's our standard database for benchmarking), then you can't publish them without Oracle's permission (as part of the contract you sign when you buy the database). Oracle has an understandable interest in ensuring that only accurate TPC-C results involving their database are published, but the combination of these requirements makes it difficult and expensive to publish actual TPC-C results.

It's easier to publish SPEC SFS results, though it can still be problematic. For example, any member of SPEC may contest published SPEC benchmark results and "request that the result be withdrawn from the public forum in which it appears and that the benchmarker correct any deficiency in product or process before submitting or publishing future results". While benchmark results that are unfavorable to another vendor's products can eventually make it through this process and be published, it can be time-consuming (I'm told it can take a year).

For these reasons Symantec chose to publish results using benchmarks that are very similar to the industry-standard benchmarks, but they cannot be considered TPC-C or SPEC SFS results. Since these results are not official SPEC SFS and TPC-C benchmark results, they should be viewed with a much greater degree of skepticism than would be applied to official results.

Conclusion

Benchmark results are only useful for predicting the performance of an application to the extent that they simulate the behavior of that application. If you have an application that doesn't require synchronous writes, where the total amount of file data is smaller than your system memory, and you don't need to use that memory for anything else (like the application itself), then Sun's results will probably offer a more accurate prediction of your application's performance. If the application reads and writes files whose total size, plus the size of the application, is larger than memory, then the Symantec results may offer a more accurate prediction.
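
To put that rule of thumb in concrete terms, here is my own simplification of it (the function and its inputs are illustrative, not part of either benchmark):

    def more_predictive_results(file_data_gb, app_memory_gb, ram_gb, uses_sync_writes):
        """Rough rule of thumb for which set of published results better matches a workload."""
        if not uses_sync_writes and (file_data_gb + app_memory_gb) < ram_gb:
            return "Sun's micro-benchmark results (everything fits in memory)"
        return "Symantec's results (the workload has to reach the disks)"

    # The createfiles test itself: ~1.5 GB of data, no sync writes, 96 GB of RAM.
    print(more_predictive_results(1.5, 0, 96, uses_sync_writes=False))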

In short, care is required when interpreting benchmark results.

Notes

SPEC and SFS are trademarks of the Standard Performance Evaluation Corporation.

TPC and TPC Benchmark C are trademarks of the Transaction Processing Performance Council.