Oscar_Wahlberg

Introduction

This white paper compares how Veritas Storage Foundation and Solaris ZFS perform under commercial workloads. It presents not only the raw performance numbers, but also the system and storage resources required to achieve those numbers.

Pure performance numbers are interesting, but the CPU time, memory, and disk I/O required to reach them are often ignored. In a benchmark setting the resources consumed matter less, but for sizing server and storage infrastructure in the data center this information is vital. To see the bigger picture, both performance numbers and resource utilization need to be considered.

This white paper covers two of the most common deployment scenarios for Veritas Storage Foundation: as a file server and as a database server.

Executive Summary

Solaris ZFS is a new file system with an integrated volume manager, released with Update 2 of Solaris 10 in mid-2006. It is a strong replacement for Sun's previous products, Solaris Volume Manager and UFS, but its design and immaturity still make it a less-than-ideal fit for enterprise deployment. Storage Foundation, based on Veritas Volume Manager and Veritas File System, is a proven leader in storage management software and in mission-critical, enterprise-class deployments.

As shown in this document, Veritas Storage Foundation consistently performs about 2.7 times more operations per second than ZFS in Solaris 10 Update 3 for NFS file serving, and as much as 2.3 to 5 times more transactions per second for OLTP databases. ZFS also requires more system resources from the server and storage systems.

The file system's non-overwriting (copy-on-write) design, together with aggressive read-ahead algorithms and sub-optimal handling of synchronous writes, has ZFS reading up to 48 times more data than Veritas Storage Foundation for the same file-serving workload. For synchronous writes, the current implementation of ZFS also requires two separate writes: one to the file system log (the ZFS intent log, or ZIL) and one to the file itself. With a single host and direct-attached storage this behavior limits only that host, but in a data center with a SAN (Storage Area Network), the increased bandwidth requirement will have a significant impact on the size and performance of the storage infrastructure.
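The double write applies to any application that requests synchronous semantics. The minimal C sketch below triggers that path by opening a file with O_DSYNC, so each write() must be stable on disk before it returns; on ZFS the data is first committed to the ZIL and only later written to its final location by the next transaction group commit. The file path is an illustrative placeholder, not taken from the paper's test setup.

```c
/* Minimal sketch: a synchronous write that, on ZFS, is committed
 * first to the ZFS intent log (ZIL) and later rewritten to its
 * final on-disk location by the next transaction group commit.
 * The path below is a placeholder for illustration. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[8192];
    memset(buf, 'x', sizeof(buf));

    /* O_DSYNC: write() does not return until the data is stable
     * on disk -- the case where ZFS must log to the ZIL first. */
    int fd = open("/tank/fs/testfile", O_WRONLY | O_CREAT | O_DSYNC, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        perror("write");
        close(fd);
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}
```

Databases and NFS servers issue exactly this kind of I/O (O_DSYNC writes or explicit fsync() calls) on every commit and every stable NFS write, which is why the double-write behavior shows up so strongly in both workloads measured here.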

To read the complete article, please download the PDF.
