The question: Is CFS scalable? What performance hit is there from running VxFS in a clustered configuration?
In sales situations we are often asked about the performance implications of running CFS. Customers are eager to know what the performance hit would be from operating in a clustered environment. This is particularly interesting to customers considering our CFS HA solution as an upgrade from the regular Storage Foundation HA solution. They want to know what CFS is going to cost in performance, and so did we.
The test: Run a workload on 1, 2, 4, 8, and 16 nodes and measure throughput.
With the outstanding efforts of the Performance Engineering Group, we were able to measure I/O workloads in a clustered environment ranging from 1 node to 16 nodes. For completeness, we ran tests that measured a workload either as streaming I/O directly to our cluster file system or through NFS running on top of it. We ran these tests at different node counts to see what performance impact we could measure. Furthermore, when we ran the tests, all nodes were performing streaming I/O concurrently. In other words, this was not just an active/passive configuration (which places lesser demands on the cluster) of the kind one might see in a common CFS HA setup.
The results: 99% scalability at 16 nodes!!!
What we found was the following:
For streaming i/o:
- with 1 node we got an average of 146.9 MB/sec; this became our baseline for comparison.
- with 2 nodes running we still achieved 100% of baseline throughput. In fact, with 4, 8, and even 16 nodes we still saw 99% of that level. At 16 nodes our tests were pushing 145.4 MB/sec. The performance impact was almost nothing.
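The headline 99% figure is simply the ratio of the measured throughputs. A minimal Python sketch, using the numbers reported above and assuming the 16-node figure is the per-node rate under concurrent load (the helper name is ours, not from the test harness):

```python
def scaling_efficiency(baseline_mb_s: float, clustered_mb_s: float) -> float:
    """Fraction of single-node baseline throughput retained at a given node count."""
    return clustered_mb_s / baseline_mb_s

baseline = 146.9       # MB/sec, streaming I/O on 1 node
sixteen_node = 145.4   # MB/sec at 16 nodes, all nodes streaming concurrently

print(f"{scaling_efficiency(baseline, sixteen_node):.1%}")  # prints "99.0%"
```

The point is that efficiency is measured per node against the single-node baseline, so a value near 100% means adding nodes cost essentially nothing in throughput.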
For more information:
Our TPM Oscar has produced a white paper detailing the results of this test, and we will be publishing it shortly. In the meantime, we are happy to let our prospective customers know that we have an extremely scalable solution even at high node counts.
When the paper and press release become available, this article will be updated to point to the link.