Software Defined Storage at the Speed of Flash

In a previous blog entry, Symantec and Intel introduced a Solution Overview for a storage management architecture using Software Defined Storage and Intel PCIe drives with NVMe support. The proposed solution delivered better performance at a fraction of the cost of all-flash arrays. Although the workload discussed is Oracle, this architecture can be used for any other database or application. It combines best-of-breed storage and availability solutions from Symantec, providing an end-to-end software defined solution.

The attached reference architecture paper describes that configuration in detail and goes deeper, comparing the solution with some All Flash Arrays (AFAs) whose vendors have published white papers with performance metrics. An AFA solution provides just storage, with no capability to run applications, so you still need servers to run your applications or databases, plus HBAs and switches to connect to the array, which increases both acquisition costs and operational management costs. The hyper-converged solution described in this paper uses unique Software Defined capabilities from Symantec not only to combine all the local storage, but also to run the database and applications on the same servers, keeping control, visibility and high availability for those applications. Application availability solutions from Symantec support hundreds of applications that can easily be integrated into the proposed solution.

Finally, a cost and performance comparison is outlined in the paper, showing how software is the way to unlock existing and new hardware capabilities.



Dear ccarrero,

Thank you for posting this article. I found it very interesting and useful!

I am implementing several Oracle CFS single-instance clusters similar to your lab test, so I am very interested in how you configured it and the results obtained. Ours are 2-node, 4-node and 6-node CFS Oracle clusters, with SAN-attached spinning disk and SSD disk on Linux with SFCFSHA 6.1.1.

Can you share any tips on SFCFSHA configuration/tuning for Oracle (in addition to using the Symantec ODM libraries)?

As an example, what Oracle and VxFS block sizes were used for the various Oracle data filesystems vs. the Redo log filesystems? My somewhat limited Oracle experience leads me to believe that the VxFS block size for Oracle data should match the Oracle block size (up to the VxFS maximum of 8KB), while the VxFS block size for the Redo filesystem should be 1024 bytes, because Redo is written in small blocks by Oracle. And an overriding question of... does it really make much difference anyway (vs. the default VxFS block size)?

Another area under consideration is balancing the CFS primaries for each filesystem across cluster nodes... Is that worth pursuing for performance reasons?

Thank You, Ken W, Datalink Corp

Hi Ken,

Apologies, I had not seen this comment before. I just noticed it now.

Please take a look at this deployment guide that I wrote a while ago. Although it is for a different setup, I included more tips about the Oracle configuration itself:

I followed the same rules for this new setup.

Basically, I used 8k blocks for the file system holding the data files and 4k for the file system holding the redo logs.
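As a rough sketch, the commands for such a layout on Linux would look like the following; the disk group, volume names and mount points (oradatadg, datavol, redovol, /oradata, /oraredo) are placeholders, not the names from the paper:

```shell
# Data file system: 8 KB block size, to match a typical
# Oracle db_block_size of 8k.
mkfs -t vxfs -o bsize=8192 /dev/vx/rdsk/oradatadg/datavol
mount -t vxfs -o cluster /dev/vx/dsk/oradatadg/datavol /oradata

# Redo log file system: 4 KB block size.
mkfs -t vxfs -o bsize=4096 /dev/vx/rdsk/oradatadg/redovol
mount -t vxfs -o cluster /dev/vx/dsk/oradatadg/redovol /oraredo

# Verify the block size of an existing VxFS file system.
fstyp -v /dev/vx/rdsk/oradatadg/datavol | grep bsize
```

The -o cluster mount option is what makes the file system a shared (CFS) mount rather than a local one.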

The default value is based only on the size of the file system, not on how you are going to use it. So it is a good recommendation to align it with your application needs; in this particular case, Oracle.

Balancing the CFS Primaries across different nodes is a good practice towards a balanced configuration. As you are using Single Instance, just make sure the node running the instance is the CFS Primary for that file system. When that instance needs more extents, it will make a local request instead of having to ask a different node. In any case, with Oracle, because most of the space is allocated upfront, I expect this will have a very minimal impact. But it will not hurt.
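If it helps, here is a minimal sketch of checking and moving CFS primaryship with fsclustadm; the /oradata mount point is an assumption for illustration:

```shell
# Show which node is currently the CFS primary for this mount.
fsclustadm -v showprimary /oradata

# Run on the node hosting the Oracle instance to make the
# local node the CFS primary for that file system.
fsclustadm -v setprimary /oradata
```

Repeat per mount point, spreading primaryship so that each file system's primary sits on the node that does most of the writing to it.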

Make sure Fast Failover is enabled in your configuration (page 52 of the previous link).

Thank you for stopping by!