Huge disparity in IO performance on primary/secondary nodes

wacky
Level 2
Guys,
 
We're running Oracle RAC with CVM/CFS under VCS 3.5 on a 2-node cluster. Recently we've noticed a huge disparity in I/O performance between the primary and secondary nodes for a filesystem. Here's a brief session of my findings:
 

root@node0 > /opt/VRTS/bin/fsclustadm -v showprimary /OraBackups
node0
root@node0 > cd /OraBackups
root@node0 > time mkfile 1g test
real 0m20.142s
user 0m0.090s
sys 0m11.442s

root@node1 > /opt/VRTS/bin/fsclustadm -v showprimary /OraBackups
node0
root@node1 > cd /OraBackups
root@node1 > time mkfile 1g test1
real 0m57.633s
user 0m0.200s
sys 0m10.262s

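To rule out node1's hardware, I'm also planning to move the CFS primary over and repeat the test there, to see whether the slowdown follows the secondary role rather than the node. A rough sketch (assuming setprimary is supported the same way as showprimary on our 3.5 install):

root@node1 > /opt/VRTS/bin/fsclustadm -v setprimary /OraBackups    # make node1 the CFS primary
root@node1 > /opt/VRTS/bin/fsclustadm -v showprimary /OraBackups   # confirm it moved
root@node1 > time mkfile 1g test2                                  # re-run the same test as primary
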
Has anyone else noticed this? If so, is there anything we can do to improve performance on the secondary?

Regards

Wacky

2 REPLIES

jsenicka
Level 2
Employee
A couple of things to note here.
1. mkfile is a very poor performance test for VxFS: it creates a file and then extends it block by block with many small writes, rather than simply allocating a file of the requested size. (See the sketch after this list for a test that avoids most of that overhead.)
2. On a CFS secondary running 3.5, every one of those block extensions requires a metadata call across the wire to the primary.
3. Oracle does not write in this manner at all, so this test is not representative of Oracle performance.
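
If you want a quick comparison that takes most of the per-block extension penalty out of the picture, try a large-block sequential write instead. A minimal sketch (the 1 MB block size and /dev/zero source are illustrative choices, not a tuned benchmark):

root@node1 > cd /OraBackups
root@node1 > time dd if=/dev/zero of=ddtest bs=1024k count=1024   # 1 GB written as 1 MB blocks
root@node1 > rm ddtest

Fewer, larger writes mean far fewer extend operations, so the secondary has to round-trip to the primary much less often.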


wacky
Level 2
Thanks, Jim, for your input. I have a couple of additional questions:
1. You imply that the problem may be version-related. Is the 'problem' fixed in any version later than 3.5?
2. If the secondary has to make a call across the wire to the primary for each metadata block extension, does that call go over the cluster interconnects? And if it does, is the speed of the interconnect important? (I've noted what I checked on our links below.)
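
For reference, here is what I looked at on our LLT links (a sketch of what I checked, assuming lltstat is in /sbin as it is on our nodes):

root@node0 > /sbin/lltstat -nvv | head   # per-node, per-link status of the LLT interconnects
root@node0 > /sbin/lltstat -l            # configured link details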
 
Regards
Wacky