
ZFS on Linux slow client performance

severgun
Level 3

Hello.

I have a fileserver with a ZFS file system on CentOS 7 Linux.
The ZFS pool is configured with:
compression=lz4
xattr=sa
atime=off
acltype=posixacl
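
For reference, these properties can be verified with zfs get (the dataset name "archive" is an assumption, based on the /archive path used below):

zfs get compression,xattr,atime,acltype archive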

I created two different test files with dd bs=1M count=10000 if=/dev/urandom of=testfile, one on an XFS partition and one on a ZFS partition.
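
The two runs, spelled out (/xfs is a placeholder for the XFS mountpoint; the ZFS one is /archive):

# ~10 GB of random (hence incompressible) data on each filesystem
dd bs=1M count=10000 if=/dev/urandom of=/xfs/testfile
dd bs=1M count=10000 if=/dev/urandom of=/archive/testfile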

time /usr/openv/netbackup/bin/bpbkar -nocont /archive/testfile > /dev/null

XFS
First try:
real 0m48.221s
user 0m0.121s
sys 0m14.606s
Second try: less than 4 seconds

ZFS
real 1m9.107s
user 0m0.153s
sys 0m21.183s
First and second tries give the same time

cat of the file to /dev/null takes 27 seconds.
htop/iotop show 300+ MB/s read, but the effective rate looks more like 150 MB/s.
The actual backup to tape shows even worse speed (50-55 MB/s) in the Admin Console.
No errors in the bpbkar logs with verbose=5.

5 REPLIES

Nicolai
Moderator
Partner VIP

Are you using compression on the ZFS pool?

compression=lz4

If yes, every read has to be decompressed by the CPU, reducing effective disk speed considerably. Try a pool without compression, if possible.

The second try is faster because the data is already loaded into the cache.
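
To get cold-cache numbers on both tries, the caches can be flushed between runs (assuming a pool named mypool, as in the example further down):

# Export/import the pool to empty the ZFS ARC
zpool export mypool && zpool import mypool
# Drop the Linux page cache for the XFS side of the comparison
sync; echo 3 > /proc/sys/vm/drop_caches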

tunix2k
Level 5
Partner Accredited

Hi,

 

lz4 is the most effective compression you can use in ZFS.

Like @Nicolai said, you should test ZFS with and without compression. Watch the CPU load.

Additionally, you may play around with recordsize (this is the block size). Changing recordsize and compression only takes effect for newly written files; blocks already written to ZFS will not be decompressed/recompressed. See the recordsize sketch after the example.

Example (as I would do it on Solaris):

zfs create -o mountpoint=/nbutest/lz4zfs -o compression=lz4 mypool/lz4_vol

zfs create -o mountpoint=/nbutest/plainzfs -o compression=off mypool/plain_vol
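
A sketch of the recordsize experiment (1M is only an illustrative value; recordsize above 128K needs the large_blocks pool feature, and only newly written files pick it up):

# Larger records for a streaming-read workload
zfs set recordsize=1M mypool/lz4_vol
# Rewrite the test file so its blocks use the new recordsize
cp /nbutest/lz4zfs/testfile /nbutest/lz4zfs/testfile.1m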

Is your ZFS on CentOS a kernel module, or are you using FUSE? This may have an impact, especially if you have higher CPU load.
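
A quick way to check (the in-kernel module exposes its version under /sys):

# No output from lsmod means ZFS is not loaded as a kernel module
lsmod | grep zfs
cat /sys/module/zfs/version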

 

ciao

tunix2k 

severgun
Level 3

@Nicolai @tunix2k

1) Yes, ZFS compression is on and I need it. I tried without compression, like @tunix2k suggested. Almost the same time results.
2) ZFS is via the ZFS on Linux 0.7.3 kernel module.
3) CPU load is 10-30%, peaking at 90% on one core. I understand that every compression/decompression is CPU hungry.
4) Record size? Hmm... okay. But why is a cat or copy (even over the network) of this file processed much faster than bpbkar? (Timed side by side below.)
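
The side-by-side timing of the three reads of the same file:

# All three read the same file; only bpbkar adds backup-stream processing
time cat /archive/testfile > /dev/null
time cp /archive/testfile /dev/null
time /usr/openv/netbackup/bin/bpbkar -nocont /archive/testfile > /dev/null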

Nicolai
Moderator
Partner VIP

You can't get both :)

Either good bandwidth (without compression) or good space optimization (with less bandwidth).

 

severgun
Level 3

Let me repeat: other file operations like cp perform better than bpbkar on the same file on the same filesystem. That is why I posted the question here. Maybe I missed something in the NetBackup config.
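
For what it's worth, the NetBackup buffer tunables commonly reviewed in this situation are plain touch files; the values below are illustrative only, not recommendations:

# On the client: network buffer size in bytes
echo 262144 > /usr/openv/netbackup/NET_BUFFER_SZ
# On the media server: tape buffer size and count
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 256 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS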