Forum Discussion

severgun (Level 3) · 7 years ago

ZFS on Linux slow client performance


I have a file server with a ZFS file system on CentOS 7 Linux.
The ZFS pool is configured with:
compression = lz4
xattr = sa
acltype = aclposix
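For reference, properties like these are set and checked with `zfs set` / `zfs get`; a sketch, assuming a hypothetical dataset name `tank/archive` (note that the ZFS property value is spelled `posixacl`):

```shell
# Set the dataset properties described above (dataset name is hypothetical)
zfs set compression=lz4 tank/archive
zfs set xattr=sa tank/archive
zfs set acltype=posixacl tank/archive   # "aclposix" above corresponds to the posixacl value

# Verify the current values
zfs get compression,xattr,acltype tank/archive
```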

I created 2 different test files with dd bs=1M count=10000 if=/dev/urandom of=testfile, one on an XFS partition and one on a ZFS partition.
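Worth noting: data from /dev/urandom is essentially incompressible, so lz4 adds CPU work on this test file without saving space. The actual effect on the dataset can be checked with the `compressratio` property (dataset name below is hypothetical):

```shell
# See how well the dataset's data actually compresses (dataset name is hypothetical)
zfs get compressratio tank/archive
# For a file filled from /dev/urandom, expect a ratio close to 1.00x
```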

time /usr/openv/netbackup/bin/bpbkar -nocont /archive/testfile > /dev/null

First file (XFS), first try:
real 0m48.221s
user 0m0.121s
sys 0m14.606s
Second try: less than 4 sec.

Second file (ZFS):
real 1m9.107s
user 0m0.153s
sys 0m21.183s
First and second tries take the same time.

cat of the file to /dev/null takes 27 sec.
htop/iotop show 300+ MB/s read, but effective throughput seems more like 150 MB/s.
An actual backup to tape shows even worse speed (50-55 MB/s) in the Admin Console.
No errors in the bpbkar logs with VERBOSE=5.

5 Replies

  • Are you using compression on the ZFS pool ?

    compression = lz4

    if yes - every read has to be decompressed by the CPU, reducing effective disk speed considerably. Try a pool without compression, if possible.

    The second try is faster because the data is already in cache.
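To make first and second runs comparable, the caches can be cleared between runs; a sketch, needs root, and the pool name `tank` is hypothetical:

```shell
# Flush dirty pages, then drop the Linux page cache (root required)
sync
echo 3 > /proc/sys/vm/drop_caches

# Note: ZFS reads go through the ARC, which drop_caches does not empty;
# exporting and re-importing the pool clears it (pool name is hypothetical)
zpool export tank && zpool import tank
```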

    • tunix2k (Level 5)



      lz4 is the most efficient compression you can use in ZFS.

      Like Nicolai said, you should test ZFS with and without compression, and watch the CPU load.

      Additionally, you may play around with recordsize (this is the block size). Changes to recordsize and compression only take effect for newly written files; blocks already written to ZFS will not be recompressed.

      example (as I would do it on Solaris), creating one dataset with compression and one without:

      zfs create -o compression=lz4 -o mountpoint=/nbutest/lz4zfs mypool/lz4_vol

      zfs create -o compression=off -o mountpoint=/nbutest/plainzfs mypool/plain_vol
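To test the recordsize suggestion the same way, it can be set at dataset creation time; a sketch with hypothetical names and an illustrative value (recordsize=1M requires the large_blocks pool feature, and only applies to newly written files):

```shell
# Create a test dataset with a larger record size (names/values are hypothetical)
zfs create -o recordsize=1M -o mountpoint=/nbutest/rs1m mypool/rs1m_vol

# Confirm the setting
zfs get recordsize mypool/rs1m_vol
```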

      Is your ZFS on CentOS a kernel module, or are you using FUSE? This may have an impact, especially if you have higher CPU load.




      • severgun (Level 3)


        1) Yes, ZFS compression is on and I need it. I tried without compression as tunix2k suggested; almost the same time results.
        2) ZFS is the ZFS on Linux 0.7.3 kernel module.
        3) CPU load is 10-30%, peaking to 90% on one core. I understand that compression/decompression is CPU hungry.
        4) Record size? Hmm... okay. But why are cat or copy (even over the network) of this file so much faster than bpbkar?