
ZFS BACKUP SLOW THROUGHPUT

CesarJimGon
Level 4

Hi.

I have a NetBackup 7.7.3 environment with a Solaris 11 SPARC Master Server that backs up an Oracle ZFS storage system, among other applications.

The ZFS shares are mounted on the Master Server over NFS, so the shares are seen as regular file systems. The data on these ZFS shares goes to LTO5 tapes; I use 2 LTO5 tape drives for this backup operation.

The communication from the ZFS storage to the Master Server goes over InfiniBand.

For the past two months, the throughput of the backups has been very slow. Before that, I registered an average throughput of around 180 MB/sec. Now the throughput is 50 MB/sec at most.

I haven't made any changes to the backup configuration.

Could you help me with some tips to improve the throughput?


12 REPLIES

Marianne
Level 6
Partner    VIP    Accredited Certified

@CesarJimGon

Have you tried to test read speed outside of NBU?

Something like:

time tar cBf /dev/null <mountpoint>

Or, instead of the mount point, use a subdirectory that is about 5 - 10 GB.
Get the size before you start in order to work out the read speed.
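
For example, a rough sketch of the whole measurement (the path is only an example; substitute one of the NFS-mounted directories):

du -sk /zfsbackup6/siebel                   # size of the test data in KB
time tar cBf /dev/null /zfsbackup6/siebel   # read-only pass; nothing is written
# read speed (KB/sec) = size in KB / the 'real' time in seconds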

Hi Marianne.

I'm testing your suggestion, but I don't think I'm doing it right.

Checking size of test directory

du -as /usr/openv/netbackup
35739031        /usr/openv/netbackup

Read test

time tar cBf /usr/openv/netbackup
tar: Missing filenames

real    0m0.020s
user    0m0.003s
sys     0m0.008s

Marianne
Level 6
Partner    VIP    Accredited Certified
You missed /dev/null.
You need to specify /dev/null as the backup device, meaning you are simply reading the files and not actually writing anything.
Please choose a folder on the NFS mount.

 Hi Marianne

Here is the output:

cd /zfsbackup6
zfsbackup6# ls -l
total 5540
drwxr-xr-x 7 nobody nobody 13368 Dec 6 2014 BKP_TRC
drwxr-xr-x 14 1001 1001 14 Feb 16 2016 bkup
drwxr-xr-x 3 1000 1001 4 May 27 2015 listeners
-rw------- 1 nobody nobody 53687091200 Nov 19 2014 load
drwxr-xr-x 3 1001 1001 6 Aug 31 2015 Patches
drwxr-xr-x 2 1001 1001 2 Sep 29 2015 siebel
drwxr-xr-x 4 1000 1001 4 May 29 2015 Traces

/zfsbackup6# time tar cBf /dev/null/zfsbackup6/siebel
tar: Missing filenames

real 0m0.011s
user 0m0.003s
sys 0m0.008s

df -k
Filesystem 1024-blocks Used Available Capacity Mounted on
rpool/ROOT/solaris 286949376 55595990 29243717 66% /
/devices 0 0 0 0% /devices
/dev 0 0 0 0% /dev
ctfs 0 0 0 0% /system/contract
proc 0 0 0 0% /proc
mnttab 0 0 0 0% /etc/mnttab
swap 32643296 2832 32640464 1% /system/volatile
objfs 0 0 0 0% /system/object
sharefs 0 0 0 0% /etc/dfs/sharetab
fd 0 0 0 0% /dev/fd
rpool/ROOT/solaris/var
286949376 592731 29243717 2% /var
swap 32824512 184048 32640464 1% /tmp
rpool/VARSHARE 286949376 2263739 29243717 8% /var/share
rpool/export 286949376 3496 29243717 1% /export
rpool/export/home 286949376 32 29243717 1% /export/home
rpool/export/home/admin
286949376 389 29243717 1% /export/home/admin
rpool/repo 286949376 11558600 29243717 29% /repo
rpool 286949376 73 29243717 1% /rpool
rpool/VARSHARE/zones 286949376 31 29243717 1% /system/zones
rpool/VARSHARE/pkg 286949376 32 29243717 1% /var/share/pkg
rpool/VARSHARE/pkg/repositories
286949376 31 29243717 1% /var/share/pkg/repositories
192.168.10.99:/export/zfsbackup6
8933536765 1934208810 6999327955 22% /zfsbackup6

Marianne
Level 6
Partner    VIP    Accredited Certified
You missed the space after /dev/null.

time tar cBf /dev/null /zfsbackup6/siebel

Thanks Marianne, and sorry

This is the output.

- NFS mount path:

# time tar cBf /dev/null /zfsbackup6/siebel

real 0m0.016s
user 0m0.004s
sys 0m0.009s

- Local path:

# time tar cBf /dev/null /usr/openv/netbackup

real 0m55.718s
user 0m9.801s
sys 0m42.954s

 

Anshu_Pathak
Level 5

First you need to isolate whether it is a read problem or a write problem. In your scenario the media server and the client are the same host, so we can try the NOSHM touch file. If the file is present, remove it; if it is not, create it.

The location of the file is either <install_path>\veritas\netbackup\ for Windows, or /usr/openv/netbackup/ for UNIX.
The touch file name is NOSHM with no extension.

https://www.veritas.com/support/en_US/article.000026448
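
For example, on this UNIX master/media server the touch file can be created and removed like this (a minimal sketch; the NOSHM behaviour itself is described in the article above):

touch /usr/openv/netbackup/NOSHM    # rerun the backup with shared memory disabled
rm /usr/openv/netbackup/NOSHM       # remove the touch file once the test is done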

There is another simple way to figure out the issue: configure a null storage unit on the media server and send a backup to it. Though the process looks like a lot of steps, it is very easy: (1) copy libstspinulldiskMT.so and libstspinulldisk.so, (2) restart the nbrmms service, (3) configure the storage server, disk pool and storage unit, (4) take a backup.

https://www.veritas.com/support/en_US/article.000013708

If the above two suggestions do not help, then it's time to get into the logs: bpbkar (from the client) and bptm (from the media server).

 

Marianne
Level 6
Partner    VIP    Accredited Certified
What is the size of the siebel directory?
Maybe check the size of all subdirectories
(du -sk /zfsbackup6/*) and test with a bit more data?

Hi Marianne

I ran the command on /zfsbackup6. This is the output:

time tar cBf /dev/null /zfsbackup6/bkup
tar: /zfsbackup6/bkup/SBLTEST_0/20151223/lvqpi843_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb01/oracle.20180731.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb01/oracle.20180719.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb01/oracle.20180717.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb01/oracle.20180724.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb02/oracle.20180724.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb02/oracle.20180717.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb02/oracle.20180719.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb02/oracle.20180731.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb03/oracle.20180719.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb03/oracle.20180731.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb03/oracle.20180724.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb03/oracle.20180717.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb04/oracle.20180719.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb04/oracle.20180731.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb04/oracle.20180724.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/U01/uxcamdb04/oracle.20180717.tar.gz too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r5t8okd9_4_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r5t8okd9_7_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r6t8okd9_9_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r7t8okd9_3_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r6t8okd9_12_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r6t8okd9_2_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r6t8okd9_11_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180723/r5t8okd9_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180730/5et9b30k_4_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180730/5ft9b30k_5_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180730/5ft9b30k_14_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180730/5et9b30k_7_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180730/5ft9b30k_9_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180730/5et9b30k_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180730/5ft9b30k_13_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d5t865q9_13_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d4t865q9_5_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d5t865q9_5_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d4t865q9_4_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d5t865q9_12_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d5t865q9_11_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d6t865q9_7_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180716/d4t865q9_1_1 too large to archive. Use E function modifier.
tar: unable to open /zfsbackup6/bkup/SBL/20150704/hjqb81gt_1_1: Permission denied
tar: /zfsbackup6/bkup/SBL/20180719/jdt8e314_7_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180719/jet8e315_5_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180719/jft8e315_7_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180719/jdt8e314_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180719/jet8e315_10_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180719/jet8e315_9_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180719/jet8e315_12_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/SBL/20180719/jdt8e314_3_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_22_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_11_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_20_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_14_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_8_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_16_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180716/vut868jp_3_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_12_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_22_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_19_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_7_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_3_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_9_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_24_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180730/kgt9b5p9_16_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180729/j8t98g29_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/apt8on45_16_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/apt8on45_6_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/apt8on45_11_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/aqt8on45_2_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/apt8on45_22_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/apt8on45_18_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/apt8on45_3_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180723/apt8on45_12_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180727/gnt936pq_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180803/qht9llgg_1_1 too large to archive. Use E function modifier.
tar: unable to open /zfsbackup6/bkup/ACR/20150704/ocqb81im_1_1: Permission denied
tar: /zfsbackup6/bkup/ACR/20180718/3ft8bfdt_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180804/rqt9o9qu_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180722/9ht8m188_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180731/mqt9dod5_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180801/o2t9gd43_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180725/eat8tu8l_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180717/28t88qsr_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_5_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_15_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_14_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_21_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_3_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_9_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_12_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180719/4mt8e75t_23_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180805/t2t9qulf_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/20180721/73t8irph_1_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/2018010801/4vso6lf1_7_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/2018010801/4vso6lf1_9_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/2018010801/4vso6lf1_2_1 too large to archive. Use E function modifier.
tar: /zfsbackup6/bkup/ACR/2018010801/4vso6lf1_1_1 too large to archive. Use E function modifier.

real 191m40.829s
user 6m12.601s
sys 43m22.584s

Marianne
Level 6
Partner    VIP    Accredited Certified

@CesarJimGon

You can see that you have files that are too large for the regular OS tar command. Have you tried again using the 'E function modifier' as suggested?

I do not have a place to test. Please try:
time tar cBEf /dev/null /zfsbackup6/bkup
(see tar man page : https://docs.oracle.com/cd/E19109-01/tsolaris8/817-0879/6mgl9vnhn/index.html )

You then need to check the size of the bkup directory: 
du -sk /zfsbackup6/bkup

Convert the time in the 'real' output to seconds, then work out the read speed in Kbytes/second.
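
For example (hypothetical size, just to show the arithmetic): if du -sk reports 10485760 KB (about 10 GB) and tar's 'real' time was 191m40s, that is 191*60 + 40 = 11500 seconds, so the read speed would be about 10485760 / 11500, roughly 911 KB/sec:

echo "10485760 / (191*60 + 40)" | bc    # prints 911 (KB/sec)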

puneet102
Level 4

Hello Cesar,

During the backup, use the command below to check disk utilization:

iostat 5
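
(On Solaris, the extended form may be more useful; this is an addition to the suggestion above, not part of it:)

iostat -xn 5    # extended per-device statistics, including NFS mounts, every 5 seconds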

You can also try the hdparm command to check disk performance on Linux:

sudo hdparm -Tt /dev/sda

where sda is the disk your backup is writing to. Please refer to the two links below and share the disk performance output.

 

Reference:

https://docs.oracle.com/cd/E23824_01/html/821-1451/spmonitor-4.html#scrolltoc

https://askubuntu.com/questions/87035/how-to-check-hard-disk-performance

 

 


Thanks
Puneet Dixit

ACCEPTED SOLUTION

Thanks Marianne.

I checked the NFS configuration on the Master Server and compared it to another Master Server with the same ZFS backup solution. I found that some mount options were missing:

root@uxvtbck1 # more /vfstab
/vfstab: No such file or directory
root@uxvtbck1 # more vfstab
#device         device          mount           FS      fsck    mount   mount
#to mount       to fsck         point           type    pass    at boot options
#
/devices        -               /devices        devfs   -       no      -
/proc           -               /proc           proc    -       no      -
ctfs            -               /system/contract ctfs   -       no      -
objfs           -               /system/object  objfs   -       no      -
sharefs         -               /etc/dfs/sharetab       sharefs -       no      -
fd              -               /dev/fd         fd      -       no      -
swap            -               /tmp            tmpfs   -       yes     -

/dev/zvol/dsk/rpool/swap        -               -               swap    -       no      -
192.168.10.50:/export/respaldo1 -       /respaldo1      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
192.168.10.51:/export/respaldo2 -       /respaldo2      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
192.168.10.50:/export/respaldo3 -       /respaldo3      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
192.168.10.51:/export/respaldo4 -       /respaldo4      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
192.168.10.50:/export/respaldo5 -       /respaldo5      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
192.168.10.51:/export/respaldo6 -       /respaldo6      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
192.168.10.50:/export/respaldo7 -       /respaldo7      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio
192.168.10.51:/export/respaldo8 -       /respaldo8      nfs     -       yes     rw,bg,hard,nointr,rsize=1048576,wsize=1048576,proto=tcp,vers=3,forcedirectio

The missing options were proto=tcp,vers=3,forcedirectio.

I added these options to the /etc/vfstab file on the Master Server. The ZFS mount points were unmounted first and mounted again after the change, so the new options took effect.

I'm running a full backup right now. The throughput is 138 MB/sec. The backups run much faster now.
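
For completeness, a minimal sketch of applying and verifying the change on one mount point (the mount point name is only an example; nfsstat -m and /etc/mnttab are the standard Solaris ways to see the options actually in effect):

umount /zfsbackup6
mount /zfsbackup6              # picks up the new options from /etc/vfstab
nfsstat -m                     # shows vers, proto, rsize/wsize for each NFS mount
grep zfsbackup6 /etc/mnttab    # confirms forcedirectio is among the active options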