NBU 10.3 tape-to-tape duplicate optimization
Hi everyone: I'm reading the tuning guide and other documentation but feeling a bit lost. We have a library with four LTO9 drives (Spec of Library), connected via FC to our NBU master server, which runs on Windows Server 2022 (sorry, I'm a bit too old to keep up with the new terms). The master uses a 1 Gbps Ethernet interface to the VMs and the rest of the infrastructure. Today I'm duplicating a 14 TB backup tape-to-tape, but I feel it needs some optimization, because I'm seeing these messages:

Jul 28, 2025 11:50:14 AM - begin Duplicate
Jul 28, 2025 11:50:17 AM - requesting resource mynbu01-hcart3-robot-tld-0
Jul 28, 2025 11:50:17 AM - requesting resource ADR025
Jul 28, 2025 11:50:17 AM - reserving resource ADR025
Jul 28, 2025 11:50:21 AM - Info bpduplicate (pid=6972) The data-in-transit encryption (DTE) is enabled, as the DTE mode of the backup image is set to 'On'
Jul 28, 2025 11:50:21 AM - Info bptm (pid=14752) start
Jul 28, 2025 11:50:21 AM - Info bptm (pid=14752) Encrypting data-in-transit
Jul 28, 2025 11:50:21 AM - started process bptm (pid=14752)
Jul 28, 2025 11:50:21 AM - resource ADR025 reserved
Jul 28, 2025 11:50:21 AM - granted resource 000025
Jul 28, 2025 11:50:21 AM - granted resource HPE.ULTRIUM9-SCSI.001
Jul 28, 2025 11:50:21 AM - granted resource mynbu01-hcart3-robot-tld-0
Jul 28, 2025 11:50:21 AM - granted resource ADR025
Jul 28, 2025 11:50:21 AM - granted resource HPE.ULTRIUM9-SCSI.002
Jul 28, 2025 11:50:22 AM - Info bptm (pid=14752) start backup
Jul 28, 2025 11:50:22 AM - Info bptm (pid=19164) start
Jul 28, 2025 11:50:22 AM - Info bptm (pid=19164) Encrypting data-in-transit
Jul 28, 2025 11:50:22 AM - started process bptm (pid=19164)
Jul 28, 2025 11:50:22 AM - Info bptm (pid=19164) reading backup image
Jul 28, 2025 11:50:22 AM - Info bptm (pid=19164) using 30 data buffers
Jul 28, 2025 11:50:22 AM - Info bptm (pid=14752) Waiting for mount of media id 000025 (copy 1) on server mynbu01.mydomain.local.
Jul 28, 2025 11:50:22 AM - started process bptm (pid=14752)
Jul 28, 2025 11:50:22 AM - mounting 000025
Jul 28, 2025 11:50:22 AM - Info bptm (pid=19164) Waiting for mount of media id ADR025 (copy 2) on server mynbu01.mydomain.local.
Jul 28, 2025 11:50:22 AM - started process bptm (pid=19164)
Jul 28, 2025 11:50:22 AM - mounting ADR025
Jul 28, 2025 11:50:22 AM - Info bptm (pid=19164) INF - Waiting for mount of media id ADR025 on server mynbu01.mydomain.local for reading.
Jul 28, 2025 11:50:22 AM - Info bptm (pid=14752) INF - Waiting for mount of media id 000025 on server mynbu01.mydomain.local for writing.
Jul 28, 2025 11:51:07 AM - mounted ADR025; mount time: 0:00:45
Jul 28, 2025 11:51:07 AM - Info bptm (pid=19164) ADR025
Jul 28, 2025 11:51:07 AM - Info bptm (pid=19164) INF - Waiting for positioning of media id ADR025 on server mynbu01.mydomain.local for reading.
Jul 28, 2025 11:51:07 AM - positioning ADR025 to file 11
Jul 28, 2025 11:51:36 AM - Info bptm (pid=14752) media id 000025 mounted on drive index 1, drivepath {1,0,2,0}, drivename HPE.ULTRIUM9-SCSI.001, copy 1
Jul 28, 2025 11:51:53 AM - positioned ADR025; position time: 0:00:46
Jul 28, 2025 11:51:53 AM - begin reading
Jul 28, 2025 12:40:12 PM - Info bptm (pid=19164) waited for empty buffer 55084 times, delayed 161109 times
Jul 28, 2025 12:40:12 PM - end reading; read time: 0:48:19
Jul 28, 2025 12:40:12 PM - positioning ADR025 to file 12
Jul 28, 2025 12:40:12 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 12:40:12 PM - begin reading
Jul 28, 2025 1:23:37 PM - Info bptm (pid=19164) waited for empty buffer 52280 times, delayed 136968 times
Jul 28, 2025 1:23:37 PM - end reading; read time: 0:43:25
Jul 28, 2025 1:23:37 PM - positioning ADR025 to file 13
Jul 28, 2025 1:23:37 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 1:23:37 PM - begin reading
Jul 28, 2025 2:17:07 PM - Info bptm (pid=19164) waited for empty buffer 51567 times, delayed 166587 times
Jul 28, 2025 2:17:07 PM - end reading; read time: 0:53:30
Jul 28, 2025 2:17:07 PM - positioning ADR025 to file 14
Jul 28, 2025 2:17:07 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 2:17:07 PM - begin reading
Jul 28, 2025 3:21:39 PM - Info bptm (pid=19164) waited for empty buffer 56165 times, delayed 190912 times
Jul 28, 2025 3:21:39 PM - end reading; read time: 1:04:32
Jul 28, 2025 3:21:39 PM - positioning ADR025 to file 15
Jul 28, 2025 3:21:39 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 3:21:39 PM - begin reading
Jul 28, 2025 4:21:36 PM - Info bptm (pid=19164) waited for empty buffer 57278 times, delayed 142774 times
Jul 28, 2025 4:21:36 PM - end reading; read time: 0:59:57
Jul 28, 2025 4:21:36 PM - positioning ADR025 to file 16
Jul 28, 2025 4:21:37 PM - positioned ADR025; position time: 0:00:01
Jul 28, 2025 4:21:38 PM - begin reading
Jul 28, 2025 4:56:32 PM - Info bptm (pid=19164) waited for empty buffer 41788 times, delayed 110933 times
Jul 28, 2025 4:56:33 PM - end reading; read time: 0:34:55
Jul 28, 2025 4:56:33 PM - positioning ADR025 to file 17
Jul 28, 2025 4:56:33 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 4:56:33 PM - begin reading
Jul 28, 2025 5:39:12 PM - Info bptm (pid=19164) waited for empty buffer 48429 times, delayed 133305 times
Jul 28, 2025 5:39:12 PM - end reading; read time: 0:42:39
Jul 28, 2025 5:39:12 PM - positioning ADR025 to file 18
Jul 28, 2025 5:39:12 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 5:39:12 PM - begin reading
Jul 28, 2025 6:34:23 PM - Info bptm (pid=19164) waited for empty buffer 64518 times, delayed 172092 times
Jul 28, 2025 6:34:23 PM - end reading; read time: 0:55:11
Jul 28, 2025 6:34:23 PM - positioning ADR025 to file 19
Jul 28, 2025 6:34:25 PM - positioned ADR025; position time: 0:00:02
Jul 28, 2025 6:34:25 PM - begin reading
Jul 28, 2025 8:12:14 PM - Info bptm (pid=19164) waited for empty buffer 101831 times, delayed 305001 times
Jul 28, 2025 8:12:14 PM - end reading; read time: 1:37:49
Jul 28, 2025 8:12:14 PM - positioning ADR025 to file 20
Jul 28, 2025 8:12:14 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 8:12:14 PM - begin reading
Jul 28, 2025 9:44:53 PM - Info bptm (pid=19164) waited for empty buffer 99616 times, delayed 245508 times
Jul 28, 2025 9:44:53 PM - end reading; read time: 1:32:39
Jul 28, 2025 9:44:53 PM - positioning ADR025 to file 21
Jul 28, 2025 9:44:53 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 9:44:53 PM - begin reading
Jul 28, 2025 11:50:55 PM - Info bptm (pid=19164) waited for empty buffer 128929 times, delayed 350979 times
Jul 28, 2025 11:50:55 PM - end reading; read time: 2:06:02
Jul 28, 2025 11:50:55 PM - positioning ADR025 to file 22
Jul 28, 2025 11:50:55 PM - positioned ADR025; position time: 0:00:00
Jul 28, 2025 11:50:55 PM - begin reading
Jul 29, 2025 1:22:57 AM - Info bptm (pid=19164) waited for empty buffer 127122 times, delayed 284730 times
Jul 29, 2025 1:22:57 AM - end reading; read time: 1:32:02
Jul 29, 2025 1:22:57 AM - positioning ADR025 to file 23
Jul 29, 2025 1:22:57 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 1:22:57 AM - begin reading
Jul 29, 2025 2:16:30 AM - Info bptm (pid=19164) waited for empty buffer 73439 times, delayed 171201 times
Jul 29, 2025 2:16:31 AM - end reading; read time: 0:53:34
Jul 29, 2025 2:16:31 AM - positioning ADR025 to file 24
Jul 29, 2025 2:16:31 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 2:16:31 AM - begin reading
Jul 29, 2025 3:58:59 AM - Info bptm (pid=19164) waited for empty buffer 137768 times, delayed 323641 times
Jul 29, 2025 3:58:59 AM - end reading; read time: 1:42:28
Jul 29, 2025 3:58:59 AM - positioning ADR025 to file 25
Jul 29, 2025 3:58:59 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 3:58:59 AM - begin reading
Jul 29, 2025 5:42:05 AM - Info bptm (pid=19164) waited for empty buffer 131438 times, delayed 329688 times
Jul 29, 2025 5:42:05 AM - end reading; read time: 1:43:06
Jul 29, 2025 5:42:05 AM - positioning ADR025 to file 26
Jul 29, 2025 5:42:05 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 5:42:05 AM - begin reading
Jul 29, 2025 6:52:23 AM - Info bptm (pid=19164) waited for empty buffer 88174 times, delayed 222527 times
Jul 29, 2025 6:52:23 AM - end reading; read time: 1:10:18
Jul 29, 2025 6:52:23 AM - positioning ADR025 to file 27
Jul 29, 2025 6:52:23 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 6:52:23 AM - begin reading
Jul 29, 2025 8:09:06 AM - Info bptm (pid=19164) waited for empty buffer 110740 times, delayed 257401 times
Jul 29, 2025 8:09:06 AM - end reading; read time: 1:16:43
Jul 29, 2025 8:09:06 AM - positioning ADR025 to file 28
Jul 29, 2025 8:09:06 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 8:09:06 AM - begin reading
Jul 29, 2025 9:57:42 AM - Info bptm (pid=19164) waited for empty buffer 129522 times, delayed 344650 times
Jul 29, 2025 9:57:42 AM - end reading; read time: 1:48:36
Jul 29, 2025 9:57:42 AM - positioning ADR025 to file 29
Jul 29, 2025 9:57:42 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 9:57:42 AM - begin reading
Jul 29, 2025 11:34:56 AM - Info bptm (pid=19164) waited for empty buffer 119928 times, delayed 314205 times
Jul 29, 2025 11:34:56 AM - end reading; read time: 1:37:14
Jul 29, 2025 11:34:56 AM - positioning ADR025 to file 30
Jul 29, 2025 11:34:56 AM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 11:34:56 AM - begin reading
Jul 29, 2025 1:57:12 PM - Info bptm (pid=19164) waited for empty buffer 146636 times, delayed 462434 times
Jul 29, 2025 1:57:12 PM - end reading; read time: 2:22:16
Jul 29, 2025 1:57:12 PM - positioning ADR025 to file 31
Jul 29, 2025 1:57:12 PM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 1:57:12 PM - begin reading
Jul 29, 2025 4:06:34 PM - Info bptm (pid=19164) waited for empty buffer 134277 times, delayed 423117 times
Jul 29, 2025 4:06:35 PM - end reading; read time: 2:09:23
Jul 29, 2025 4:06:35 PM - positioning ADR025 to file 32
Jul 29, 2025 4:06:35 PM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 4:06:35 PM - begin reading
Jul 29, 2025 5:58:56 PM - Info bptm (pid=19164) waited for empty buffer 134166 times, delayed 353150 times
Jul 29, 2025 5:58:56 PM - end reading; read time: 1:52:21
Jul 29, 2025 5:58:56 PM - positioning ADR025 to file 33
Jul 29, 2025 5:58:56 PM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 5:58:56 PM - begin reading
Jul 29, 2025 7:43:59 PM - Info bptm (pid=19164) waited for empty buffer 149262 times, delayed 318682 times
Jul 29, 2025 7:43:59 PM - end reading; read time: 1:45:03
Jul 29, 2025 7:43:59 PM - positioning ADR025 to file 34
Jul 29, 2025 7:44:01 PM - positioned ADR025; position time: 0:00:02
Jul 29, 2025 7:44:01 PM - begin reading
Jul 29, 2025 9:02:57 PM - Info bptm (pid=19164) waited for empty buffer 135835 times, delayed 272807 times
Jul 29, 2025 9:02:57 PM - end reading; read time: 1:18:56
Jul 29, 2025 9:02:57 PM - positioning ADR025 to file 35
Jul 29, 2025 9:02:57 PM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 9:02:57 PM - begin reading
Jul 29, 2025 10:26:39 PM - Info bptm (pid=19164) waited for empty buffer 132874 times, delayed 267257 times
Jul 29, 2025 10:26:39 PM - end reading; read time: 1:23:42
Jul 29, 2025 10:26:39 PM - positioning ADR025 to file 36
Jul 29, 2025 10:26:39 PM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 10:26:39 PM - begin reading
Jul 29, 2025 11:43:23 PM - Info bptm (pid=19164) waited for empty buffer 130568 times, delayed 262648 times
Jul 29, 2025 11:43:23 PM - end reading; read time: 1:16:44
Jul 29, 2025 11:43:23 PM - positioning ADR025 to file 37
Jul 29, 2025 11:43:23 PM - positioned ADR025; position time: 0:00:00
Jul 29, 2025 11:43:23 PM - begin reading
Jul 30, 2025 1:04:25 AM - Info bptm (pid=19164) waited for empty buffer 141535 times, delayed 283765 times
Jul 30, 2025 1:04:25 AM - end reading; read time: 1:21:02
Jul 30, 2025 1:04:25 AM - positioning ADR025 to file 38
Jul 30, 2025 1:04:25 AM - positioned ADR025; position time: 0:00:00
Jul 30, 2025 1:04:25 AM - begin reading
Jul 30, 2025 2:24:22 AM - Info bptm (pid=19164) waited for empty buffer 137087 times, delayed 275441 times
Jul 30, 2025 2:24:22 AM - end reading; read time: 1:19:57
Jul 30, 2025 2:24:22 AM - positioning ADR025 to file 39
Jul 30, 2025 2:24:22 AM - positioned ADR025; position time: 0:00:00
Jul 30, 2025 2:24:22 AM - begin reading
Jul 30, 2025 3:40:34 AM - Info bptm (pid=19164) waited for empty buffer 128606 times, delayed 258613 times
Jul 30, 2025 3:40:34 AM - end reading; read time: 1:16:12
Jul 30, 2025 3:40:34 AM - positioning ADR025 to file 40
Jul 30, 2025 3:40:34 AM - positioned ADR025; position time: 0:00:00
Jul 30, 2025 3:40:34 AM - begin reading
Jul 30, 2025 4:54:11 AM - Info bptm (pid=19164) waited for empty buffer 118125 times, delayed 238708 times
Jul 30, 2025 4:54:11 AM - end reading; read time: 1:13:37
Jul 30, 2025 4:54:11 AM - positioning ADR025 to file 41
Jul 30, 2025 4:54:11 AM - positioned ADR025; position time: 0:00:00
Jul 30, 2025 4:54:11 AM - begin reading
Jul 30, 2025 6:16:58 AM - Info bptm (pid=19164) waited for empty buffer 132649 times, delayed 266171 times

With your expertise: can we squeeze more out of this hardware by playing with the size and/or number of data buffers, or is NBU already getting the maximum out of the drives? The server has 128 GB of RAM and two Intel(R) Xeon(R) Silver 4310 CPUs @ 2.10 GHz (2095 MHz), 12 cores / 24 logical processors each.
The original backup took almost 96 hours, but since LTO9 drives have native speeds around 400 MB/s I was hoping for a tape-to-tape copy of roughly 24 hours at most; at this point it is 47% complete after 54:40 (hours:minutes), so it looks like this copy will take quite a bit longer. Sorry about the length of the question, but I'm keen to hear your thoughts!
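As a sanity check before touching anything, the figures quoted in the post already show how far below LTO9 native speed this duplication is running. The sketch below is plain arithmetic on those numbers (it assumes "14 TB" means decimal terabytes and that progress is roughly linear):

# Back-of-the-envelope rate from the job's own numbers.
image_bytes = 14e12                      # "14 TB" image (assumed decimal TB)
done_fraction = 0.47                     # 47% complete
elapsed_s = 54 * 3600 + 40 * 60          # 54:40 hours:minutes

rate_mb_s = image_bytes * done_fraction / elapsed_s / 1e6
print(f"effective duplicate rate ~ {rate_mb_s:.0f} MB/s")                         # ~33 MB/s
print(f"projected total time    ~ {image_bytes / (rate_mb_s * 1e6) / 3600:.0f} h")  # ~116 h

Since the reading bptm logs "using 30 data buffers" and millions of "waited for empty buffer" delays, the usual first experiment is the bptm buffer touch files on the media server. The sketch below is only illustrative: it assumes a default Windows install path, and the values (256 KB buffers, 256 buffers) are commonly cited starting points, not a recommendation from the tuning guide. Note that a larger SIZE_DATA_BUFFERS changes the tape block size, so verify restores and duplicates after changing it; as far as I know, new values are picked up by the next bptm that starts.

from pathlib import Path

# Assumed default install path on the Windows master/media server; adjust to yours.
CONFIG_DIR = Path(r"C:\Program Files\Veritas\NetBackup\db\config")

# Illustrative starting points only. The job log above shows the values
# currently in effect ("using 30 data buffers").
settings = {
    "SIZE_DATA_BUFFERS": 262144,    # bytes per buffer; must be a multiple of 1024
    "NUMBER_DATA_BUFFERS": 256,     # buffers per tape bptm
}

CONFIG_DIR.mkdir(parents=True, exist_ok=True)
for name, value in settings.items():
    (CONFIG_DIR / name).write_text(f"{value}\n")   # touch files contain just the number
    print(f"wrote {CONFIG_DIR / name} = {value}")

Change one value at a time and compare the "waited for empty/full buffer ... delayed ..." counts on the next job to see whether the bottleneck moves.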
Duplicate job copies data from the original location instead of the source job location

We use Backup Exec 22 Rev 1193.1009. The Backup Exec server (a ProLiant BL460c Gen9) is connected to a SAN and an LTO8 library, and the Ethernet connection is 10 Gb/s. Backups are first saved to SAN deduplication storage, and then a duplicate job runs. We intended the duplicate job to read from the SAN backup, not from the original location, because the path from SAN to tape is much faster than the path from the original data location to tape. The idea was to build a system where 8 backups run simultaneously to the SAN. They are slow (because they copy data from the original location), but we don't care, since we can run many backups in parallel. Then, when the first-stage backups are finished, they are duplicated to tape in only 3 streams (the library has only 3 drives), which is fine because the SAN-to-LTO8 speed is fast. But it seems that the duplicate jobs read from the original location instead of from the SAN where the first-stage backups are stored. This is very bad for us, because we need to back up about a hundred servers and some of them are large; if the duplicate job copies data from the original location instead of from the SAN, we lose roughly 6-8x in speed. I opened a topic about this issue a year ago and was told that duplicate jobs read from the primary backup, not from the original data location: Does duplicate JOB copy data from source VM or fro... - VOX (veritas.com). I tried enabling/disabling the "Direct copy to tape" checkbox, but Backup Exec still reads the duplicate data from the original location. I also tried saving the first stage of the backup to regular disk storage (without deduplication) on the Backup Exec server itself, so the speed should be very high, but Backup Exec still reads the duplicate from the original location rather than from that disk storage. I also checked the bandwidth of the optical Ethernet link to the Backup Exec server; it is utilized at no more than 30%. I looked through all the backup options but found nothing that controls where duplicate jobs read from: the original location or the first-stage backup. Is it possible to configure duplicate jobs to read from the first-stage backup location, and how? This would increase the overall speed of my backup system by roughly 6-8x.
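For what it's worth, converting the figures quoted above into one unit shows the size of the gap. The snippet below is plain arithmetic on the post's numbers; the per-drive LTO8 native rate is my own ballpark assumption, not a Backup Exec figure:

# Rough arithmetic on the quoted figures: 10 Gb/s link, <=30% observed
# utilization, 3 tape drives. The LTO8 native rate is an assumed ballpark.
link_mb_s = 10_000 / 8                 # 10 Gb/s Ethernet ~ 1250 MB/s
observed_mb_s = link_mb_s * 0.30       # <=30% utilization ~ 375 MB/s total
per_drive_mb_s = observed_mb_s / 3     # split across 3 LTO8 streams ~ 125 MB/s
lto8_native_mb_s = 300                 # assumption: rough per-drive native speed

print(f"observed total ingest ~ {observed_mb_s:.0f} MB/s")
print(f"per tape drive        ~ {per_drive_mb_s:.0f} MB/s")
print(f"shortfall per drive   ~ {lto8_native_mb_s - per_drive_mb_s:.0f} MB/s vs assumed native")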
Why is verify speed 10 times faster than write speed on LTO8 when backing up ESXi?

We experience very slow write speeds when backing up from ESXi hosts (about 2,500 MB/min), but when it comes to verify, the speed jumps to 20,000 MB/min. This is not a problem with the write head of the LTO8 tape drive, because if I write a big file to the LTO8 drive from the local Backup Exec server the speed is 10,000 MB/min. Why does the verify speed for ESXi hosts differ so much from the write speed? (Solved)
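As a rough comparison, putting the three quoted rates into the same unit makes the gap clearer. Keep in mind they measure different parts of the path: the verify pass only re-reads the tape on the Backup Exec server itself, while the backup write first has to pull data off the ESXi host over the network/transport in use.

# Convert the quoted job rates from MB/minute to MB/second for comparison.
rates_mb_per_min = {
    "backup write from ESXi": 2_500,       # ~42 MB/s
    "verify (re-read of the tape)": 20_000, # ~333 MB/s
    "local big-file write test": 10_000,    # ~167 MB/s
}
for name, mb_per_min in rates_mb_per_min.items():
    print(f"{name}: ~{mb_per_min / 60:.0f} MB/s")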
Backup policy recovering speed slowly (disk)

Dear NetBackup folks, we have a NetBackup 8.1.1 server and mostly Solaris clients. The server is connected to the clients over a fibre-optic LAN, and the backup storage, a Dell EMC Data Domain 2500, is mounted on the NetBackup server via NFS, also over fibre. Our issue is that two specific policies overlap: the one that starts first loses its backup speed and never reaches its full potential, so a policy that usually finishes in 20 hours takes around 30 hours. How could we configure client or server settings so that, when one of the affected policies finishes, the other regains its maximum speed? We have done the server tuning recommended by the backup storage vendor:

✔ Include the following lines in /etc/system for TCP:
set ndd:tcp_wscale_always=1
set ndd:tcp_tstamp_if_wscale=1
set ndd:tcp_max_buf=16777216
set ndd:tcp_cwnd_max=8388608
set ndd:tcp_xmit_hiwat=2097152
set ndd:tcp_recv_hiwat=2097152
and for NFS:
set nfs:nfs3_nra=16
set nfs:nfs3_max_transfer_size=1048576
set nfs:nfs3_bsize=1048576
set nfs:nfs3_max_threads=256
set rpcmod:clnt_max_conns=64
✔ Add the NFS client mount options "rsize=1048576,wsize=1048576"
✔ Ensure NFS patch 147268-01 or kernel patch 147440-05 (or later) is installed for Solaris.

These changes were made on the NetBackup media server; speeds improved afterwards, but we still see the problem described above when the two policies overlap, and in my opinion backup speed does not recover quickly enough when one of those policies finishes. Should we also make the recommended changes on the NetBackup clients so that backup speeds recover more quickly? TL;DR: we can reschedule the policies with little effort, but how do we configure them to reach maximum speed in less time?
Optimized Synthetic Backups: Taking the art and science of synthesizing backups to a cut above

The concept of synthesizing full backups from a previous full and new incremental backups has been in the industry for a while. The idea is to run incremental backups on the client system so that its CPU and memory resources are used minimally. In addition, because the full data set is not moved over the network, it is a great way to save network bandwidth.
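To make the idea concrete, here is a toy, catalog-level sketch of what "synthesizing" a full means: start from the catalog of the last real full and overlay each incremental in order, so the newest copy of every object wins. This only illustrates the concept; it is not NetBackup's actual algorithm or on-media format.

from typing import Dict, List, Optional

def synthesize_full(prior_full: Dict[str, str],
                    incrementals: List[Dict[str, Optional[str]]]) -> Dict[str, str]:
    """Return a catalog (path -> data reference) representing the synthetic full."""
    synthetic: Dict[str, str] = dict(prior_full)   # start from the last real full
    for inc in incrementals:                       # apply incrementals oldest -> newest
        for path, ref in inc.items():
            if ref is None:                        # file deleted since the previous backup
                synthetic.pop(path, None)
            else:
                synthetic[path] = ref              # newer data reference wins
    return synthetic

# Tiny illustration: one changed file, one new file, one deletion.
full = {"/etc/hosts": "full:0001", "/var/log/app.log": "full:0002"}
incs = [{"/var/log/app.log": "inc1:0001"},
        {"/home/user/new.txt": "inc2:0001", "/var/log/app.log": None}]
print(synthesize_full(full, incs))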