03-11-2020 07:14 AM
Hi!
Sorry for my English, I'm a French speaker.
Our Master (AIX), Media(Linux RH) and client(AIX) are running NB 8.1.1
When the config is set to "Always use client-side deduplication" for the client, the backup starts, but before writing begins we wait about 3 minutes...
...
2020-03-11 09:39:45 - Info {Media Server} (pid=97579) Using OpenStorage client direct to backup from client {Client} to {Media Server}
2020-03-11 09:43:07 - begin writing
2020-03-11 09:44:59 - Info bpbrm (pid=97547) from client {Client}: TRV - [/HNAS-NFS-AUTO] is on file system type autofs. Skipping
...
I did some tests, always with the same result.
Next, if I use the same client and the same data to back up, but change the setting to "Always use the media server", the backup starts immediately, with no 3-minute delay.
...
2020-03-11 09:35:56 - Info bptm (pid=97062) backup child process is pid 97092
2020-03-11 09:35:56 - Info bpbrm (pid=97038) from client SFIBD4479.INTEGPP2.RQPES: TRV - [/HNAS-NFS-AUTO] is on file system type autofs. Skipping
2020-03-11 09:35:56 - begin writing
...
I checked the firewall; nothing from that side. Both the media server and the client have the file /usr/openv/netbackup/NET_BUFFER_SZ set to "0".
If someone has any idea...
Thanks for your help or any hints!
03-11-2020 08:36 AM
Do you have a firewall in place that prevents communication on ports 10082 and 10102?
https://www.veritas.com/support/en_US/doc/80731497-127431417-0/v124789758-127431417
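If you want to rule out the firewall quickly, a plain TCP connect test from the client toward the media server is enough; here is a minimal Python sketch (the hostname `media-server.example.com` is a placeholder for your media server; 10082/10102 are the MSDP spoold/spad ports mentioned above):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connect; True only if the port accepted the connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unresolvable -> treat as blocked
        return False

for port in (10082, 10102):  # MSDP spoold / spad
    state = "open" if tcp_port_open("media-server.example.com", port) else "blocked"
    print(port, state)
```

A `blocked` result for either port from the affected clients, while the working clients show `open`, would point straight at a network rule.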
03-11-2020 04:15 PM
03-12-2020 03:46 AM
Hello,
What about the backup time as a whole? I would guess that with client-side dedup the overall backup time is shorter despite the slow start, is it not? In that case I wouldn't worry about this detail.
Regards
Michal
03-12-2020 04:31 PM
For client-side dedupe the client has to verify segment fingerprints with the MSDP deduplication database; whilst I believe a proportion of existing fingerprints may be cached on the client, new data will require a round trip to the Media Server to check whether it already has the new data's fingerprints. I would expect this would cause a delay before "begin writing" can actually start, but as others have said, overall backup times should be shorter and the actual amount of data transferred will almost always be much less. I would not worry about this...
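To make the round-trip cost concrete, here is a toy Python sketch of that idea. It is not the actual MSDP protocol (real MSDP uses variable-length segmentation and its own fingerprint scheme); the fixed segment size, MD5 fingerprints, and set-based "database" are simplifications for illustration only:

```python
import hashlib

SEGMENT_SIZE = 128 * 1024  # fixed-size segments for simplicity

def fingerprints(data: bytes):
    """Split data into segments and yield a fingerprint per segment."""
    for i in range(0, len(data), SEGMENT_SIZE):
        yield hashlib.md5(data[i:i + SEGMENT_SIZE]).hexdigest()

def backup(data: bytes, client_cache: set, server_db: set):
    """Return (server lookups needed, segments actually transferred)."""
    lookups = sent = 0
    for fp in fingerprints(data):
        if fp in client_cache:      # cache hit: no network traffic at all
            continue
        lookups += 1                # round trip: ask the media server about fp
        if fp not in server_db:
            sent += 1               # server lacks it: ship the segment data
            server_db.add(fp)
        client_cache.add(fp)
    return lookups, sent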
Andrew
03-13-2020 01:39 AM - edited 03-13-2020 06:20 AM
Hi there, just a couple of questions, re:
2020-03-11 09:39:45 - Info {Media Server}...
2020-03-11 09:43:07 - begin writing
...for selection path: /HNAS-NFS-AUTO
.
1) When using client-side dedupe is the delay of 3m 22s always exactly the same duration or very nearly the same duration?
2) Is the problem only ever experienced by this one AIX backup client, or are many different clients that use client-side dedupe affected, or are all clients that use client-side dedupe affected?
3) Does the same or similar delay problem occur with a different path from the same AIX client? i.e. does the same problem occur when the path is not an NFS mount from your HNAS?
.
Personally, I'm kind of with @andrew_mcc1, because the "apparent" delay could just be the combination of a large NFS structure plus not much changing from day to day: it may take around three minutes or more for the client to walk the HNAS-hosted folder structure before bpbkar finds anything new or changed to send to the media server, and the media server may only log "begin writing" once it actually receives its first piece of backup data.
05-14-2020 05:23 AM
I'm sorry for the delay, I was out of the office for a long time.
I did a lot of tests with client-side dedup and I always see this 3-4 minute delay for 2 clients (both in the same cluster). All other clients work fine with the same setting. We have another AIX cluster, and all of its clients work fine with client-side dedup.
This morning I tried backing up only 1 file, /etc/passwd: same delay. Taking 9 minutes overall for a 32 KB backup is a bit much!
Thanks !
06-02-2022 11:59 PM
Hi,
We are seeing the same malfunction in 8.3.0.1 on some clients.
Has anyone found the source of this problem?
Thank you.
06-03-2022 12:28 AM
Here is an example.
We lose 3 minutes before writing begins and 2 minutes after it finishes!
26 mai 2022 20:33:04 - connected; connect time: 0:00:00
26 mai 2022 20:33:05 - Info bptm (pid=259454) using 262144 data buffer size
26 mai 2022 20:33:05 - Info bptm (pid=259454) using 16 data buffers
26 mai 2022 20:33:05 - Info msdpname (pid=259454) Using OpenStorage client direct to backup from client clientname to msdpname
26 mai 2022 20:36:13 - begin writing
26 mai 2022 20:36:19 - Info dbclient (pid=38921) dbclient(pid=38921) wrote first buffer(size=262144)
26 mai 2022 20:40:31 - Info dbclient (pid=38921) done. status: 0
26 mai 2022 20:42:26 - Info msdpname (pid=259454) StorageServer=PureDisk:msdpname; Report=PDDO Stats (multi-threaded stream used) for (msdpname): scanned: 69408 KB, CR sent: 17894 KB, CR sent over FC: 0 KB, dedup: 74.2%, cache hits: 1 (0.2%), where dedup space saving:0.0%, compression space saving:74.2%
26 mai 2022 20:42:27 - Info msdpname (pid=259454) Using the media server to write NetBackup data for backup clientname_1653589973 to msdpname
26 mai 2022 20:42:29 - Info bptm (pid=259454) EXITING with status 0 <----------
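For what it's worth, the percentages in that PDDO Stats line can be reproduced from the scanned and "CR sent" figures it reports: the overall reduction is simply 1 minus the ratio of data sent to data scanned. A quick Python check using the numbers from the log above:

```python
# Figures taken from the "PDDO Stats" line in the job log above.
scanned_kb = 69408
cr_sent_kb = 17894

reduction = 1 - cr_sent_kb / scanned_kb
print(f"dedup: {reduction:.1%}")  # -> dedup: 74.2%
```

So in this run almost nothing was saved by deduplication itself (dedup space saving 0.0%, cache hits 0.2%); the 74.2% is all compression. That fits the overall symptom: the client's fingerprint cache is not helping on these two nodes, and every segment needs work against the media server.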