Configuring MSDP system call failed (11)
Hi, I have set up a standalone NetBackup server on a CentOS machine. I tried to create an MSDP storage server and it did not work out; I believe the password was the issue, so I tried again and received the error below:

"Configuring media server deduplication pool nb2.teclab.eu system call failed(11) Reconfiguration failed for storage server PureDisk:nb2.teclab.eu. system call failed"

I have gone through all the available documents but still cannot sort out the issue. MSDP has not created any files in the defined location. I tried creating an AdvancedDisk and a BasicDisk, and both work fine. Is there a way to find out more about this error, perhaps a log folder I should check for more detail? Thank you.
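A minimal sketch of where to start looking, assuming a default Linux install; the log locations below are typical defaults rather than anything confirmed in this post, so verify them against the documentation for your NetBackup version (the storage path is a placeholder for wherever MSDP was pointed):

ls -l /your/msdp/storage/path/log/                         # MSDP configuration logs, e.g. pdde-config.log, if present
ls -l /usr/openv/netbackup/logs/admin/                     # storage server configuration attempts from the admin side
/usr/openv/netbackup/bin/admincmd/nbdevquery -liststs -U   # was a storage server object partially created?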
nbcheck returned exit status 127 during client install
NetBackup client 9.1 on Oracle Linux 8. Error: NetBackup_9.1_CLIENTS2/NBClients/anb/Clients/usr/openv/netbackup/client/Linux/RedHat2.6.32/nbcheck returned exit status 127.
Tried so far:
1. Removed noexec from fstab and remounted /dev/shm.
2. Changed the temporary directory with "export TMPDIR=/root/tmp/".
Both give the same result. Any other ideas without rebooting the server?
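Exit status 127 usually means the shell could not execute nbcheck at all (noexec mount, missing execute bit, or a missing loader), so a few quick checks, using the path from the error message; the mount points mentioned are assumptions about where the installer unpacks files:

findmnt -T NetBackup_9.1_CLIENTS2     # is the filesystem holding the install media mounted noexec?
mount | grep noexec                   # other noexec mounts the installer might touch (/tmp, /var/tmp, /dev/shm)
ls -l NetBackup_9.1_CLIENTS2/NBClients/anb/Clients/usr/openv/netbackup/client/Linux/RedHat2.6.32/nbcheck
file  NetBackup_9.1_CLIENTS2/NBClients/anb/Clients/usr/openv/netbackup/client/Linux/RedHat2.6.32/nbcheck
# Run it by hand to see the real error instead of just the exit status:
NetBackup_9.1_CLIENTS2/NBClients/anb/Clients/usr/openv/netbackup/client/Linux/RedHat2.6.32/nbcheck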
Netbackup client connectivity issue
bptestbpcd from the master server to the client is failing with the error below, although I am able to connect to all the backup ports, name resolution is fine, and I was able to issue a certificate successfully. Backups are still failing with status 58, can't connect to client.

vnet_registerPBXServer: ../../libvlibs/vnet_pbx.c.94: pbxRegisterExEx failed with error 110:Connection timed out
daemon_select_and_accept_ex: vnet_registerPBXServer() failed: 47

BPTESTBPCD
/usr/openv/netbackup/bin/admincmd/bptestbpcd -verbose -debug -client XXXXXXXX.XXXX.XXXX
14:32:29.524 [30023] <2> bptestbpcd: VERBOSE = 0
14:32:29.525 [30023] <2> ConnectionCache::connectAndCache: Acquiring new connection for host XXXXXXXX.XX.XXXX, query type 223
14:32:29.528 [30023] <2> logconnections: BPDBM CONNECT FROM XXX.XXX.XXX.XXX TO XXX.XXX.XXX.XXX fd = 3
14:32:29.529 [30023] <2> db_CLIENTsend: reset client protocol version from 0 to 9
14:32:29.533 [30023] <2> db_getCLIENT: db_CLIENTreceive: no entity was found 227
14:32:29.533 [30023] <2> closeConnection: Caching connection for query type 223 for reuse
14:32:29.542 [30023] <2> vnet_pbxConnect_ex: pbxConnectExEx() failed: 104
14:32:29.542 [30023] <2> vnet_pbxConnect_ex: ../../libvlibs/vnet_pbx.c.674: pbxSetAddrEx/pbxConnectExEx return error 104:Connection reset by peer
14:32:29.542 [30023] <8> do_pbx_service: [vnet_connect.c:4012] vnet_pbxConnect() failed, status=18, errno=2, use_vnetd=0, cr->vcr_service=bpcd
14:32:29.542 [30023] <8> async_connect: [vnet_connect.c:3543] do_service failed 18 0x12
14:32:29.544 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:30.547 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:32.549 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:36.551 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:44.554 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:44.554 [30023] <16> connect_to_service: connect failed STATUS (18) CONNECT_FAILED
    status: FAILED, (24) BAD_VERSION; system: (2) No such file or directory; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA pbx
    status: FAILED, (42) CONNECT_REFUSED; system: (111) Connection refused; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA vnetd
14:32:44.554 [30023] <16> connect_to_service: JSON data = {"allow_large_status": {"timestamp": 1672218149, "who": "vnet_tss_init", "line_number": 32, "comment": "allow vnet status > 255", "data": true}, "direct_connect": {"timestamp": 1672218149, "who": "connect_to_service", "line_number": 2199, "comment": "connect parameters", "data": {"who": "vnet_connect_to_bpcd", "host": "XXXXXXXX.XXXX.XXXX", "service": "bpcd", "override_required_interface": null, "extra_tries_on_connect": 0, "getsock_disable_to": 0, "overide_connect_timeout": 0, "connect_options": {"server": null, "callback_kind": {"number": 1, "symbol": "NBCONF_CALLBACK_KIND_VNETD", "description": "Vnetd"}, "daemon_port_type": {"number": 0, "symbol": "NBCONF_DAEMON_PORT_TYPE_AUTOMATIC", "description": "Automatic"}, "reserved_port_kind": {"number": 0, "symbol": "NBCONF_RESERVED_PORT_KIND_LEGACY", "description": "Legacy"}}}}, "status": {"timestamp": 1672218164, "who": "connect_to_service", "line_number": 2465, "comment": "vnet status", "data": 18}, "connect_recs": {"timestamp": 1672218164, "who": "vnet_tss_get", "line_number": 97, "comment": "connect rec status messages", "data": "connect failed STATUS (18) CONNECT_FAILED\n\tstatus: FAILED, (24) BAD_VERSION; system: (2) No such file or directory; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA pbx\n\tstatus: FAILED, (42) CONNECT_REFUSED; system: (111) Connection refused; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA vnetd"}}
14:32:44.554 [30023] <8> vnet_connect_to_bpcd: [vnet_connect.c:623] connect_to_service() failed 18 0x12
14:32:44.554 [30023] <16> local_bpcr_connect: vnet_connect_to_bpcd(XXXXXXXX.XXXX.XXXX) failed: 18
14:32:44.554 [30023] <2> local_bpcr_connect: Can't connect to client XXXXXXXX.XXXX.XXXX
14:32:44.554 [30023] <2> ConnectToBPCD: bpcd_connect_and_verify(XXXXXXXX.XXXX.XXXX, XXXXXXXX.XXXX.XXXX) failed: 25
14:32:44.554 [30023] <16> bptestbpcd main: JSON proxy message = {"allow_large_status": {"timestamp": 1672218149, "who": "vnet_tss_init", "line_number": 32, "comment": "allow vnet status > 255", "data": true}, "direct_connect": {"timestamp": 1672218149, "who": "connect_to_service", "line_number": 2199, "comment": "connect parameters", "data": {"who": "vnet_connect_to_bpcd", "host": "XXXXXXXX.XXXX.XXXX", "service": "bpcd", "override_required_interface": null, "extra_tries_on_connect": 0, "getsock_disable_to": 0, "overide_connect_timeout": 0, "connect_options": {"server": null, "callback_kind": {"number": 1, "symbol": "NBCONF_CALLBACK_KIND_VNETD", "description": "Vnetd"}, "daemon_port_type": {"number": 0, "symbol": "NBCONF_DAEMON_PORT_TYPE_AUTOMATIC", "description": "Automatic"}, "reserved_port_kind": {"number": 0, "symbol": "NBCONF_RESERVED_PORT_KIND_LEGACY", "description": "Legacy"}}}}, "status": {"timestamp": 1672218164, "who": "connect_to_service", "line_number": 2465, "comment": "vnet status", "data": 18}, "connect_recs": {"timestamp": 1672218164, "who": "vnet_tss_get", "line_number": 97, "comment": "connect rec status messages", "data": "connect failed STATUS (18) CONNECT_FAILED\n\tstatus: FAILED, (24) BAD_VERSION; system: (2) No such file or directory; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA pbx\n\tstatus: FAILED, (42) CONNECT_REFUSED; system: (111) Connection refused; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA vnetd"}}
<16>bptestbpcd main: Function ConnectToBPCD(XXXXXXXX.XXXX.XXXX) failed: 25
14:32:44.554 [30023] <16> bptestbpcd main: Function ConnectToBPCD(XXXXXXXX.XXXX.XXXX) failed: 25
<16>bptestbpcd main: cannot connect on socket
14:32:44.559 [30023] <16> bptestbpcd main: cannot connect on socket
<2>bptestbpcd: cannot connect on socket
14:32:44.559 [30023] <2> bptestbpcd: cannot connect on socket
<2>bptestbpcd: EXIT status = 25
14:32:44.559 [30023] <2> bptestbpcd: EXIT status = 25
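Given the "Connection reset by peer" on PBX and "Connection refused" on vnetd in the log above, a short sketch of checks to run on the client; 1556 (PBX) and 13724 (vnetd) are the standard NetBackup ports, and the paths assume a default Linux client install:

/usr/openv/netbackup/bin/bpps -x      # are pbx_exchange, vnetd and the other NetBackup processes running?
ss -ltn | grep -E ':1556|:13724'      # is anything actually listening on the PBX and vnetd ports?
# From the master, confirm the same ports are reachable on the client (telnet/nc to client:1556
# and client:13724) and that no firewall or proxy in between is resetting the connection.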
The certificate revocation list could not be downloaded
Hi all. I last used NetBackup about a month ago and logged in without issue (it has been in the same environment for years). I tried to log in to the console this morning but it errors with:

Status Code: 552
The Certificate Revocation List could not be downloaded. Therefore the certificate revocation status could not be verified.

I've tried various solutions suggested in other posts, including an install repair, but nothing has worked so far. Can anyone help?
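A hedged sketch of first checks for status 552, run from the affected host; the nbcertcmd options are quoted from memory, so confirm them in the Commands Reference for your release (on a Windows host the same binary lives under install_path\NetBackup\bin):

/usr/openv/netbackup/bin/nbcertcmd -getCRL            # force a fresh CRL download from the master server
/usr/openv/netbackup/bin/nbcertcmd -listCertDetails   # check the host certificate's validity dates
date                                                  # large clock skew between this host and the master can also break CRL checks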
LTO9 tape performance poor - bottlenecks?
Hi all. We've recently completed a staged, total replacement of our environment: from NBU 8.2, writing first to HPE MSA 2040 storage and then out to LTO5 drives, to NBU 9.1, writing to HPE MSA 2060 storage and out to LTO9 drives.

Write performance on the 2040/LTO5 setup was approximately 50GB/hr/drive * 4 drives (* 2 setups, but each library was writing data from a single MSA); this jumped to approximately 200GB/hr/drive when we moved to a 2060/LTO5 setup. That improvement was badly needed, as I had reduced our tapeouts to a bare minimum to ensure they actually got written out. Turning everything that should be written out back on got us close to exhausting the write-out capacity of the setup again.

Now that we have LTO9 drives, I am seeing approximately 250GB/hr/drive * 3 drives on each site, which is less than we were getting with the four LTO5 drives. Some writes peak at 300GB/hr, but this isn't common. This is causing backlog issues again. It is a fraction of the rated read speed of the MSA 2060, the write speed of the LTO9 drives, and the fibre channel throughput, with nothing showing saturation at any point. Back when we had the MSA 2040s I would frequently see waits recorded in the duplication job logs, but that isn't the case anymore.

Is there something obvious I am missing, such as a rate limit set somewhere? Or is there a cap on encryption speed? We use the drives' native encryption, handled via an ENCR_ media pool, and I can confirm that the LTO9 drives are encrypting.
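For scale, converting the observed per-drive rate to MB/s (assuming decimal gigabytes) and comparing it with the roughly 400 MB/s native transfer rate of LTO9 shows how far below streaming speed each drive is running:

echo "250 * 1000 / 3600" | bc -l    # ~69 MB/s per drive at 250GB/hr
echo "300 * 1000 / 3600" | bc -l    # ~83 MB/s per drive at the 300GB/hr peaks
# LTO9 native (uncompressed) is on the order of 400 MB/s, so each drive is being
# fed at well under a quarter of its rated streaming rate.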
High disk space usage - tried manual reclaim
Hi, disk space usage is high on my PureDisk media server. I've checked, and on disk I should currently have less than 1TB of data used for Copy 1; however, after doing a manual reclaim (processqueue + garbage collection, then processqueue twice), it only reclaimed about 500MB. On disk I only store copy 1, and the oldest backup is from last week (Catalog GUI). I also spotted a lot of old files going back to 2014 in the /storage/data directory, even though all copy 2 images are on tapes:

-rw-r----- 1 root root 256M Jan 15 2014 6239.bin
-rw-r----- 1 root root 256M Jan 15 2014 6238.bin
-rw-r----- 1 root root 256M Jan 15 2014 6237.bin
-rw-r----- 1 root root 256M Jan 15 2014 6236.bin
-rw-r----- 1 root root 256M Jan 15 2014 6235.bin
-rw-r----- 1 root root 1.3M Jan 15 2014 6225.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6225.bin
-rw-r----- 1 root root 694K Jan 15 2014 6224.bhd
-rw-r----- 1 root root 558K Jan 15 2014 6223.bhd
-rw-r----- 1 root root 828K Jan 15 2014 6222.bhd
-rw-r----- 1 root root 717K Jan 15 2014 6221.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6224.bin
-rw-r----- 1 root root 256M Jan 15 2014 6223.bin
-rw-r----- 1 root root 256M Jan 15 2014 6222.bin
-rw-r----- 1 root root 256M Jan 15 2014 6221.bin
-rw-r----- 1 root root 867K Jan 15 2014 6209.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6209.bin
-rw-r----- 1 root root 571K Jan 15 2014 6199.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6199.bin
-rw-r----- 1 root root 410K Jan 15 2014 6198.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6198.bin
-rw-r----- 1 root root 362K Jan 15 2014 6197.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6197.bin
-rw-r----- 1 root root 256M Jan 3 2014 1596.bin
-rw-r----- 1 root root 256M Jan 3 2014 745.bin
-rw-r----- 1 root root 637K Sep 29 2013 3997.bhd
-rw-r----- 1 root root 16M Sep 29 2013 3997.bin

n5220w:/disk/data # du -h
8.0K ./749/0/_0
8.0K ./749/0
8.0K ./749
1003M ./journal
3.1T .

Below is the output of /usr/openv/pdde/pdcr/bin/crcontrol --dsstat:

************ Data Store statistics ************
Data storage      Raw    Size   Used   Avail  Use%
                  4.5T   4.4T   3.3T   1.0T   76%
Number of containers : 18600
Average container size : 177409748 bytes (169.19MB)
Space allocated for containers : 3299821316678 bytes (3.00TB)
Space used within containers : 3205753046031 bytes (2.92TB)
Space available within containers: 94068270647 bytes (87.61GB)
Space needs compaction : 11411996203 bytes (10.63GB)
Reserved space : 199918100480 bytes (186.19GB)
Reserved space percentage : 4.0%
Records marked for compaction : 309850
Active records : 43121730
Total records : 43431580

What could be wrong? I'm concerned it might be due to orphaned images. Can anyone please advise?
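For reference, a sketch of the usual manual reclamation sequence on older MSDP releases, including a check that queue processing has actually finished before garbage collection runs; the crcollect options in particular are from memory, so treat the whole block as an assumption to confirm against the documentation for this release:

/usr/openv/pdde/pdcr/bin/crcontrol --queueinfo          # anything left in the transaction queue?
/usr/openv/pdde/pdcr/bin/crcontrol --processqueue       # process it
/usr/openv/pdde/pdcr/bin/crcontrol --processqueueinfo   # repeat until it reports the queue is no longer busy
/usr/openv/pdde/pdcr/bin/crcollect -v -m +1,+2          # manual garbage collection (older releases)
/usr/openv/pdde/pdcr/bin/crcontrol --processqueue       # process the queue again, twice
/usr/openv/pdde/pdcr/bin/crcontrol --dsstat             # re-check usage and "Space needs compaction"

If the queue drains cleanly and space still does not come back, the gap between what the catalog shows and the 2013/2014 container files on disk points at orphaned containers, which usually need support tooling to clean up safely.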
Disk Pool Migration
Hello everybody, I am splitting my master server's workload off to a new media server. I have the new media server installed and added to the master. I want to "migrate" a disk pool/STU from my master to the new media server. The disk pool is on a unit attached to the master as drive T:\. How can I migrate this disk pool? My idea is to disconnect this unit from the master, connect it to the media server, create a new disk pool, update my STU and change my policies, but I don't know if this is the correct procedure. Regards,
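A hedged sketch of the commands around such a move, once the new disk pool and storage unit exist on the media server; the AdvancedDisk type and the angle-bracket names are placeholders for this environment, and on a Windows master the same commands live under install_path\NetBackup\bin\admincmd:

/usr/openv/netbackup/bin/admincmd/nbdevquery -listdp -stype AdvancedDisk -U   # current disk pool and its owning server
/usr/openv/netbackup/bin/admincmd/bpstulist -U                                # storage units pointing at it
/usr/openv/netbackup/bin/admincmd/bpplinfo <policy_name> -modify -residence <new_STU_name>   # repoint each policy
# Note: images already written to the old pool stay tied to it until they expire
# or are duplicated elsewhere, so plan for that before detaching T:\ from the master.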