nbcheck returned exit status 127 during client install
NetBackup client 9.1 on Oracle Linux 8 fails to install with this error: NetBackup_9.1_CLIENTS2/NBClients/anb/Clients/usr/openv/netbackup/client/Linux/RedHat2.6.32/nbcheck returned exit status 127. Tried so far: 1. removed noexec from fstab and remounted /dev/shm; 2. changed the temp directory with "export TMPDIR=/root/tmp/". Both gave the same result. Any other ideas that don't require rebooting the server?
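Exit status 127 is the shell's "command not found" code: the kernel or shell could not execute the program at all (file missing, a binary whose loader is absent, or a filesystem mounted noexec). A minimal diagnostic sketch follows; the NBCHECK path is illustrative and should point at wherever the installer unpacked the client package.

```shell
#!/bin/sh
# Sketch: narrow down why nbcheck returns 127. Adjust NBCHECK to the real
# unpack location; the rest uses only standard tools.

NBCHECK=./NBClients/anb/Clients/usr/openv/netbackup/client/Linux/RedHat2.6.32/nbcheck

# 1. Does the binary exist, and what does it need to run?
ls -l "$NBCHECK" 2>/dev/null || echo "not found: $NBCHECK"
command -v file >/dev/null && file "$NBCHECK" 2>/dev/null

# 2. Which mounted filesystems forbid execution? TMPDIR (and the installer's
#    working directory) must not live on any of these.
mount 2>/dev/null | grep noexec

# 3. The 127 convention itself, demonstrated:
sh -c 'no_such_command_xyz' 2>/dev/null
echo "shell reports exit status: $?"
```

If step 2 still lists the filesystem the installer extracts to, the remount did not take effect for that path; moving TMPDIR only helps if the new directory is on an exec-permitted mount.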
Netbackup client connectivity issue

bptestbpcd from the master server to the client fails with the error below, even though all backup ports are reachable, name resolution is fine, and a certificate was issued successfully. Backups still fail with status 58 (can't connect to client).

vnet_registerPBXServer: ../../libvlibs/vnet_pbx.c.94: pbxRegisterExEx failed with error 110:Connection timed out
daemon_select_and_accept_ex: vnet_registerPBXServer() failed: 47

BPTESTBPCD
/usr/openv/netbackup/bin/admincmd/bptestbpcd -verbose -debug -client XXXXXXXX.XXXX.XXXX
14:32:29.524 [30023] <2> bptestbpcd: VERBOSE = 0
14:32:29.525 [30023] <2> ConnectionCache::connectAndCache: Acquiring new connection for host XXXXXXXX.XX.XXXX, query type 223
14:32:29.528 [30023] <2> logconnections: BPDBM CONNECT FROM XXX.XXX.XXX.XXX TO XXX.XXX.XXX.XXX fd = 3
14:32:29.529 [30023] <2> db_CLIENTsend: reset client protocol version from 0 to 9
14:32:29.533 [30023] <2> db_getCLIENT: db_CLIENTreceive: no entity was found 227
14:32:29.533 [30023] <2> closeConnection: Caching connection for query type 223 for reuse
14:32:29.542 [30023] <2> vnet_pbxConnect_ex: pbxConnectExEx() failed: 104
14:32:29.542 [30023] <2> vnet_pbxConnect_ex: ../../libvlibs/vnet_pbx.c.674: pbxSetAddrEx/pbxConnectExEx return error 104:Connection reset by peer
14:32:29.542 [30023] <8> do_pbx_service: [vnet_connect.c:4012] vnet_pbxConnect() failed, status=18, errno=2, use_vnetd=0, cr->vcr_service=bpcd
14:32:29.542 [30023] <8> async_connect: [vnet_connect.c:3543] do_service failed 18 0x12
14:32:29.544 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:30.547 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:32.549 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:36.551 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:44.554 [30023] <8> async_connect: [vnet_connect.c:3566] getsockopt SO_ERROR returned 111 0x6f
14:32:44.554 [30023] <16> connect_to_service: connect failed STATUS (18) CONNECT_FAILED
	status: FAILED, (24) BAD_VERSION; system: (2) No such file or directory; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA pbx
	status: FAILED, (42) CONNECT_REFUSED; system: (111) Connection refused; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA vnetd
14:32:44.554 [30023] <16> connect_to_service: JSON data = {"allow_large_status": {"timestamp": 1672218149, "who": "vnet_tss_init", "line_number": 32, "comment": "allow vnet status > 255", "data": true}, "direct_connect": {"timestamp": 1672218149, "who": "connect_to_service", "line_number": 2199, "comment": "connect parameters", "data": {"who": "vnet_connect_to_bpcd", "host": "XXXXXXXX.XXXX.XXXX", "service": "bpcd", "override_required_interface": null, "extra_tries_on_connect": 0, "getsock_disable_to": 0, "overide_connect_timeout": 0, "connect_options": {"server": null, "callback_kind": {"number": 1, "symbol": "NBCONF_CALLBACK_KIND_VNETD", "description": "Vnetd"}, "daemon_port_type": {"number": 0, "symbol": "NBCONF_DAEMON_PORT_TYPE_AUTOMATIC", "description": "Automatic"}, "reserved_port_kind": {"number": 0, "symbol": "NBCONF_RESERVED_PORT_KIND_LEGACY", "description": "Legacy"}}}}, "status": {"timestamp": 1672218164, "who": "connect_to_service", "line_number": 2465, "comment": "vnet status", "data": 18}, "connect_recs": {"timestamp": 1672218164, "who": "vnet_tss_get", "line_number": 97, "comment": "connect rec status messages", "data": "connect failed STATUS (18) CONNECT_FAILED\n\tstatus: FAILED, (24) BAD_VERSION; system: (2) No such file or directory; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA pbx\n\tstatus: FAILED, (42) CONNECT_REFUSED; system: (111) Connection refused; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA vnetd"}}
14:32:44.554 [30023] <8> vnet_connect_to_bpcd: [vnet_connect.c:623] connect_to_service() failed 18 0x12
14:32:44.554 [30023] <16> local_bpcr_connect: vnet_connect_to_bpcd(XXXXXXXX.XXXX.XXXX) failed: 18
14:32:44.554 [30023] <2> local_bpcr_connect: Can't connect to client XXXXXXXX.XXXX.XXXX
14:32:44.554 [30023] <2> ConnectToBPCD: bpcd_connect_and_verify(XXXXXXXX.XXXX.XXXX, XXXXXXXX.XXXX.XXXX) failed: 25
14:32:44.554 [30023] <16> bptestbpcd main: JSON proxy message = {"allow_large_status": {"timestamp": 1672218149, "who": "vnet_tss_init", "line_number": 32, "comment": "allow vnet status > 255", "data": true}, "direct_connect": {"timestamp": 1672218149, "who": "connect_to_service", "line_number": 2199, "comment": "connect parameters", "data": {"who": "vnet_connect_to_bpcd", "host": "XXXXXXXX.XXXX.XXXX", "service": "bpcd", "override_required_interface": null, "extra_tries_on_connect": 0, "getsock_disable_to": 0, "overide_connect_timeout": 0, "connect_options": {"server": null, "callback_kind": {"number": 1, "symbol": "NBCONF_CALLBACK_KIND_VNETD", "description": "Vnetd"}, "daemon_port_type": {"number": 0, "symbol": "NBCONF_DAEMON_PORT_TYPE_AUTOMATIC", "description": "Automatic"}, "reserved_port_kind": {"number": 0, "symbol": "NBCONF_RESERVED_PORT_KIND_LEGACY", "description": "Legacy"}}}}, "status": {"timestamp": 1672218164, "who": "connect_to_service", "line_number": 2465, "comment": "vnet status", "data": 18}, "connect_recs": {"timestamp": 1672218164, "who": "vnet_tss_get", "line_number": 97, "comment": "connect rec status messages", "data": "connect failed STATUS (18) CONNECT_FAILED\n\tstatus: FAILED, (24) BAD_VERSION; system: (2) No such file or directory; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA pbx\n\tstatus: FAILED, (42) CONNECT_REFUSED; system: (111) Connection refused; FROM 0.0.0.0 TO XXXXXXXX.XXXX.XXXX XXX.XXX.XXX.XXX bpcd VIA vnetd"}}
<16>bptestbpcd main: Function ConnectToBPCD(XXXXXXXX.XXXX.XXXX) failed: 25
14:32:44.554 [30023] <16> bptestbpcd main: Function ConnectToBPCD(XXXXXXXX.XXXX.XXXX) failed: 25
<16>bptestbpcd main: cannot connect on socket
14:32:44.559 [30023] <16> bptestbpcd main: cannot connect on socket
<2>bptestbpcd: cannot connect on socket
14:32:44.559 [30023] <2> bptestbpcd: cannot connect on socket
<2>bptestbpcd: EXIT status = 25
14:32:44.559 [30023] <2> bptestbpcd: EXIT status = 25
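Every attempt in the log ends with "(111) Connection refused" on both paths (bpcd via pbx on 1556 and via vnetd on 13724): either nothing is listening on the client or a firewall is actively rejecting the connection. A quick probe from the master, sketched below, separates the two cases; CLIENT is a placeholder hostname.

```shell
#!/bin/bash
# Sketch: probe the client's NetBackup ports from the master. A "refused"
# connection fails instantly; a firewall that silently drops packets hits the
# 3-second timeout instead.

check_port() {   # usage: check_port <host> <tcp-port> ; prints open/closed
    if timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
        echo open
    else
        echo closed   # refused (fast) or filtered/unreachable (hits the timeout)
    fi
}

CLIENT=client.example.com            # placeholder; substitute the real client name
for port in 1556 13724; do
    echo "$CLIENT:$port -> $(check_port "$CLIENT" "$port")"
done
```

If both ports report closed, check on the client that pbx_exchange and vnetd are actually running; restarting the stack with /usr/openv/netbackup/bin/bp.kill_all followed by /usr/openv/netbackup/bin/bp.start_all is a common first step.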
Configuring MSDP system call failed (11)

Hi, I have set up a standalone NetBackup server on a CentOS machine. I tried to create MSDP storage and it did not work out; I believe the password was the issue. When I tried again, I received the error below:

"Configuring media server deduplication pool nb2.teclab.eu system call failed(11) Reconfiguration failed for storage server PureDisk:nb2.teclab.eu. system call failed"

I have gone through all the available documents but still cannot sort out the issue. MSDP has not created any files in the defined location. Advanced disk and basic disk pools both work fine. Is there a way to find out more about this error, perhaps a log folder I should look in to learn more? Thank you
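A sketch for hunting the underlying error message. The log locations below are assumptions based on common MSDP layouts (/var/log/puredisk and a log directory under the configured storage path); adjust them to your setup. The helper itself is just a recursive grep, so it works on any directory.

```shell
#!/bin/sh
# Sketch: find which MSDP-related log recorded the "system call failed" error.
# Directory paths are assumptions; substitute your storage path for /msdp.

scan_logs() {                       # usage: scan_logs <dir> <pattern>
    grep -ril "$2" "$1" 2>/dev/null # list files containing the pattern
    return 0                        # keep going even when a dir is absent
}

for dir in /var/log/puredisk /msdp/log /usr/openv/netbackup/logs; do
    scan_logs "$dir" "system call failed"
done
```

Whichever file turns up, the lines around the match usually name the failing operation (for example a mkdir or mount step against the storage path), which is far more actionable than the generic status 11.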
LTO9 tape performance poor - bottlenecks?

Hi all
We've recently completed a staged, total replacement of our environment: from NBU 8.2 writing first to HPE MSA2040 storage and then out to LTO5 drives, to NBU 9.1 writing to HPE MSA2060 storage and out to LTO9 drives.
Write performance on the 2040/LTO5 setup was approximately 50GB/hr/drive * 4 drives (* 2 setups, but each library was writing data from a single MSA); this jumped to approximately 200GB/hr/drive when we moved to a 2060/LTO5 setup. That improvement was badly needed, as I had reduced our tapeouts to a bare minimum to ensure they actually got written out, and turning everything that should be written out back on got us close to exhausting the write-out capacity again.
Now that we have LTO9 drives, I am seeing approximately 250GB/hr/drive * 3 drives on each site, which is less than we were getting with the four LTO5 drives. Some writes peak at 300GB/hr, but this isn't common, and it is causing backlog issues again. This is a fraction of the rated read speed of the MSA2060, the rated write speed of the LTO9 drives, and the Fibre Channel throughput, with nothing showing saturation at any point. Back when we had the MSA2040s I would frequently see waits recorded in the duplication job logs, but that isn't the case anymore.
Is there something obvious I am missing, such as a rate limit set somewhere? Or is there a cap on encryption speed? We use the drives' native encryption handled via an ENCR_ media pool, and I can confirm that the LTO9 drives are encrypting.
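One common cause of fast drives underperforming is the media server's data-buffer tuning: NetBackup's SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS touch files govern how much data bptm can stage per drive, and defaults sized for older drives can starve LTO9. The sketch below shows the mechanism; the values (256 KB buffers, 256 of them) are only a starting point for testing, not a recommendation for this environment.

```shell
#!/bin/sh
# Sketch: set the tape data-buffer touch files on a media server. The fallback
# to a temp directory is only so the sketch runs anywhere; on a real media
# server CONF_DIR is the standard config path.

CONF_DIR=/usr/openv/netbackup/db/config
[ -d "$CONF_DIR" ] || CONF_DIR=$(mktemp -d)   # illustration fallback

echo 262144 > "$CONF_DIR/SIZE_DATA_BUFFERS"   # bytes per tape buffer
echo 256 > "$CONF_DIR/NUMBER_DATA_BUFFERS"    # buffers per drive

cat "$CONF_DIR/SIZE_DATA_BUFFERS" "$CONF_DIR/NUMBER_DATA_BUFFERS"
```

After a test duplication, the bptm log's "waited for full buffer" / "waited for empty buffer" counts tell you which side of the pipeline is starving: high waits for full buffers point at the read side (the MSA), high waits for empty buffers at the tape side.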
The certificate revocation list could not be downloaded

hi all
I last used NetBackup approx. a month ago and logged in without issue (it's been in the same environment for years). I've tried to log in to the console this morning but it errors with:

Status Code: 552
The Certificate Revocation List could not be downloaded. Therefore the certificate revocation status could not be verified.

I've tried various solutions suggested in other posts, including an install repair, but nothing has worked so far. Can anyone help?
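Status 552 means the host could not refresh the CRL from the primary server's web service. A sketch of the usual first checks: port reachability, clock skew (a skewed clock can make a cached CRL look expired or not-yet-valid), and a forced re-download with nbcertcmd. MASTER is a placeholder; substitute your primary server's name.

```shell
#!/bin/bash
# Sketch: basic CRL-download triage. MASTER is a placeholder hostname.

MASTER=master.example.com

probe() {   # prints open/closed for host:port
    timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null && echo open || echo closed
}
echo "$MASTER:1556 -> $(probe "$MASTER" 1556)"   # PBX/web-service port

# CRLs carry validity windows, so compare this host's UTC clock to the primary's.
date -u

# Force a fresh CRL fetch; on failure, the certificate logs have the detail.
/usr/openv/netbackup/bin/nbcertcmd -getCRL || echo "getCRL failed; check logs under /usr/openv/netbackup/logs"
```

If the port probe fails, fix connectivity to the primary first; nbcertcmd cannot download anything it cannot reach.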
High disk space usage - tried manual reclaim

Hi,
Disk space usage is high on my PureDisk media server. I've checked, and on disk I should currently have less than 1TB of data used for Copy 1; however, a manual reclaim (processqueue + garbage collection, then processqueue twice) freed only about 500MB. On disk I only store copy 1, and the oldest backup is from last week (Catalog GUI). I also spotted a lot of old files going back to 2014 in the /storage/data directory, even though all copy 2 images are on tapes:

-rw-r----- 1 root root 256M Jan 15 2014 6239.bin
-rw-r----- 1 root root 256M Jan 15 2014 6238.bin
-rw-r----- 1 root root 256M Jan 15 2014 6237.bin
-rw-r----- 1 root root 256M Jan 15 2014 6236.bin
-rw-r----- 1 root root 256M Jan 15 2014 6235.bin
-rw-r----- 1 root root 1.3M Jan 15 2014 6225.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6225.bin
-rw-r----- 1 root root 694K Jan 15 2014 6224.bhd
-rw-r----- 1 root root 558K Jan 15 2014 6223.bhd
-rw-r----- 1 root root 828K Jan 15 2014 6222.bhd
-rw-r----- 1 root root 717K Jan 15 2014 6221.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6224.bin
-rw-r----- 1 root root 256M Jan 15 2014 6223.bin
-rw-r----- 1 root root 256M Jan 15 2014 6222.bin
-rw-r----- 1 root root 256M Jan 15 2014 6221.bin
-rw-r----- 1 root root 867K Jan 15 2014 6209.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6209.bin
-rw-r----- 1 root root 571K Jan 15 2014 6199.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6199.bin
-rw-r----- 1 root root 410K Jan 15 2014 6198.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6198.bin
-rw-r----- 1 root root 362K Jan 15 2014 6197.bhd
-rw-r----- 1 root root 256M Jan 15 2014 6197.bin
-rw-r----- 1 root root 256M Jan 3 2014 1596.bin
-rw-r----- 1 root root 256M Jan 3 2014 745.bin
-rw-r----- 1 root root 637K Sep 29 2013 3997.bhd
-rw-r----- 1 root root 16M Sep 29 2013 3997.bin

n5220w:/disk/data # du -h
8.0K ./749/0/_0
8.0K ./749/0
8.0K ./749
1003M ./journal
3.1T .
Below is the output of /usr/openv/pdde/pdcr/bin/crcontrol --dsstat:

************ Data Store statistics ************
Data storage      Raw    Size   Used   Avail  Use%
                  4.5T   4.4T   3.3T   1.0T   76%
Number of containers             : 18600
Average container size           : 177409748 bytes (169.19MB)
Space allocated for containers   : 3299821316678 bytes (3.00TB)
Space used within containers     : 3205753046031 bytes (2.92TB)
Space available within containers: 94068270647 bytes (87.61GB)
Space needs compaction           : 11411996203 bytes (10.63GB)
Reserved space                   : 199918100480 bytes (186.19GB)
Reserved space percentage        : 4.0%
Records marked for compaction    : 309850
Active records                   : 43121730
Total records                    : 43431580

What is possibly wrong? I'm concerned it could be due to some orphaned images. Can anyone please advise?
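The steps above can be sketched as a repeatable cycle, with a small parser that pulls the "Space needs compaction" figure out of the dsstat output so successive runs can be compared. The crcontrol path and flags are the ones shown in the thread; the awk helper only needs captured text, so it runs anywhere.

```shell
#!/bin/sh
# Sketch: the usual PureDisk reclaim cycle plus a dsstat parser.

CRCONTROL=/usr/openv/pdde/pdcr/bin/crcontrol

# Queue processing twice, checking queue state in between.
"$CRCONTROL" --processqueue
"$CRCONTROL" --processqueueinfo     # wait until Busy reports "no" before re-running
"$CRCONTROL" --processqueue

# Extract the bytes still awaiting compaction from dsstat output.
needs_compaction() {
    awk -F: '/Space needs compaction/ {split($2, a, " "); print a[1]}'
}
"$CRCONTROL" --dsstat | needs_compaction
```

If that figure barely moves while the used space stays far above what the catalog says is referenced, the gap is likely in images or containers the reference database still thinks are live, which is worth raising with support rather than reclaiming harder.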
DB2 Archive Backup Not running after upgrading the Client to NetBackup 10

After upgrading the NetBackup client to NetBackup 10, the vendor method no longer works for archive backups. We upgraded the client from NetBackup 8.1.1 to 10 on one node only. Archive backups still succeed on node 2, but on node 1 they never start. The vendor method is configured for LOGARCHMETH2. Please see the following output for the configured archive methods and their status.

$ db2 get db cfg for xmndb | grep -i logarc
First log archive method (LOGARCHMETH1) = DISK:/dbarchfs/ARCH_LOGS/
Archive compression for logarchmeth1 (LOGARCHCOMPR1) = OFF
Options for logarchmeth1 (LOGARCHOPT1) =
Second log archive method (LOGARCHMETH2) = VENDOR:/usr/openv/netbackup/bin/nbdb2.sl64
Archive compression for logarchmeth2 (LOGARCHCOMPR2) = OFF
Options for logarchmeth2 (LOGARCHOPT2) =

$ db2pd -logs -db XMNDB
Database Member 0 -- Database XMNDB -- Active -- Up 386 days 04:28:00 -- Date 2022-06-22-18.09.06.272877
Logs:
Current Log Number           197201
Pages Written                135447
Cur Commit Disk Log Reads    1
Cur Commit Total Log Reads   3
Method 1 Archive Status      Success
Method 1 Next Log to Archive 197201
Method 1 First Failure       n/a
Method 2 Archive Status      Failure
Method 2 Next Log to Archive 197201
Method 2 First Failure       197099
Log Chain ID                 1
Current LSO                  201380592994029
Current LSN                  0x0000003488E917A2

Address            StartLSN         StartLSO        State      Size   Pages  Filename
0x0A000401352DEDA0 0000003488D87D5D 201380040909505 0x00000000 262144 262144 S0197201.LOG
0x0A000401348FCA00 0000000000000000 201381109408449 0x00000000 262144 262144 S0197202.LOG
0x0A00040134396360 0000000000000000 201382177907393 0x00000000 262144 262144 S0197203.LOG
0x0A000401348FADE0 0000000000000000 201383246406337 0x00000000 262144 262144 S0197204.LOG
0x0A0004013439CCC0 0000000000000000 201384314905281 0x00000000 262144 262144 S0197205.LOG
0x0A00040134900000 0000000000000000 201385383404225 0x00000000 262144 262144 S0197206.LOG
0x0A000401352EA360 0000000000000000 201386451903169 0x00000000 262144 262144 S0197207.LOG
0x0A000401352E92C0 0000000000000000 2013875204
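A sketch for narrowing down the node-1 failure follows. The library path comes from the LOGARCHMETH2 value above and the database name XMNDB from the thread; the db2 commands must be run as the instance owner on node 1, while the status parser only needs captured db2pd text.

```shell
#!/bin/sh
# Sketch: after a client upgrade, the most common vendor-method break is the
# library itself (replaced, relinked, or no longer readable by the instance
# owner), so check it first, then re-drive an archive to retest.

LIB=/usr/openv/netbackup/bin/nbdb2.sl64

# 1. Does the vendor library still exist and look sane?
ls -l "$LIB" 2>/dev/null || echo "missing: $LIB"

# 2. Pull the Method 2 archive status out of db2pd output.
method2_status() {
    awk '/Method 2 Archive Status/ {print $NF}'
}
db2pd -logs -db XMNDB | method2_status

# 3. Trigger an on-demand archive to retest, then re-check the status.
db2 "archive log for database XMNDB" || echo "archive command failed; check db2diag.log"
```

Comparing the library's size, timestamp, and permissions between node 1 and the still-working node 2 is a quick way to spot what the upgrade changed.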