Recent Content
"the trustAnchors parameter must be non-empty" alert after Java Admin Console login
Symptoms
A few times I got the "the trustAnchors parameter must be non-empty" alert after Java Admin Console login, and the whole "Security Management" section was unavailable.

Diagnosis
In the Java Admin Console log it looked like this:

[6/15/21 10:57:43 PM MSK {1623787063262}] [262144] SecureTransport-> Setting SNI to web server to load client compatible certificate:[NBCA]
Exception occured while Login to WebServices org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://nbumaster:1556/netbackup/loginwithbpjavasessiontoken": Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty; nested exception is javax.net.ssl.SSLException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
[6/16/21 1:13:58 AM MSK {1623795238399}] [262144] LoginBannerAdapter->initializePortal ()
[6/16/21 1:13:58 AM MSK {1623795238403}] [262144] LoginBannerAdapter-> getLoginBannerConfiguration ()
[6/16/21 1:13:58 AM MSK {1623795238404}] [262144] LoginBannerPortal -> readLoginBannerConfiguration ()
[6/16/21 1:13:58 AM MSK {1623795238459}] [262144] SecureTransport-> atDataDir path isC:\Users\?????????????\AppData\Roaming\Veritas\VSS\systruststore
[6/16/21 1:13:58 AM MSK {1623795238460}] [262144] SecureTransport-> Certificate location is: C:\Users\?????????????\AppData\Roaming\Veritas\VSS\systruststore\..\certstore\trusted\
[6/16/21 1:13:58 AM MSK {1623795238461}] [262144] SecureTransport-> Path not accessible:C:\Users\?????????????\AppData\Roaming\Veritas\VSS\systruststore\..\certstore\trusted
(SR-1)readFile: (Sent:Wed Jun 16 01:13:58 MSK 2021, Recv:Wed Jun 16 01:13:58 MSK 2021) Protocol Code: 4 Status: 2 Time Taken: 96ms Error Msg: No such file or directory

In this case it happened because of the localized name of the user: the folder with certificates became inaccessible. It can also be just a permission issue.

Solution
Create another user (it is also possible to re-configure and use another folder), or fix the issues with the folder.

Certificates and CLIENT_NAME on the master-server
Symptoms
The client is unable to get a certificate (the CA certificate can be received) with an unusual error:

nbu-client # /usr/openv/netbackup/bin/nbcertcmd -getCertificate
nbcertcmd: The -getCertificate operation failed for server nbu-mas.domain.local
EXIT STATUS 5908: Unknown error occurred.

In the nbcertcmd log:

13:38:27.725 [4785.4785] <2> getHostIdCertStatus: Checking if hostID exist of host nbu-mas.domain.local
13:38:27.725 [4785.4785] <2> readJsonMapFile: Json mapping file [/usr/openv/var/vxss/certmapinfo.json] does not exist
13:38:27.725 [4785.4785] <2> readCertMapInfoInstallPath: Mapping file does not exists
13:38:27.725 [4785.4785] <2> getHostIdCertStatus: getHostID failed, error :5949.
..............................................................
13:38:30.364 [4785.4785] <2> curlSendRequest: actual http response : 500 expected http result: 200
13:38:30.364 [4785.4785] <2> parse_json_error_response: Error code returned by server is :5908
13:38:30.364 [4785.4785] <2> parse_json_error_response: Developer error message return by server :com.fasterxml.jackson.databind.exc.MismatchedInputException: No content to map due to end-of-input at [Source: (String)""; line: 1, column: 0]
13:38:30.364 [4785.4785] <16> nbcert_curl_gethostcertificate: Failed to perform getcertificate, with error code : 5908
13:38:30.364 [4785.4785] <2> NBClientCURL:~NBClientCURL: Performing curl_easy_cleanup()
13:38:30.364 [4785.4785] <16> GetHostCertificate: nbcertcmd command failed to get certificate. retval = 5908

Diagnosis
Everything looks fine except the ability to get a certificate.
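Since the root cause in cases like this can be a missing CLIENT_NAME entry in bp.conf (as the Solution below shows), it is worth scripting a quick sanity check early in the diagnosis. A minimal sketch, assuming the standard Unix bp.conf location; the helper name check_client_name is mine, not a NetBackup tool:

```shell
# Report whether a bp.conf still carries a CLIENT_NAME entry.
# On a real master server the file is /usr/openv/netbackup/bp.conf.
check_client_name() {
    conf="$1"
    if grep -q '^CLIENT_NAME' "$conf"; then
        echo "CLIENT_NAME present: $(grep '^CLIENT_NAME' "$conf")"
        return 0
    else
        echo "CLIENT_NAME is missing from $conf" >&2
        return 1
    fi
}

# Demo against a throwaway copy instead of the live file:
tmp=$(mktemp)
printf 'SERVER = nbu-mas.domain.local\nCLIENT_NAME = nbu-mas.domain.local\n' > "$tmp"
check_client_name "$tmp"
rm -f "$tmp"
```

If the function reports the entry missing on a master server, that alone explains why certificate requests are rejected there.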
The REST API looks fine:

[root@nbu-client nbcert]# curl -X GET https://nbu-mas.domain.local:1556/netbackup/security/certificates/crl --insecure -H 'Accept: application/pkix-crl' -H 'Authorization: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' > /tmp/crl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   316  100   316    0     0   2088      0 --:--:-- --:--:-- --:--:--  2078

but the master server wasn't able to get a certificate even for itself:

[root@nbu-mas tmp]# /usr/openv/netbackup/bin/nbcertcmd -getcertificate -force
nbcertcmd: The -getCertificate operation failed for server nbu-mas.domain.local.
EXIT STATUS 5986: Certificate request for host was rejected as the host could not be validated as a master server.

Solution
The root cause of the problem is that the master server's CLIENT_NAME record in bp.conf was mistakenly removed. Put it back and restart the nbwmc service to make it work:

[root@nbu-mas tmp]# /usr/openv/netbackup/bin/nbwmc terminate
[root@nbu-mas tmp]# /usr/openv/netbackup/bin/nbwmc start
Starting NetBackup Web Management Console
could take a couple of minutes ... started.

NetBackup and Nutanix. Few things to pay attention to.
1) If you want to check whether the login and password you were given are correct, you can run this from a whitelisted media server (for API v2):

# curl -X GET https://10.10.10.10:9440/api/nutanix/v2.0/storage_containers/ --header 'Accept: application/json' --insecure --basic --user <username>:<password>

2) For the curl and nbaapi_ahv_vm_restore commands you need to put the username and password in single quotes. It's important:

--user 'nbuuser':'P@ssw0rd!'
--user 'nbuuser' --password 'P@ssw0rd!'

Otherwise you'll get error 401 and Error code=6622:

# curl -X GET https://10.10.10.10:9440/api/nutanix/v2.0/storage_containers/ --header 'Accept: application/json' --insecure --basic --user nbuuser:P@ssw0rd!
<!doctype html><html lang="en"><head><title>HTTP Status 401 – Unauthorized</title>

and

# ./nbaapi_ahv_vm_restore --metadata_file_path /mycontainers/.restore/metadata.json --cluster_name nutanix01 --cluster_port 9440 --user nbuuser --password P@ssword!
Parsed parameter values:
metadata_file_path: /mycontainer/.restore/metadata.json
cluster_name: nutanix01
cluster_port: 9440
username: nbuuser
Core Properties set...
Processing disks...
The CD ROM is processed at ide: 0
The disk is processed at scsi: 0 with the NDFS file path /migrate/.restore/846da607-87cd-4cad-b47e-b78f345cc1f9
Virtual machine specifications for the restore:
-------------------------------------------cut------------------------------
Failed to submit the Create VM task to the Nutanix Cluster. Check and verify the parameters, metadata details, or if the VM exists on the cluster. Error code=6622

SAP HANA Restore requires active policy
Symptoms
A SAP HANA restore can't be initiated from SAP HANA Studio, and no restore job appears in the Java Console.

Diagnosis
15:10:40.666 [1349] <16> VxBSAGetPolicyInfo: ERR - GET_POLICY_INFO request returned: 247 <the specified policy is not active>
15:10:40.666 [1349] <4> get_Policy_Methodtype: System detected error, operation aborted.
15:10:40.666 [1349] <4> backint_process_parm_file: ERROR - Failed to get the policy method type: SAP_HANA_TEST

Solution
You need to activate the original policy, or create a new one and add the client to it.

8.1.2 MySQL backup script (mysql_backup_script) asks for user password
Symptoms
1) A backup initiated from the GUI stays in the Running state.
2) A backup initiated via mysql_backup_script from the article https://www.veritas.com/support/en_US/article.100041374 asks for the password interactively, while "nbmysql -o backup" works fine.

Diagnosis
It looks like mysql_backup_script ignores .mylogin.cnf and keeps asking for the password, but it's actually because of an incorrect hardcoded connection string. The script connects to 127.0.0.1 instead of localhost, and MySQL treats 'nbuuser'@'localhost' and 'nbuuser'@'127.0.0.1' as different accounts.

Solution
It's just a workaround. When you create .mylogin.cnf via mysql_config_editor, use:

mysql_config_editor set --host=127.0.0.1 --user=<user> --password

and when you create a user for NetBackup, use:

CREATE USER 'nbuuser'@'127.0.0.1' IDENTIFIED BY 'password';

After that the backup works.

P.S. Check the password before using it. MySQL still can't use passwords starting with "#", "%" and so on.

vmoprcmd output has changed
Recently, I noticed that the vmoprcmd output has changed, and now it looks like this:

Drive000 No No No hcart3
  nbu-backup-srv-5 TLD {3,0,1,0}
  m8000            TLD /dev/rmt/0cbn
  nbu-backup-srv-0 TLD c32t0l0 (nbu-cifs-netapp-0)
  nbu-backup-srv-1 TLD {2,0,0,0}
  m9000            TLD /dev/rmt/1cbn
  nbu-backup-srv-6 SCAN-TLD {1,0,6,0}
  t5               TLD /dev/rmt/4cbn
  t7               TLD /dev/rmt/4cbn

The previous version was much easier to read:

Drive000 No No No hcart3
  nbu-backup-srv-5 {3,0,1,0} TLD
  m8000            /dev/rmt/0cbn TLD
  nbu-backup-srv-0 c32t0l0 (nbu-cifs-netapp-0) TLD
  nbu-backup-srv-1 {2,0,0,0} TLD
  m9000            /dev/rmt/1cbn TLD
  nbu-backup-srv-6 {1,0,6,0} SCAN-TLD
  t5               /dev/rmt/4cbn TLD
  t7               /dev/rmt/4cbn TLD

Unfortunately, it's by design, so you can save your time and not open a new support case.

NetBackup 8.1.2 Linux persistent binding changes and SAS connected library
Good news, everyone! (c)

Starting from NetBackup 8.1.2 we can use persistent binding on the Linux platform. The Release Notes say:

Starting with NetBackup 8.1.2, the NetBackup Device Manager (ltid) uses persistent device paths for tape drives. Instead of /dev/nstXXX device paths, NetBackup uses /dev/tape/by-path/YYY-nst device paths. The paths persist across SAN interruptions. Upon NetBackup Device Manager (ltid) startup, /dev/nstXXX paths are converted to the equivalent /dev/tape/by-path/YYY-nst path.

Recently I had a pretty big installation with different hardware, and I have some feedback that might be useful for others. Unfortunately, persistent binding works correctly out of the box for FC drives only. It isn't because of NetBackup; it's because of the default Linux udev rule and a specialty of SAS-connected tape libraries. By default, the part of the rule we're interested in (60-persistent-storage-tape.rules) looks like:

# by-path (parent device path)
KERNEL=="st*[0-9]|nst*[0-9]", IMPORT{builtin}="path_id"
KERNEL=="st*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_PATH}"
KERNEL=="nst*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_PATH}-nst"

The main thing you should know is that the symlink will be created using the ID_PATH value. It's easy to get this value manually:

# udevadm info -n /dev/nst0 | grep ID_PATH
E: ID_PATH=pci-0000:12:00.0-fc-0x50012345678cc567-lun-0
E: ID_PATH_TAG=pci-0000_12_00_0-fc-0x50012345678cc567-lun-0

It works fine for FC-connected drives because their paths are unique:

# ls -la /dev/tape/by-path/
total 0
drwxr-xr-x 2 root root 280 Dec 19 12:33 .
drwxr-xr-x 4 root root  80 Dec 19 12:28 ..
lrwxrwxrwx 1 root root   9 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc567-lun-0 -> ../../st0
lrwxrwxrwx 1 root root  10 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc567-lun-0-nst -> ../../nst0
lrwxrwxrwx 1 root root   9 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc56a-lun-0 -> ../../st2
lrwxrwxrwx 1 root root  10 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc56a-lun-0-nst -> ../../nst2
lrwxrwxrwx 1 root root   9 Dec 19 12:33 pci-0000:12:00.1-fc-0x50012345678cc566-lun-0 -> ../../st4
lrwxrwxrwx 1 root root  10 Dec 19 12:33 pci-0000:12:00.1-fc-0x50012345678cc566-lun-0-nst -> ../../nst4
lrwxrwxrwx 1 root root   9 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc567-lun-0 -> ../../st1
lrwxrwxrwx 1 root root  10 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc567-lun-0-nst -> ../../nst1
lrwxrwxrwx 1 root root   9 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc56a-lun-0 -> ../../st3
lrwxrwxrwx 1 root root  10 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc56a-lun-0-nst -> ../../nst3
lrwxrwxrwx 1 root root   9 Dec 19 12:33 pci-0000:d8:00.1-fc-0x50012345678cc566-lun-0 -> ../../st5
lrwxrwxrwx 1 root root  10 Dec 19 12:33 pci-0000:d8:00.1-fc-0x50012345678cc566-lun-0-nst -> ../../nst5

and tpautoconf shows correct information (the drives are available via multiple paths):

# /usr/openv/volmgr/bin/tpautoconf -t
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:d8:00.1-fc-0x50012345678cc566-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.1-fc-0x50012345678cc566-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGW -1 -1 -1 -1 /dev/tape/by-path/pci-0000:d8:00.0-fc-0x50012345678cc56a-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGW -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.0-fc-0x50012345678cc56a-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:d8:00.0-fc-0x50012345678cc567-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.0-fc-0x50012345678cc567-lun-0-nst - -

But when we're working with SAS-connected drives, we might have problems. For example, after adding a tape library via SAS we have these symlinks:

# ls -la /dev/tape/by-path/
total 0
drwxr-xr-x 2 root root 80 Dec 6 13:41 .
drwxr-xr-x 4 root root 80 Dec 6 13:41 ..
lrwxrwxrwx 1 root root  9 Dec 6 13:41 pci-0000:12:00.0-sas-0x0000000000000000-lun-0 -> ../../st1
lrwxrwxrwx 1 root root 10 Dec 6 13:41 pci-0000:12:00.0-sas-0x0000000000000000-lun-0-nst -> ../../nst1

but when we check the configuration, we'll see one drive that has no symlinks:

# /usr/openv/volmgr/bin/tpautoconf -t
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000PH9 -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.0-sas-0x0000000000000000-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000CVY -1 -1 -1 -1 - - -

We do have two drives:

# ls -la /dev/nst?
crw------- 1 root tape 9, 128 Dec 21 02:42 /dev/nst0
crw------- 1 root tape 9, 129 Dec 21 02:42 /dev/nst1

Let's check their ID_PATHs:

# udevadm info -n /dev/nst0 | grep ID_PATH
E: ID_PATH=pci-0000:12:00.0-sas-0x0000000000000000-lun-0
E: ID_PATH_TAG=pci-0000_12_00_0-sas-0x0000000000000000-lun-0
# udevadm info -n /dev/nst1 | grep ID_PATH
E: ID_PATH=pci-0000:12:00.0-sas-0x0000000000000000-lun-0
E: ID_PATH_TAG=pci-0000_12_00_0-sas-0x0000000000000000-lun-0

They are identical. That's why udev can't create two different symlinks and NetBackup can't configure two different drives. We need to modify the default udev rule (or better, create a new custom one, because during an upgrade the default udev rules might be overwritten) so that different drives get different values.
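Before rewriting any rules, it helps to confirm exactly which drives collide. A small helper of my own (not a NetBackup tool) that flags ID_PATH values shared by more than one tape node; on a live system you would feed it the output of a udevadm loop, shown in the comment:

```shell
# Print ID_PATH values that more than one tape device resolves to.
# find_dup_id_paths reads "device id_path" pairs on stdin. On a real
# host you would feed it live data, e.g.:
#   for d in /dev/nst*; do
#       echo "$d $(udevadm info -n "$d" | sed -n 's/^E: ID_PATH=//p')"
#   done | find_dup_id_paths
find_dup_id_paths() {
    awk '{ count[$2]++; dev[$2] = dev[$2] " " $1 }
         END { for (p in count) if (count[p] > 1) print p ":" dev[p] }'
}

# Demo with the values from this post (two SAS drives, one ID_PATH):
printf '%s\n' \
    '/dev/nst0 pci-0000:12:00.0-sas-0x0000000000000000-lun-0' \
    '/dev/nst1 pci-0000:12:00.0-sas-0x0000000000000000-lun-0' |
    find_dup_id_paths
# prints: pci-0000:12:00.0-sas-0x0000000000000000-lun-0: /dev/nst0 /dev/nst1
```

Empty output means every drive already has a unique ID_PATH and the stock rule is fine; any printed line lists the drives that would fight over one symlink.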
The easiest way to do it is to use:

# by-path (parent device path)
KERNEL=="st*[0-9]|nst*[0-9]", IMPORT{builtin}="path_id"
KERNEL=="st*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_SCSI_SERIAL}"
KERNEL=="nst*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_SCSI_SERIAL}-nst"

but it isn't a universal solution; it depends on the tape library you're using. In my case I had different tape libraries, and some of them presented all drives via multiple paths, so I had to use minor numbers because that was the only unique value (I didn't have enough time for a deep investigation because the libraries were at a remote site). It isn't the best thing to use, because after a reboot the minor number can be different, but in some cases you have no choice:

# by-path (parent device path)
KERNEL=="st*[0-9]|nst*[0-9]", IMPORT{builtin}="path_id"
KERNEL=="nst*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{MINOR}-nst"

After that you need to re-apply the udev rules:

# udevadm trigger

Now both symlinks are in place:

# ls -la /dev/tape/by-path/
total 0
drwxr-xr-x 2 root root 80 Dec 21 02:42 .
drwxr-xr-x 4 root root 80 Dec 6 13:41 ..
lrwxrwxrwx 1 root root 10 Dec 21 02:42 128-nst -> ../../nst0
lrwxrwxrwx 1 root root 10 Dec 21 02:42 129-nst -> ../../nst1

and we can check that both drives can be configured correctly:

# /usr/openv/volmgr/bin/tpautoconf -t
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000PH9 -1 -1 -1 -1 /dev/tape/by-path/129-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00020CVY -1 -1 -1 -1 /dev/tape/by-path/128-nst - -

Hope it helps.

Using Backup Exec with Hyper-V 2.0
Introduction
Using Backup Exec to back up virtual environments that use Hyper-V 2.0 (Windows Server 2008 R2) as the hypervisor may not be as simple as it first looks. In this paper I want to cover some of the reasons why partners and customers struggle to get reliable backups in those environments, especially when using clustered Hyper-V installations.