"the trustAnchors parameter must be non-empty" alert after Java Admin Console login
Symptoms

A few times I got a "The trustAnchors parameter must be non-empty" alert after Java Admin Console login, and the whole "Security Management" section was unavailable.

Diagnosis

In the Java Admin Console log it looked like this:

[6/15/21 10:57:43 PM MSK {1623787063262}] [262144] SecureTransport-> Setting SNI to web server to load client compatible certificate:[NBCA]
Exception occured while Login to WebServices org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://nbumaster:1556/netbackup/loginwithbpjavasessiontoken": Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty; nested exception is javax.net.ssl.SSLException: Unexpected error: java.security.InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
[6/16/21 1:13:58 AM MSK {1623795238399}] [262144] LoginBannerAdapter->initializePortal ()
[6/16/21 1:13:58 AM MSK {1623795238403}] [262144] LoginBannerAdapter-> getLoginBannerConfiguration ()
[6/16/21 1:13:58 AM MSK {1623795238404}] [262144] LoginBannerPortal -> readLoginBannerConfiguration ()
[6/16/21 1:13:58 AM MSK {1623795238459}] [262144] SecureTransport-> atDataDir path isC:\Users\?????????????\AppData\Roaming\Veritas\VSS\systruststore
[6/16/21 1:13:58 AM MSK {1623795238460}] [262144] SecureTransport-> Certificate location is: C:\Users\?????????????\AppData\Roaming\Veritas\VSS\systruststore\..\certstore\trusted\
[6/16/21 1:13:58 AM MSK {1623795238461}] [262144] SecureTransport-> Path not accessible:C:\Users\?????????????\AppData\Roaming\Veritas\VSS\systruststore\..\certstore\trusted
(SR-1)readFile: (Sent:Wed Jun 16 01:13:58 MSK 2021, Recv:Wed Jun 16 01:13:58 MSK 2021) Protocol Code: 4 Status: 2 Time Taken: 96ms Error Msg: No such file or directory

In this case it happened because the user name was localized and the folder with the certificates became unavailable, but it can also be just a permission issue.

Solution

Create another user (it's also possible to re-configure the console to use another folder) or fix the issues with the folder.
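Before creating a new account it is worth confirming whether the console user can actually reach the certificate folder; a minimal sketch from a Windows command prompt run as the affected user, assuming the default per-user store location:

:: %APPDATA% expands to C:\Users\<user>\AppData\Roaming for the current user;
:: a profile name with localized (non-ASCII) characters often breaks this path.
dir "%APPDATA%\Veritas\VSS\certstore\trusted"
:: Show the ACLs on the folder to rule out a plain permission problem:
icacls "%APPDATA%\Veritas\VSS\certstore\trusted"

If dir fails the same way the log above does, the account (or its permissions) is the problem rather than NetBackup itself.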
Certificates and CLIENT_NAME on the master-server

Symptoms

A client is unable to get a certificate (the CA certificate can be received) and fails with an unusual error:

nbu-client # /usr/openv/netbackup/bin/nbcertcmd -getCertificate
nbcertcmd: The -getCertificate operation failed for server nbu-mas.domain.local
EXIT STATUS 5908: Unknown error occurred.

In the nbcertcmd log:

13:38:27.725 [4785.4785] <2> getHostIdCertStatus: Checking if hostID exist of host nbu-mas.domain.local
13:38:27.725 [4785.4785] <2> readJsonMapFile: Json mapping file [/usr/openv/var/vxss/certmapinfo.json] does not exist
13:38:27.725 [4785.4785] <2> readCertMapInfoInstallPath: Mapping file does not exists
13:38:27.725 [4785.4785] <2> getHostIdCertStatus: getHostID failed, error :5949.
..............................................................
13:38:30.364 [4785.4785] <2> curlSendRequest: actual http response : 500 expected http result: 200
13:38:30.364 [4785.4785] <2> parse_json_error_response: Error code returned by server is :5908
13:38:30.364 [4785.4785] <2> parse_json_error_response: Developer error message return by server :com.fasterxml.jackson.databind.exc.MismatchedInputException: No content to map due to end-of-input at [Source: (String)""; line: 1, column: 0]
13:38:30.364 [4785.4785] <16> nbcert_curl_gethostcertificate: Failed to perform getcertificate, with error code : 5908
13:38:30.364 [4785.4785] <2> NBClientCURL:~NBClientCURL: Performing curl_easy_cleanup()
13:38:30.364 [4785.4785] <16> GetHostCertificate: nbcertcmd command failed to get certificate. retval = 5908

Diagnosis

Everything looks fine except the ability to get a certificate. The REST API responds correctly:

[root@nbu-client nbcert]# curl -X GET https://nbu-mas.domain.local:1556/netbackup/security/certificates/crl --insecure -H 'Accept: application/pkix-crl' -H 'Authorization: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx' > /tmp/crl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   316  100   316    0     0   2088      0 --:--:-- --:--:-- --:--:--  2078

but the master server was not able to get a certificate even for itself:

[root@nbu-mas tmp]# /usr/openv/netbackup/bin/nbcertcmd -getcertificate -force
nbcertcmd: The -getCertificate operation failed for server nbu-mas.domain.local.
EXIT STATUS 5986: Certificate request for host was rejected as the host could not be validated as a master server.

Solution

The root cause of the problem is that the master server's CLIENT_NAME record in bp.conf was mistakenly removed. Put it back and restart the nbwmc service to make it work:

[root@nbu-mas tmp]# /usr/openv/netbackup/bin/nbwmc terminate
[root@nbu-mas tmp]# /usr/openv/netbackup/bin/nbwmc start
Starting NetBackup Web Management Console could take a couple of minutes ... started.
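A quick way to check for the missing entry before restarting anything; a minimal sketch, assuming the default bp.conf location (substitute your master server's actual name for nbu-mas):

# Is CLIENT_NAME still present on the master?
grep '^CLIENT_NAME' /usr/openv/netbackup/bp.conf
# If it is gone, put it back; the value must match the master server's host name:
echo 'CLIENT_NAME = nbu-mas' >> /usr/openv/netbackup/bp.conf

Then restart nbwmc as shown above.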
NetBackup and Nutanix. A few things to pay attention to.
1) If you want to check whether the login and password you were given are correct, you can run this from a whitelisted media server (for API v2):

# curl -X GET https://10.10.10.10:9440/api/nutanix/v2.0/storage_containers/ --header 'Accept: application/json' --insecure --basic --user <username>:<password>

2) For the curl and nbaapi_ahv_vm_restore commands you need to put the username and password in single quotes. It's important:

--user 'nbuuser':'P@ssw0rd!'
--user 'nbuuser' --password 'P@ssw0rd!'

otherwise you'll get error 401 and Error code=6622:

# curl -X GET https://10.10.10.10:9440/api/nutanix/v2.0/storage_containers/ --header 'Accept: application/json' --insecure --basic --user nbuuser:P@ssw0rd!
<!doctype html><html lang="en"><head><title>HTTP Status 401 – Unauthorized</title>

and

# ./nbaapi_ahv_vm_restore --metadata_file_path /mycontainers/.restore/metadata.json --cluster_name nutanix01 --cluster_port 9440 --user nbuuser --password P@ssword!
Parsed parameter values:
metadata_file_path: /mycontainer/.restore/metadata.json
cluster_name: nutanix01
cluster_port: 9440
username: nbuuser
Core Properties set...
Processing disks...
The CD ROM is processed at ide: 0
The disk is processed at scsi: 0 with the NDFS file path /migrate/.restore/846da607-87cd-4cad-b47e-b78f345cc1f9
Virtual machine specifications for the restore:
-------------------------------------------cut------------------------------
Failed to submit the Create VM task to the Nutanix Cluster. Check and verify the parameters, metadata details, or if the VM exists on the cluster. Error code=6622
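A quick way to tell bad credentials from shell-quoting problems is to ask curl for the HTTP status code alone; a minimal sketch reusing the cluster address and account from the examples above:

# 200 means the user/password pair is accepted; 401 usually means wrong
# credentials or special characters eaten by the shell (hence the single quotes):
curl -s -o /dev/null -w '%{http_code}\n' --insecure --basic \
  --user 'nbuuser':'P@ssw0rd!' \
  https://10.10.10.10:9440/api/nutanix/v2.0/storage_containers/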
SAP HANA Restore requires active policy

Symptoms

An SAP HANA restore can't be initiated from SAP HANA Studio, and no restore job appears in the Java Console.

Diagnosis

15:10:40.666 [1349] <16> VxBSAGetPolicyInfo: ERR - GET_POLICY_INFO request returned: 247 <the specified policy is not active>
15:10:40.666 [1349] <4> get_Policy_Methodtype: System detected error, operation aborted.
15:10:40.666 [1349] <4> backint_process_parm_file: ERROR - Failed to get the policy method type: SAP_HANA_TEST

Solution

You need to activate the original policy, or create a new one and add the client.
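Before retrying from SAP HANA Studio, you can confirm the policy state on the master server; a minimal sketch using the policy name from the log above:

# 'Active: no' in the output reproduces the 247 error above:
/usr/openv/netbackup/bin/admincmd/bppllist SAP_HANA_TEST -U | grep -i active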
8.1.2 MySQL backup script (mysql_backup_script) asks for user password

Symptoms

1) A backup initiated from the GUI stays in the Running state.
2) A backup initiated via the mysql_backup_script from the article https://www.veritas.com/support/en_US/article.100041374 asks for the password interactively, while "nbmysql -o backup" works fine.

Diagnosis

It looks like mysql_backup_script ignores .mylogin.cnf and keeps asking for the password, but the real cause is an incorrect hardcoded connection string. The script connects to 127.0.0.1 instead of localhost, and 'nbuuser'@'localhost' and 'nbuuser'@'127.0.0.1' are treated differently by MySQL.

Solution

This is just a workaround. When you create .mylogin.cnf via mysql_config_editor, use:

mysql_config_editor set --host=127.0.0.1 --user=<user> --password

and when you create a user for NetBackup, use:

CREATE USER 'nbuuser'@'127.0.0.1' IDENTIFIED BY 'password';

After that the backup works.

P.S. Check the password before using it. MySQL still can't use passwords that start with "#", "%" and so on.
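To confirm the workaround took effect before re-running the backup, it helps to check what mysql_config_editor stored and that a non-interactive login works; a minimal sketch (both commands are standard MySQL client tools):

# Print the stored login path; the password is shown masked:
mysql_config_editor print --all
# Must connect without prompting, exactly as the hardcoded script will:
mysql --host=127.0.0.1 -e 'SELECT CURRENT_USER();'

If the second command prompts for a password, the script will too.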
vmoprcmd output has changed

Recently, I noticed that the vmoprcmd output has changed, and now it looks like:

Drive000 No No No hcart3
nbu-backup-srv-5 TLD {3,0,1,0}
m8000 TLD /dev/rmt/0cbn
nbu-backup-srv-0 TLD c32t0l0 (nbu-cifs-netapp-0)
nbu-backup-srv-1 TLD {2,0,0,0}
m9000 TLD /dev/rmt/1cbn
nbu-backup-srv-6 SCAN-TLD {1,0,6,0}
t5 TLD /dev/rmt/4cbn
t7 TLD /dev/rmt/4cbn

The previous version was much easier to look at:

Drive000 No No No hcart3
nbu-backup-srv-5 {3,0,1,0} TLD
m8000 /dev/rmt/0cbn TLD
nbu-backup-srv-0 c32t0l0 (nbu-cifs-netapp-0) TLD
nbu-backup-srv-1 {2,0,0,0} TLD
m9000 /dev/rmt/1cbn TLD
nbu-backup-srv-6 {1,0,6,0} SCAN-TLD
t5 /dev/rmt/4cbn TLD
t7 /dev/rmt/4cbn TLD

Unfortunately, this is by design, so you can save your time and not open a new support case.
NetBackup 8.1.2 Linux persistent binding changes and SAS connected library

Good news, everyone! (c) Starting from NetBackup 8.1.2 we can use persistent binding on the Linux platform. The Release Notes say:

Starting with NetBackup 8.1.2, the NetBackup Device Manager (ltid) uses persistent device paths for tape drives. Instead of /dev/nstXXX device paths, NetBackup uses /dev/tape/by-path/YYY-nst device paths. The paths persist across SAN interruptions. Upon NetBackup Device Manager (ltid) startup, /dev/nstXXX paths are converted to the equivalent /dev/tape/by-path/YYY-nst path.

Recently I had a pretty big installation with different hardware, and I have some feedback that might be useful for others. Unfortunately, persistent binding works correctly out of the box for FC drives only. This isn't because of NetBackup; it's because of the default Linux udev rule and a peculiarity of SAS connected tape libraries. By default, the part of the rule we're interested in (60-persistent-storage-tape.rules) looks like:

# by-path (parent device path)
KERNEL=="st*[0-9]|nst*[0-9]", IMPORT{builtin}="path_id"
KERNEL=="st*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_PATH}"
KERNEL=="nst*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_PATH}-nst"

The main thing you should know is that the symlink will be created using the ID_PATH value. It's easy to get this value manually:

# udevadm info -n /dev/nst0 | grep ID_PATH
E: ID_PATH=pci-0000:12:00.0-fc-0x50012345678cc567-lun-0
E: ID_PATH_TAG=pci-0000_12_00_0-fc-0x50012345678cc567-lun-0

It works fine for FC connected drives because their paths are unique:

# ls -la /dev/tape/by-path/
total 0
drwxr-xr-x 2 root root 280 Dec 19 12:33 .
drwxr-xr-x 4 root root 80 Dec 19 12:28 ..
lrwxrwxrwx 1 root root 9 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc567-lun-0 -> ../../st0
lrwxrwxrwx 1 root root 10 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc567-lun-0-nst -> ../../nst0
lrwxrwxrwx 1 root root 9 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc56a-lun-0 -> ../../st2
lrwxrwxrwx 1 root root 10 Dec 19 12:28 pci-0000:12:00.0-fc-0x50012345678cc56a-lun-0-nst -> ../../nst2
lrwxrwxrwx 1 root root 9 Dec 19 12:33 pci-0000:12:00.1-fc-0x50012345678cc566-lun-0 -> ../../st4
lrwxrwxrwx 1 root root 10 Dec 19 12:33 pci-0000:12:00.1-fc-0x50012345678cc566-lun-0-nst -> ../../nst4
lrwxrwxrwx 1 root root 9 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc567-lun-0 -> ../../st1
lrwxrwxrwx 1 root root 10 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc567-lun-0-nst -> ../../nst1
lrwxrwxrwx 1 root root 9 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc56a-lun-0 -> ../../st3
lrwxrwxrwx 1 root root 10 Dec 19 12:28 pci-0000:d8:00.0-fc-0x50012345678cc56a-lun-0-nst -> ../../nst3
lrwxrwxrwx 1 root root 9 Dec 19 12:33 pci-0000:d8:00.1-fc-0x50012345678cc566-lun-0 -> ../../st5
lrwxrwxrwx 1 root root 10 Dec 19 12:33 pci-0000:d8:00.1-fc-0x50012345678cc566-lun-0-nst -> ../../nst5

and tpautoconf shows correct information (the drives are available via multiple paths):

# /usr/openv/volmgr/bin/tpautoconf -t
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:d8:00.1-fc-0x50012345678cc566-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.1-fc-0x50012345678cc566-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGW -1 -1 -1 -1 /dev/tape/by-path/pci-0000:d8:00.0-fc-0x50012345678cc56a-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGW -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.0-fc-0x50012345678cc56a-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:d8:00.0-fc-0x50012345678cc567-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000VGV -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.0-fc-0x50012345678cc567-lun-0-nst - -

But when we're working with SAS connected drives we might have some problems. For example, after adding a tape library via SAS we have these symlinks:

# ls -la /dev/tape/by-path/
total 0
drwxr-xr-x 2 root root 80 Dec 6 13:41 .
drwxr-xr-x 4 root root 80 Dec 6 13:41 ..
lrwxrwxrwx 1 root root 9 Dec 6 13:41 pci-0000:12:00.0-sas-0x0000000000000000-lun-0 -> ../../st1
lrwxrwxrwx 1 root root 10 Dec 6 13:41 pci-0000:12:00.0-sas-0x0000000000000000-lun-0-nst -> ../../nst1

but when we check the configuration, we'll see one drive that has no symlinks:

# /usr/openv/volmgr/bin/tpautoconf -t
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000PH9 -1 -1 -1 -1 /dev/tape/by-path/pci-0000:12:00.0-sas-0x0000000000000000-lun-0-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000CVY -1 -1 -1 -1 - - -

We do indeed have two drives:

# ls -la /dev/nst?
crw------- 1 root tape 9, 128 Dec 21 02:42 /dev/nst0
crw------- 1 root tape 9, 129 Dec 21 02:42 /dev/nst1

Let's check their ID_PATHs:

# udevadm info -n /dev/nst0 | grep ID_PATH
E: ID_PATH=pci-0000:12:00.0-sas-0x0000000000000000-lun-0
E: ID_PATH_TAG=pci-0000_12_00_0-sas-0x0000000000000000-lun-0
# udevadm info -n /dev/nst1 | grep ID_PATH
E: ID_PATH=pci-0000:12:00.0-sas-0x0000000000000000-lun-0
E: ID_PATH_TAG=pci-0000_12_00_0-sas-0x0000000000000000-lun-0

They are identical. That's why udev can't create two different symlinks and NetBackup can't configure two different drives. We need to modify the default udev rule (or better, create a new custom one, because the default rules may be overwritten during an upgrade) so that different drives get different values. The easiest way is to use:

# by-path (parent device path)
KERNEL=="st*[0-9]|nst*[0-9]", IMPORT{builtin}="path_id"
KERNEL=="st*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_SCSI_SERIAL}"
KERNEL=="nst*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{ID_SCSI_SERIAL}-nst"

but it isn't a universal solution; it depends on the tape library you're using. In my case I had different tape libraries, some of which presented all drives via multiple paths, and I had to use minor numbers because that was the only unique value (I didn't have enough time for a deep investigation because the libraries were on a remote site). Minor numbers aren't the best thing to use, because after a reboot they can change, but in some cases you have no choice:

# by-path (parent device path)
KERNEL=="st*[0-9]|nst*[0-9]", IMPORT{builtin}="path_id"
KERNEL=="nst*[0-9]", ENV{ID_PATH}=="?*", SYMLINK+="tape/by-path/$env{MINOR}-nst"

After that you need to re-apply the udev rules:

# udevadm trigger

Now both symlinks are in place:

# ls -la /dev/tape/by-path/
total 0
drwxr-xr-x 2 root root 80 Dec 21 02:42 .
drwxr-xr-x 4 root root 80 Dec 6 13:41 ..
lrwxrwxrwx 1 root root 10 Dec 21 02:42 128-nst -> ../../nst0
lrwxrwxrwx 1 root root 10 Dec 21 02:42 129-nst -> ../../nst1

and we can check that both drives can be configured correctly:

# /usr/openv/volmgr/bin/tpautoconf -t
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00000PH9 -1 -1 -1 -1 /dev/tape/by-path/129-nst - -
TPAC60 HPE Ultrium 8-SCSI J4DB CZ00020CVY -1 -1 -1 -1 /dev/tape/by-path/128-nst - -

Hope it helps.
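Before deciding between ID_SCSI_SERIAL and MINOR in a custom rule, it is worth dumping the candidate udev properties for every drive and seeing which one is actually unique on your hardware; a minimal sketch:

# Whichever property differs between the drives is safe to use in the SYMLINK rule:
for d in /dev/nst*; do
  echo "== $d"
  udevadm info -n "$d" | egrep 'ID_PATH=|ID_SCSI_SERIAL=|ID_SERIAL='
done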
Using Backup Exec with Hyper-V 2.0

Introduction

Using Backup Exec to back up virtual environments that use Hyper-V 2.0 (Windows Server 2008 R2) as the hypervisor may not be as simple as it first looks. In this paper I want to cover some of the reasons why partners and customers struggle to get reliable backups in those environments, especially when using clustered Hyper-V installations.
A Newbie's Introduction to Backup Exec Pt. 2
It's been a while since I last wrote an article, and almost a year since I wrote the first part of what was going to be a two-part introduction to Backup Exec for newbies. Now's as good a time as any to write it... To recap, read the first part at the link below:

NetBackup 7.6 Blueprints - Media Server Deduplication Pool
The Technical Services team for Backup and Recovery have produced a number of documents we call "Blueprints". These Blueprints are designed to show backup and recovery challenges around specific technologies or functions and how NetBackup solves these challenges. Each Blueprint consists of:

Netbackup Bare Metal Recovery 7.5 for Windows
NetBackup Bare Metal configuration and recovery steps

In this article I will show you how to configure the NetBackup BMR option and how to perform a restore on a Windows-based server.

Environment:
Symantec NetBackup 7.5 on Windows Server
Symantec NetBackup Client (BMR source) on Windows Server

Netbackup for Sybase (step by step)
NetBackup provides a Sybase backup and restore solution that not all of us are aware of. The attached document provides detailed instructions on all the steps required for backup and restore of a Sybase database.

Note: any feedback on mistakes or enhancements, even in the writing style, will be much appreciated.

Tech Support Cracks the Case: Practice Makes Perfect
Recently, a large provider of health insurance almost gave up on NetBackup as its disaster recovery solution of choice. But the vigilant support of Symantec technical specialists and engineers helped restore the company's confidence in NetBackup's disaster recovery abilities.

A NetBackup challenge

Step By Step How To Create Reports In OpsCenter (without License) (Came With NetBackup 7.5 for Free)
Hi. Most customers' managers want daily or weekly NetBackup reports, and it is very easy to generate them without purchasing reporting software or a license. I will show you, step by step, how to create reports in OpsCenter, which comes free with NetBackup and needs no separate license.

NetBackup 7.6 Blueprints - BMR
The Technical Services team for Backup and Recovery have produced a number of documents we call "Blueprints". These Blueprints are designed to show backup and recovery challenges around specific technologies or functions and how NetBackup solves these challenges. Each Blueprint consists of:

A closer look at shared memory and buffers
Data is sent from the client to a 'buffer'. From there it is sent to storage (tape or disk). Think of the data as 'water' and the buffer as a 'bucket' (in fact, multiple buckets). The data (water) fills up the bucket (a temporary storage area, as you say) before the water is tipped out of the bucket to the storage. Fairly simple.
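On the NetBackup side, the number and size of those 'buckets' are controlled by touch files on the media server; a minimal sketch for a Unix media server, with illustrative values that should be tested against your own hardware:

# 64 buffers of 256 KB each for tape backups:
echo 262144 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
echo 64 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
# Disk backups read a separate pair of touch files:
echo 1048576 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_DISK
echo 256 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS_DISK

Larger or more buffers help only while the rest of the bucket brigade (network, client, storage) can keep up, so change one value at a time and measure.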
Restoring Exchange or individual mailboxes/items using Backup Exec - HOWTO

I thought I would share this with the forum, as I have seen some posts around this. I've had to restore individual mailboxes and emails for users since the move from ARCserve 11.5 to Backup Exec 11D (and we are moving on to 12.5). Initially under the impression that this would be a simple process, I went about restoring from a particular date, and kept running into errors about insufficient disk space. No matter what I tried, it wouldn't work.

Backup Exec AWS S3 configuration
I came across many customers who asked whether BE supports AWS S3, and I was glad we had certified support for it. They requested a demo. Watching the video, I found it takes hardly a few minutes of steps; pretty simple! So, here is what is needed to get BE and AWS S3 configured: the list below, plus a few tech articles that outline the steps in detail.

Backup Exec 2014 Blueprint - Simplified Disaster Recovery
The Technical Services team for Backup and Recovery have produced a number of documents we call "Blueprints". These Blueprints are designed to show backup and recovery challenges around specific technologies or functions and how Backup Exec solves these challenges. Each Blueprint consists of:

NetBackup 7.6 Blueprints - Enterprise Vault
The Technical Services team for Backup and Recovery have produced a number of documents we call "Blueprints". These Blueprints are designed to show backup and recovery challenges around specific technologies or functions and how NetBackup solves these challenges. Each Blueprint consists of:

20 Steps to Analyze, Correct and Troubleshoot Device Issues in Windows
The following 20 steps will help analyze, correct and troubleshoot many different device issues in Windows environments. Just a few examples (but not limited to):

- Status codes: 84, 96, 2009
- Drives keep going down or randomly go down
- No drives available

V E R I F Y
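Two commands cover a surprising number of these checks; a minimal sketch for a Windows media server, assuming a default install path:

:: Verify that the OS and HBA actually see the robot and the drives:
"C:\Program Files\Veritas\Volmgr\bin\scan.exe"
:: Compare what NetBackup has configured against what was detected:
"C:\Program Files\Veritas\Volmgr\bin\tpautoconf.exe" -t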
How to backup SQL logs and truncate them in BE 2012

If your SQL database is set to the Full recovery model, it maintains transaction logs. If these logs are not truncated from time to time, they will grow and eventually fill up your disk. BE will warn you when you need to truncate your log.
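To see which databases are actually in the Full recovery model (and therefore accumulating log) before scheduling the log backup job, a minimal sketch using the standard sqlcmd tool with Windows authentication:

sqlcmd -E -Q "SELECT name, recovery_model_desc FROM sys.databases"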
Restore backup exec tape with Netbackup

Steps to restore a Backup Exec tape with NetBackup:

1. Insert the tape in the drive, write-protected.
2. Run this command to add the media ID:

C:\Program Files\Veritas\Volmgr\bin>vmphyinv.exe -u 0 -h fhdcbkpsrv02

Proposed Change(s) to Update the Volume Configuration
=====================================================
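After vmphyinv has registered the media, the images still have to be imported in the usual two phases; a minimal sketch, with A00001 standing in for the media ID that vmphyinv reports (a hypothetical value), and assuming your NetBackup release can read the tape's format:

:: Phase 1: read the tape and stage catalog information for the images on it
"C:\Program Files\Veritas\NetBackup\bin\admincmd\bpimport.exe" -create_db_info -id A00001
:: Phase 2: import the images staged by phase 1
"C:\Program Files\Veritas\NetBackup\bin\admincmd\bpimport.exe"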
How to install Symantec Backup Exec 2010

In this article I will demonstrate how to install Backup Exec 2010. Backup Exec is a backup solution from Symantec used by a lot of companies, and with deduplication we can take full backups quickly without needing more space. Let's start the installation. Click Setup.exe and choose the language;

Recovering BE with a non-default BEDB
By default, BE will install a SQL Express instance called BKUPEXEC to hold the BEDB. The BEDB is the database which holds things like your default BE settings, jobs and job history. BE does not restrict you from using a full SQL Server instance for the BEDB. Check the SCL for your version of BE to see what SQL Server versions are supported for use with the BEDB.

NetBackup Concepts and Terminology (work in progress)
All about Veritas NetBackup. In this article we'll review NetBackup Enterprise Server and all the components involved in the backup process. This includes some licensing, installation and configuration concepts. I will only cover 3-tier NetBackup configurations.

NetBackup 7.6 Blueprints - Hyper-V
The Technical Services team for Backup and Recovery have produced a number of documents we call "Blueprints". These Blueprints are designed to show backup and recovery challenges around specific technologies or functions and how NetBackup solves these challenges. Each Blueprint consists of:

Install NBU Java Admin Console on Ubuntu or other Linux desktop
For Windows you can install just the NBU Java Admin Console, but for Linux desktops, such as Ubuntu, you have to install the whole client, so this article shows how to install the NBU Java Admin Console on a Linux desktop. For Intel Linux distributions there are 3 directories in the NBU client media:

EMEA Veritas NBU Core TC Partner team Charter
This document describes the terms of participation in the Veritas EMEA NetBackup Core TC Partner program.

The Veritas EMEA NBU Core TC Partner program

The Veritas EMEA NBU Core TC Partner program is derived from a Veritas internal program with the same name.

How to use the "Veritas™ Capacity Assessment Tool" software
This article explains how to install and use the "Veritas™ Capacity Assessment Tool" software, which calculates the number of front-end TB. The front-end TB figure is needed to work out the number of "Capacity Edition" licenses required:

Basic setup steps for the Bare Metal Restore (BMR) option to NBU
This information is an adjunct to the documentation found in the Bare Metal Restore (BMR) System Administration Guide.

Symantec NetBackup Bare Metal Restore 7.5 Administrator's Guide
http://www.symantec.com/docs/DOC5163
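The two steps everything else builds on; a minimal sketch for a Unix master server (the Windows equivalent of the command lives under the NetBackup\bin folder):

# One-time setup: create the BMR database and start the BMR master server:
/usr/openv/netbackup/bin/bmrsetupmaster

# Then enable 'Collect disaster recovery information for Bare Metal Restore'
# in the attributes of every policy that protects a BMR client.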
How to configure Netbackup Liveupdate

NetBackup LiveUpdate provides a cross-platform, policy-driven method to distribute NetBackup release updates and hotfix downloads to NetBackup hosts at version 6.5 and later. While configuring a new LiveUpdate server I faced a few challenges and found the related guide is not up to the mark, hence I decided to write an article and explain the various steps for novices like me.

Network services configurations on NetBackup BMR Boot Server
For network-boot based recovery, NetBackup Bare Metal Restore (BMR) leverages OS-specific network boot protocols to boot the client from the BMR boot server and start the recovery process.
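For a Linux boot server the protocols in question are the usual network-boot trio (DHCP/PXE, TFTP, and NFS for Unix clients), so a quick health check can look like this; a minimal sketch, assuming the systemd service names dhcpd and tftp, which vary by distribution:

# The network boot path needs DHCP answers and a TFTP download to work:
systemctl status dhcpd tftp
# TFTP is often socket-activated; confirm something listens on udp/69:
ss -ulnp | grep ':69'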
Designing Protection for the Future, Today; with Backup Exec 15

Organisations throughout EMEA often view backup and recovery as a necessity; they do not appear to support productivity, drive growth or increase profitability. This perception is understandable given the number of products in the marketplace that address backup and recovery alone. Narrow capabilities that do not cater to all areas of the infrastructure tend to worsen complexity and increase costs, rather than deliver the opposite outcome that IT teams strive for. This narrow set of capabilities is also detrimental to the wider IT infrastructure as businesses look to modernize infrastructure and make better use of their information. Industry analysts support the premise that three information challenges drive technology decisions[1]:

- Extending virtual and cloud
- Keeping pace with data growth
- Reducing cost and complexity

At the Gartner Conference in December, Gartner's Dave Russell commented, "most backup systems are antiquated and designed for architectures and environments that are no longer the norm." The problem that underlies this premise is two-fold. Firstly, in information protection terms, narrow capability sets limit functionality today and, worse than that, restrict choice in the wider infrastructure for the future. Secondly, beyond product choice there is process and understanding. Symantec research across a wide range of 'consumers of IT' indicates that knowledge around backup and disaster recovery planning, strategy and implementation is often limited. The knock-on effect for the business of this limited knowledge is the very reason that information protection continues to increase in importance; we rely on information constantly as a backbone of business and, in many cases, as a competitive advantage.

Extending Virtual and Cloud

Well understood as drivers of change in IT departments throughout EMEA, with an adoption rate for server virtualization of 79% and cloud adoption of 56%[2], these two continue to proliferate messages of cost benefits and simplification. Perhaps we can consider cloud as 'virtual that somebody else does for me' and gain a better understanding of where information is stored. Whilst the adoption of public cloud storage for backup in EMEA remains well behind the adoption of web hosting, email hosting, content filtering and productivity solutions, it does demand a new level of flexibility in planning information protection for both the short and longer term. We find ourselves moving rapidly from static, largely on-premise, physical infrastructures that are designed and built to last for a number of years, to infrastructures which are specifically designed to account for and embrace change across a wide variety of platforms, technologies and delivery mechanisms: physical, virtual and cloud.

Keeping Pace with Data Growth

Over many years the increasing growth of data has been of significance to IT teams looking to access, manage and protect company information. That data now resides in a greater quantity of ever more disparate locations, within or outside one geographical location. Estimates suggest that global data will reach 7.9 Zettabytes in 2015, up from 1.2 Zettabytes in 2010. Furthermore, the same estimates forecast that this number will have increased to around 40 Zettabytes by 2020.[3] Today over 60% of that data is unstructured and therefore, in many cases, unmanaged and potentially difficult to protect and restore.
Whilst many conversations circulate around big data and driving value from vast quantities of data, the day-to-day issue for most organisations is one of meeting and improving recovery point and recovery time objectives (RPO and RTO). Powerful, integrated technologies enable more frequent backups and rapid recovery across physical, virtual and cloud, and across different operating systems and applications. Backups take place more frequently; restores complete as soon as they are needed. Once captured, that data is easily transitioned and repurposed in virtual infrastructure for test, development and analytics. It's Time for backup and recovery that enables choice.

Reducing Cost and Complexity

Whilst company revenues are expected to increase throughout 2015, IT budgets are largely expected to remain at 2014 levels[4], driving the need for IT teams to deliver more to the business with the same, or less, resource. The end of support life of Windows Server 2003 in July 2015 will be a driver for many organisations to review hardware and software estates and to look to new technologies in an effort to take account of flat or declining budgets. Although it often becomes an afterthought, backup and recovery can consume a significant portion of budget, not necessarily in terms of product purchase but in terms of staff cost: the skills required to manage complexity with so many facets to the infrastructure. Despite industry analyst recommendations[5], some organisations continue to use multiple products to protect the different platforms or technologies within their environment.

Flexible, Powerful, Easy

IT infrastructure is no longer built to last, but built to change across a combination of virtual, physical and cloud. You can take confidence in Backup Exec 15's breadth and depth of integrated capabilities and have the flexibility to make business-centric decisions for IT, safe in the knowledge that information is protected and recoverable whatever your platform or technology. It's Time for backup and recovery that outperforms expectations.

Business productivity is driven by innovation, not by underlying process. Backup Exec 15 can be purchased, maintained and renewed through a single, all-inclusive license meter that enables full functionality; deployed, managed and upgraded centrally. Combined with a robust architecture which maximises reliability, Backup Exec 15 helps you spend less time 'doing backup'. It's Time for backup and recovery that gives your time back.

The broad set of capabilities delivered in Backup Exec 15 brings value to the business beyond recovery of information. Supporting and enabling the extension of IT infrastructure ever further into the latest technology leverages the benefits that virtual and cloud infrastructures bring. Making use of this single touch point to manage the increasing volumes of data and the recovery demands imposed not only helps IT to make the best decisions but also to put more into productive, innovative activities. It's Time for backup and recovery that drives your business.

It's Time for Backup Exec 15.
Join the Conversation: #ItsTimeForBE15

[1] ESG, Research Report: 2014 IT Spending Intentions Survey, February 2014; IDC, Unified Data Protection for Physical and Virtual Environments, January 2014
[2] Spiceworks "State of IT" Report, January 2015
[3] IDC, Digital Universe study, December 2012; IDC, Worldwide Disk-Based Data Protection and Recovery 2012-2016 Forecast, December 2012
[4] Spiceworks 2015 Budget Report
[5] IDC, Unified Data Protection for Physical and Virtual Environments, January 2014

Optimization rate for shadow copy component backup
For a Shadow Copy Components backup with Accelerator enabled, you may find that, whether or not the data on the disk volume has changed, the backup job sends all the data to the server (optimization rate 0%), e.g.