The certificate revocation list is invalid (error 7654)
Hi, we have encountered a backup error, "the certificate revocation list is invalid" (error 7654). Upon checking under Host Management, the clients' status is "the vnetd proxy encountered an error". The clients are Linux and are not new clients for NBU. Everything is running 8.2 and the master is clustered. Telnet from the master to the clients and vice versa works, as does ping. The client certificates are not revoked and show green status under Host Management on the master server. Do you know what we need to check first?

ERR - Unable to NFS mount the required file system.
Hi, we are trying to do an Exchange GRT backup: Exchange 2019 on Windows Server 2019, with a media server running RHEL and NetBackup 8.2 clients. Our backups are only partially successful, with the following errors:

02/11/2020 2:49:49 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Unable to NFS mount the required file system.
02/11/2020 2:50:59 PM - Info bpbrm (pid=3513266) DB_BACKUP_STATUS is 0
02/11/2020 2:51:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Error encountered while attempting to get additional files for Microsoft Information Store:\MYDB001\Logs_1604288407\
02/11/2020 2:51:46 PM - Error bpbrm (pid=3513266) from client <mydag>: ERR - Exchange granular restore from this image may not work.

[the same DB_BACKUP_STATUS / "attempting to get additional files" / "granular restore from this image may not work" sequence repeats for MYDB002 through MYDB008, from 2:53:09 PM to 3:09:46 PM]
02/11/2020 3:09:47 PM - Info bptm (pid=3513457) waited for full buffer 10933 times, delayed 32049 times
02/11/2020 3:09:48 PM - Info bptm (pid=3513457) EXITING with status 0 <----------
02/11/2020 3:09:48 PM - Info bpbrm (pid=3513266) validating image for client <mydag>
02/11/2020 3:09:48 PM - Info bpbkar (pid=36536) done. status: 1: the requested operation was partially successful
02/11/2020 3:09:48 PM - end writing; write time: 0:21:01
The requested operation was partially successful (1)

I have followed the instructions in this document: https://www.veritas.com/content/support/en_US/article.100000686 and when doing so receive the following error:

C:\Program Files\Veritas\NetBackup\bin>nbfs mount -server mymediaserver -cred <cred>
* connect to mymediaserver failed
EXIT_STATUS=25

Any ideas?
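One data point in the job log above is worth quantifying: the bptm summary "waited for full buffer 10933 times, delayed 32049 times" indicates the tape drive was regularly starved of data. A rough sketch of reading that counter follows; the 15 ms per delay is only the commonly cited bptm default and is an assumption here, so verify it against your own tuning settings before drawing conclusions.

```python
import re

# Assumed bptm delay quantum in milliseconds (commonly cited default;
# confirm in your environment's tuning configuration).
DELAY_MS = 15

def stall_seconds(log_line, delay_ms=DELAY_MS):
    """Return (waits, delays, estimated stall seconds) from a bptm summary line."""
    m = re.search(r"waited for full buffer (\d+) times, delayed (\d+) times", log_line)
    if not m:
        return None
    waits, delays = int(m.group(1)), int(m.group(2))
    return waits, delays, delays * delay_ms / 1000.0

line = "Info bptm (pid=3513457) waited for full buffer 10933 times, delayed 32049 times"
waits, delays, secs = stall_seconds(line)
print(f"{waits} waits, {delays} delays, roughly {secs:.0f}s spent waiting for data")
```

Under that assumption, 32049 delays is roughly 8 minutes of a 21-minute write spent waiting, which points at the data path from the client (here possibly the GRT/NFS processing) rather than at the tape drive itself.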
Error nbjm (pid=9600) NBU status: 2107, EMM status: Requested media server does not have credentials

Mar 4, 2021 12:20:43 PM - begin Duplicate
Mar 4, 2021 12:20:43 PM - requesting resource LCM_Silver.BoDataNBU-stu
Mar 4, 2021 12:20:43 PM - granted resource LCM_Silver.BoDataNBU-stu
Mar 4, 2021 12:20:43 PM - started process RUNCMD (pid=3564)
Mar 4, 2021 12:20:43 PM - ended process 0 (pid=3564)
Mar 4, 2021 12:20:43 PM - requesting resource Silver.BoDataNBU-stu
Mar 4, 2021 12:20:43 PM - requesting resource @aaabf
Mar 4, 2021 12:20:43 PM - reserving resource @aaabf
Mar 4, 2021 12:20:43 PM - Error nbjm (pid=9600) NBU status: 2107, EMM status: Requested media server does not have credentials or is not configured for the storage server
[the same error is logged three times]
Mar 4, 2021 12:20:44 PM - end Duplicate; elapsed time 0:00:01
Requested media server does not have credentials or is not configured for the storage server (2107)

I checked that the credentials were added on both servers (master + media server), as shown in the attached image. The storage server has also been configured, as shown in the attached image. How can I solve this problem?

Many tape dismounts during single duplication job
I am using bpduplicate for regular image duplication to tape. About 400 images are duplicated by one bpduplicate command, all of them to one LTO tape. I expected one mount and one dismount during the duplication, but the tape is dismounted and mounted about 14 times during the single duplication job. I want to eliminate these repetitive dismounts/mounts because they prolong the job runtime and cause excessive drive and tape wear. Has anyone solved this?
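One common way to reduce mount churn in a job like the one described above is to group the source images by the media they are read from, so each source tape stays mounted for its whole batch (each batch then becomes one bpduplicate invocation, e.g. via -Bidfile). A minimal sketch follows; the (backup_id, media_id) tuples are hypothetical placeholders for what you would extract from bpimagelist/bpimmedia output, not an actual NetBackup data format.

```python
from collections import defaultdict

def batch_by_source_media(images):
    """Group backup IDs by the media they are read from.

    `images` is an iterable of (backup_id, media_id) pairs; the shape is
    illustrative, standing in for parsed bpimagelist/bpimmedia output.
    """
    batches = defaultdict(list)
    for backup_id, media_id in images:
        batches[media_id].append(backup_id)
    return dict(batches)

images = [
    ("clientA_1614859243", "E01001"),
    ("clientB_1614859244", "E01002"),
    ("clientA_1614945643", "E01001"),
]
for media, ids in batch_by_source_media(images).items():
    # Each batch would be written to a file and passed to one bpduplicate run.
    print(media, ids)
```

Whether this helps depends on why the mounts happen: if the dismounts are of the *destination* tape, drive contention or media-sharing settings are more likely culprits than image ordering.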
Inter Site SLP fails but Intra Site SLP succeeds

I have two datacenters, each with a clustered master server node, media servers, and an SSO-connected tape library. I migrated our server infrastructure from older hardware running Windows Server 2008 R2 to newer servers running Windows Server 2012 R2 (master server cluster nodes) or 2016 (media servers); the IP addresses from the old media servers were re-used for the new servers.

Since the re-platform, existing SLP duplications between the two datacenters fail. The required media is loaded into the tape drives in each site, the server hosting the images to be duplicated queues the restore, and the target server queues the backup job, but the two servers never successfully initiate the communication channel required for the duplication to proceed.

If I create an SLP to duplicate from server 1 to server 2 in the same site, the duplication completes successfully: the required media is loaded into the tape drives in the site, the server hosting the images to be duplicated queues the restore, the target server queues the backup job, the servers establish communication, and the duplication completes.

Can anyone explain the actual processes that initiate SLP duplication, what the process flow is, and what to look for when comparing the differences between the successful intra-site and the unsuccessful inter-site duplications?

Thanks, Alun
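Since the symptom above is two media servers that queue their halves of the duplication but never establish the data channel, one cheap first check is raw TCP reachability between the sites on the standard NetBackup ports, 1556 (PBX) and 13724 (vnetd). The sketch below is only that sanity check; the hostname is a placeholder, and an open port says nothing about certificates or host-ID mappings, only about the network path.

```python
import socket

def check_ports(host, ports=(1556, 13724), timeout=3.0):
    """Report whether the given TCP ports on `host` accept a connection."""
    results = {}
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[port] = "open"
        except OSError as exc:  # covers refused, timeout, and DNS failures
            results[port] = f"blocked/closed ({exc})"
    return results

# Placeholder hostname; run this from each media server toward its peer,
# in both directions.
print(check_ports("media-server-site2.example.com"))
```

Given that the old IP addresses were re-used, it is also worth checking for stale DNS or hosts-file entries and cached host mappings on both sides, since forward and reverse lookups that disagree can leave exactly this "both jobs queued, no channel" picture.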
Cloud Catalyst - How it works

Hi guys, we ran an initial full backup of flat files from on-premises to the cloud; the SLP backs up to MSDP and then duplicates to AWS S3 with a CloudCatalyst appliance. I'm confused because the raw size of the files is 9 GB, and after the backup to MSDP and the duplication to AWS S3 with CloudCatalyst, the size is the same. How do MSDP and CloudCatalyst work if we run a full backup of the same set of files again?
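The size a job reports is typically the logical (front-end) size of what was protected, which is why it can match the raw 9 GB even though the deduplicated pool stores far less. A toy model of a deduplicating store illustrates what happens on a second identical full: only previously unseen chunks consume new space. The chunking and hashing below are illustrative only, not MSDP's actual on-disk format.

```python
import hashlib

class DedupPool:
    """Toy content-addressed store: identical chunks are stored once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}  # fingerprint -> chunk bytes

    def backup(self, data):
        """Store `data`; return the number of bytes newly written to the pool."""
        new_bytes = 0
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.chunks:
                self.chunks[fp] = chunk
                new_bytes += len(chunk)
        return new_bytes

pool = DedupPool()
# 1 MiB of data in which every 4 KiB chunk is distinct:
data = b"".join(i.to_bytes(4, "big") * 1024 for i in range(256))
first = pool.backup(data)
second = pool.backup(data)  # the same files, fully backed up again
print(first, second)        # -> 1048576 0
```

So a second full of unchanged files should add mostly metadata to the MSDP pool (and to what CloudCatalyst uploads), even though the job again reports the full logical size.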
Failed to validate the user credential, configuring ethernet interfaces

Greetings. I am hoping this community can offer some direction or guidance. I have a NetBackup 8.2 appliance configured at one site (network and IPMI access were all functioning). This appliance was to be moved to another site, and consequently another network. Before physically moving the appliance, I configured the IPMI to match the network at the new site (which I was unable to take advantage of, due to not having access to a computer with Java on it). Once the appliance was mounted and the IPMI was connected at the new site (the other ethernet interfaces were still unplugged/disconnected), I started noticing some behaviors:

- Longer than usual boot-up time.
- When logging in with the local administrative account, login took longer than usual, and a message would appear that read "Unable to authenticate the user for web service access."
- Once finally logged in (via a KVM in the rack, not IPMI remote management), attempting to configure an ethernet interface to match the new network returned the error "Failed to validate the user credentials. Please logout and login to Appliance Shell Menu and retry the operation."
- The same "Failed to validate..." error was also returned when running Appliance.Network > Show Configuration.

Logging out and back in, and restarting for good measure, did not seem to have any effect. Regarding the "Unable to authenticate the user for web service access" message: I executed Appliance.Infraservices > Show All, and all three services (Database, Message queue and Web server) are reported as running. I am hoping there is something I am missing. Assistance, recommendations and guidance are all welcome. Thank you in advance.

Trying to reconnect to a MSDP after Master reinstall
We cannot reconnect to an MSDP whose NetBackup software was reinstalled. We have a NetBackup master server that is out of support because it is still running Windows 2008 R2; it had NetBackup 8.0 and has an underscore in its hostname. It also hosts an MSDP, which is not recommended. We decided that the first thing to fix was to upgrade from 8.0 to 8.2. The upgrade process ended successfully, but Tomcat refused to work ("java.lang.IllegalArgumentException: The character [_] is never valid in a domain name."), so the Web Management Console server had problems and the platform was unusable.

So we reinstalled NetBackup 8.0 and recovered the catalog, but the MSDP refused connections:

Wed 03/24/2021 7:44:21.19 ERROR: Your PDDE storage data format is ahead of software supported version, not allowed to reuse

Therefore, we made a clean install of NetBackup 8.2 after renaming the Windows hostname (without an underscore) and adding the old hostname as an alias in the hosts table. This gave us new errors: when trying to recreate the MSDP, we could create a disk storage server but not a disk pool. Specifically, we got "No Volumes Found", while "nbdevconfig -previewdv" showed nothing. Also, spad.exe and spoold.exe both show the following errors:

Error: 25017: DaemonLockSimpleTestW: pipe already exists
Error: 25001: spoold: a conflicting instance is currently active.
Error: 26016: Veritas PureDisk Content Router: Bootstrap failure.

Thank you for reading this far; I would appreciate any ideas you could share with us.

EMM_DATA.db - huge size can't be reduced
Dear all, I am stuck with an issue where my EMM_DATA.db has grown enormously and I cannot find a way to reduce it. As a short description of my environment (a lab): I have two NBU master servers participating in an AIR scenario, where the first one (the source for AIR) has three separate MSDP-based media servers connected to it. The second master server also has an MSDP pool, plus a tape library for offloading purposes. I replicate all backed-up images between my two NBU master domains regularly, as backup images are produced by the clients.

The EMM_DATA.db file on the source master server has somehow grown to the enormous size of 8.9 GB, while on the target master server it is no larger than 30 MB. Although the infrastructure works well enough and I have no major failure from this DB size, except perhaps the increased size of the catalog backups, I wanted to optimize the DB back to a somewhat normal size. After trying to validate, reorganize and rebuild the DB section per the tech articles, via the regular "nbdb_admin" and "nbdb_unload" commands, nothing changed in the DB size, so I started digging further.

I noticed one particular discrepancy in the NBDBAdmin utility, which shows the EMM_DATA portion as "-1 bytes" for both the "Free DBspace" and "Total DBspace" metrics. The size of the DB is also shown as 4096 MB, while in reality it is more than twice as big, which makes me think there is some kind of corruption in NBDB, or perhaps just in the EMM_DATA portion, although all the validation activities I ran via the CLI reported no errors at all. Reorganizing the DB via the CLI reported very few fragments and no errors either.

The rebuild command gave no output and simply returned to the command prompt; however, I noticed a slight modification to the NBDB log files (they were truncated almost immediately), so I assume it did not fail and was doing its job. On the second master server everything works absolutely fine and the sizes are definitely in order. Even though there is no obvious relationship, I would like to mention that all backup images in the catalogs of both master servers are exactly the same, including the granularity factor for some of the backups where I use this feature (in my case, Active Directory).

Strangely enough, when I tried to run the "Reorganize All" command through the GUI utility, it returned an "Operation Failed" status message. The result was the same for the "Rebuild" command. By contrast, "Validation" went through and did not indicate any issues. My environment is based on Windows Server 2019 + NetBackup v8.2 for all NBU servers. One very old catalog backup that I could dig out (from exactly two years ago) clearly shows that the EMM_DATA.db file was a little under 30 MB at the time. If I remember correctly, my servers were running NBU v8.1 binaries then, which I upgraded through v8.1.1 and v8.1.2 up to the current v8.2 over the years. I suppose this could be some sort of bug in one of the intermediate or latest binaries, but right now I am clueless about how to solve it. I would appreciate any thoughts from the experts around here. Cheers
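One speculative reading of the "-1 bytes" and "4096 MB" readings above, offered only as a hypothesis about the display and not a statement about NBDB internals: an 8.9 GB value simply does not fit in a signed 32-bit field, and utilities often surface an unrepresentable value either as a sentinel -1 or as a capped figure. The sketch below just demonstrates the arithmetic behind that hypothesis.

```python
import struct

INT32_MAX = 2**31 - 1          # 2147483647, about 2 GiB

emm_size = int(8.9 * 1024**3)  # roughly the reported 8.9 GB on disk
# The real file size overflows a signed 32-bit field:
print(emm_size > INT32_MAX)    # -> True

# Reinterpreting an all-ones 32-bit pattern as signed yields exactly -1,
# the classic signature of an overflowed or sentinel size value:
print(struct.unpack("<i", struct.pack("<I", 0xFFFFFFFF))[0])  # -> -1
```

If that hypothesis holds, the "-1 bytes" would be a cosmetic reporting artifact layered on top of whatever is actually making the file grow, and the validation results (no errors) would be consistent with the data itself being intact.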
Architecting Catalog protection and recovery

Greetings! I am working on re-architecting my NBU environment (currently mastered on AIX) with a new build and design. Currently I back up and protect my catalog with physical media. My goal for the new catalog backup/protection is to write it out to a disk unit attached to the master and then use hardware replication for offsite protection of the catalog. This disk will be 100% dedicated to catalog backup ONLY.

My concern comes down to catalog recovery; obviously, with physical media that is a fairly simple process: locate the appropriate tape and feed it to the recovery process. I have a new test build to try out architectural options and new features for planning the new environment, and I have set up an Advanced Disk target on the master to receive its catalog backups. Backing up the catalog that way seems to be all fine and lovely. But reviewing the catalog DR file it generates has raised a question: if I am in a catalog recovery scenario (building burned down/whatever), exactly HOW does this work? As you can see below, the DR file specifies a specific disk storage device. If I am rebuilding a master and rebuilding its disk pools, I am guessing there is no guarantee that they get the same 'path' assigned? Can I get around that by inventorying the recovered disk pool? Should I use Standard Disk instead of Advanced Disk (I don't recall seeing that during the setup; is that still a thing in 8.2)?

Essentially: how do I feed the recovery process these files?

Catalog Recovery Media
Media Server    Disk Image Path    Image File Required
* <servername>    @aaaah    Catalog_1572449441_FULL
* <servername>    @aaaah    Catalog_1572449451_FULL
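For planning purposes, the required images can be pulled programmatically out of a DR file shaped like the excerpt above. The sketch below assumes that exact layout (the DR file format can vary between releases, so treat the parsing as illustrative, not authoritative).

```python
def parse_recovery_media(text):
    """Return [(media_server, disk_image_path, image_file), ...] from a
    "Catalog Recovery Media" section shaped like the excerpt above."""
    entries = []
    in_section = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Catalog Recovery Media"):
            in_section = True
            continue
        if in_section and line.startswith("*"):
            parts = line.lstrip("*").split()
            if len(parts) >= 3:
                entries.append((parts[0], parts[1], parts[2]))
    return entries

dr_excerpt = """Catalog Recovery Media
Media Server    Disk Image Path    Image File Required
* <servername>    @aaaah    Catalog_1572449441_FULL
* <servername>    @aaaah    Catalog_1572449451_FULL"""

for server, path, image in parse_recovery_media(dr_excerpt):
    print(server, path, image)
```

Knowing the image names in advance at least lets a recovery runbook state exactly which files must be present on the recovered disk pool before recovery is attempted; whether a tool such as nbcatsync can reconcile changed disk media IDs in the DR file in your version is worth confirming against the documentation for your release.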