Looking for steps/procedures for upgrading BE21 to BE22
Looking to upgrade from BE 21 to BE 22 and not able to find any documentation. Is it as simple as an update/upgrade? Are there any particular steps/procedures to make sure it goes smoothly? I know the licensing changes, and SQL is different. What impact does this have?

An error occurred while processing a B2D command

Intermittently (daily, but not at a specific time) I get a B2D command error. The storage device, a QNAP TS-1685 NAS, has the latest firmware installed, and the media server is fully up to date (latest Backup Exec version and fully patched Windows). I even rebuilt the array on that NAS, which made no difference. I played around with changing the network switch MTU between 1500 and 9000, which made no difference. I trunked and untrunked connections; also no difference. I even installed a secondary Backup Exec media server on a VM that runs through a different switch. Both servers produce the same intermittent error.

I ran a continuous ping to the NAS and it never drops, even while Backup Exec throws these intermittent errors. I even changed the storage device to allow only one concurrent write session: it makes no difference. There is more than enough space on the shared drives. As I said, sometimes the backup works after retrying a few times. It also does not happen on the same backup set; it is totally random. Sometimes after several manual retries, the backup will start fine and even complete. But often, we end up with the error below in the event log.

The deduplication option was the worst. The dedup storage device (the same NAS mentioned above) would be online for a few hours and then go offline in Backup Exec, even while the iSCSI-connected drive is one hundred percent fine and accessible via Windows. This happens on the physical server (the central administration server) and on the VM (the secondary Backup Exec server). We gave up on dedup, because Backup Exec requires a server restart to reconnect the drive (service restarts didn't work). It doesn't seem to care that the drive is perfectly fine and working in Windows. As I said: no ping drops, and no connection drops mentioned in the Windows iSCSI event log. Backup Exec just randomly decides to put the device in an offline state.

I then started playing around with TCP/UDP offload settings on the media server network adapter.
I switched off all offload settings on the network adapter. It made no difference. The error:

An error occurred while processing a B2D command. Drive: OpenPosMTF() CreateFile failed (\\cmsnas\Backup\BackupExecSlow150\B2D008940.bkf). Error=1326

For more information, click the following link: https://telemetry.veritas.com/entt?product=BE&module=eng-event&error=V-379-33808&build=retail&version=21.0.1200.1204&language=EN&os=Windows_V-6.2.9200_SP-0.0_PL-0x2_SU-0x110_PT-0x3
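For reference, Win32 error 1326 in the CreateFile failure above is ERROR_LOGON_FAILURE ("The user name or password is incorrect"), which points at the credentials Backup Exec presents to the NAS share rather than at the network path itself. A minimal Python sketch for decoding the handful of Win32 codes that commonly show up in B2D failures (the lookup table is hand-copied from winerror.h values, not the full set):

```python
# Minimal lookup for a few Win32 error codes that commonly appear in
# Backup Exec B2D/CreateFile failures. Values hand-copied from winerror.h.
WIN32_ERRORS = {
    5:    ("ERROR_ACCESS_DENIED", "Access is denied."),
    53:   ("ERROR_BAD_NETPATH", "The network path was not found."),
    64:   ("ERROR_NETNAME_DELETED", "The specified network name is no longer available."),
    1326: ("ERROR_LOGON_FAILURE", "The user name or password is incorrect."),
}

def describe(code: int) -> str:
    """Return a human-readable description for a Win32 error code."""
    name, text = WIN32_ERRORS.get(code, ("UNKNOWN", "Not in local table."))
    return f"Error={code}: {name} - {text}"

print(describe(1326))  # Error=1326: ERROR_LOGON_FAILURE - The user name or password is incorrect.
```

If 1326 really is the failing code here, intermittent authentication failures against the NAS (expiring SMB sessions, conflicting cached credentials) would also be consistent with the pings never dropping.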

Get-BEBackupDefinition not showing all results

Giving BEMCLI a test with a view to automating switching duplicate jobs to new S3 buckets every three months. Get-BEBackupDefinition in conjunction with Set-BEDuplicateStageBackupTask seems to be exactly what is required, judging from the examples in the help guide:

Get-BEBackupDefinition "Backup Definition 01" | Set-BEDuplicateStageBackupTask -Name "Duplicate 2" -Storage "Any disk storage" | Save-BEBackupDefinition

However, running Get-BEBackupDefinition only returns a few results (from what I can see, only agent-based jobs). None of the VM-based jobs show up. Running Get-BEJob shows everything as expected. Any pointers on how to use BEMCLI/PowerShell to automate changing jobs to use the new S3 bucket?

Back up to Local Disk Storage and then Duplicate to Cloud Deduplication Storage

We would like to have a local backup of our servers to a normal disk storage device in Backup Exec, which allows for fast restore times. But we would also like the ransomware protection that cloud deduplication with immutable storage provides. So we created a job that backs up to the local disk storage device and then runs a Duplicate job with the cloud deduplication storage device as the destination. The job configured this way produces no errors, and we verified that the retention lock is being enabled properly on the immutable cloud storage. The problem is that the Duplicate job log shows this:

Deduplication stats: scanned: 0 KB, CR sent: 0 KB, CR sent over FC: 0 KB, dedup: 0.0%, cache hits: 0, rebased: 0, where dedup space saving:0.0%, compression space saving:0.0%

It seems that with this method we are getting immutable backups but not any deduplicated data. Is the log incorrect, or does this method really not deduplicate anything? I don't know if it makes a difference, but the cloud storage is in Azure. We properly created the local deduplication volume and the cloud deduplication device with immutability support. I'm not asking for any help in setting that up, and I have verified that deduplication works if we back up straight from the server to the cloud deduplication device, as shown here:

Deduplication stats: scanned: 1129257857 KB, CR sent: 11590580 KB, CR sent over FC: 0 KB, dedup: 98.0%, cache hits: 8887711, rebased: 2994, where dedup space saving:98.0%, compression space saving:0.0%
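The two stats lines can be compared mechanically. As an illustrative sketch (the regex and function name are mine, matched against the log lines quoted above), this parses a "Deduplication stats" line and derives the space saving from the scanned vs. CR-sent figures; scanned: 0 KB means the duplicate stage ran no data through client-side deduplication at all:

```python
import re

# Parse Backup Exec's "Deduplication stats" job-log line (format as quoted
# above) and compute a rough space saving from scanned vs. CR-sent KB.
# Note: BE's own dedup % (98.0% above) may be computed/rounded differently.
STATS_RE = re.compile(r"scanned:\s*(\d+)\s*KB,\s*CR sent:\s*(\d+)\s*KB")

def dedup_saving(line: str):
    """Return space saving in percent, 0.0 if nothing was scanned, None if unparseable."""
    m = STATS_RE.search(line)
    if not m:
        return None
    scanned, sent = int(m.group(1)), int(m.group(2))
    if scanned == 0:
        return 0.0  # no data passed through deduplication for this job
    return 100.0 * (1 - sent / scanned)

direct = ("Deduplication stats: scanned: 1129257857 KB, CR sent: 11590580 KB, "
          "CR sent over FC: 0 KB, dedup: 98.0%")
duplicate = ("Deduplication stats: scanned: 0 KB, CR sent: 0 KB, "
             "CR sent over FC: 0 KB, dedup: 0.0%")

print(f"direct-to-cloud saving:  {dedup_saving(direct):.1f}%")
print(f"duplicate-stage saving:  {dedup_saving(duplicate):.1f}%")
```

Whether the data is nonetheless deduplicated server-side on the target device is exactly the open question in this thread; the log line only shows what the sending side measured.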

Restore BEMCLI

Hi. I'm trying to automate the restore of files using the BEMCLI PowerShell scripts. However, since some of our job history has been cleared, we're unable to find the files using BEMCLI; they are only found through the UI's search option, using the file search in the restore panel. Is there any way to search for a file using BEMCLI, the same way you do with Restore -> Search for file in the UI, and restore based on that selection? Thanks

Alerts and Notifications

Good day. I am demonstrating a trial for a client, using Backup Exec 22.2 1193. I am trying to demonstrate how alerts can be automatically answered, but I am not able to change many of the settings from enable to disable, or to change the time and the action to take. This trial is supposed to be a full-blown version. How do I make it possible to change the information in this section? Attached is an example. Thanks, Sig

Backup Exec Restore

Hello, when I attempted to restore the SQL database, the following error occurred:

V-79-65323-4305 - The log in this backup set begins at LSN 375268000006452800001, which is too recent to apply to the database. An earlier log backup that includes LSN 375268000001202400001 can be restored.

Thanks
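Error 4305 is SQL Server refusing an out-of-sequence log restore: a log backup can only be applied if it begins at or before the LSN the database expects next. Treating the two LSNs from the message as plain integers (which preserves their ordering) makes the gap visible. This is just a worked illustration of what the message is saying, with variable names of my own choosing:

```python
# The two LSNs from SQL Server error 4305, compared as integers.
# A log backup applies only if it starts at or before the LSN the
# database needs next; here the backup set in hand starts too late.
backup_begins_at = 375268000006452800001  # first LSN in the backup set being restored
database_needs   = 375268000001202400001  # LSN the earlier, missing log backup must contain

gap = backup_begins_at > database_needs
if gap:
    print("Gap in the log chain: restore the earlier log backup "
          f"containing LSN {database_needs} first, then retry this one.")
```

In practice that means restoring the transaction log backups in the order they were taken, starting from one that covers the earlier LSN, before this backup set can be applied.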

Network File Share Backup Not Using Proper Credential

License: BE 23.0 1250 Simple Core (Agent for Windows, Agent for VMware, Agent for Linux, tape, etc.)
Host OS: Windows Server 2022 Standard 20348.2402
Target: Synology NAS with CIFS3

Problem: The BE agent on the host is connecting to the NAS using the default credential instead of the credential that I assign during job setup. I know this is happening because the logs on the Synology show the user being used as "Administrator", which is not the user I set up for the credential used in the job. This results in the job failing every time. Note that when I use the "Test Run" job that was set up along with the backup job, the test run seems to use the correct credential. It's the backup job that uses the wrong credential.

Secondary annoyance: The Synology shows up as a "Windows" computer in BE, despite it being a CIFS target (even if I tell it to use Unix, it still says Windows).

Final error: 0xe00095a7 - The operation failed because the vCenter or ESX server reported that the

The product version is 23, with vSphere 8.x. Error in the job log:

Completed status: Failed
Final error: 0xe00095a7 - The operation failed because the vCenter or ESX server reported that the virtual machine's configuration is invalid.
Final error category: Resource Errors
For additional information regarding this error refer to link V-79-57344-38311

vSphere error message: Invalid virtual machine configuration. Virtual NUMA cannot be configured when CPU hotadd is enabled.

We are able to reproduce this issue with other servers, and we are able to back up and restore as long as CPU hot add is not enabled. The issue is that we are unable to restore the VM server because CPU hot add is enabled on the VM in vSphere. According to https://www.veritas.com/support/en_US/article.100058556 there is a hotfix, but it says it only applies to Product(s): NetBackup & Alta Data Protection. We have had a case open with Veritas for a week now and they have not been able to produce a hotfix or any fix. They were able to provide a workaround of restoring the VM files on our backup server and importing them into vSphere, but that is not a solution.