NBU restore VMs from AIR_Gapped DR site into Main site
We are checking the possibility of restoring a VM from the air-gap replication location back into the main site's vCenter1.

Main site: NBU1 + vCenter1
DR site: NBU2 + vCenter2

The VM initially resides on vCenter1 at the main site. NBU 10.2.0.1 (a Flex instance) takes a backup of this VM and replicates its backup image to WORM storage at the DR site (NBU2). If disaster strikes the main site and NBU1 crashes, can we restore the VM from the WORM/air-gap copy (NBU2) into vCenter1? Is it enough to simply add vCenter1 to NBU2?

Thanks.
[Snapshot Manager] Inconsistency between Cloud and Storage sections

Hello! Looking for help, please. My situation is the following: I was faced with an environment with an old CloudPoint server that failed during an upgrade, resulting in the loss of the images and configuration.

After a fresh installation of a new Snapshot Manager 10.3 VM, I promptly configured the Cloud section of the primary server's web UI and added the provider configuration (Azure). All the required permissions have been granted to the Snapshot Manager in Azure. Protection plans were created and the assets to protect were selected.

The problem is, even though the jobs complete with status 0, I am unable to find any recovery points for the assets.

On investigation, I found under Storage -> Snapshot Manager that the primary server is configured as a snapshot server with the old version (10.0). This was done in the old configuration and I have no idea why it is still present. Trying to connect fails with error code 25, as does retrieving version information. Trying to add the new Snapshot Manager results in an "Entity already exists" error message.

Could this storage configuration be related? If so, any suggestions on how to fix it? (I am also unable to delete the old CloudPoint from the web UI, but it is disabled.)

Primary server version: 10.3
New Snapshot Manager: 10.3
Old CloudPoint: 10.0, already decommissioned.

Thank you!
NBU 10x tape 2 tape copy (inline copy) clarification.

Hi people:

We have a 4-drive LTO9 library; the master/media server is Windows 2022 with NBU 10.1.2. I have some policies that make two tape copies of data to different pools. For example, in the mydata policy, the foreverfull schedule makes a backup to Mypool01 (primary copy on tape1) and its copy goes to Mypool02 (second copy on tape2) with retention level 100. The monthlyfull schedule makes a backup to Monthpool01 (primary copy) and its copy goes to Monthpool02 with retention level 10. Most of the time the policy makes the two tape copies successfully.

Sometimes the copy to tape1 or tape2 fails (mostly HW errors, like a drive needing cleaning), and then I just use bpduplicate to redo the copy onto a tape from the respective pool.

Today I have some doubts about the correct syntax, because I need to duplicate a backup that has two copies made with the monthlyfull schedule (backupid server01.net_1705357985) so that it uses the foreverfull pools and a changed retention level. My question/doubts are around -npc; should I use it? Of course, I want the copies in the foreverfull pools to remain the primary copies.

bpduplicate -v -number_copies 2 -backupid server01.net_1705357985 -client server01.net -dp Mypool1,Mypool2 -dstunit mylibrary-01-hcart3-robot-tld-0 -id 000012 -L copi.wri -policy Respaldo_servr01 -rl 100,100

Can anybody shed some light on this?
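Not an authoritative answer, but a sketch of how the duplication described above might look. It assumes the image already has copies 1 and 2, so the new duplicates would become copies 3 and 4, and that -npc (new primary copy) is used to pick which copy number becomes primary; please verify the options against the bpduplicate entry in the NetBackup Commands Reference Guide before running anything:

```shell
# Hedged sketch, not verified in a lab. Assumption: the two existing
# monthlyfull copies are copies 1 and 2, so the duplicates in the
# foreverfull pools come out as copies 3 and 4.
bpduplicate -v \
  -number_copies 2 \
  -backupid server01.net_1705357985 \
  -client server01.net \
  -dp Mypool01,Mypool02 \
  -dstunit mylibrary-01-hcart3-robot-tld-0 \
  -rl 100,100 \
  -npc 3 \
  -L copi.wri \
  -policy Respaldo_servr01
```

With -npc 3 the first new copy (in the foreverfull pool) would become the primary copy; omit -npc entirely if you want the current primary to stay primary. Again, this is only a reading of the documentation, not something I have run against your setup.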
Oracle to Netbackup Copilot

Hello,

I'm trying to implement Copilot for Oracle. I've set up the SLPs and registered the test instance, but NBU is unable to perform a backup, failing with: Unable to perform a manual backup with policy "test". The policy does not have a list of files to back up.

The setup: Oracle Linux 7.7, NBU 10.2, StoreOnce 5260 (4.3.6), Catalyst 4.4.0.

In short, I'm trying to implement the NBU Accelerator for faster backups. If there is another way, please point me to the relevant guide. Thank you in advance.
[IT Analytics] [OVA] Trying to update from version 11.2 to 11.3

Need some help, please. As the title says, I am trying to update my OVA-deployed IT Analytics 11.2 to version 11.3. It was installed via the OVA template.

I downloaded the upgrade .iso file, created the diska folder, mounted the ISO on diska, and ran portal_upgrade.sh from /. The upgrade utility installer runs, shows the terms of use, and prompts me to type 'accept'. I type 'accept', then Enter... nothing happens; the installer closes and I'm back at the command line.

I also tried to run the upgrade utility at /opt/aptare/upgrade/upgrade_installer with no success: a 'cp: cannot stat' error.

I am following the Linux upgrade guide, running everything as 'root'. Screenshots attached. Any ideas why it is not working?
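Not a definitive fix, but 'cp: cannot stat' usually means the installer cannot find files it expects, which often points at the ISO not actually being mounted where the script looks. A few hedged sanity checks (the /diska path and script location are taken from the post, not verified against the upgrade guide):

```shell
# Hypothetical diagnostics before re-running the upgrade.
# Assumption: the guide expects the upgrade ISO mounted at /diska.

# Is the ISO actually mounted?
if mount | grep -q '/diska'; then
    echo "/diska is mounted"
else
    echo "/diska is NOT mounted -- remount the upgrade ISO first"
fi

# Are the upgrade files visible on the mount?
ls -l /diska 2>/dev/null | head

# Re-run the installer with tracing to see exactly where it exits
# after 'accept' (writes a trace log you can inspect or attach here):
# bash -x /diska/portal_upgrade.sh 2>&1 | tee /tmp/portal_upgrade_trace.log
```

The bash -x trace in particular should show which cp command fails and which source path it cannot stat.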
LTO cleaning tape definition with two different drives?

Hi all:

I need some advice. I have a client with a Quantum i3 tape library with one LTO7 drive, one LTO8 drive, and 50 slots. Both drives are in the same partition; but since the client still has LTO6 media (written with an Oracle SL48 tape library, which was decommissioned), the LTO7 is hcart and the LTO8 is hcart2.

As you know, LTO cleaning tapes are universal; no matter the LTO generation of the drive, the cleaning cartridge is the same. The library is configured to let the application manage all cleaning tapes.

My current issue is how to achieve that universality with the LTO cleaning tapes, or at least how to assign the cleaning tapes to the drives when the cleaning tapes all have nearly the same barcode, CLNxyzCU. The current Media ID Generation rule is:

0,8,5:6:7:8:1:2
\_ robot number
   \_ barcode length

I was thinking of using numbers 000 to 400 to assign "HC_CLN 1/2 inch cleaning tape" to the LTO7 drive and 500 to 999 to assign "HC2_CLN 1/2 inch cleaning tape 2" to the LTO8 drive. But I'd like to hear options/advice from people with more experience.
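Just to make the proposed numeric split concrete, here is a throwaway helper (hypothetical, not a NetBackup tool) that maps the three digits of a CLNxyzCU barcode onto the two cleaning-media types described above; it assumes the digits sit in positions 4-6 of the barcode:

```shell
# Hypothetical illustration of the proposed split:
#   000-400 -> HC_CLN  (1/2 inch cleaning tape,   for the hcart/LTO7 drive)
#   500-999 -> HC2_CLN (1/2 inch cleaning tape 2, for the hcart2/LTO8 drive)
cleaning_media_type() {
    digits=$(printf '%s' "$1" | cut -c4-6)         # CLN123CU -> 123
    num=$(printf '%s' "$digits" | sed 's/^0*//')   # strip leading zeros
    num=${num:-0}                                  # "000" -> 0
    if [ "$num" -le 400 ]; then
        echo "HC_CLN"
    elif [ "$num" -ge 500 ]; then
        echo "HC2_CLN"
    else
        echo "unassigned"   # 401-499 fall outside the proposed ranges
    fi
}

cleaning_media_type CLN123CU   # -> HC_CLN
cleaning_media_type CLN742CU   # -> HC2_CLN
```

Note the 401-499 gap in the proposal; you may want the ranges to meet (e.g. 000-499 and 500-999) so every cleaning cartridge gets a type.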
The Veritas Flex Appliance and the Game of Leapfrog

It’s my firm belief that we don’t see much that is actually “new” in IT very often. Mostly what we see is either the clever application of existing technologies or the re-application of pre-existing technologies with the understanding that the tradeoffs are different. I include server virtualization and containerization in the “not new” category: both are actually quite old in terms of invention, but in recent history containers have had the more interesting applications.

The reason I’m going down this path is that I frequently get questions about the Flex appliance: why did we choose to implement it with a containerized architecture instead of using a hypervisor for an HCI data protection appliance like [insert company name here]? And, are you sure there’s no hypervisor? Yes, I’m sure there’s no hypervisor in Flex; it uses containers. Fundamentally there are good applications for both, and for entirely different types of workloads, so let’s look at why we chose containers for the Flex Appliance instead of a hypervisor.

Containers have their roots in FreeBSD “jails”. Jails enabled FreeBSD clusters to be hardened and allowed “microservices” (to use a more modern turn of phrase) to be deployed onto the systems, the big advantage being very high levels of isolation, with each microservice running in its own jail. Containers then versus now are fairly different; FreeBSD jails were relatively primitive compared to something like Docker, but for their time they were state of the art, and they worked quite well.

Which brings us to hypervisors. VMware started largely in test/dev. About the time it was entering production environments, we were also seeing significant uptake of widely adopted 64-bit x86 processors. Most applications were running on a single box and were 32-bit, single-threaded, single-core, and didn’t need more than 4GB of RAM. Rapidly, the default server became 4 cores with 8GB of RAM, and those numbers were increasing quickly.
The hypervisor improved the extremely poor utilization rates for many sets of applications. Today, most new applications are multi-core, multi-threaded, and RAM-hungry by design. 16-32 cores per box is normal, as is 128+ GB of RAM, and modern applications can soak all that up effortlessly, making hypervisors less useful.

Since 2004, Google has run containers at scale. In fact, they were the ones who contributed back “cgroups”, a key foundational part of Linux containers and hence Docker. This is interesting because:

- Google values performance over convenience
- Google was writing multi-core, multi-threaded, “massive” apps sooner than others
- Google’s apps required relatively large memory footprints before others
- Google’s infrastructure requires application fault isolation

So, although virtualization existed, Google chose a lighter-weight route more in line with its philosophical approach and bleeding-edge needs, essentially “leapfrogging” virtualization.

Here we are today, with the Veritas Flex Appliance and containers. Containers allow us to deliver an HCI platform with “multi-application containerization” on top of “lightweight virtualization”, essentially leapfrogging HCI appliances built on a hypervisor for virtualization. A comprehensive comparison of virtualization vs.
containers is beyond the scope of this blog, but I thought I would briefly touch on some differences that I think are key and that help to highlight why containers are probably the best fit for modern, hyper-converged appliances:

Virtualization vs. Containers:
- Isolation: virtualization gives operating system isolation (you can run different kernels); containers give application isolation (same OS kernel).
- Hardware: virtualization requires emulated or “virtual hardware” and the associated “PV drivers” inside the guest OS; containers use the host’s hardware resources and drivers (in the shared kernel).
- Packaging: virtual machines have standardized “packaging” (mostly; there is variance between hypervisors); containers have standardized packaging requiring Docker or one of the other container technologies.
- Workloads: virtualization is optimized for groups of heterogeneous operating systems; containers are optimized for homogeneous operating system clusters.

Here’s another way to look at it, Enterprise vs. Cloud:
- Hardware: custom/proprietary vs. commodity
- HA type: hardware vs. software
- SLAs: five 9s vs. always on
- Scaling: vertical vs. horizontal
- Software: decentralized vs. distributed
- Consumption model: shared service vs. self service

What you see here is a fundamentally different approach to solving what might be considered a similar problem. In a world with lots of different 32-bit operating systems running on 64-bit hardware, virtualization is a fantastic solution. In a hyper-converged appliance environment that is homogeneous and running a relatively standardized 64-bit operating system (Linux) with 64-bit applications, only containers will do.

The application services in the hyper-converged Flex Appliance are deployed, added, or changed in a self-service consumption model. They’re isolated from any potential bad actor. The containers and their software services are redundant and highly available. Application services scale horizontally, on demand.

One of the best party tricks of the Flex Appliance that I didn’t touch on above is that containers fundamentally change how data protection services are delivered and updated.
With the Flex Appliance, gone are the days of lengthy and risky software updates and patches. Instead, quickly and safely deploy the latest version in its own container on the same appliance. Put the service into production immediately, or run old and new versions simultaneously until you’re satisfied with the new one’s functionality. We couldn’t do any of this with a hypervisor, and this is why the Flex Appliance has leapfrogged other hyper-converged data protection appliances.

I also refer you to this brief blog by Roger Stein for another view on Flex.
Netbackup Upgrade Help

Hi, I am a support engineer with limited knowledge of NBU, so please don't be harsh on me for asking stupid questions. I have the environment details for a client who is planning an upgrade. Lucky for them, I have to do the upgrade.

Environment details:

PRIMARY SITE:
- A virtual NetBackup master server on NBU version 8.1.2 (Linux).
- An NBU 5240 appliance configured as a media server, on NBU version 8.1.2 and appliance version 3.1.2. MSDP is configured on the media appliance.

DISASTER RECOVERY SITE:
- An NBU 5240 appliance configured as a master/media server, on NBU version 8.1.2 and appliance version 3.1.2.

Via Auto Image Replication working in a one-to-one model, NBU replicates the backup copies from the PR site to the DR site.

Let's suppose we are upgrading to NBU 9.0. Now the questions:

1. First I will update the master server to NBU 9.0. If the prerequisite checks pass and all is good, I should proceed to upgrade the master server and then move on to the media appliance. Is that how it goes?
2. The media appliance is on appliance version 3.1.2. Would upgrading the appliance to version 4.0 automatically upgrade the NBU version in it from 8.1.2 to 9.0? Is the NBU version bundled in the appliance version, or do I have to upgrade it separately?
3. At the DR site, when upgrading the NBU appliance to appliance version 4.0, would it update the master and media server NBU versions in it to 9.0? If I have to update the NBU version separately, would the media and master servers both be updated from the same package?
4. Should I run the AURA (Appliance Upgrade Readiness Analyzer) tool on both the PR and DR appliances before upgrading them?
5. Can I downgrade if something goes wrong?

Marianne, Nicolai, RiaanBadenhorst: please help. Thanking you in anticipation.