Slow SLP Duplication Jobs
Hi, I'm using SLP policies (backup to local disk, then duplicate to tape on an IBM TS library). Recently the duplication jobs have been taking a lot more time to complete (around 10-12 hours each). Watching in NetBackup, I can see it finds the right media right away and loads it into the tape drive very fast, but the duplication job then hangs at "Waiting for positioning of Media ID" for about 10 hours. Any help here would be appreciated, thanks! NBRB logs attached to this post. Environment: NetBackup 7.7, Windows Server 2012 R2, IBM TS series library, LTO-6 drives.
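For anyone hitting the same "Waiting for positioning of Media ID" hang, a few admin commands that can help narrow it down (a sketch, not a definitive runbook; run from the master server while a duplication is stuck):

    # Show drive status - confirms whether the drive really is busy positioning
    vmoprcmd -d

    # List images whose SLP processing is incomplete, i.e. the duplication backlog
    nbstlutil stlilist -image_incomplete -U

    # Pull media and device errors from the last 24 hours
    bperror -media -hoursago 24 -U

If the backlog is large, a duplication may be chasing many small multiplexed fragments on the same tape and spending most of its time repositioning between them.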

Configuring Amazon S3 in China (Blog)

(This isn't a question, just documentation of how we fixed an issue I couldn't find an answer to online. I have no doubt all of this is in the Cloud Admin Guide; I just decided to take the long route and skip that part.) My company has a handful of remote offices around the world, two of them at different sites in China. In each of those China offices we run a NetBackup 7.7.3 master server (Windows 2008) to back up local data to disk, and we use SLP to replicate those images across the WAN to the other office for offsite protection. This has worked successfully for 3 years, but we've slowly been adding more data to these offices and can no longer do local backups AND cross-site replication without filling up the disk targets. We're using "borrowed storage" from another server to protect our offsite copies at the moment while we decide where to go next. One of our new initiative tests was to replicate to S3, but Amazon AWS has a separate China environment, amazonaws.com.cn, distinct from the amazonaws.com the rest of the world can access. Our cloud admin set up a new S3 instance in China without an issue, but the default Amazon cloud instance in NetBackup 7.7.3 is not customizable and does not include the China Amazon region. The problem is that the cloudprovider.xml file (C:\Program Files\Veritas\NetBackup\db\cloud) is locked, and there are no commands available to add the China region to the Amazon plugin. I was able to get a new cloudprovider.xml from NetBackup support, but only because the tech working my case happened to have one on his desktop from helping another customer. The actual solution for this, and for any other provider that isn't a default cloud option, is to contact the vendor and request the plugin directly from them. (You may need help combining customized instances with the new .xml, but I didn't experience that, so I don't know the procedure.) Also, my device mappings were about a year old, so I had to update that file as well (https://sort.veritas.com/checklist/install/nbu_device_mapping). After replacing cloudprovider.xml and updating the device mappings, I was able to see the amazonaws.com.cn instance and, having already opened the firewall, connected on the very first attempt. Firewall: source to s3-cn-north-1.amazonaws.com.cn, bidirectional TCP, ports 5637, 80, 443.
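If it helps anyone repeating this, a quick way to validate the firewall path before reconfiguring NetBackup (a sketch; assumes curl is available on the master server and uses the endpoint name from the rule above):

    # Confirm DNS resolution and HTTPS reachability of the China S3 endpoint
    curl -v --max-time 10 https://s3-cn-north-1.amazonaws.com.cn/

    # Per the firewall rule above, TCP 5637 and 80 also need to be open

Any HTTP response, even an AccessDenied error from S3, proves the network path works; a timeout points back at the firewall.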

The Veritas Flex Appliance and the Game of Leapfrog

It's my firm belief that we don't see much that is actually "new" in IT very often. Mostly what we see is either the clever application of existing technologies or the re-application of pre-existing technologies with the understanding that the tradeoffs are different. I include server virtualization and containerization in the "not new" category: both are quite old in terms of invention, though in more recent history containers have found the more interesting applications. The reason I'm going down this path is that I frequently get questions about the Flex appliance: why did we choose a containerized architecture instead of using a hypervisor for an HCI data protection appliance like [insert company name here]? And, are you sure there's no hypervisor? Yes, I'm sure there's no hypervisor in Flex; it uses containers. Fundamentally there are good applications for both, and for entirely different types of workloads, so let's look at why we chose containers for the Flex Appliance instead of a hypervisor. Containers have their roots in FreeBSD "jails". Jails enabled FreeBSD clusters to be hardened and allowed "microservices" (to use a more modern turn of phrase) to be deployed onto those systems, the big advantage being very high levels of isolation, with each microservice running in its own jail. Containers then versus now are fairly different: FreeBSD jails were relatively primitive compared to something like Docker, but for their time they were state of the art, and they worked quite well. Which brings us to hypervisors. VMware started largely in test/dev. About the time it was entering production environments, we were also seeing significant uptake of widely adopted 64-bit x86 processors. Most applications were running on a single box and were 32-bit, single-threaded, single-core, and didn't need more than 4GB of RAM. Rapidly, the default server became 4 cores and 8GB of RAM, and those numbers kept climbing. The hypervisor fixed the resulting extremely poor utilization rates for many sets of applications. Today, most new applications are multi-core, multi-threaded, and RAM-hungry by design. 16-32 cores per box is normal, as is 128+ GB of RAM, and modern applications can soak all of that up effortlessly, making hypervisors less useful. Google has run containers at scale since 2004. In fact, they were the ones who contributed back "cgroups", a key foundational part of Linux containers and hence Docker. This is interesting because:
- Google values performance over convenience
- Google was writing multi-core, multi-threaded, "massive" apps sooner than others
- Google's apps required relatively large memory footprints before others
- Google's infrastructure requires application fault isolation

So, although virtualization existed, Google chose a lighter-weight route more in line with their philosophical approach and bleeding-edge needs, essentially "leapfrogging" virtualization. Here we are today, with the Veritas Flex Appliance and containers. Containers allow us to deliver an HCI platform with "multi-application containerization" on top of "lightweight virtualization", essentially leapfrogging HCI appliances built on a hypervisor for virtualization.
A comprehensive comparison of virtualization vs. containers is beyond the scope of this blog, but I thought I would briefly touch on some differences that I think are key and that help highlight why containers are probably the best fit for modern, hyper-converged appliances. Virtualization vs. containers:
- Isolation: virtualization isolates at the operating system level (guests can run different kernels); containers isolate at the application level (all share the host's OS kernel)
- Hardware: virtualization requires emulated or "virtual hardware" and associated "PV drivers" inside the guest OS; containers use the host's hardware resources and drivers (in the shared kernel)
- Packaging: virtual machines have standardized "packaging" (mostly; there is variance between hypervisors); containers have standardized packaging via Docker or one of the other container technologies
- Sweet spot: virtualization is optimized for groups of heterogeneous operating systems; containers are optimized for homogeneous operating system clusters

Here's another way to look at it, enterprise vs. cloud:
- Hardware: custom/proprietary vs. commodity
- HA type: hardware vs. software
- SLAs: five 9s vs. always on
- Scaling: vertical vs. horizontal
- Software: decentralized vs. distributed
- Consumption model: shared service vs. self service

What you see here is a fundamentally different approach to solving what might be considered a similar problem. In a world with lots of different 32-bit operating systems running on 64-bit hardware, virtualization is a fantastic solution. In a hyper-converged appliance environment that is homogeneous and running a relatively standardized 64-bit operating system (Linux) with 64-bit applications, only containers will do. The application services in the hyper-converged Flex Appliance are deployed, added, or changed in a self-service consumption model. They're isolated from a potential bad actor. The containers and their software services are redundant and highly available. Application services scale horizontally, on demand. One of the best party tricks of the Flex Appliance, which I didn't touch on above, is that containers fundamentally change how data protection services are delivered and updated. With the Flex Appliance, gone are the days of lengthy and risky software updates and patches. Instead, you quickly and safely deploy the new version in its own container on the same appliance. Put the service into production immediately, or run the old and new versions simultaneously until you're satisfied with the new one's functionality. We couldn't do any of this with a hypervisor, and this is why the Flex Appliance has leapfrogged other hyper-converged data protection appliances. I also refer you to this brief blog by Roger Stein for another view on Flex.
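To make the side-by-side idea concrete, here's roughly what it looks like in plain Docker terms (image and container names are hypothetical; Flex's own packaging and orchestration differ, this is just the underlying mechanic):

    # Deploy the new version next to the old one; both share the host kernel
    # but are isolated from each other
    docker run -d --name nbu-svc-old nbu-svc:7.0
    docker run -d --name nbu-svc-new nbu-svc:8.0

    # Once satisfied with the new version, retire the old container
    docker stop nbu-svc-old && docker rm nbu-svc-old

Rolling back is just as cheap: stop the new container, and the old one is still there, untouched.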

Accelerator for a VMWare Backup Policy with AIR

Hello experts, I am using NBU 7.7.3. I have a VMware policy querying a cluster for an annotation to back up some clients. This policy has 4 schedules. Each schedule has "Override policy storage selection" checked, with a corresponding AIR (Auto Image Replication) SLP as the selection. The storage selected in each SLP is an EMC Data Domain storage unit (using the OST plugin) set up as a source for AIR. Some screenshots are attached. With this setup, should the 'Use Accelerator' option be available for selection on the policy? Currently I see it grayed out.
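One thing worth checking before anything else (a sketch; exact output and flag names vary by OST plugin version): whether the Data Domain storage server is advertising the capabilities Accelerator needs. From the master server:

    # List storage servers with their capability flags; look for an
    # accelerator/OptimizedImage-style attribute on the Data Domain entry
    nbdevquery -liststs -U

If the plugin doesn't report the required capability, the policy checkbox would stay grayed out regardless of how the SLPs are configured.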

NBU 7.7.3 - Slow catalog backup

Experiencing poor data transfer speed when running catalog backups. The setup is as follows:
- VCS-clustered master servers running NBU 7.7.3
- OS: RHEL 6.6
- Storage unit is a Data Domain 990
- NIC: 10Gb/sec
- Catalog size approx. 1.6TB
- Catalog resides on SAN
- Observed data transfer speed in the area of 20-25MB/sec

I've tested the storage unit using GEN_DATA and can see transfer rates up to 200-300MB/sec from the master servers. Does anybody have an idea why the catalog backup uses only a small part of the bandwidth, and/or how to troubleshoot further?
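One way to split the problem (a sketch; paths below are placeholders for your actual catalog location): measure how fast the catalog filesystem itself can be read, independent of the network and the Data Domain, and compare that to the 20-25MB/sec you're seeing.

    # Raw sequential read from a large file on the SAN-hosted catalog filesystem
    dd if=/path/to/catalog/largefile of=/dev/null bs=1M

    # Or let NetBackup's own reader walk the catalog, discarding the output;
    # this also exercises the many-small-files pattern a catalog backup sees
    /usr/openv/netbackup/bin/bpbkar -nocont /path/to/catalog > /dev/null 2> /tmp/bpbkar.log

If these are also slow, the bottleneck is the SAN/filesystem (a 1.6TB catalog is millions of small files, which read far slower than GEN_DATA's synthetic streams); if they're fast, look at the network path and storage unit settings instead.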

How to clean up storage on the DSU for puredisk linux storage (NBU 7.5.0.5)

Good day all, I'm scanning a DSU used by a PureDisk service on a media server running NBU 7.5.0.5 on Linux Red Hat 5.6. I've found a lot of very old HISTORY/SEGMENTS files, 4 or more years old, occupying space:

history/segments/2012-08-07
history/segments/2012-08-08
history/segments/2012-08-09
history/segments/2012-08-10
history/segments/2012-08-11
history/segments/2012-08-12
history/segments/2012-08-13

The retention time for this solution is 2 weeks for all backups, so any history older than, let's say, 1 month (just to be generous) is a waste of space. How can I clean up what's no longer needed? And why does it still preserve all the segments, without compressing or deleting them automatically? Thank you. Regards
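I can't say whether those history/segments directories are safe to remove by hand (I'd assume not, since they belong to the dedup engine), but before deleting anything it's worth confirming that queue processing and garbage collection are actually running. A sketch, assuming this DSU is an MSDP/PDDO pool with the usual Linux paths:

    # State of the dedup container store (space used, garbage, etc.)
    /usr/openv/pdde/pdcr/bin/crcontrol --dsstat

    # Is the transaction queue backed up?
    /usr/openv/pdde/pdcr/bin/crcontrol --processqueueinfo

    # Kick off queue processing manually (it normally runs on a schedule)
    /usr/openv/pdde/pdcr/bin/crcontrol --processqueue

Expired images only turn into reclaimed disk space after queue processing completes, so a stalled queue would explain years-old segments surviving a 2-week retention.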

NBU wishlist

I don't see where else to put this, so I'll leave it here: make all admin commands work from the GUI. For example, to move a duplication job to another STU, it would be better if admins could do it with a right-click "move to...". Generally, make ALL admin work/commands available in the GUI too; it's easier and less prone to mistakes.

LTO cleaning tape definition with two different drives?

Hi all: I need some advice. I have a client with a Quantum i3 tape library with one LTO7 drive, one LTO8 drive, and 50 slots. Both drives are in the same partition, but since the client still has LTO6 media (written with an Oracle SL48 tape library, which was decommissioned), the LTO7 drive is hcart and the LTO8 is hcart2. As you know, LTO cleaning tapes are universal: no matter the LTO generation of the drive, the cleaning cartridge is the same. The library is configured to let the application manage all cleaning tapes. My current issue is how to achieve that universality with LTO cleaning tapes, or at least how to assign the cleaning tapes to the drives, given that the cleaning tapes all have nearly the same barcode, CLNXYZCU. The current Media ID generation rule is:

0,8,5:6:7:8:1:2
\_ robot number
  \_ barcode length

I was thinking that maybe I could use barcodes 000 to 400 to assign "HC_CLN - 1/2 inch cleaning tape" to the LTO7 drive and 500 to 999 to assign "HC2_CLN - 1/2 inch cleaning tape 2". But I'd like to hear options/advice from people with more experience.
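Not an authoritative answer, but to make the barcode-based split concrete: the media ID generation rule lives in vm.conf on the device host, and cleaning tapes are normally mapped to a cleaning media type by a barcode rule on the CLN prefix. A sketch matching the rule quoted above (robot 0, 8-character barcodes):

    # vm.conf: build the 6-character media ID from barcode
    # positions 5,6,7,8,1,2
    MEDIA_ID_BARCODE_CHARS 0 8 5:6:7:8:1:2

    # Inspect the existing barcode rules before changing anything
    vmrule -listall

Two barcode rules with distinct prefixes (one per numeric range) could then map each range to HC_CLN or HC2_CLN respectively; the exact media-type strings to use appear in the vmrule listing.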

NetBackup - NAS appliance and Offline Backups

Hi, all. We're looking at options for an offline DR backup and were wondering if anyone has tried using a NAS appliance to receive backup copies and then simply disconnecting it from the network once the backup completes. If so, what were your results? Thanks, Jason

Recovery Without import (very awkward situation)

Hello, I have run into a very awkward situation here. Our client is running NetBackup 6.0 (I know it's EOL) on Solaris, and they recently purchased NetBackup 8.1.2. They want to import all the backup info (all clients and their backups) from the old server into the new server, so that if any request to restore old data comes in, it can be served from the new server. Is it even possible to achieve this? I have been reading the "Recovery without import" white paper, and it says something about cat_export and cat_import steps, but the problem is there is no cat_export command on the source (NBU 6.0) server. I have done the step where it says to copy the contents of /usr/openv/netbackup/db/images to the destination server, but I can't browse the backups of any client from the old server, as there isn't any info about them in the NBU GUI (BAR window). I do have the catalog backup tape of the old server. Waiting for your kind response.
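A sketch of the two usual routes from the 8.1.2 side (the client name, media ID, and server name below are placeholders; verify against the white paper before relying on either):

    # After copying /usr/openv/netbackup/db/images from the old master,
    # check whether the old client's images are visible in the catalog
    bpimagelist -client oldclient01 -d 01/01/2005 -e 12/31/2012

    # The documented alternative when no export is possible: a phase 1 import,
    # which rebuilds catalog entries by reading the old tapes themselves
    bpimport -create_db_info -id A00001 -server newmaster

If the copied image files still don't show up in the BAR window, the flat-file catalog format from 6.0 may simply not be readable by 8.1.2, in which case importing from the media (phase 1, then phase 2 for the specific images you need) is the safer path.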