
NetBackup 7.1 Features - OpsCenter and OpsCenter Analytics Reporting Enhancements

Dave_High

Welcome to NetBackup 7.1! This series of blogs provides additional information about the key features of NetBackup 7.1, continuing today with the changes and enhancements to OpsCenter and OpsCenter Analytics. OpsCenter 7.1 adds a number of new reports to an already robust reporting set.

  • Enhanced SLP reporting, including the new Auto Image Replication (AIR) functionality
  • Reporting on Audit Trails – Phase Two
  • Workload Analyzer

This blog will take a quick look at each of these new reports and outline how they can assist in the reporting and management of the NetBackup environment.

Enhanced SLP Reporting

Storage Lifecycle Policies are used to automate the movement of backed-up data from one storage area to another. Auto Image Replication takes the Storage Lifecycle Policy a step further, allowing backup images to be replicated to another master server domain using a NetBackup policy rather than another form of replication. OpsCenter is used to track and report on the flow of data during these processes.

Using OpsCenter to report on Storage Lifecycle Policies, the following can be determined:

  • Whether the Storage Lifecycle Policy is performing according to schedule
  • Whether the additional copies have been made
  • Whether there is a backlog and whether it is getting worse

Since Auto Image Replication is part of Storage Lifecycle Policy reporting, there is not a separate set of reports for it; it is included as a drilldown within the Storage Lifecycle Policy reporting. SLP reporting is available for the following topics via drilldowns:

  • SLP Status by SLP
  • SLP Status by Destination
  • SLP Duplication Progress
  • SLP Status by Client
  • SLP Status by Image
  • SLP Status by Image Copy
  • SLP Daily Backlog

One item of note - Storage Lifecycle Policy reporting is only available on a NetBackup 7.1 master. However, if Storage Lifecycle Policies were used on a master running 7.0 and/or 7.0.1, then once that system is upgraded to NetBackup 7.1 the information about those earlier Storage Lifecycle Policies will be reported on. One other item of note - there is no customized reporting available for Storage Lifecycle Policies beyond the modifications that can be made to the basic reports in the Edit Report tab. This allows the user to change the dates being reported on as well as add minimal filtering on a View, on a Master Server, and on the Storage Lifecycle Policy name. No additional filters are available. The point is that there is no method to create a customized tabular report, nor is the information available via the SQL custom reporting offered with OpsCenter Analytics at this time. The point-and-click reports are the only information available for SLPs.

Reporting on the status of Storage Lifecycle Policies can help a customer understand when there is not enough hardware to handle the load that Storage Lifecycle Policies place on the backup infrastructure. This is especially important when it comes to backlog. Looking at backlog can help when there are issues (such as Storage Lifecycle Policies not working properly) or when the capacity of the existing infrastructure has been reached, so that additional hardware can be added – or the data being protected with Storage Lifecycle Policies can be reduced to a subset of the data in the environment.
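
To make the backlog idea concrete, here is a minimal Python sketch (not OpsCenter's implementation - the image fields are hypothetical) that computes a daily backlog trend from per-image records of data still waiting to be duplicated and flags whether that backlog is growing:

```python
from dataclasses import dataclass
from datetime import date
from collections import defaultdict

@dataclass
class SlpImage:
    backup_date: date      # day the backup image was written
    bytes_expected: int    # bytes the SLP must eventually copy for this image
    bytes_duplicated: int  # bytes already copied to the secondary destination

def daily_backlog(images):
    """Return {day: outstanding_bytes}, i.e. data still waiting to be duplicated."""
    backlog = defaultdict(int)
    for img in images:
        backlog[img.backup_date] += max(img.bytes_expected - img.bytes_duplicated, 0)
    return dict(sorted(backlog.items()))

def backlog_is_growing(trend, window=3):
    """Crude check: is the backlog strictly increasing over the last `window` days?"""
    values = list(trend.values())[-window:]
    return len(values) == window and all(a < b for a, b in zip(values, values[1:]))

# Hypothetical sample data: 500 GB, 600 GB and 700 GB of backups on three days
images = [
    SlpImage(date(2011, 3, 1), 500 * 2**30, 500 * 2**30),
    SlpImage(date(2011, 3, 2), 600 * 2**30, 400 * 2**30),
    SlpImage(date(2011, 3, 3), 700 * 2**30, 200 * 2**30),
]
trend = daily_backlog(images)
print(trend, "growing:", backlog_is_growing(trend, window=2))
```

A steadily rising trend is the signal to look for: it usually means either the duplication destination cannot keep up or the SLP has stalled.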

Auto Image Replication reporting, as a subset of the Storage Lifecycle Policy reporting, can help identify where the data is "located" at any given time. If Auto Image Replication is being used across a WAN, these reports can also help indicate whether enough bandwidth is in place for Auto Image Replication to work properly. Storage Lifecycle Policy reports are generated from the NetBackup catalog and the Storage Lifecycle Policy work list, collected via NBSL. Data for Storage Lifecycle Policies is collected from the master server every 10 minutes, so the data is not as current as regular NetBackup reporting data.

Audit Trails – Phase 2

In OpsCenter 7.1, additional reporting was added to track the Phase Two functionality of Audit Trails. The Audit Trails functionality was added to the base NetBackup product starting in 7.0.1. The new functionality in NetBackup 7.1 (Phase Two) includes:

  • Auditing of changes to bp.conf/registry entries if made using NetBackup
  • Auditing of changes to storage units
  • Auditing of changes to storage servers
  • Auditing of changes to volume pools
  • Audit trails enabled by default
  • Integration with OpsCenter for more detailed reporting

The ability to report on these new features with OpsCenter 7.1 makes them easier to track than using the command line with NetBackup. There is no support in the NetBackup Java GUI for reporting on Audit Trails, so any customer who wishes to view Audit Trail information in a GUI will need to use OpsCenter 7.1. If a GUI view is not needed, the "nbauditreport" command can be used from the master server command line.
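
As a minimal sketch, a script can shell out to nbauditreport and capture its output, for example to keep a copy of the audit report outside of NetBackup. The install paths below are the usual defaults, and any filter flags (such as a date range) should be verified against `nbauditreport -help` for your version:

```python
import subprocess
from pathlib import Path

# Typical locations of the NetBackup admin commands; adjust for your installation.
CANDIDATES = [
    Path("/usr/openv/netbackup/bin/admincmd/nbauditreport"),                      # UNIX/Linux
    Path(r"C:\Program Files\Veritas\NetBackup\bin\admincmd\nbauditreport.exe"),   # Windows
]

def run_audit_report(extra_args=None):
    """Run nbauditreport on the master server and return its raw text output.

    extra_args lets you pass filters (for example a date range); check
    `nbauditreport -help` on your master for the options your version supports.
    """
    cmd = next((p for p in CANDIDATES if p.exists()), None)
    if cmd is None:
        raise FileNotFoundError("nbauditreport not found - run this on the master server")
    result = subprocess.run([str(cmd)] + (extra_args or []),
                            capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_audit_report())
```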

Audit Trails are turned on by default in NetBackup 7.1, so OpsCenter (no Analytics key is required) can report on them by default. The 60-day reporting restriction for reports without an Analytics key is not enforced for Audit Trail reporting. Unless NBAC is installed and configured, NetBackup runs as root/administrator, so reporting without NBAC will only show changes made by root/administrator. To make Audit Trails truly useful, NBAC and granular users should be considered.

Why is the reporting in OpsCenter? Given the goal of a "single pane of glass" view into the NetBackup environment, placing the Audit Trail reporting in OpsCenter makes more sense than recreating the ability to report on it in the NetBackup GUI. The other advantage is that OpsCenter stores the data it receives from NetBackup in its own database, so even if the information is deleted from NetBackup, it remains available in OpsCenter. This can be very beneficial during an audit and provides a secondary layer of auditing in case someone has deleted the evidence from NetBackup.

Workload Analyzer

The Workload Analyzer is a report that provides information about activity across a period of seven days: the number of jobs running at a given time and the amount of data transferred during that time. The report contains 168 data points of analysis (one for each hour over the seven days), displayed in "heat map" format. It is made up of four reports based on job count, job size, job queue, and job throughput.

  • Time covers the point at which jobs start, the periods for which the jobs are active, and the point at which the jobs end.
  • Job queue is based on the length of time a job waits in the queue before it is initiated.
  • Job size is the amount of data that is moved from the source to the destination, depending on the job type. Job size is also based on how much of that data is moved in a given hour.
  • Job throughput is based on the data transmission rate.

The Workload Analyzer considers only completed jobs when calculating and generating the report. For example, a job that started in the past and sat in the queued state for a long time is not counted in the Job Queue report until it completes.
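
As a conceptual illustration (this is not OpsCenter's internal logic, and the job fields shown are hypothetical), the sketch below buckets completed jobs into the 7 x 24 = 168 hourly cells described above, counting a job in every hour it was active and prorating its size across those hours. It assumes the input spans a single week, matching the report's weekly scope:

```python
from datetime import datetime, timedelta

HOUR = timedelta(hours=1)

def hourly_cells(start: datetime, end: datetime):
    """Yield each hour a job was active in, plus the fraction of the job's
    runtime that falls into that hour (used to prorate job size)."""
    total = (end - start).total_seconds() or 1.0
    cell = start.replace(minute=0, second=0, microsecond=0)
    while cell < end:
        overlap = min(end, cell + HOUR) - max(start, cell)
        yield cell, overlap.total_seconds() / total
        cell += HOUR

def workload_heat_map(completed_jobs):
    """Build {(weekday, hour): {"jobs": n, "kbytes": k}} over one week of completed jobs."""
    grid = {(d, h): {"jobs": 0, "kbytes": 0.0} for d in range(7) for h in range(24)}
    for start, end, kbytes in completed_jobs:
        for cell, fraction in hourly_cells(start, end):
            key = (cell.weekday(), cell.hour)
            grid[key]["jobs"] += 1                    # job was active in this hour
            grid[key]["kbytes"] += kbytes * fraction  # prorated data moved this hour
    return grid

# Hypothetical sample: one backup running 22:30-01:15 that moved 4 GB
jobs = [(datetime(2011, 3, 1, 22, 30), datetime(2011, 3, 2, 1, 15), 4 * 1024 * 1024)]
grid = workload_heat_map(jobs)
print(grid[(1, 22)], grid[(2, 0)])   # the Tuesday 22:00 cell and the Wednesday 00:00 cell
```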

There are four parts of the Workload Analyzer:

  • Job Count Workload Analyzer: This report is based on the number of jobs running at a given period of time. Its purpose is to display the load over time.
  • Job Size Workload Analyzer: This report is based on the amount of data transferred during a given period of time. The calculation is the total amount of data backed up by the jobs, divided by the time period.
  • Job Queue Workload Analyzer: This report is based on the period for which jobs are in a queued state before data backup begins. Queue time is calculated from job timestamps (when the job was initiated, when it started, and when it ended). Queuing is calculated at the individual job level and then aggregated to the level at which the report is generated.
  • Job Throughput Workload Analyzer: This report is based on the rate, in KB per second, at which data is transferred. This calculation provides an important indicator for understanding backup performance (see the sketch after this list).
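
The queue and throughput figures come down to simple per-job calculations before they are rolled up into the hourly grid. A hedged sketch of those two calculations (the field names are hypothetical, not OpsCenter's schema):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CompletedJob:
    # Hypothetical fields; OpsCenter derives the equivalents from NetBackup job records.
    queued_at: datetime    # when the job entered the queue
    started_at: datetime   # when data transfer actually began
    ended_at: datetime     # when the job finished
    kbytes: int            # data written by the job

def queue_seconds(job: CompletedJob) -> float:
    """Time the job waited in the queue before it was initiated."""
    return (job.started_at - job.queued_at).total_seconds()

def throughput_kb_per_sec(job: CompletedJob) -> float:
    """Data transfer rate while the job was active, in KB per second."""
    active = (job.ended_at - job.started_at).total_seconds()
    return job.kbytes / active if active > 0 else 0.0

job = CompletedJob(
    queued_at=datetime(2011, 3, 1, 22, 0),
    started_at=datetime(2011, 3, 1, 22, 25),   # waited 25 minutes in the queue
    ended_at=datetime(2011, 3, 1, 23, 55),     # ran for 90 minutes
    kbytes=2_000_000,
)
print(queue_seconds(job), round(throughput_kb_per_sec(job), 1))
```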

While various filters can be applied to the report, it can only display the information in a weekly span, and it can only be emailed in HTML format.

The Workload Analyzer reports can help the customer understand where there are available resources that can be used for backup and recovery. In many environments it seems that backups run 24/7; however, they may not actually be using all of the resources available, which reduces ROI. The Workload Analyzer can also show things such as jobs staying too long in the queue - which means that too many backups might be starting at one time or that there are not enough resources in the infrastructure to handle the jobs. Becoming familiar with this report can help postpone hardware purchases by showing where additional backups can be added without impacting the current environment.

Based on information retrieved from NetBackup, the heat map report computes when resources were in use based on a number of factors, including how long jobs were queued and how many jobs were run compared to the resources available at the time. The information is derived from a number of areas within NetBackup, processed, and presented as the Workload Analyzer report.

Hopefully this information on OpsCenter and OpsCenter Analytics has been helpful. This blog concludes the coverage of the major features in NetBackup 7.1. Join us tomorrow for a look at some of the minor features and enhancements in NetBackup 7.1.