Forum Discussion

George_Dicu
10 years ago

OpsCenter Custom Report

Hello,

 

So I have OpsCenter 7.6.0.3 and I'm trying to create a report on some clients in order to see only the active data on our dedup pools/tapes (basically everything).

None of the predefined reports cover this, or maybe I missed it.

Is there any way to create a monthly report for a particular client/policy that shows the total amount of active/protected (NOT expired) data in our NetBackup environment?

 

//George

8 Replies

  • Hello,

     

    You can try this one

     

    Protected / Written Report per Policy Type per Day

    select
    COALESCE(CAST(SUM(domain_JobArchive.bytesWritten)/1024.0/1024.0/1024.0 AS NUMERIC (20,2)), 0) AS 'Written (GB)',
    COALESCE(CAST(SUM(domain_JobArchive.preSISSize)/1024.0/1024.0/1024.0 AS NUMERIC (20,2)), 0) AS 'Pre Dedup Size (GB)',
    COALESCE(CAST((sum(domain_JobArchive.preSISSize) - sum (domain_JobArchive.bytesWritten))/sum(domain_JobArchive.preSISSize)*100 AS NUMERIC(20,2)),0) as "Dedupe %",
    DATEFORMAT(cast(UTCBigintToNOMTime(domain_JobArchive.endTime) as DATETIME), 'yyyy-mm-dd') as myDateTime,
    domain_JobArchive.policyType as "Policy Type",
    lookup_JobType.name as 'Job Type'
    FROM domain_JobArchive, lookup_JobType WHERE
    lookup_JobType.name='Backup'
    AND domain_JobArchive.policyType NOT IN (-1)
    AND DATEDIFF(day,UTCBigIntToNomTime(domain_JobArchive.endTime), GETDATE()) <= 24
    group by "MyDateTime", "Policy Type", "name" order by "MyDateTime"
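To sanity-check the "Dedupe %" expression in the query above outside the database, here is a minimal sketch (plain Python; the byte counts are made-up example values):

```python
def dedupe_percent(pre_sis_bytes, written_bytes):
    # Space savings: (pre-dedup size - written size) / pre-dedup size * 100,
    # mirroring the COALESCE/CAST expression in the SQL above.
    if pre_sis_bytes == 0:
        return 0.0
    return round((pre_sis_bytes - written_bytes) * 100.0 / pre_sis_bytes, 2)

# Hypothetical job totals: 500 GB pre-dedup, 80 GB actually written.
print(dedupe_percent(500 * 1024**3, 80 * 1024**3))  # 84.0
```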

  • Hi xao,

    My two cents on Riaan's post: I added a join over lookup_JobType and removed the filtering by backups only. It should include SLPs and Vault, if you have them.

    SELECT
    
    COALESCE(CAST(SUM(domain_JobArchive.bytesWritten)/1024.0/1024.0/1024.0 AS NUMERIC (20,2)), 0) AS "Written (GB)",
    COALESCE(CAST(SUM(domain_JobArchive.preSISSize)/1024.0/1024.0/1024.0 AS NUMERIC (20,2)), 0) AS "Pre Dedup Size (GB)",
    COALESCE(CAST((sum(domain_JobArchive.preSISSize) - sum (domain_JobArchive.bytesWritten))/sum(domain_JobArchive.preSISSize)*100 AS NUMERIC(20,2)),0) as "Dedupe %",
    DATEFORMAT(cast(UTCBigintToNOMTime(domain_JobArchive.endTime) as DATETIME), 'yyyy-mm-dd') as "Jobs End Time",
    lookup_PolicyType.name as "Policy Type",
    lookup_JobType.name as "Job Type"
    
    FROM 
    domain_JobArchive, lookup_JobType, lookup_PolicyType
    
    WHERE
    lookup_PolicyType.id = domain_JobArchive.policyType AND
    lookup_JobType.id = domain_JobArchive.jobType AND
    
    DATEDIFF(hour,UTCBigIntToNomTime(domain_JobArchive.endTime), GETDATE()) <= 24
    
    GROUP BY "Jobs End Time", "Policy Type", "Job Type"
    
    ORDER BY "Jobs End Time"
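One thing to note about the time filter: my understanding of Sybase's DATEDIFF is that it counts date-part boundaries crossed rather than elapsed time, so DATEDIFF(hour, endTime, GETDATE()) <= 24 keeps roughly (not exactly) the last day of jobs. A rough illustration of that boundary-counting reading, sketched in plain Python:

```python
from datetime import datetime

def datediff_hours(start, end):
    # Approximates DATEDIFF(hour, start, end): number of hour
    # boundaries crossed between the two timestamps, not elapsed hours.
    trunc = lambda dt: dt.replace(minute=0, second=0, microsecond=0)
    return int((trunc(end) - trunc(start)).total_seconds() // 3600)

# A job that ended 23h50m before "now" still counts as 24 hour boundaries.
print(datediff_hours(datetime(2015, 3, 1, 10, 55),
                     datetime(2015, 3, 2, 10, 45)))  # 24
```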

     

  • Hi,

     

    Thank you for this script.

     

    Do you think you can modify it for a bigger time frame, like a few years, and show the data for each month?

    I don't think that level of granularity (Job Type, job status and so on) is needed for a time frame of years shown monthly... just the non-expired data of a particular policy name, each month.

    Thank You

    //George

     

     

  • SELECT
    
    COALESCE(CAST(SUM(domain_JobArchive.bytesWritten)/1024/1024/1024 AS NUMERIC (20,2)), 0) AS "Written (GB)",
    COALESCE(CAST(SUM(domain_JobArchive.preSISSize)/1024/1024/1024 AS NUMERIC (20,2)), 0) AS "Pre Dedup Size (GB)",
    COALESCE(CAST((sum(domain_JobArchive.preSISSize) - SUM(domain_JobArchive.bytesWritten))/( SUM(domain_JobArchive.preSISSize)+1) *100 AS NUMERIC(20,2)),0) as "Dedupe %",
    DATEFORMAT(cast(UTCBigintToNOMTime(domain_JobArchive.endTime) as DATETIME), 'yyyy-mm') as "Jobs End Time",
    lookup_PolicyType.name as "Policy Type",
    lookup_JobType.name as "Job Type"
    
    FROM 
    domain_JobArchive, lookup_JobType, lookup_PolicyType
    
    WHERE
    lookup_PolicyType.id = domain_JobArchive.policyType AND
    lookup_JobType.id = domain_JobArchive.type AND
    ( lookup_JobType.name = 'Backup' OR lookup_JobType.name = 'Duplication' OR lookup_JobType.name = 'Vault')  AND
    lookup_PolicyType.name != '-' AND
    
    DATEDIFF(year,UTCBigIntToNomTime(domain_JobArchive.endTime), GETDATE()) <= 1
    
    GROUP BY "Jobs End Time", "Policy Type",  "Job Type"
    
    ORDER BY "Jobs End Time"

    Try this.

  • Damn, I really need to get into Sybase.

     

    The report is great in this form, but it's excluding all other types of jobs.

    Is the Written (GB) column the entire written history from the beginning of the backups?

    I want to see the active data (not expired, and not deduped) from one particular client going back x years, broken down month by month.

    Is this possible?

    //George

  • Hi xao,

    Those types are the only ones that actually write data.

    Written (GB) is the sum of all "KB written" values by policy type from the Activity Monitor, per month.

    >> I want to see the active data (not expired, and not deduped) from one particular client going back x years, broken down month by month.

    Be prepared to wait for the query to execute; that is a heavy one.

    SELECT
    
    COALESCE(CAST(SUM(domain_JobArchive.bytesWritten)/1024/1024/1024 AS NUMERIC (20,2)), 0) AS "Written (GB)",
    COALESCE(CAST(SUM(domain_JobArchive.preSISSize)/1024/1024/1024 AS NUMERIC (20,2)), 0) AS "Pre Dedup Size (GB)",
    COALESCE(CAST((sum(domain_JobArchive.preSISSize) - SUM(domain_JobArchive.bytesWritten))/( SUM(domain_JobArchive.preSISSize)+1) *100 AS NUMERIC(20,2)),0) as "Dedupe %",
    DATEFORMAT(cast(UTCBigintToNOMTime(domain_JobArchive.endTime) as DATETIME), 'yyyy-mm') as "Jobs End Time",
    lookup_PolicyType.name as "Policy Type",
    lookup_JobType.name as "Job Type",
    domain_JobArchive.clientName as "Client"
    FROM 
    domain_JobArchive, lookup_JobType, lookup_PolicyType
    WHERE
    lookup_PolicyType.id = domain_JobArchive.policyType AND
    lookup_JobType.id = domain_JobArchive.type AND
    ( lookup_JobType.name = 'Backup' OR lookup_JobType.name = 'Duplication' OR lookup_JobType.name = 'Vault')  AND
    lookup_PolicyType.name != '-' AND
    
    domain_JobArchive.clientName LIKE '%<YOURNAME>%' -- Put client name or part of it here
    
    GROUP BY "Jobs End Time", "Policy Type",  "Job Type", "Client"
    
    ORDER BY "Client", "Jobs End Time" DESC

     

  • Sorry, I missed the "active" part of your request.

    My guess: your next question will be distribution by policy and schedule, and whether the SLP has finished. =)

    For catalog assessment I use the following script plus Excel pivot tables.

    Python 2.x; the fields are:

    backup_id, client, policy, schedule, backup date, expire date, SLP, size (KB), SLP status (3 = complete), residence.

    __author__ = 'Pavel Voroapev'
    # Parses the output of: bpimagelist -l -d "01/01/2000 00:00:00"
    # into a semicolon-separated CSV, one line per backup image (Python 2.x).
    from codecs import open
    from datetime import datetime
    from re import split
    import os
    import argparse
    
    parser = argparse.ArgumentParser()
    parser.add_argument("-p", "--path", help='path to bpimagelist -l -d "01/01/2000 00:00:00" output', required=True)
    args = parser.parse_args()
    filepath = args.path
    (_, out) = os.path.split(filepath)
    out += "out.csv"
    handle = open(filepath, 'r', encoding='utf-8')
    data = handle.readlines()
    imglist = dict()
    for line in data:
        if line[:5] == u'IMAGE' or line[1:6] == u'IMAGE':
            # IMAGE line: start a new record, keyed by backup_id (field 5)
            image = split(' ', line)
            imglist[image[5]] = list()
            imglist[image[5]].append(image[1])
            imglist[image[5]].append(image[6])
            imglist[image[5]].append(image[10])
            # backup date and expire date are epoch seconds; make them readable
            imglist[image[5]].append(datetime.fromtimestamp(float(image[13])).strftime('%Y-%m-%d %H:%M:%S'))
            imglist[image[5]].append(datetime.fromtimestamp(float(image[15])).strftime('%Y-%m-%d %H:%M:%S'))
            imglist[image[5]].append(image[20])
            imglist[image[5]].append(image[51])
            imglist[image[5]].append(image[18])
            imglist[image[5]].append(image[52])
        elif line[:4] == 'FRAG':
            # FRAG line: collect the distinct residences of the current image
            frag = split(' ', line)
            if len(frag) > 4:
                if len(imglist[image[5]]) == 9:
                    imglist[image[5]].append(frag[8])
                    FRAG = frag[8]
                elif len(imglist[image[5]]) == 10:
                    if FRAG != frag[8]:
                        FRAG = frag[8]
                        imglist[image[5]][9] = imglist[image[5]][9] + ',' + FRAG
    handle.close()
    
    # Write one semicolon-separated line per image
    handle = open(out, 'w', encoding='utf-16')
    for image in imglist:
        line = image + u';'
        for val in imglist[image]:
            line += val + u';'
        line += u"\u000A"
        handle.write(line)
    handle.close()
    

     

    Note that the image list is expected to be UTF-8 encoded, so you may need to run it through iconv first.
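If you would rather skip the Excel pivot step, the same monthly "still active" roll-up can be sketched in a few lines of Python (the backup date / expire date / size columns follow the CSV produced above; the sample rows are made up):

```python
from datetime import datetime

def active_kb_per_month(rows, now=None):
    # Sum size (KB) of images whose expire date is still in the future,
    # bucketed by backup month ('YYYY-MM') -- the same roll-up the
    # Excel pivot table does on the script's CSV output.
    now = now or datetime.now()
    totals = {}
    for backup_date, expire_date, size_kb in rows:
        if datetime.strptime(expire_date, '%Y-%m-%d %H:%M:%S') > now:
            month = backup_date[:7]
            totals[month] = totals.get(month, 0) + int(size_kb)
    return totals

# Made-up sample rows: (backup date, expire date, size in KB)
rows = [
    ('2015-01-10 02:00:00', '2016-01-10 02:00:00', '1024'),
    ('2015-01-20 02:00:00', '2015-02-20 02:00:00', '2048'),  # already expired
    ('2015-02-05 02:00:00', '2016-02-05 02:00:00', '4096'),
]
print(sorted(active_kb_per_month(rows, now=datetime(2015, 6, 1)).items()))
# [('2015-01', 1024), ('2015-02', 4096)]
```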

     

  • I tried to run this query, but got an error. Could you please combine both queries and paste them here again? Thanks.