
Question about duplication to tape

MickBoosh
Level 5

Hi,

I am getting some status 190 errors which, on closer inspection, were caused by the images on the staging area expiring before they had been duplicated to tape. I have always been under the impression that duplication to tape takes place quickly (the same day) after the backup is written to disk, and then, once the high water mark is hit on the staging area, the image becomes a candidate for deletion from disk?

 

The backup for this particular policy was created a week ago, so it hasn't gone to tape for a week; is that right? Our staging schedule runs from midnight to 5pm daily, so it should have enough time. Also, all of the problems are from the same policy, which backs up SQL transaction logs.

 

Is a week too long for it to be duplicated? Should it be quicker than this? 
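As an aside, one way to see whether a given period's images have actually been duplicated yet is the verbose image listing on the master. This is only a rough sketch; the client name, date range and date format would all need adjusting to match your environment:

bpimagelist -L -client <client> -d 01/08/2012 -e 01/15/2012

The verbose (-L) output includes a copy count for each image, so anything still showing a single copy has not been duplicated to tape yet.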


MickBoosh
Level 5

 

Label:                <master>-STU-3Drives
Storage Unit Type:    Media Manager
Host Connection:      <master>
Number of Drives:     4
On Demand Only:       no
Max MPX/drive:        1
Density:              hcart3 - 1/2 Inch Cartridge 3
Robot Type/Number:    TLD / 0
Max Fragment Size:    10240 MB
 
Label:                <master>-STU-2Drive
Storage Unit Type:    Media Manager
Host Connection:      <master>
Number of Drives:     2
On Demand Only:       yes
Max MPX/drive:        1
Density:              hcart3 - 1/2 Inch Cartridge 3
Robot Type/Number:    TLD / 0
Max Fragment Size:    10240 MB
 
Label:                <mediaserver>-STU-3Drives
Storage Unit Type:    Media Manager
Host Connection:      <mediaserver>
Number of Drives:     4
On Demand Only:       no
Max MPX/drive:        1
Density:              hcart - 1/2 Inch Cartridge
Robot Type/Number:    TLD / 1
Max Fragment Size:    102400 MB
 
Label:                <master>-hcart3
Storage Unit Type:    Media Manager
Host Connection:      <master>
Number of Drives:     4
On Demand Only:       no
Max MPX/drive:        1
Density:              hcart3 - 1/2 Inch Cartridge 3
Robot Type:           (not robotic)
Max Fragment Size:    1048576 MB
 
Label:                <mediaserver>-hcart-robot-tld-1
Storage Unit Type:    Media Manager
Host Connection:      <mediaserver>
Number of Drives:     4
On Demand Only:       no
Max MPX/drive:        1
Density:              hcart - 1/2 Inch Cartridge
Robot Type/Number:    TLD / 1
Max Fragment Size:    1048576 MB
 
Label:                <master>-hcart3-robot-tld-0
Storage Unit Type:    Media Manager
Host Connection:      <master>
Number of Drives:     4
On Demand Only:       no
Max MPX/drive:        1
Density:              hcart3 - 1/2 Inch Cartridge 3
Robot Type/Number:    TLD / 0
Max Fragment Size:    1048576 MB
 
Label:                Disk_Staging
Storage Unit Type:    Disk
Storage Unit Subtype: Basic (1)
Host Connection:      <master>
Concurrent Jobs:      12
On Demand Only:       yes
Max MPX:              1
Path:                 "F:\Disk_Staging_Area"
Max Fragment Size:    10240 MB
Stage data:           yes
Block Sharing:        no
High Water Mark:      90
Low Water Mark:       75
Ok On Root:           no
 
Label:                <mediaserver>-Disk_Staging
Storage Unit Type:    Disk
Storage Unit Subtype: Basic (1)
Host Connection:      <mediaserver>
Concurrent Jobs:      12
On Demand Only:       yes
Max MPX:              1
Path:                 "D:\DiskStaging"
Max Fragment Size:    10240 MB
Stage data:           yes
Block Sharing:        no
High Water Mark:      92
Low Water Mark:       80
Ok On Root:           no

MickBoosh
Level 5

 

Hmmm, so if I'm reading this right, it is duplicating and then for some reason trying to do it again, approximately a week later? Our maximum copies setting is 2, by the way.

However, the images in question don't seem to have been duplicated:

1326619940
1326619980
1326620030

This is the full output from the search, 15/01 to present.

Edit: this is the full SQL backup and not the transaction logs, I've just realised.
 
AAAAAAA_1326650445 15/01/2012 18:00:45 SQL_Backup Default-Application-Backup <master> XX4725 2 No
AAAAAAA_1326650819 15/01/2012 18:06:59 SQL_Backup Default-Application-Backup <master> XX4725 2 No
AAAAAAA_1326651088 15/01/2012 18:11:28 SQL_Backup Default-Application-Backup <master> XX4725 2 No
AAAAAAA_1326651436 15/01/2012 18:17:16 SQL_Backup Default-Application-Backup <master> XX4802 2 No
AAAAAAA_1326651898 15/01/2012 18:24:58 SQL_Backup Default-Application-Backup <master> XX4802 2 No
AAAAAAA_1326652204 15/01/2012 18:30:04 SQL_Backup Default-Application-Backup <master> XX4802 2 No
AAAAAAA_1326652701 15/01/2012 18:38:21 SQL_Backup Default-Application-Backup <master> XX4803 2 No
AAAAAAA_1326653131 15/01/2012 18:45:31 SQL_Backup Default-Application-Backup <master> XX4803 2 No
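To check those three images directly, the backup ID form of the image listing might help; a sketch, assuming the same AAAAAAA client prefix as the entries above:

bpimagelist -L -backupid AAAAAAA_1326619940

If the image still exists in the catalog, the verbose output should list its copies; if nothing comes back (typically a "no entity was found" message), the catalog no longer has an entry for it at all.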

Mark_Solutions
Level 6
Partner Accredited Certified

So you have:

1326619940 - 9.30am 15th
1326619980 - 9.33am 15th
1326620030 - 9.33am 15th

All three backups happened at about the same time, so it could be that there was a glitch in the system somewhere around then.

Can you double-check whether there is anything related to these images actually on the disk areas? Anything starting with AAAAAAA_1326619940 (or was it BBBBBB?); if so, what are the files called?

If not, then you may need to clear them out of the catalog, but let's look at the disk first.

The reason I asked about the DSSU schedule was to see if it is running multiple times before the last run has finished, either because of the 1-hour interval (this was fixed in a patch, but I thought that was in an earlier version of NBU) or because it is being run manually.
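A quick way to check, assuming the staging paths from the storage unit listing earlier (F:\Disk_Staging_Area on the master and D:\DiskStaging on the media server), would be something like the following, run on the respective host:

dir /s /b F:\Disk_Staging_Area\*1326619940*
dir /s /b D:\DiskStaging\*1326619940*

Repeat for the other two ctimes (1326619980 and 1326620030) to cover all three images.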

MickBoosh
Level 5

> To clarify 3. - how often do you see the policy named DSSU_diskstagingname in the activity monitor?

 

It appears between 6 and 10 times every day; on the 15th, for instance, it appeared 8 times. All finish with status 0.

MickBoosh
Level 5

I just replied with regard to the DSSU frequency.

For instance, here is the 15th; they seem reasonably well spread out.

I did say from midnight to 5pm but it's actually 6pm. My bad.

 

2474437 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 00:00:00 15/01/2012 06:31:32 1
2475440 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 06:32:31 15/01/2012 07:37:32 1
2475597 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 07:38:25 15/01/2012 08:50:22 1
2475823 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 08:51:17 15/01/2012 09:59:12 1
2475982 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 10:00:02 15/01/2012 13:50:52 1
2476632 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 13:51:46 15/01/2012 16:57:22 1
2477245 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 16:58:21 15/01/2012 17:43:22 1
2477356 Backup Done 0 __DSSU_POLICY_Disk_Staging - <master> <master> 15/01/2012 17:58:22 15/01/2012 21:37:52 1
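For reference, a listing like this can also be pulled from the command line rather than the Activity Monitor; a sketch, assuming a Windows master (hence findstr) and that the DSSU policy name matches the one shown above:

bpdbjobs -all_columns | findstr __DSSU_POLICY_Disk_Staging

The column layout differs from the Activity Monitor view, but the start/end times and the job status are in there.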

Mark_Solutions
Level 6
Partner Accredited Certified

They all look fine - none starting before the previous one had finished and all with a status of 0 - so that is good!

So it is down to either rogue fragments on disk or an incorrect catalog entry now. Check the disk first; if there is nothing there, then those few backups have gone for good and we just need to do some cleanup.

MickBoosh
Level 5

Ah right, how do I go about looking for an incorrect catalog entry? Thanks for your help with this, Mark, much appreciated!

Mark_Solutions
Level 6
Partner Accredited Certified

First I would try running:

nbdelete -allvolumes -force

to see if this does a full cleanup and makes the error disappear (it is not as dangerous as the command sounds!).

If that makes no difference then you can try running:

bpexpdate -backupid AAAAAAA_1326619940 -copy 1 -d 0 -force

This removes the catalog entry for this image even if it cannot be found on disk; if anything is on disk, it will remove the disk image as well (but you have said that it is not on disk).

Do this for each image that no longer exists and is causing the problem, so that all references to it are removed.
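Put together for the three problem images, the cleanup plus a follow-up check might look roughly like this, run on the master; the AAAAAAA client prefix on the second and third IDs is assumed to be the same as the first:

bpexpdate -backupid AAAAAAA_1326619940 -copy 1 -d 0 -force
bpexpdate -backupid AAAAAAA_1326619980 -copy 1 -d 0 -force
bpexpdate -backupid AAAAAAA_1326620030 -copy 1 -d 0 -force
nbdelete -allvolumes -force
bpimagelist -backupid AAAAAAA_1326619940

Once the catalog entry is gone, the final bpimagelist should return nothing for that backup ID (typically a "no entity was found" message).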

MickBoosh
Level 5

Thanks, I'll give it a go tomorrow, must dash!

Again, thanks for your help!