Weird and random failure - The block size being used is incorrect

systematic92
Level 5

Morning gents,

I am experiencing a really weird and inconsistent issue with my backups. The error revolves around the following:

Backup- GUINNESS-FAS02:10000
An unknown error occurred on device "HP 0007".
V-79-57344-34035 - The block size being used is incorrect.
 

So our setup is simple:

BackupExec 2010 R3
 

1 HP robotic tape library called HP003. The model of this library is MSL4048 / MSL G3 Series.

4 tape drives as part of the library, called HP004, HP006, HP0007 and HP0008. The model of each of these drives is HP Ultrium 5-SCSI.

We have two NetApp devices, each of which has 2 controllers (NetApp1 consists of controllers NetApp-A and NetApp-B, and the NetApp2 device consists of controllers NetApp-C and NetApp-D).
 

We have 2 device pools: the first consists of 3 drives (HP0004, HP0006 and HP0007) and backs up data from the NetApp1 device; the second consists of 1 drive (HP0008) and serves backups from the NetApp2 device.

The drives all have the same settings:

Enable compression
Block size (per device) = 64KB
Buffer size (per device) = 256KB
Buffer count = 20
High water count = 10
Read single block mode = disabled
Write single block mode = disabled
Read SCSI pass-through mode = disabled
Write SCSI pass-through mode = disabled
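For anyone following along, those numbers fit together in a simple way. A minimal sketch of the arithmetic, assuming (this is an assumption about BE internals, not documented behaviour) that Backup Exec stages data in `buffer_count` buffers of `buffer_size` bytes per active drive:

```python
# Sketch of what the per-drive settings above imply.
# Assumption (not from BE documentation): BE allocates `buffer_count`
# buffers of `buffer_size` bytes for each active tape drive.

KB = 1024

block_size = 64 * KB      # Block size (per device)
buffer_size = 256 * KB    # Buffer size (per device)
buffer_count = 20
high_water = 10

# Each buffer holds a whole number of tape blocks.
blocks_per_buffer = buffer_size // block_size       # 4 blocks per buffer

# Upper bound on staging memory a single drive can consume.
total_buffer_bytes = buffer_size * buffer_count     # 5 MB per drive

print(f"{blocks_per_buffer} blocks per buffer, "
      f"{total_buffer_bytes // KB} KB of buffer memory per drive")
```

The point of the sketch is only that buffer size should stay a multiple of block size, so changing one without checking the other is where mismatch errors tend to creep in.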

So this error is really strange... when a daily incremental fails, I leave it and do absolutely nothing. The very next day, when the scheduled job runs, it completes fine! What gives?

I think I recall a backup job failing during the early hours with the same error, and when I came in in the morning and hit retry job, it worked and completed fine!

I can't understand these random successes/failures and was hoping you guys might be able to chime in with your expertise. Is there a way that I can avoid any future failures?

Thanks,

Kunal_Mudliyar1
Level 6
Employee Accredited

Yes, if you see issues just revert back to the old settings; it's like tuning the tape drive for the best setting.

 

CraigV
Moderator
Partner    VIP    Accredited

Well yes & no...what you weren't told in the post directly above is that if the block size is incorrect, you risk corrupting your backups...hence you won't be able to restore.

So while you can simply default the settings back to 64K, you'd have more important issues around restores...so you make a change, test a restore. See how that impacts things. Do this until you reach an optimal setting.

Thanks!

systematic92
Level 5

When you say risk corrupting my backups - will this be immediately evident? For example, I have changed the block size on all 4 drives now, so if the jobs are successful tonight without producing an error, then I'm assuming I am good to go?

With respect to restores: say there were corrupted backups tonight (now that the settings have changed to 256), and I revert back to the older settings of 64K, surely only the restores for the failed backups from last night will prove problematic, right?

It just so happens that I have a restore for today. This restore will be from tapes that were backed up when the settings were defaulted at 64K. I'm hoping the restore is fine now that the drives have been configured for 256.

 

 

Kunal_Mudliyar1
Level 6
Employee Accredited

If you back up using the 256 setting and then default back to 64, it will not affect the restore.

If the backup is successful, it can be restored.

CraigV
Moderator
Partner    VIP    Accredited

...you're good-to-go as long as ANY changes made to the block size are tested via restore. They will affect the next backup job that runs...nothing prior to this.

Thanks!

systematic92
Level 5

Craig/Kunal,

Gents, you are both legends, but I need to clarify your last 2 statements.

So if I change to 256 (which I have done) and my backups complete successfully, then I should assume that any restores I perform from that point onwards will be OK?

If any backups that I have taken with the new settings prove problematic with restores, I can simply default back to 64 and should be able to restore the data fine?

So changing the block size isn't really a bad thing unless there is a problem with a restore, which is easily remediated by defaulting back to 64K and re-running the restore job.

Is that correct, legends?

CraigV
Moderator
Partner    VIP    Accredited

NEVER assume anything in IT. Make your change and test a restore of some data. This is the only thing you will have to fall back on as proof. In theory you should be able to restore using the lower block setting, but this is also something you should test.

I always used a block size of 128K which worked well, and never had to default to 64K, so I can't speak from experience and tell you for definite it is possible.

Likewise, anybody who hasn't done this can't tell you for definite either... CYA (cover your @ss) and test both scenarios before settling on this.

Thanks!

Kunal_Mudliyar1
Level 6
Employee Accredited

YES to all three questions/statements.

Block size is simply the size of the blocks of data written by the tape drive to the tape.

But as Craig said, CYA!

pkh
Moderator
   VIP    Certified

What are your objectives in changing the buffer size?  Do you know the effects of such a change?

systematic92
Level 5

My end goal is to increase the performance/throughput of backups.

I've noticed a slight increase on jobs from one NetApp controller, but the other NetApp controller is BLISTERINGLY faster. The latter was originally backing up at 7000MB/min; it now backs up at 14500MB/min.

The reason for the change in buffer was predominantly down to online reports from owners suggesting the change in block size increased the speed/performance.
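Those job rates can be sanity-checked against the drive itself. The published LTO-5 native (uncompressed) transfer rate is 140 MB/s, so a sustained job rate above that implies the drive's hardware compression is doing real work on the stream. A rough check using the figures from the post (the 140 MB/s value is the published spec; everything else is the numbers above):

```python
# Sanity check of the reported job rates against LTO-5 drive speed.
# 140 MB/s is the published LTO-5 native (uncompressed) transfer rate.

LTO5_NATIVE_MB_PER_MIN = 140 * 60   # 8400 MB/min

before = 7000    # MB/min, before the block/buffer size change
after = 14500    # MB/min, after the change

speedup = after / before            # roughly a 2x improvement

# A sustained rate above the native drive rate means the drive is
# compressing the data stream in hardware.
exceeds_native = after > LTO5_NATIVE_MB_PER_MIN

print(f"speedup: {speedup:.2f}x, exceeds native drive rate: {exceeds_native}")
```

So the "blistering" controller is likely shipping highly compressible data; the slower one may simply be sending data that compresses poorly, rather than being misconfigured.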

systematic92
Level 5

Rather than raising another forum thread, I was wondering if you legends can help me with compression.

So we have LTO5 tapes, which are capable of backing up 3.0TB; however, all our tapes back up 1.4-1.5TB tops (when viewing the details in the Media tab). Is the 3TB value (for the tapes) based on a compressed value or an uncompressed one?

The two-part question is: if you were in my shoes, would you enable a compression ratio to use the full 3TB? If so, how would I do that? I think the current ratio is 1:1 - where can I see this?

Thanks,

CraigV
Moderator
Partner    VIP    Accredited

3TB on LTO5 is 2:1 compression which is the maximum you can get. In real-world scenarios, you won't get this.

If you're getting 1:1 compression, make sure that compression is enabled in your job (Hardware, otherwise Software). If it is, stop the BE services and use HP Library and Tape Tools (HP LTT) to run a compression test on an available tape.

Thanks!
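You can also read the effective ratio straight off the Media tab: LTO-5 native capacity is 1.5 TB, so dividing the data written to a full tape by 1.5 TB gives the compression ratio actually achieved. A minimal sketch using the figures from this thread:

```python
# Effective compression ratio from the amount written to a full LTO-5 tape.
LTO5_NATIVE_TB = 1.5     # native (uncompressed) capacity
LTO5_MAX_TB = 3.0        # theoretical 2:1 compressed capacity

def compression_ratio(data_written_tb: float) -> float:
    """Ratio of data stored on a full tape to its native capacity."""
    return data_written_tb / LTO5_NATIVE_TB

# The tapes in the thread fill at 1.4-1.5 TB, i.e. roughly 1:1
# (no effective compression); a tape holding 3 TB would be 2:1.
print(f"{compression_ratio(1.5):.2f}:1")
print(f"{compression_ratio(3.0):.2f}:1")
```

A full tape at 1.4-1.5 TB is therefore consistent with compression being off (or the data being incompressible), which is what the HP LTT test would confirm.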

Kunal_Mudliyar1
Level 6
Employee Accredited

LTO 5 has a native capacity of 1.5 TB and is capable of storing compressed data up to 3 TB.

http://en.wikipedia.org/wiki/Linear_Tape-Open

Hardware compression is done by the tape drive; you can enable or disable this in BE, in the job properties under the storage tab.

Log files compress the best.

Some files which are already compressed, like mp3 or jpg, cannot be compressed further.

Keep the tape drive driver and firmware updated for best results.
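The log-files-vs-jpg point is easy to demonstrate with any general-purpose compressor. This uses Python's zlib purely as an illustration; the drive uses its own hardware algorithm, but the character of the result is the same:

```python
import os
import zlib

# Repetitive, log-like text compresses extremely well...
log_like = b"2014-05-01 18:30:00 INFO backup job completed OK\n" * 1000
log_ratio = len(log_like) / len(zlib.compress(log_like))

# ...while random data (a stand-in for already-compressed files
# such as mp3 or jpg) barely shrinks at all, or even grows slightly.
random_like = os.urandom(48_000)
random_ratio = len(random_like) / len(zlib.compress(random_like))

print(f"log-like data: {log_ratio:.0f}:1")
print(f"random data:   {random_ratio:.2f}:1")
```

This is why the overall tape ratio depends almost entirely on the mix of data in the job, not on any drive setting beyond enabling compression.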

systematic92
Level 5

Excellent information, gentlemen! Thank you.

My final question (promise) is about DeDupe.

As you already know, we back up a lot of mixed data to tape, and I've always had this inclination that the data we back up isn't deduped at the source (NetApp). I spoke to my storage admin today and he said dedupe is enabled on volumes and runs at midnight each night. Just for reference, my backup jobs commence at 18:30.

Upon digging further, we checked a volume that had dedupe enabled. This was a 2TB volume, and when checking the dedupe stats, it (NetApp) had deduped 32GB after 11 hours, equivalent to 5%! So I'm guessing that most of the data I am backing up on a nightly basis is either a) not deduped or b) in the process of being deduped.

In the above scenario, should I enable Backup Exec to dedupe the data?
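Worth double-checking the arithmetic on that NetApp figure before drawing conclusions: 32 GB against the full 2 TB volume works out to under 2%, so if the controller really reports 5%, it is presumably measuring savings against the used space rather than the provisioned volume. A quick check (the implied-used-space figure is just back-of-envelope inference, not anything from the NetApp):

```python
# Double-check the NetApp dedupe savings quoted above.
saved_gb = 32.0
volume_gb = 2 * 1024        # the 2 TB provisioned volume

# Savings measured against the whole volume: ~1.6%, not 5%.
savings_vs_volume = 100 * saved_gb / volume_gb

# If the controller reports 5%, the base it measures against is smaller:
implied_used_gb = saved_gb / 0.05   # ~640 GB actually in use

print(f"{savings_vs_volume:.1f}% of the volume, "
      f"implying ~{implied_used_gb:.0f} GB of used data")
```

Either way the savings are small, which supports the suspicion that the source data is largely not deduped at backup time.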

 

Thanks

CraigV
Moderator
Partner    VIP    Accredited

...if backing up to tape or another B2D folder (read: NOT a dedupe folder within BE), then your data is rehydrated. If the NetApp is deduping your data, the data backed up is inflated to the original size.

If you have the option enabled to use a dedupe folder, then backup to that...run a test and see how this works in your situation.

Thanks!