
The backup size on the tapes

ipmanyak
Level 5

I use LTO-5: 3.0 TB compressed (1.5 TB native).

Some backups exceed 2 TB, so they are written sequentially across multiple tapes.

My backups on the tapes are usually only 1-1.2 TB instead of 1.5 TB. Why?

(screenshot attached: tapeshot.jpg)

 


9 REPLIES

Marianne
Level 6
Partner    VIP    Accredited Certified

We are missing the Status column. 
Are all of these tapes marked as Full?

ipmanyak
Level 5

Yes. All tapes are full.


A00009  HCART2  NONE  -  -  -  3  47483442     ACTIVE
A00018  HCART2  NONE  -  -  -  3  861730275    ACTIVE
A00015  HCART2  NONE  -  -  -  -  -            AVAILABLE
A00016  HCART2  NONE  -  -  -  -  -            AVAILABLE
A00017  HCART2  NONE  -  -  -  -  -            AVAILABLE
A00014  HCART2  NONE  -  -  -  3  743195337    FROZEN
A00000  HCART2  NONE  -  -  -  3  1254976116   FULL
A00001  HCART2  NONE  -  -  -  3  1342495683   FULL
A00002  HCART2  NONE  -  -  -  3  1060478374   FULL
A00003  HCART2  NONE  -  -  -  3  1156805312   FULL
A00004  HCART2  NONE  -  -  -  3  1108618176   FULL
A00005  HCART2  NONE  -  -  -  3  1008820288   FULL
A00006  HCART2  NONE  -  -  -  3  992528536    FULL
A00007  HCART2  NONE  -  -  -  3  1297076495   FULL
A00008  HCART2  NONE  -  -  -  3  1202441402   FULL
A00010  HCART2  NONE  -  -  -  3  1139897873   FULL
A00011  HCART2  NONE  -  -  -  3  1086209536   FULL
A00012  HCART2  NONE  -  -  -  3  1073724032   FULL
A00013  HCART2  NONE  -  -  -  3  1116835648   FULL
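
Assuming the large numeric column is the size in KBytes (as in the output of the available_media script; that is an assumption here), the FULL tapes work out to roughly 1.0-1.37 TB each, which in TiB matches the 1-1.2 figure in the question:

# Convert the size column (assumed KBytes) of the FULL tapes above.
full_kbytes = [1254976116, 1342495683, 1060478374, 1156805312, 1108618176,
               1008820288, 992528536, 1297076495, 1202441402, 1139897873,
               1086209536, 1073724032, 1116835648]
for kb in full_kbytes:
    # e.g. 1254976116 KB -> 1.29 TB (decimal) = 1.17 TiB (binary)
    print(f"{kb * 1024 / 1e12:.2f} TB ({kb / 2**30:.2f} TiB)")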

 

revarooo
Level 6
Employee

It will not be exactly 1.5 TB because each fragment is probably not completely filled. This is normal.

Nicolai
Moderator
Partner    VIP   

Shoe-shining: if you can't keep the tape drive streaming, it will perform a lot of start/stop operations. Each stop/start operation creates an inter-block gap between data "containers". This is likely why you are seeing less than native capacity.

Ensure you feed each drive at least 47 MB/sec (uncompressed) to 94 MB/sec (compressed); a quick check is sketched after the ranges below.

LTO-5 does speed matching between:

47–140 MB/s (uncompressed)

94–280 MB/s (compressed)
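
A minimal sketch of that check (the floors are the LTO-5 figures above; the sample rates are made up):

# Below the drive's minimum matched speed it must stop and restart,
# leaving inter-block gaps (shoe-shining) and wasting capacity.
LTO5_FLOOR = {"uncompressed": 47, "compressed": 94}  # MB/s

def streams(feed_mb_s, mode="uncompressed"):
    return feed_mb_s >= LTO5_FLOOR[mode]

for rate in (30, 60, 120):
    print(f"{rate} MB/s:", "streams" if streams(rate) else "shoe-shines")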

mph999
Level 6
Employee Accredited
Aside from the excellent suggestions from revarooo and Nicolai ... here are some details I posted to a similar question some time ago.
 
NetBackup passes data to the operating system, one block at a time, to be written to the tape drive. NetBackup has no understanding of tape capacity. In theory, it would keep writing to the same tape "forever".
 

When the tape physically passes the logical end-of-tape, this is detected by the tape drive firmware. The tape drive firmware then sets a 'flag' in the tape driver (this would be the st driver in the case of Solaris). There is still enough physical space on the tape for the current block to be written, so this completes successfully. NetBackup then attempts to send the next block of data (via the operating system) but now the tape driver refuses, as the 'tape full' flag is set. The st driver passes this 'tape full' message to the operating system, which passes it to NetBackup. Only when this has happened will NetBackup request the tape to be changed.

Or put more simply, the tape becoming full cannot be controlled in any way by NBU. Additionally, the amount of data written to the tape depends on how 'compressible' the data is, and again, this cannot be controlled by NBU.
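
A minimal sketch of that hand-off, assuming a Solaris-style st device path (the path, block size, and error handling are illustrative, not from this thread; real drives may also signal end-of-tape with a short write):

import errno, os

BLOCK = 262144  # one 256 KB block at a time (illustrative size)

def write_until_full(device="/dev/rmt/0cbn"):
    fd = os.open(device, os.O_WRONLY)
    blocks = 0
    try:
        while True:
            try:
                os.write(fd, b"\0" * BLOCK)
                blocks += 1          # like NBU: no notion of capacity here
            except OSError as e:
                if e.errno == errno.ENOSPC:
                    break            # driver's 'tape full' flag: only now
                raise                # would the tape be swapped
    finally:
        os.close(fd)
    return blocks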

Low amounts of data written to a tape can have various causes ...

1.  It can be totally normal (the data is not very compressible)

2.  Drive hardware fault

3.  Tape driver fault

4.  Tape firmware fault

Martin

ipmanyak
Level 5

My speed to tape is 3-4 GB/min (50-70 MB/s). I use tape for duplication with SLP. When the next job starts, it waits until the tape is released. Where can I set up streaming for SLP?

Nicolai
Moderator
Partner    VIP   

You do not configure SLP for streaming.

It is an infrastructure task: you need to ensure the underlying infrastructure can carry data fast enough to keep the tape drives running.

There is a tech note out for SLP tunables: http://www.symantec.com/docs/TECH72995

as well as the SLP best practice guide: http://www.symantec.com/docs/TECH75047

Have you configured SIZE_DATA_BUFFERS & NUMBER_DATA_BUFFERS on the media servers?
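
For reference, these are single-value touch files under the standard NetBackup config directory on each media server; a sketch (the values shown are common starting points, not recommendations for this environment):

from pathlib import Path

cfg = Path("/usr/openv/netbackup/db/config")
cfg.joinpath("SIZE_DATA_BUFFERS").write_text("262144\n")   # bytes per buffer
cfg.joinpath("NUMBER_DATA_BUFFERS").write_text("256\n")    # buffers per drive

As far as I recall, bptm reads these at the start of the next job, so no restart is needed.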

ipmanyak
Level 5

I configured SIZE_DATA_BUFFERS & NUMBER_DATA_BUFFERS on the media servers earlier. After this my backup speed over the network increased from 11 MB/s to 22-30 MB/s.

SIZE_DATA_BUFFERS_DISK = 1048576

NUMBER_DATA_BUFFERS_DISK = 64

NET_BUFFER_SZ on master server = 512 KB (communication buffer size)

NET_BUFFER_SZ on media server = 262144

After increasing all these parameters I see "waited for full buffer 1290704 times, delayed 1578369 times".

The counts are many times higher than before!
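
For context, those counters come from bptm: "waited for full buffer" means the tape-writing process emptied its buffers faster than the data source could refill them, so the bottleneck is upstream of the drive. A small parser sketch (the message format is as quoted above; the threshold is an arbitrary illustration):

import re

line = "waited for full buffer 1290704 times, delayed 1578369 times"
m = re.search(r"waited for full buffer (\d+) times, delayed (\d+) times", line)
if m:
    waits, delays = (int(g) for g in m.groups())
    print(f"writer starved {waits} times ({delays} delays)")
    if waits > 1000:  # arbitrary illustrative threshold
        print("bottleneck is likely upstream of the tape drive")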

Nicolai
Moderator
Moderator
Partner    VIP   

Altering the NUMBER and SIZE buffers does not remove underlying infrastructure limitations. It only improves how well the available infrastructure is used.

I would look at the network infrastructure as well as the staging disk.

  • Doing both reads and writes at the same time slows down overall disk performance
  • Check for fragmentation
  • Do a read/write test at full load using a non-NetBackup tool (a sketch follows this list)
  • Try reducing concurrency to see if you get better performance.
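
A rough sequential write/read test along those lines (standing in for a non-NetBackup tool such as dd; the path and sizes are illustrative, and the test file should be larger than RAM, or caches dropped, so the read pass is not served from cache):

import os, time

PATH, BLOCK, COUNT = "/staging/throughput.test", 1 << 20, 2048  # 2 GiB

buf = os.urandom(BLOCK)
t0 = time.time()
with open(PATH, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())   # include flush-to-disk in the timing
print(f"write: {BLOCK * COUNT / (time.time() - t0) / 1e6:.0f} MB/s")

t0 = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
print(f"read:  {BLOCK * COUNT / (time.time() - t0) / 1e6:.0f} MB/s")
os.remove(PATH)

If the staging disk cannot sustain roughly 47-140 MB/s on reads while it is also being written to, the tape drive will fall out of its streaming window.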

A good resource for performance hints is the Backup Planning and Performance Tuning Guide:

http://www.symantec.com/docs/DOC7449