Client has two hard drives to back up, resulting in two very different backup speeds

flavor4real
Level 5
Hello,
It's me again ;-/
I have set up two identical backup jobs, where job #1 grabs the data from drive X and job #2 grabs the data from drive Y. One job shows a speed of ~20 MB/sec, while the other shows a backup speed of ~1.9 MB/sec... At first I thought it was a buffer issue, then I thought the network was the issue. Looking at these two results, what could be the reason for the large speed difference? Could it be the client server?

6 REPLIES

Andy_Welburn
Level 6
Could be the type of data - e.g. a few large files will back up quicker than a lot of small files.

Or local I/O resources being used by the client - e.g. the disks being busy with other work whilst being backed up.
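To compare the data profile of the two drives independently of NetBackup, a rough Python sketch along these lines could report file count, total size and average file size per drive. This is only an illustration; the drive letters X:\ and Y:\ are placeholders for the two drives in question.

# Sketch: walk a drive and report file count, total size and average file size,
# so a "few large files" drive can be told apart from a "lots of small files" drive.
import os

def profile(root):
    count = 0
    total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                total += os.path.getsize(os.path.join(dirpath, name))
                count += 1
            except OSError:
                pass  # skip files we cannot stat (locked, no permission)
    avg = total / count if count else 0
    print(f"{root}: {count} files, {total / 2**30:.1f} GiB, avg {avg / 2**20:.2f} MiB/file")

for drive in ("X:\\", "Y:\\"):  # placeholder drive letters
    profile(drive)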

flavor4real
Level 5
Both drives are on the same client and the amount of data is about the same...
I just saw that the slower drive holds compressed data... do you think that's the problem?

I see that another backup job is very slow too, and the drives being backed up are also compressed on that client.

Marianne
Level 6
Partner    VIP    Accredited Certified
There are lots of factors that determine disk read speed. First of all, the disk itself. Also the type of data (not the amount): an O/S disk will normally be slower than a separate disk used as a software repository. As Andy said, other processes running against the disk while the backup is running also matter, and then there is the BIG culprit: fragmentation.

There are tools that can be downloaded from the web to test read speed independently of any backup software, e.g. the HP performance assessment tools (PAT utilities), which can be downloaded from:

http://www.hp.com/support/pat
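If downloading a dedicated tool isn't an option, a very rough sequential read test can also be scripted. The sketch below (the file path is a placeholder) reads one large existing file in 1 MiB chunks and reports the throughput; note that a second run may be skewed by the Windows file cache.

# Sketch: rough sequential read test, independent of NetBackup.
import time

CHUNK = 1024 * 1024
path = r"X:\some\large\file.dat"  # placeholder: a big file on the slow drive

start = time.time()
read_bytes = 0
with open(path, "rb", buffering=0) as f:  # unbuffered raw reads
    while True:
        chunk = f.read(CHUNK)
        if not chunk:
            break
        read_bytes += len(chunk)
elapsed = time.time() - start
print(f"{read_bytes / 2**20:.0f} MiB in {elapsed:.1f}s = {read_bytes / 2**20 / elapsed:.1f} MiB/sec")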


Ed_Wilts
Level 6
> I just saw that the slower drive holds compressed data... do you think that's the problem?

If by compressed you mean Windows compressed files and folders, then absolutely, that is the cause.  Windows has to read the data and uncompress it before handing it to NetBackup.  Assuming identical drives, one full of uncompressed data and one full of data compressed by a factor of 2, the second one will actually be backing up twice the data and will put a significantly higher load on the CPU.

I haven't seen a case where Windows compression is a good thing in a long, long time.  The cost of disk is cheaper than the management and maintenance overhead.
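To see how much of a drive is affected, Windows exposes the NTFS compression flag per file, and Python's os.stat() surfaces it on Windows as st_file_attributes. The sketch below (Windows-only; the drive letter is a placeholder) totals the logical size of compressed files, which is what NetBackup actually has to read: a drive holding 100 GB on disk at 2:1 compression means roughly 200 GB backed up.

# Sketch: report how much data on a drive is NTFS-compressed (Windows only).
import os
import stat

def compressed_bytes(root):
    comp = total = 0
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            try:
                st = os.stat(os.path.join(dirpath, name))
            except OSError:
                continue
            total += st.st_size  # logical (uncompressed) size
            if st.st_file_attributes & stat.FILE_ATTRIBUTE_COMPRESSED:
                comp += st.st_size
    return comp, total

comp, total = compressed_bytes("Y:\\")  # placeholder drive letter
print(f"{comp / 2**30:.1f} GiB of {total / 2**30:.1f} GiB is NTFS-compressed")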

flavor4real
Level 5
@Ed:  I thought the same. Right now I'm uncompressing the data, but this will take a while, so I'm hoping it will be done by tomorrow morning.  I tweaked the NB buffers and so on and always got a speed of 1-2 MB/sec, until I test-backed up another server where one drive is compressed and another is not. I saw the much higher speed of the uncompressed one right away.

We will also implement a separate network/fibre structure for NetBackup soon. I think this is another reason why the backup speed is low, so once everything is set up I'll see the real speed.

flavor4real
Level 5
Another question:  would backing up via VM virtualization make a difference?
I'm just asking because some servers are backed up via VM virtualization and, from what I hear, it is way faster than regular backups.