
PureDisk 6.6 backup taking very long to complete daily backups.

fyl
Level 2

Hi, I need some advice from our PureDisk gurus.

Some of our file servers at our remote sites are not completing their daily backups within the 24-hour window; sometimes it takes 3 or 4 days to complete all the backup jobs.

Each server has 2.4-7.4 million files in total, and we tried splitting them into a number of smaller jobs (1.5 mil files/600GB - 11.5TB data) to try to get the jobs completed, but each job still takes 8 to 10 hours. A typical profile would be:

Server/backup job: "Server ABC (Home Profile folder)"
Backup duration: 10.5 hr
Unique files backed up: 7,305
Files selected on source: 1,392,268
Total data size on source: 627.85 GB
Modified: 124.26 GB
Data change rate: 19.79% (modified / total data size x 100)
Data retention is 45 days.
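
Working those profile numbers through a quick back-of-envelope script (just my own arithmetic on the figures above, assuming the whole data set is read once per job and only the modified data crosses the WAN):

total_gb = 627.85      # total data size on source
modified_gb = 124.26   # data modified since the last backup
duration_hr = 10.5     # job duration

change_rate = modified_gb / total_gb * 100             # ~19.8 %
scan_mb_s = total_gb * 1024 / (duration_hr * 3600)     # ~17 MB/s read from source
wan_mb_s = modified_gb * 1024 / (duration_hr * 3600)   # ~3.4 MB/s (~27 Mbit/s) sent

print(f"change rate: {change_rate:.2f}%  scan: {scan_mb_s:.1f} MB/s  WAN: {wan_mb_s:.1f} MB/s")

That works out to roughly 17 MB/s of source reads, but only about 27 Mbit/s that would actually need to cross the wire.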

These servers are backed up over the WAN to a data center, and bandwidth does not seem to be the constraint, as WAN utilisation is lower than 40% during backups.

Are there any limits or factors that determine PureDisk backup speed? I can't seem to find any documentation that provides such specifics.

Any suggestions or comments?

Would a local PureDisk server help?

 

4 REPLIES

fyl
Level 2

A minor correction on data size.

Each server has 2.4-7.4 million files in total, and we tried splitting them into a number of smaller jobs (1.5 mil files/600GB - 1.5TB data) to try to get the jobs completed, but each job still takes 8 to 10 hours to complete.

S_Williamson
Level 6

You have pretty much nailed the issue: the overhead required to catalog each file, break it down, and then compare it against the blocks it may or may not have from previous backups, across that many files, doesn't seem like a good fit for PureDisk.
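
To picture where that overhead comes from: a dedupe backup has to read every file in full, cut it into segments, hash each segment, and look each hash up in an index before deciding whether to send it. PureDisk's actual segmenting and hashing are internal to the product, so the Python sketch below is only a generic illustration of the workflow (the 128 KB segment size and SHA-1 hash are assumptions, not PureDisk's real parameters):

import hashlib

SEGMENT_SIZE = 128 * 1024  # assumed segment size; PureDisk's real value may differ

def fingerprint_file(path, seen):
    """Chunk one file, hash each chunk, and count which chunks are new.

    Every byte is read even when nothing ends up being sent; multiply
    that by millions of files and the scan dominates the job time.
    """
    new_segments = 0
    with open(path, "rb") as f:
        while chunk := f.read(SEGMENT_SIZE):
            fp = hashlib.sha1(chunk).hexdigest()
            if fp not in seen:        # index lookup per segment
                seen.add(fp)
                new_segments += 1     # only these would cross the WAN
    return new_segments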

The backup for something like that probably needs to be block-based (non-dedupe). We have Enterprise Vault servers with similar numbers of files, all really tiny. We found the best solution is a NetBackup FlashBackup, or, if they are on NAS/SAN, to replicate to a remote site and do a backup once per week.

The only other things I can think of are (a) get a much beefier CPU (4x4 cores) or something, as the CPU might be maxed out while it's deduping/fingerprinting, or (b) spread the files out over a few different servers.

You might want to contact Symantec Support to see if they have any other suggestions.

Simon

 

fyl
Level 2

Thank you Simon for your feedback and explanation. 

We did check the server performance counters, and they don't seem to be that high even during backups. Since the backup takes so long, it typically runs over prime time while users are accessing the files and disks.

Do you typically see a big spike in CPU / disk reads during PureDisk backups, or is the agent 'smart' enough to throttle backups based on system load? When PureDisk does fingerprinting, does it essentially go through every bit of the file to validate the fingerprints and determine which areas have changed? If so, we should see big read I/O, since it has to go through every file.

I know conventional backups will typically use up as many resources as they need during backups.

S_Williamson
Level 6

On a typical file server we see CPU load hit 50-60% (2x2 cores) for around 2-3 hours while it scans the filesystem. These are mostly only 250K-1M files, though, not the 2.4-7.4 million you mention above.

Our Enterprise Vault server running a FlashBackup of all its files (around 12M) takes around 5 days to complete (CPU-wise, 4x quad cores running 15-20% the whole time backups are running). RAM-wise it's using 2.5GB out of 4GB. I'm unsure about disk I/O on this server, as we don't keep those stats. I do know we ran this on a virtual server for a while and it was unable to cope; we were getting 8-10 MB/sec from the box as a virtual machine, and now as a physical host it's 35-50 MB/sec.

As for the agents being smart, I wouldn't think so. From memory, though, I seem to recall reading somewhere that they may only use one CPU for PureDisk, and use it at 100%.

Fingerprinting initially goes through all files, but the second and subsequent passes only break down files that have changed since the last backup.
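
Conceptually, the incremental pass looks something like the sketch below. Just an illustration in Python; the size/mtime check is my assumption about how "changed since the last backup" is detected (the agent may well use other metadata, such as the archive bit):

import os

def files_to_fingerprint(paths, last_state):
    """Return only files whose size or mtime changed since the previous pass.

    last_state maps path -> (size, mtime) recorded on the previous run.
    Unchanged files are skipped entirely, so no read I/O is spent on them.
    """
    changed = []
    for path in paths:
        st = os.stat(path)
        if last_state.get(path) != (st.st_size, st.st_mtime):
            changed.append(path)          # only these get re-chunked and re-hashed
            last_state[path] = (st.st_size, st.st_mtime)
    return changed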