Running Veritas BUE v16 and encountering an issue when backing up servers that have a large number of small files. For example, one of our file servers has 4.3 million files (mostly Office docs, jpg, etc.) in 600K folders for a total size of 1.1TB. This takes 18 hours to back up to disk at a rate of 1127MB/min. We run this job once a week, then run nightly differential backups against it through the rest of the week. For this disk-to-disk job, we aren't doing compression or encryption. This isn't isolated to this one job, either. We have another job of similar size and file composition that takes 14 hours to complete. Meanwhile, jobs of similar overall size that are set up to back up entire VMs take half the time to complete.
How can I get more speed out of the data backup jobs? Or is this simply how it is due to the nature of the files being backed up? Thanks.
Huge quantities of small files have always been at the low end of the performance scale because of all the per-file directory access required. Large files give better throughput because there's less directory overhead per byte transferred.
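You can see that overhead in the numbers from the original post. A rough back-of-envelope sketch (Python, assuming binary units for the 1.1TB figure):

```python
# Figures taken from the post: 4.3 million files, 1.1 TB total, 18-hour job.
files = 4_300_000
total_bytes = 1.1 * 1024**4   # 1.1 TB, assuming binary (TiB-style) units
duration_s = 18 * 3600

avg_file_size_kb = total_bytes / files / 1024
time_per_file_ms = duration_s / files * 1000

print(f"average file size:     {avg_file_size_kb:.0f} KB")
print(f"average time per file: {time_per_file_ms:.1f} ms")
```

That works out to roughly 275KB per file and about 15ms spent per file. At 1127MB/min, actually transferring 275KB takes well under a millisecond, so almost all of that 15ms is metadata work (opening the file, reading attributes/security info, cataloging) rather than moving data. A VM-image backup of the same total size reads a few huge files sequentially and pays that cost far fewer times, which is why it finishes in half the time.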
That's what I feared the issue was, but I was hoping there was a setting I'd missed in the job configuration. On these two servers with the large number of files, we do a full VM backup on Saturday, but then we have to do a data-only backup on Monday so we have a baseline backup for the differentials. Guess my best bet is to tweak the schedule a bit and run the full data-only backup early Sunday morning so it doesn't overlap into the work week.