Fragmented DSSU (disk staging unit)

LeeClayton
Level 5
I wanted to know if anyone bothers defragging their disk staging units, and if so, does it noticeably speed up your backups to disk or duplications to tape?

My disk staging units are 5TB on one server and 16TB on another, so they're not ideal candidates for defragging. When I checked fragmentation on both servers, they came out at 99% (fully red).

Reading some forums, I saw there's a Sysinternals utility called Contig which looks as though it will run a lot faster than the default defragmenter. Has anyone used it with success?

http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
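If anyone has tried it, this is roughly how I was planning to run it - just a sketch, D:\DSSU is a placeholder for my actual staging folder, and I'm going by Contig's documented -a (analyze only) and -s (recurse subdirectories) switches:

REM Report fragmentation without changing anything
contig -a -s D:\DSSU\*

REM Defragment every file under the staging folder
contig -s -v D:\DSSU\*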

Lee
4 REPLIES

reson8
Level 4

Defragging a 5TB volume at 99% fragmentation is going to take many hours. Triple that for your 16TB volume.

The easiest way is to reformat the drives, but that will require additional disk space. You might want to try redesigning your DSSU configuration, perhaps splitting up the 16TB volume at least. Ensure that the fragment size for your DSSU is not too small either.

By having several DSSUs, you can stop usage on one and redirect all jobs to the others while you perform a format.

High fragmentation will not only cause poor read and write performance, it will also shorten the life of your drives.

LeeClayton
Level 5
Thanks for the reply. I recently upped the fragment size from 2GB to 10GB; I didn't want to increase it too much or restores would be a bit slower.
As for my DSSUs - we may be increasing our 16TB drive to 32TB, so I guess it would make sense to break this down into 5TB disks? If I did this, I'd have to balance each policy across the different DSSUs so the load averaged out correctly?
As for defragging, I'm going to start scheduling a Contig run every day from 11:00 to 21:00 (when my media servers aren't doing anything). I know a single pass won't defrag the whole drive, but I'm hoping over time it should do the trick.
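In case it's useful, this is roughly what I'm planning to schedule - again just a sketch, D:\DSSU is a placeholder for my staging path, and since a daily schtasks task doesn't take an end time directly, I'm using a second task with taskkill to stop Contig at 21:00:

REM Start a recursive defrag of the staging folder at 11:00 every day
schtasks /create /tn "DSSU defrag" /tr "contig -s D:\DSSU\*" /sc daily /st 11:00

REM Kill Contig at 21:00 so it never overlaps the backup window
schtasks /create /tn "DSSU defrag stop" /tr "taskkill /f /im contig.exe" /sc daily /st 21:00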
My backups to disk average around 100MB to 500MB a minute and my duplications to tape average around 500MB to 1GB a minute.

reson8
Level 4
Correct, there will be more effort involved in evenly distributing the load among the policies.

I would probably break it down into 4 x 8TB at a minimum.  You may want to look at creating DSSU groups for redundancy.

As for your mention of the average speed of your disk, the maximum number of concurrent jobs going to each DSSU should be taken into consideration, as over-utilizing the DSSU will cause more fragmentation.

If the DSSU can only perform at a maximum of 10MB/s and backup jobs average 5MB/s, I would set the max concurrent jobs to no more than 2 or 3, since two 5MB/s streams already saturate the 10MB/s disk. Setting this value too high will only cause more processing overhead, memory usage and, importantly, fragmentation.

Giroevolver
Level 6
I have a similar setup and run the DSSUs in 2TB chunks to help reduce excessive disk thrashing on one set of disks. If you're running Windows, it will write from the start of your 16TB LUN, which will cause heavy load on those disks, hence the 2TB chunks.

If you set up a storage unit group, add all your chunks to it, and then point the clients at that group, you can use round robin or prioritized selection depending on how you want your clients balanced across each disk unit.

Fragmentation-wise, adding an extra 2TB so you can format one of your DSSUs is easier than adding 16TB if you're low on storage!