My question is about optimization.
Our PureDisk-type disk storage units are configured with a fragment size of 51.2 GB. Our storage units that send to HCP (Hitachi) and those that send to AWS are configured with a fragment size of 524.288 GB.
What I would like to know is what we can gain and what we can lose by changing the fragment size values.
A fragment is the smallest chunk of data NetBackup can read when restoring data.
So to restore a single 2 KB file, NetBackup needs to read an entire fragment: either 50 GB or 524 GB of data, depending on the fragment size.
Using larger fragment sizes means larger chunks of data must be read to restore data.
Using smaller fragment sizes means more overhead in the NetBackup catalog, because more fragment records have to be stored.
On tape, fragment size has a larger impact than on disk.
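The trade-off described above can be sketched numerically. This is a rough illustration only: the image size is made up, and treating each fragment as exactly one catalog record is a simplifying assumption, not a statement about NetBackup internals.

```python
import math

def fragment_tradeoff(image_size_gb, fragment_size_gb):
    """Return (fragment_count, worst_case_restore_read_gb) for one backup image."""
    # More fragments -> more per-fragment records to track in the catalog.
    fragments = math.ceil(image_size_gb / fragment_size_gb)
    # Worst case for a tiny file: the whole fragment containing it must be read.
    worst_read_gb = min(fragment_size_gb, image_size_gb)
    return fragments, worst_read_gb

# Roughly the sizes discussed in this thread, for a hypothetical 10 TB image:
for frag_gb in (50, 512, 1024):
    count, read = fragment_tradeoff(image_size_gb=10_000, fragment_size_gb=frag_gb)
    print(f"{frag_gb:>5} GB fragments: {count:>4} fragment records, "
          f"up to {read} GB read to restore one small file")
```

The point of the sketch: shrinking the fragment size multiplies the bookkeeping, while growing it multiplies the data read for a small-file restore.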
Thank you Nicolai,
I had forgotten about tapes; today our storage units that send to tape are configured with a 1 TB fragment size.
When we install a new media server, we copy the settings from the old media servers, so these values are either the defaults or were chosen by former administrators of the environment.
Is there a best practices guide? Mainly for AWS.