07-13-2011 08:42 AM
Hi Folks,
I've been playing with AdvancedDisk (AD) storage pools and ZFS, and have a few questions I'd like to run by you all.
I've got a Sun Thumper with 48TB of local storage available to me. I've configured this storage according to best practice, with four roughly equal multi-terabyte pools, like so:
NAME    SIZE   ALLOC  FREE   CAP  HEALTH  ALTROOT
pool1   9.06T  437M   9.06T  0%   ONLINE  -
pool2   10.9T  1.73G  10.9T  0%   ONLINE  -
pool3   10.9T  726M   10.9T  0%   ONLINE  -
pool4   9.06T  693M   9.06T  0%   ONLINE  -
Going through the AD best practice guide, a couple of things jumped out at me:
File System Restrictions
For some file system types, notably NFS and ZFS, the lack of full posix compliance of the file system means that full file system conditions cannot be detected correctly, leading to problems when spanning volumes. To avoid problems where these file systems are used, each disk pool should be comprised of only one volume so that no spanning occurs.
Does this mean that when creating my disk pools, I shouldn't be looking to carve up each of these pools into separate ZFS file systems? That I should create a disk pool, assign the whole of the pool's space to it and nothing more, and then use storage unit groups with load balancing?
Might a number of smaller pools be a better idea, thus affording me a little more granularity at the storage unit group level?
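To make what I'm describing concrete, here's a sketch of the layout I have in mind; the dataset names and mount points are just illustrative assumptions, not a tested configuration:

```shell
# Sketch (names/paths are assumptions): give each zpool exactly one ZFS
# filesystem, and make each filesystem its own single-volume AdvancedDisk
# disk pool, so that volume spanning can never occur.
zfs create -o mountpoint=/advdisk/pool1 pool1/nbu
zfs create -o mountpoint=/advdisk/pool2 pool2/nbu
zfs create -o mountpoint=/advdisk/pool3 pool3/nbu
zfs create -o mountpoint=/advdisk/pool4 pool4/nbu
# Then create one AdvancedDisk disk pool per mount point with nbdevconfig
# (check the exact options against the NetBackup commands reference for
# your version), expose each as a storage unit, and group the four storage
# units into a load-balancing storage unit group.
```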
Thanks in advance.
07-13-2011 09:39 AM
Most of our backup jobs come from RMAN. I know that NB cannot estimate the size of backups coming from RMAN jobs, so how does AdvancedDisk handle this when it tries to calculate how much space to pre-allocate? For example, we have a few multi-terabyte database backups. Our images are set to 20GB and, as far as I know, each of these images is considered a backup in its own right.
P16 of the AD best practice guide says:
When a backup starts NetBackup first compares the estimated size of the job against the available space on all disks within the disk pool (including any disks that are marked down) to determine if there is sufficient space for the backup. It then selects the disk within the pool with the most available space and starts writing the backup to that disk until the disk reaches its high water mark or the backup completes.
It goes on to say:
For backup jobs that have run previously, the amount of space that is pre-allocated is based on the size of the last backup of the same type made by the same policy on the same client, and an overhead of 20% is added. For backup jobs with no previous history, the capacity of the disk volume above the high water mark is used (e.g. for a 2TByte volume with a 98% high water mark, 40GBytes is pre-allocated).
Would it therefore base its calculations on iterations of 20GB? Does this also apply to RMAN jobs?
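To make the guide's arithmetic concrete, here is a minimal sketch of the two pre-allocation rules quoted above. The function name and the integer-percent representation are my own assumptions; whether RMAN jobs and 20GB image fragments follow these same rules is exactly the open question:

```python
def preallocation_bytes(last_backup_bytes=None,
                        volume_capacity_bytes=None,
                        high_water_mark_pct=98):
    """Estimate NetBackup AdvancedDisk pre-allocation per the best
    practice guide's two stated rules (name is illustrative)."""
    if last_backup_bytes is not None:
        # Previous history: last backup of the same type/policy/client,
        # plus a 20% overhead.
        return last_backup_bytes * 120 // 100
    # No previous history: the capacity of the disk volume above the
    # high water mark.
    return volume_capacity_bytes * (100 - high_water_mark_pct) // 100

TB, GB = 10**12, 10**9

# The guide's own example: 2TByte volume, 98% high water mark.
print(preallocation_bytes(volume_capacity_bytes=2 * TB) // GB)  # -> 40
```

Note this says nothing about 20GB image boundaries; the guide's formula is phrased purely in terms of the last backup's total size and volume capacity.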
07-14-2011 08:02 AM
Hi,
There is a very nice document on how to use a Sun x4500 as a NetBackup media server, including the ZFS configuration.
Maybe it will help you; just do a Google search for "sun x4500 as symantec netbackup 6.5 media server"...
Regards,
Alex
07-19-2011 01:14 AM
Thanks for the tip. I've had a look at that doc and, whilst it's very good, it doesn't really help me answer my questions.
Appreciate it though.