06-21-2012 11:10 PM
Hi,
Whenever we want a file system to be extended, we add a new LUN to the disk group. In this way we end up with multiple disks in a disk group.
My question: which is the better method, getting the existing LUN extended or adding new LUNs?
What are the pros and cons of these two methods?
Thanks and regards,
Siva
Solved! Go to Solution.
06-22-2012 12:53 AM
From a volume manager point of view the
There may also be pros and cons from an array point of view.
Mike
06-22-2012 07:09 PM
Siva,
It was the case with older versions of Solaris that the maximum LUN count on certain HBAs was 256 devices. That may seem like more than enough, but when drives were only between 9 and 36 GB and DBAs were lobbying for all the spindles they could get hold of (this was before the prevalence of cache), you'd be surprised how quickly that number was reached (especially with multipathing). So in this model, extending the LUN size was in some cases your only option.
That said, most if not all of those restrictions have been addressed, and therefore in most cases just adding LUNs to the host rather than expanding RAID groups is the easiest method. I would be remiss, however, if I didn't take into account considerations on the array side as well (more RAID groups means more parity, which translates to less usable capacity overall).
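As a rough, hypothetical illustration of that parity cost (the drive size and group layouts are made up for the example, not taken from any particular array):

```shell
# 16 x 600 GB drives as a single RAID-5 group (15 data + 1 parity):
echo $(( (16 - 1) * 600 ))     # usable GB, one drive's worth of parity

# The same 16 drives split into two RAID-5 groups (7 data + 1 parity each):
echo $(( 2 * (8 - 1) * 600 ))  # usable GB, two drives' worth of parity
```

The second layout gives up an extra drive's worth of capacity to parity, which is the trade-off Joe is pointing at.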
One area that is still an issue is with VMware. ESX still has a 2 TB LUN size limit with 4.1 and VMFS3, although in ESXi 5, VMFS5 has addressed this (virtual RDMs, however, still have a 2 TB limit). This, coupled with the 256 LUN limit, can make LUN expansion rather than addition your only choice.
The other consideration is that when you add devices rather than grow LUNs, you can choose whichever array vendor and storage tier you want. Simply expanding a LUN will only increase capacity while offering no additional flexibility for migrations or tiering (LUN expansions are restricted to the same drive type and RAID set in most cases). Furthermore, expanding a LUN is not always an online activity and could potentially require application downtime for the required host-side operations (reboots etc.). Conversely, adding LUNs and expanding volumes with Storage Foundation is always an online operation with no downtime required.
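For illustration, the add-a-LUN-and-grow path could look like this with the Storage Foundation CLI; the disk group name datadg, volume name datavol, device name disk_1, and the 50 GB growth amount are placeholders for this sketch, and exact device naming varies by platform and array:

```shell
# Discover the newly presented LUN without a reboot
vxdisk scandisks

# Initialize the new device for VxVM use
/etc/vx/bin/vxdisksetup -i disk_1

# Add it to the existing disk group
vxdg -g datadg adddisk datadg02=disk_1

# Grow the volume and its VxFS file system together, online
/etc/vx/bin/vxresize -g datadg datavol +50g
```

vxresize is the piece that makes this an online operation: it grows the volume and the mounted file system in one step, with no unmount or reboot.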
Also with CFS we mitigate any issues around import/deport times for the increased LUN count.
Ultimately, the choice is dependent on a variety of circumstances and is never black and white.
Joe D
06-30-2012 09:01 AM
HI Mark /Joe
Thanks for the answers.
Regards,
Siva
06-30-2012 09:02 AM
Oops, a typo. It was Mike :)
07-03-2012 03:51 AM
Just to add to what Mike mentioned: when adding LUN after LUN into a single disk group, over time you can end up with 100+ LUNs inside. I've seen cases where such a setup can take a LONG time (30 sec+) to import/deport a disk group, as SFW needs to modify the header content of every disk during import/deport. This time becomes even longer when it's a cluster disk group, as every device also needs to be SCSI reserved/released. (I've seen some disk groups take well over a minute or two to import.)
Of course, other environmental factors such as drivers, multipathing setup, zoning setup (SIST), volume size etc. also affect the import/deport time on disk groups with a large number of disks.
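If you want to measure this on your own system, something like the following shows the disk count and the wall-clock import/deport cost; the disk group name datadg is a placeholder, and the deport/import cycle should only be run in a maintenance window:

```shell
# How many disks are in the disk group?
vxdisk -g datadg list | wc -l

# Time a full deport/import cycle
time vxdg deport datadg
time vxdg import datadg
```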
07-06-2012 12:11 PM
Support for Multinode Disk Group Access was added in SFW 6.0. This could be categorized as CVM for Windows (not CFS, however). As with CFS/CVM on UNIX, it entirely mitigates the import/deport delays for Windows systems running SFW with large LUN counts per DG.
Joe D