
Multiple LUNs in a disk group

shiv124
Level 4

Hi,

Whenever we want a file system to be extended, we add a new LUN to the diskgroup. In this way we end up with multiple disks in a diskgroup.

My question is: what is the better method, getting the existing LUN extended or adding new LUNs?

What are the pros and cons of these two methods?

 

Thanks and regards,

 

Siva

1 ACCEPTED SOLUTION

Accepted Solutions

mikebounds
Level 6
Partner Accredited

From a volume manager point of view:

  • Disadvantage of adding a new LUN is that a diskgroup with more VM disks will take longer to import and deport, but we are talking fractions of a second, so you would have to add several LUNs to be able to notice any difference.
     
  • Advantage of adding a new LUN is that there is more flexibility in the volume layout: if the volume was originally created on one LUN and you add a new LUN to extend it, you can convert the layout to a stripe in VM over the 2 LUNs, which will improve I/O throughput (see the sketch below). Generally, though, this extra throughput is not required for most applications. For example, suppose using one LUN you can achieve 100Mb/s; if you only ever write at a max of 40Mb/s, then increasing throughput beyond 100Mb/s is of no use.
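
As a rough sketch of that relayout (the device c2t1d0 and the names datadg/datavol are made up for illustration):

    # initialise the new LUN and add it to the diskgroup
    vxdisksetup -i c2t1d0
    vxdg -g datadg adddisk datadg02=c2t1d0

    # convert the concatenated volume to a 2-column stripe, online
    vxassist -g datadg relayout datavol layout=stripe ncol=2

The relayout runs while the volume stays in use, so there is no outage, just some extra I/O while VM moves the data around.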

There may also be pros and cons from an array point of view.

Mike


6 REPLIES

joseph_dangelo
Level 6
Employee Accredited

Siva,

It was the case with older versions of Solaris that the max LUN count on certain HBAs was 256 devices. That may seem like more than enough, but when drives were only between 9 and 36 GB and DBAs were lobbying for all the spindles they could get hold of (this was before the prevalence of cache), you'd be surprised how quickly that number was reached (especially with multipathing). So in this model, extending the LUN size was in some cases your only option.

That said, most if not all of those restrictions have been addressed, and therefore in most cases just adding LUNs to the host rather than expanding RAID groups is the easiest method. I would be remiss, however, if I didn't take into account considerations on the array side as well (more RAID groups means more parity, which translates to less usable capacity overall).

One area that is still an issue is VMware. ESX still has a 2 TB LUN size limit with 4.1 and VMFS3, although VMFS5 in ESXi 5 has addressed this (virtual RDMs, however, still have a 2 TB limit). This, coupled with the 256 LUN limit, can result in LUN expansion rather than addition being your only choice.

The other consideration is that when you add devices rather than grow LUNs, you can choose whichever array vendor and storage tier you want. Simply expanding a LUN will only increase capacity while offering no additional flexibility for migrations or tiering (LUN expansions are restricted to the same drive type and RAID set in most cases). Furthermore, expanding a LUN is not always an online activity and could potentially require application downtime for the required host-side operations (reboots etc.). Conversely, adding LUNs and expanding volumes with Storage Foundation is always an online operation with no downtime required.
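
As a rough illustration of that online path on a UNIX host (the diskgroup appdg, volume appvol and device c3t2d0 are invented names):

    # initialise the new LUN and add it to the diskgroup
    vxdisksetup -i c3t2d0
    vxdg -g appdg adddisk appdg03=c3t2d0

    # grow the volume and its file system together, while mounted
    vxresize -g appdg appvol +50g

Everything above happens with the file system online; no reboots or unmounts are needed.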

Also with CFS we mitigate any issues around import/deport times for the increased LUN count.

Ultimately, the choice is dependent on a variety of circumstances and is never black and white.

Joe D

shiv124
Level 4

Hi Mark/Joe

 

Thanks for the answers.

 

Regards,

 

Siva

shiv124
Level 4

Oops, a typo: it was Mike :)

Jay_Kim
Level 5
Employee Accredited Certified

Just to add to what Mike mentioned: when adding LUN after LUN to a single diskgroup, over time you can end up with 100+ LUNs inside. I've seen cases where such a setup can take a LONG time (30 sec+) to import/deport a diskgroup, as SFW needs to modify the header content of every disk during import/deport. This time becomes even longer when it's a cluster diskgroup, as every device needs to be SCSI reserved/released as well (I've seen some diskgroups take well over a minute or two to import).

Of course, other environmental factors such as drivers, multipathing setup, zoning setup (SIST), volume size etc. also affect the import/deport time on diskgroups with a large number of disks.
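
A quick way to get a feel for this on a UNIX host (bigdg is a made-up diskgroup name):

    # rough count of the disks in the group
    vxdisk -g bigdg list | wc -l

    # time a deport/import cycle
    # (volumes must be stopped/unmounted before the deport)
    time vxdg deport bigdg
    time vxdg import bigdg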

joseph_dangelo
Level 6
Employee Accredited

Support for Multinode Disk Group Access was added in SFW 6.0. This could be categorized as CVM for Windows (not CFS, however). It entirely mitigates (as is the case with CFS/CVM on UNIX) the import/deport delays for Windows systems running SFW with large LUN counts per DG.

Joe D