How to expand a dg "vgbackup" and then the filesystem "/prod01p"

I have a disk group (dg) with 44 LUNs (100 GB each) and I am allocating 13 more LUNs for expansion, which will bring it to 57 LUNs. The volume is currently striped across all 44 disks:

v  vol_prod01p  -            ENABLED  ACTIVE   9219080192 SELECT  vol_prod01p-01 fsgen
pl vol_prod01p-01 vol_prod01p ENABLED ACTIVE   9219082752 STRIPE  44/128   RW

I have now added the new disks, as follows:

vxdg -g vgbackup adddisk vgbackup250=emc_clariion0_250
vxdg -g vgbackup adddisk vgbackup253=emc_clariion0_253
vxdg -g vgbackup adddisk vgbackup255=emc_clariion0_255
vxdg -g vgbackup adddisk vgbackup257=emc_clariion0_257
vxdg -g vgbackup adddisk vgbackup264=emc_clariion0_264
vxdg -g vgbackup adddisk vgbackup265=emc_clariion0_265
vxdg -g vgbackup adddisk vgbackup266=emc_clariion0_266
vxdg -g vgbackup adddisk vgbackup267=emc_clariion0_267
vxdg -g vgbackup adddisk vgbackup268=emc_clariion0_268
vxdg -g vgbackup adddisk vgbackup269=emc_clariion0_269
vxdg -g vgbackup adddisk vgbackup271=emc_clariion0_271
vxdg -g vgbackup adddisk vgbackup272=emc_clariion0_272
vxdg -g vgbackup adddisk vgbackup273=emc_clariion0_273

From this point, which commands should I use to expand the volume and then grow the filesystem onto these new disks?

Information:

/dev/vx/dsk/vgbackup/vol_prod01p
                       4.3T   4.0T   307G    94%    /prod01p

root@BMG01 # vxassist -g vgbackup maxsize
Maximum volume size: 2730465280 (1333235Mb)


Thanks.

7 Replies

A volume striped across 44 luns/disks?? 

Oh dear!!! Who did this and why?

To expand the volume, you need to add another 44 luns in order for VxVM to maintain the current volume layout.

Alternatively, mirror the volume to new disks with a different layout (striped across fewer, larger LUNs), then remove the original 44-disk plex.

You can then resize the volume by adding the same number of disks as the number of stripe columns.
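The mirror-and-remove alternative above can be sketched roughly as follows. This is only a sketch: it assumes enough free new disks exist in the dg to hold a second, 4-column plex, and that the original plex is named vol_prod01p-01 as shown in the vxprint output earlier.

```shell
# Attach a second plex with the new layout (resyncs in the background)
vxassist -g vgbackup mirror vol_prod01p layout=stripe ncol=4

# Watch the resync progress; wait until the task completes
vxtask list

# Then dissociate and remove the original 44-column plex
vxplex -g vgbackup -o rm dis vol_prod01p-01
```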

Have a look at this recent post:

Issue while Extending file system in vxvm 


Marianne is right. To change the stripe layout, the best method is to mirror to the new layout and then remove the original plex. You can also use a relayout, but this can take a very long time and will affect I/O while it is running.

So to relayout to 4 columns use:

vxassist -g vgbackup relayout vol_prod01p ncol=4

I would test this first on your new disks: create a new 13-column striped volume on the new disks, then do a relayout to see how long it takes at different sizes.
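A sketch of such a test, assuming a hypothetical test volume name vol_test, that the 13 new disks are still unused in the dg, and a size small enough to fit on them (adjust to your environment):

```shell
# Create a 13-column test stripe on the new disks only
vxassist -g vgbackup make vol_test 500g ncol=13 \
    vgbackup250 vgbackup253 vgbackup255 vgbackup257 vgbackup264 \
    vgbackup265 vgbackup266 vgbackup267 vgbackup268 vgbackup269 \
    vgbackup271 vgbackup272 vgbackup273

# Time the relayout to 4 columns; vxtask shows progress from another shell
time vxassist -g vgbackup relayout vol_test ncol=4

# Clean up afterwards
vxassist -g vgbackup remove volume vol_test
```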

Mike


Ok, Marianne and Mike

 

This is a backup area, and the dg can be deleted without problems. We acquired Veritas recently and I have no experience with it. What would be your recommendation for creating a volume of the same size (4 TB), with the possibility of expanding it gradually?

Thanks.


I have a bit of a problem when striping is done in VxVM across luns that are already striped at the hardware layer.
Instead of increasing performance, the striping here will probably decrease performance.

One large LUN, with a proper layout at the hardware level to ensure performance and redundancy, is in most instances best.

I suggest you go with Mike's suggestion to create a test volume and perform a relayout.

To know what is right for you, you should actually create volumes and test performance before putting into production.

Always keep redundancy and future expansion in mind.

Excellent TN:

Stripe set performance considerations for Veritas Storage Foundation
http://www.symantec.com/docs/TECH204950 

Extract:

In theory, as more spindles are added to a stripe set, more I/O is processed in parallel, potentially improving performance. However, the increase in parallel processing must be weighed against the increasing amount of movement that is the result of fragmenting I/O across multiple columns. As columns are added, one eventually encounters a "diminishing return" where adding further columns no longer provides a significant improvement in I/O, or is not worth the increased risk of a hardware failure. Every spindle that is added to a stripe set increases the chance that a single hardware failure will cause the entire volume to fail. 

Note: Do not assume that a larger number of columns will provide better performance than a smaller number, or that a certain stripe unit size will have superior performance when compared to a different stripe unit size, or even that a striped volume will actually have superior performance when compared to a concatenated volume.

There are too many variables involved in performance for such assumptions to be true for all cases and there is no substitute for testing. Before putting a volume into production, use benchmarking tools to test I/O performance, in different layouts, in a manner that is representative of the intended production environment. This is the only reliable method to determine which layout provides the best performance.
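As a very crude illustration of the benchmarking advice above (a sketch only: it assumes a disposable test volume named vol_test; dedicated tools such as vxbench or fio give far more representative numbers, and this destroys any data on the volume):

```shell
# Sequential write throughput against the raw test volume device
# WARNING: overwrites the volume contents
time dd if=/dev/zero of=/dev/vx/rdsk/vgbackup/vol_test bs=1024k count=4096

# Sequential read throughput
time dd if=/dev/vx/rdsk/vgbackup/vol_test of=/dev/null bs=1024k count=4096
```

Repeat with the volume in each candidate layout (concat, 4-column stripe, etc.) and compare.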

 

This forum discussion is about striping over already striped HW lun:
https://www-secure.symantec.com/connect/forums/running-vxvm-striping-hardware-san-running-raid-10
 


Striping at the VxVM level can still improve performance, but I wouldn't go beyond 4 columns, and you should ensure that no disk in one column is in the same RAID set (i.e. shares the same spindles) as a disk in another column; otherwise the striping will not improve performance and could reduce it. If you are not sure how the back end is configured, it is best to configure a concat rather than a stripe. Modern arrays perform so well that the extra benefit of striping at the VxVM level will probably not be noticed unless you are writing several GB in a very short time (like a database import).

If you can delete the existing volume (losing all data in /prod01p), then this is the easiest:

umount /prod01p
vxassist -g vgbackup remove volume vol_prod01p
vxassist -g vgbackup make vol_prod01p 4096g ncol=4
mkfs -Fvxfs /dev/vx/rdsk/vgbackup/vol_prod01p
mount /dev/vx/dsk/vgbackup/vol_prod01p /prod01p

 

This will create a 4 TB striped volume with 4 columns; if you omit ncol=4, it will create a concat (non-striped) volume.

You will require 4 LUNs at a time to increase the striped volume, or a concat volume can be increased one LUN at a time.
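When the time comes to grow it, vxresize can resize the volume and the mounted VxFS filesystem in one step. A sketch, assuming the new LUNs have already been added to the dg with vxdg adddisk and that you want to grow by 400 GB (the amount is illustrative):

```shell
# Check how far the volume can grow with its current layout
vxassist -g vgbackup maxgrow vol_prod01p

# Grow the volume and the mounted VxFS filesystem together by 400 GB
vxresize -g vgbackup -F vxfs vol_prod01p +400g
```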

Mike


OK Mike and Marianne,
 
With this information I can get on with my tests. Just to close: as I said earlier, this is a backup area where large files (around 16 GB) are written and deleted frequently. I believe concat would be the most suitable layout; what do you think? After your comments I will close the case.
Thank you in advance for the valuable feedback.
 
Thank you very much.
Accepted Solution!

My recommendation is a single concatenated volume.

PS: 

You can close the discussion by selecting one or more helpful post and 'Mark as solution'.
To mark more than one helpful post as solution, select 'Request split solution'. 
