05-03-2013 03:34 AM
Hi guys, I have one concat volume, say vol01. I am planning to create a striped plex on new storage devices, attach it to the existing volume vol01, and mirror it with the existing concat plex. Kindly let me know if it is feasible to perform.
Your help is appreciated.
Thanks
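Before attaching anything, it is worth confirming the volume's current layout; a minimal sketch, assuming the disk group is called diskgroup (substitute your own name):

```shell
# Print the full record hierarchy for vol01; the plex line's LAYOUT
# column should read CONCAT for the existing concatenated plex.
vxprint -g diskgroup -ht vol01
```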
05-03-2013 05:00 AM
Yes, this is fine - Volume Manager allows different layouts on plexes. The syntax would be something like:
vxassist -g diskgroup mirror vol_name layout=striped
Mike
05-03-2013 05:38 AM
Thanks Mike. Also, could you please let me know what the syntax would be for creating a mirrored striped plex with 16 columns across 64 disks of 33 GB each?
05-03-2013 05:50 AM
vxassist -g diskgroup mirror vol_name layout=striped ncol=16 stwidth=64 !newdisk_name
Mike
05-03-2013 10:06 AM
Sorry Mike, I meant ncol=16 with 64 disks. Do I have to specify all 64 disks while creating the mirrored striped plex?
05-03-2013 10:24 AM
And currently I am planning like below:
Create a subdisk on each of the 64 disks, taking the whole disk:
vxmake -g DG sd disk01-01 disk01,0,33g
Create the plex as follows (vxmake takes subdisk names, not disk names, and uses layout=stripe with ncolumn):
vxmake -g DG plex plex_name layout=stripe ncolumn=16 stwidth=64 sd=disk01-01,disk02-01,...,disk64-01
Attach the plex (vxplex att takes the volume name first, then the plex):
vxplex -g DG att vol01 plex_name
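Typing 64 vxmake commands by hand is error-prone; the subdisk step can be generated with a loop. A hedged sketch, assuming the disks are named disk01 through disk64 (adjust to your actual disk media names); it echoes the commands as a dry run so they can be reviewed before piping to sh:

```shell
# Emit one whole-disk subdisk command per spindle: diskNN-01 covers
# diskNN from offset 0 for the full 33g.
for i in $(seq -w 1 64); do
    echo vxmake -g DG sd "disk${i}-01" "disk${i},0,33g"
done
```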
05-03-2013 11:06 AM
Why use lower-level commands? If you use vxassist, you probably don't need to specify disks, as vxassist will try to mirror across controllers and your new array will show up on a different controller. If you want to verify that vxassist will do the right thing, just create a small concat volume (say 100m) on your existing array and then run:
vxassist -g diskgroup mirror vol_name layout=striped ncol=16
This should create a mirrored plex across 16 of your disks on the new array.
As you are using 64 disks, I am assuming your existing volume is just over 2TB (64 x 33GB = 2112GB), so if you mirror this volume using vxassist it will use all 64 disks.
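The sizing arithmetic checks out: 64 disks x 33 GB is 2112 GB, just over 2 TB, and a 16-column stripe across 64 disks puts 4 disks in each column. A throwaway check:

```shell
# 64 disks of 33 GB each, striped over 16 columns.
total_gb=$((64 * 33))
disks_per_col=$((64 / 16))
echo "total: ${total_gb} GB"            # just over 2 TB
echo "disks per column: ${disks_per_col}"
```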
If you don't want vxassist to create a layered volume, then you can edit/create /etc/default/vxassist with:
stripe-mirror-col-trigger-pt=262144g stripe-mirror-col-split-trigger-pt=262144g
This stops all volumes from being layered.
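As a config fragment, the file would contain exactly the two settings above; a sketch of /etc/default/vxassist, assuming the usual "#" comment syntax of VxVM defaults files:

```
# /etc/default/vxassist
# Raise both layered-volume trigger points to 256 TB (262144 GB) so
# vxassist never converts a mirror request into a layered layout.
stripe-mirror-col-trigger-pt=262144g
stripe-mirror-col-split-trigger-pt=262144g
```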
Mike
05-03-2013 12:53 PM
Yes Mike, the file system is around 2TB. We wanted to play it safe by creating the subdisks and plex manually, since that way we are sure we will have one concat plex and one striped plex.
Could you please explain me below things:
"If you don't want vxassist to create a layered volume, then you can edit/create /etc/default/vxassist with:
stripe-mirror-col-trigger-pt=262144g stripe-mirror-col-split-trigger-pt=262144g
This stops all volumes from being layered."
Thanks
05-03-2013 01:18 PM
A layered volume gives exactly the same performance, and the data sits in exactly the same place as in a non-layered volume, but VM creates more objects, which makes it more complicated; in return it gives more resilience, as more disks can fail without the volume failing. But if this is just a migration step, then I guess you will delete the plex on the old array, so there is no point in a layered volume - and in fact the volume could not be created as layered here, as that requires both plexes to be stripes. Layered volumes are created, where possible, above a certain size, but you can override that trigger point (as in my last post) to the largest possible volume size, so layered volumes are effectively disabled.
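To check whether a given volume ended up layered, look at its record hierarchy; a hedged sketch, reusing the diskgroup and volume names from earlier in the thread:

```shell
# In `vxprint -ht` output, a layered volume shows subvolume (sv)
# records beneath its top-level plexes; a plain mirrored volume
# shows only pl (plex) and sd (subdisk) records.
vxprint -g diskgroup -ht vol_name
```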
Mike
05-06-2013 02:56 AM
Thanks Mike for your valuable inputs.
I will be using either
vxassist -g diskgroup mirror vol_name layout=striped ncol=16
or, if that creates any problems, I might go with the low-level commands.
Thanks.
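Once the new striped plex has finished synchronizing, the migration can be completed by removing the old concat plex; a hedged sketch (the plex name vol01-01 is illustrative - confirm the real name with vxprint first):

```shell
# Watch the mirror resync task until it completes.
vxtask list

# Verify the new plex is ENABLED/ACTIVE, then dissociate and remove
# the old concat plex in one step.
vxprint -g diskgroup -ht vol01
vxplex -g diskgroup -o rm dis vol01-01
```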