VxVM Shrink a volume online

adp1
Level 3

Greetings, 

I have a 2.4 terabyte concat volume whose data has been moved/restructured and is now using only about 200 gig of the allocated space. We've had very poor luck in the past using the vxresize command to shrink volumes.

Does anyone know of another way to consolidate the data onto a single disk and shrink the volume so that I can reclaim the unused disks?

The volume has a single plex, with 8 subdisks.  

Your help is appreciated.

Thanks!

5 REPLIES

Gaurav_S
Moderator
VIP Certified

Hello,

Can you update us on the OS version & Volume Manager version?
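For reference, something like the following usually shows this (the VRTS package names are the standard ones but may vary with your install):

# uname -a    #### OS version
# rpm -q VRTSvxvm VRTSvxfs    #### VxVM/VxFS versions on rpm-based Linux
# pkginfo -l VRTSvxvm    #### Solaris equivalent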

Also, how many disks does your diskgroup have, and how much space? vxresize needs temporary space available for resize operations... Also, the filesystem you are trying to resize should not be 100% used.

you can check the free space in the diskgroup using the "vxdg -g <dg> free" command.
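and a quick check that the filesystem itself has headroom (the mount point below is just a placeholder):

# df -h /mount/point    #### usage should be well below 100%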

There are a few more restrictions (as per the 5.0MP3 VxVM admin guide); hope you are not hitting any of them:

Note the following restrictions for using vxresize:

■ vxresize works with VxFS and UFS file systems only. In some situations, when resizing large volumes, vxresize may take a long time to complete.

■ Resizing a volume with a usage type other than FSGEN or RAID5 can result in loss of data. If such an operation is required, use the -f option to forcibly resize such a volume.

■ You cannot resize a volume that contains plexes with different layout types. Attempting to do so results in the following error message:

VxVM vxresize ERROR V-5-1-2536 Volume volume has different organization in each mirror

To resize such a volume successfully, you must first reconfigure it so that each data plex has the same layout (a quick check for this is sketched below).
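To check for the mixed-layout case before resizing, something like this shows the layout of each plex (dg/volume names are placeholders):

# vxprint -g <dg> -ht <volume> | grep '^pl'    #### all plexes should show the same layout (e.g. CONCAT)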

 

G

g_lee
Level 6

Normally resize issues tend to occur when users attempt methods other than vxresize (ie: running vxresize is the safer way to shrink the filesystem).

Can you please elaborate on the "poor luck" you've had in the past using vxresize (ie: the exact errors/nature of the problems)? Depending on the particular issue, it may not apply in this situation.

In addition to the restrictions Gaurav mentioned above, note that only VxFS filesystems can be shrunk, and the filesystem must be mounted to perform the shrink operation (see manpage: https://sort.symantec.com/public/documents/sf/5.0MP3/solaris/manpages/vxvm/man1m/vxresize.html )
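eg: a quick way to confirm both points (substitute your dg/volume/mount details):

# fstyp -v /dev/vx/rdsk/<dg>/<volume>    #### should report vxfs
# mount | grep <volume>    #### the fs must be mounted for the shrink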

If the filesystem is VxFS, can you also provide the following output for the volume (as well as the OS/SF version details + output requested by Gaurav)?

# vxprint -qhtrg <dg> <volume> <disk_you_want_to_consolidate_to>

(ie: where <disk_you_want_to_consolidate_to> is the dmname of the disk that you want to be the eventual single disk containing the shrunk 200g volume)

regards,
Grace

NB: also moved this to Storage Foundation forum as this is not specific to DMP

adp1
Level 3

"Poor Luck" refers to trying to resize a small volume (500g or less) and the process never completes.    We've let is run for over 24 hours and it just sits there and seemingly does nothing.  

Linux SLES 10

Veritas 5.1

 

There is more than enough space in this diskgroup for this, so I don't think that's the issue.

vxprint output:

v  cnups1ds_oraclepss -      ENABLED  ACTIVE   4951251456 SELECT  -        fsgen
pl cnups1ds_oraclepss-01 cnups1ds_oraclepss ENABLED ACTIVE 4951251456 CONCAT - RW
sd emc_4798_0eba-02 cnups1ds_oraclepss-01 emc_4798_0eba 460800 958889728 0 EMC_4798_0eba ENA
sd emc_4798_0eca-02 cnups1ds_oraclepss-01 emc_4798_0eca 460800 719027968 958889728 EMC_4798_0eca ENA
sd emc_4798_0ec2-02 cnups1ds_oraclepss-01 emc_4798_0ec2 304087040 566034944 1677917696 EMC_4798_0ec2 ENA
sd emc_4798_0ec2-12 cnups1ds_oraclepss-01 emc_4798_0ec2 890171904 69178624 2243952640 EMC_4798_0ec2 ENA
sd emc_4798_105a-01 cnups1ds_oraclepss-01 emc_4798_105a 0 119861248 2313131264 EMC_4798_105a ENA
sd emc_4798_105b-01 cnups1ds_oraclepss-01 emc_4798_105b 0 599557888 2432992512 EMC_4798_105b ENA
sd emc_4798_08db-01 cnups1ds_oraclepss-01 emc_4798_08db 0 959350528 3032550400 EMC_4798_08db ENA
sd emc_4798_08d3-01 cnups1ds_oraclepss-01 emc_4798_08d3 0 959350528 3991900928 EMC_4798_08d3 ENA
 

 

df output:
/dev/vx/dsk/cnups1ds_dg1/cnups1ds_oraclepss  2.4T  183G  2.2T   8% /oracle/PSS

 

vxdg free output: 

vxdg -g cnups1ds_dg1 free
DISK         DEVICE       TAG          OFFSET    LENGTH    FLAGS
emc_1555_08f7 emc_1555_08f7 emc_1555_08f7 0         378158848 -
emc_4780_0cc8 emc_4780_0cc8 emc_4780_0cc8 0         119861248 -
emc_4780_0cc9 emc_4780_0cc9 emc_4780_0cc9 0         119861248 -
emc_4798_0a0e emc_4798_0a0e emc_4798_0a0e 0         599557888 -
emc_4798_0eba EMC_4798_0eba EMC_4798_0eba 0         460800    -
emc_4798_0eca EMC_4798_0eca EMC_4798_0eca 0         460800    -
emc_4798_0ec2 EMC_4798_0ec2 EMC_4798_0ec2 0         304087040 -
emc_4798_0ec2 EMC_4798_0ec2 EMC_4798_0ec2 870121984 20049920  -
emc_4798_0fd7 emc_4798_0fd7 emc_4798_0fd7 0         719488768 -
emc_4798_0994 emc_4798_0994 emc_4798_0994 0         599557888 -
hds_65133_0140 hds_65133_0140 hds_65133_0140 325058560 722748304 -
hds_65133_0142 hds_65133_0142 hds_65133_0142 443040624 604766240 -
hds_65133_0143 hds_65133_0143 hds_65133_0143 0         1047806864 -
hds_65133_0144 hds_65133_0144 hds_65133_0144 0         1047806864 -
hds_65133_0145 hds_65133_0145 hds_65133_0145 0         1047806864 -
hds_65133_0146 hds_65133_0146 hds_65133_0146 0         1047806864 -
hds_65133_0147 hds_65133_0147 hds_65133_0147 0         1047806864 -
 

My final destination after the volume is shrunk is to migrate it to the hds_65133 disks, which is why there are so many of them available.
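(For reference, the LENGTH column in the vxdg free output is in 512-byte sectors, so eg: the free extent on hds_65133_0143 works out to 1047806864 x 512 bytes ≈ 536.5 GB, ie: roughly 500 GiB.)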

ScottK
Level 5
Employee

I recently ran into a failure to shrink using vxresize (it simply returned an error immediately). That system was also Linux (albeit RHEL) and SF 5.1. I've heard that 5.1SP1 has a number of fixes related to resize, although I have not confirmed this.

I have been unpacking the work into shrinking the file system (with fsadm) and then shrinking the volume (with vxassist). This also helps narrow down where the problem may be occurring. As g_lee noted, you can get into trouble with this approach if you calculate the sizes wrong, or mix up which file system is on which volume, etc.

One caveat/hassle is that fsadm in 5.1 requires the new size in sectors (!). Fortunately that is fixed in 5.1SP1, so it will accept more conventional units.
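For what it's worth, a rough sketch of that two-step approach for a 200g target on this volume (the sector count is my arithmetic: 200g = 419430400 x 512-byte sectors; check the fsadm_vxfs manpage for the exact options on your release, and always shrink the fs before the volume):

# /opt/VRTS/bin/fsadm -b 419430400 /oracle/PSS    #### shrink the fs to 200g first
# vxassist -g cnups1ds_dg1 shrinkto cnups1ds_oraclepss 200g    #### then shrink the volume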

g_lee
Level 6
(Accepted Solution)

regarding the previous issue with resizing the 500g volume:

although it's not possible to say with certainty why it failed without the exact details/output from the time you encountered it, possible causes of resize problems/failure (in addition to Gaurav's earlier points) include:

a. the filesystem is being heavily used while the resize is being run (ideally the filesystem should be quiesced while the resize operation is in progress) - so retry the resize during a quieter period

b. the filesystem is fragmented - defragment the filesystem and retry the resize (a sketch follows below).

technote: http://www.symantec.com/business/support/index?page=content&id=TECH72741

see the VxFS 5.1 (Linux) Administrator's Guide -> VxFS performance: creating, mounting, and tuning file systems -> Monitoring free space -> Monitoring fragmentation

https://sort.symantec.com/public/documents/sf/5.1/linux/html/vxfs_admin/ch02s04s01.htm
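eg: to report and then reduce fragmentation on this filesystem (mount point taken from your df output; see the fsadm_vxfs manpage for full details):

# /opt/VRTS/bin/fsadm -D -E /oracle/PSS    #### report directory (-D) and extent (-E) fragmentation
# /opt/VRTS/bin/fsadm -d -e /oracle/PSS    #### defragment directories (-d) and extents (-e)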

regarding the current filesystem:

From the output there appears to be sufficient filesystem & diskgroup space. You can migrate to the hds_65133* disks after the resize is done (the volume doesn't currently reside on these disks).

Could you provide the following output to confirm the vxfs version/details?

# fstyp -v /dev/vx/rdsk/cnups1ds_dg1/cnups1ds_oraclepss

If the filesystem is vxfs, you should be able to resize the fs using the steps below (confirm with fstyp first).

1. Verify the filesystem is not fragmented (see the fsadm commands above)

2. Run vxresize (preferably not during a period of heavy use), eg: to shrink to 200g

# vxresize -s -F vxfs -g cnups1ds_dg1 cnups1ds_oraclepss 200g
(note the filesystem must be mounted to perform the resize. see vxresize manpage for more info: https://sort.symantec.com/public/documents/sf/5.1/linux/manpages/volume_manager/man/html/man1m/vxresize.1m.html )

3. Verify the filesystem and volume have shrunk:

# df -h /oracle/PSS
# fstyp -v /dev/vx/rdsk/cnups1ds_dg1/cnups1ds_oraclepss
# vxprint -qhtrg cnups1ds_dg1 cnups1ds_oraclepss

4. Once the filesystem has been resized, you can migrate to the hds_65133 disks by mirroring with vxassist, then removing the original emc_4798 plex.

eg: to mirror to disk hds_65133_0143 (~500gb free from your output, so it should have sufficient space once the fs is shrunk to 200g - substitute the desired disk(s) if you wish to use a different device)

# vxassist -b -g cnups1ds_dg1 mirror cnups1ds_oraclepss hds_65133_0143
######## (-b runs in background, check progress with vxtask list)

Check the mirror operation is complete, with both plexes ENABLED and ACTIVE:

# vxtask list #### should return no output
# vxprint -qhtrg cnups1ds_dg1 cnups1ds_oraclepss

remove the mirror on emc_4798

# vxassist -g cnups1ds_dg1 remove mirror cnups1ds_oraclepss !emc_4798_0eba
#### NOTE: this assumes the original plex is on emc_4798_0eba, which should be the case from the output you've provided. if vxprint shows a different device, substitute the correct disk in the remove mirror command above

see vxassist manpage for more details: https://sort.symantec.com/public/documents/sf/5.1/linux/manpages/volume_manager/man/html/man1m/vxassist.1m.html

If you encounter any issues, please provide the exact command(s) run and the error output to investigate further.

Alternatively, you could also try resizing the filesystem separately per ScottK's suggestion (resize the fs with fsadm first, then resize the volume with vxassist) - you do need to take care re: the order and correct sizing, as he mentioned above.