
SuSE SLES and NetApp LUNs: how to grow them?

Jason_Nichols
Level 3

Hi everyone,

Please bear with me on what might be a very simple question here. We have a SuSE SLES 11 server which is connected via HBAs to a LUN on a NetApp filer. This LUN is, say, 500GB in size and is part of a disk group we have called rootdg. Multipathing is also enabled at the OS, I believe, as issuing the command multipath -ll shows me the paths are active and all is good.

I have increased the size of the volume, and then the LUN, on the NetApp to 1TB, checked this, and the NetApp reports the correct sizes. The OS was still reporting the old LUN size, so since I could reboot the server, I did. It reboots, but VxFS still cannot see the new size. Multipath can: if I issue the same command as above, it shows the same paths, just with the larger LUN size.

We have tried the command vxdctl enable, but this seems not to work.

On a side note, I have tried to install VOM and put the agent on the same server. It reports the same, but does show the disk sdd at the correct larger LUN size - not sure how, as sdd is just one of the four paths to the LUN (sda, sdb, sdc, sdd).

What am I doing incorrectly? I am a mere beginner in this as you can tell.

Thanks

J

1 ACCEPTED SOLUTION

Accepted Solutions

Marianne
Level 6
Partner    VIP    Accredited Certified

The command you're looking for is 'vxdisk resize'.

Read more about requirements and command usage under 'Dynamic LUN expansion' on page 146 of the VxVM Admin Guide.


rregunta
Level 4

Hello Jason,

 

Please provide below command outputs:

vxdisk list sda

vxdisk list sdb

vxdisk list sdc

vxdisk list sdd

vxprint -qth

fdisk output for above disks.

 

Regards

Rajesh

Gaurav_S
Moderator
   VIP    Certified

Fully agree with Marianne... what you have done is called "Dynamic LUN expansion" ... & you need to run "vxdisk resize" for Veritas to recognize the new size of the device...

 

Dynamic LUN expansion

Note: A Storage Foundation license is required to use the dynamic LUN expansion feature.

The following form of the vxdisk command can be used to make VxVM aware of the new size of a virtual disk device that has been resized:

# vxdisk [-f] [-g diskgroup] resize {accessname|medianame} [length=value]

The device must have a SCSI interface that is presented by a smart switch, smart array or RAID controller. Following a resize operation to increase the length that is defined for a device, additional disk space on the device is available for allocation. You can optionally specify the new size by using the length attribute.

If a disk media name rather than a disk access name is specified, the disk group must either be specified using the -g option or the default disk group will be used. If the default disk group has not been set up, an error message will be generated.

This facility is provided to support dynamic LUN expansion by updating disk headers and other VxVM structures to match a new LUN size. It does not resize the LUN itself.

Any volumes on the device should only be grown after the device itself has first been grown. Otherwise, storage other than the device may be used to grow the volumes, or the volume resize may fail if no free storage is available.

Resizing should only be performed on devices that preserve data. Consult the array documentation to verify that data preservation is supported and has been qualified. The operation also requires that only storage at the end of the LUN is affected. Data at the beginning of the LUN must not be altered. No attempt is made to verify the validity of pre-existing data on the LUN. The operation should be performed on the host where the disk group is imported (or on the master node for a cluster-shared disk group).

Resizing of LUNs that are not part of a disk group is not supported. It is not possible to resize LUNs that are in the boot disk group (aliased as bootdg), in a deported disk group, or that are offline, uninitialized, being reinitialized, or in an error state.

Warning: Do not perform this operation when replacing a physical disk with a disk of a different size as data is not preserved.

Before reducing the size of a device, any volumes on the device should first be reduced in size or moved off the device. By default, the resize fails if any subdisks would be disabled as a result of their being removed in whole or in part during a shrink operation.

If the device that is being resized has the only valid configuration copy for a disk group, the -f option may be specified to forcibly resize the device.

Resizing a device that contains the only valid configuration copy for a disk group can result in data loss if a system crash occurs during the resize.

Resizing a virtual disk device is a non-transactional operation outside the control of VxVM. This means that the resize command may have to be re-issued following a system crash. In addition, a system crash may leave the private region on the device in an unusable state. If this occurs, the disk must be reinitialized, reattached to the disk group, and its data resynchronized or recovered from a backup.
------------------------------

Note this will only grow the disk, i.e. it will not increase any volumes/filesystems residing on the disk - use vxresize to grow those once the disk size has been increased.
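Putting those two steps together, a minimal sketch (the disk group, disk access name, volume name and target length below are example values, not confirmed against this system; verify them with vxdisk list and vxprint first):

```shell
#!/bin/sh
# Sketch of the two-step grow. DG/DA/VOL/NEWLEN are example values only;
# check 'vxdisk list' and 'vxassist -g <dg> maxsize' before running.
DG=rootdg          # disk group named in the original post
DA=fas20400_2      # disk access name (hypothetical here)
VOL=data           # volume to grow (hypothetical here)
NEWLEN=900g        # hypothetical target length

# Step 1: update VxVM's view of the already-grown LUN.
step1="vxdisk -g $DG resize $DA"
# Step 2: grow the volume and the VxFS filesystem on it in one operation.
step2="vxresize -g $DG $VOL $NEWLEN"

for cmd in "$step1" "$step2"; do
    if command -v "${cmd%% *}" >/dev/null 2>&1; then
        $cmd                     # run for real on a Storage Foundation host
    else
        echo "would run: $cmd"   # dry-run on hosts without the VxVM tools
    fi
done
```

On a host without Storage Foundation installed the guard just prints the commands instead of running them.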

 

Link to admin guide

http://sfdoccentral.symantec.com/sf/5.0MP3/linux/html/vxvm_admin/ch02s14.htm

 

Gaurav

Jason_Nichols
Level 3

Hi, here are the listings as requested.

 

HYD-ICAPLN-S01:~ # vxdisk list sda
Device:    fas20400_2
devicetag: fas20400_2
type:      auto
hostid:    HYD-ICAPLN-S01
disk:      name=rootdg01 id=1285450506.5.HYD-ICAPLN-S01
group:     name=rootdg id=1285450507.7.HYD-ICAPLN-S01
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig autoimport imported thinrclm
pubpaths:  block=/dev/vx/dmp/fas20400_2s3 char=/dev/vx/rdmp/fas20400_2s3
guid:      {c3df601a-c8ec-11df-b63f-5fd89f42c800}
udid:      NETAPP%5FLUN%5F800000035467%5FP3hem4ZGhP40
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=3 offset=65792 len=836647600 disk_offset=0
private:   slice=3 offset=256 len=65536 disk_offset=0
update:    time=1285619753 seqno=0.12
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   4
sdd     state=enabled   type=primary
sdc     state=enabled   type=secondary
sdb     state=enabled   type=primary
sda     state=enabled   type=secondary

 

HYD-ICAPLN-S01:~ # vxdisk list sdb
Device:    fas20400_2
devicetag: fas20400_2
type:      auto
hostid:    HYD-ICAPLN-S01
disk:      name=rootdg01 id=1285450506.5.HYD-ICAPLN-S01
group:     name=rootdg id=1285450507.7.HYD-ICAPLN-S01
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig autoimport imported thinrclm
pubpaths:  block=/dev/vx/dmp/fas20400_2s3 char=/dev/vx/rdmp/fas20400_2s3
guid:      {c3df601a-c8ec-11df-b63f-5fd89f42c800}
udid:      NETAPP%5FLUN%5F800000035467%5FP3hem4ZGhP40
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=3 offset=65792 len=836647600 disk_offset=0
private:   slice=3 offset=256 len=65536 disk_offset=0
update:    time=1285619753 seqno=0.12
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   4
sdd     state=enabled   type=primary
sdc     state=enabled   type=secondary
sdb     state=enabled   type=primary
sda     state=enabled   type=secondary

 

HYD-ICAPLN-S01:~ # vxdisk list sdc
Device:    fas20400_2
devicetag: fas20400_2
type:      auto
hostid:    HYD-ICAPLN-S01
disk:      name=rootdg01 id=1285450506.5.HYD-ICAPLN-S01
group:     name=rootdg id=1285450507.7.HYD-ICAPLN-S01
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig autoimport imported thinrclm
pubpaths:  block=/dev/vx/dmp/fas20400_2s3 char=/dev/vx/rdmp/fas20400_2s3
guid:      {c3df601a-c8ec-11df-b63f-5fd89f42c800}
udid:      NETAPP%5FLUN%5F800000035467%5FP3hem4ZGhP40
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=3 offset=65792 len=836647600 disk_offset=0
private:   slice=3 offset=256 len=65536 disk_offset=0
update:    time=1285619753 seqno=0.12
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   4
sdd     state=enabled   type=primary
sdc     state=enabled   type=secondary
sdb     state=enabled   type=primary
sda     state=enabled   type=secondary

 

HYD-ICAPLN-S01:~ # vxdisk list sdd
Device:    fas20400_2
devicetag: fas20400_2
type:      auto
hostid:    HYD-ICAPLN-S01
disk:      name=rootdg01 id=1285450506.5.HYD-ICAPLN-S01
group:     name=rootdg id=1285450507.7.HYD-ICAPLN-S01
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig autoimport imported thinrclm
pubpaths:  block=/dev/vx/dmp/fas20400_2s3 char=/dev/vx/rdmp/fas20400_2s3
guid:      {c3df601a-c8ec-11df-b63f-5fd89f42c800}
udid:      NETAPP%5FLUN%5F800000035467%5FP3hem4ZGhP40
site:      -
version:   3.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=3 offset=65792 len=836647600 disk_offset=0
private:   slice=3 offset=256 len=65536 disk_offset=0
update:    time=1285619753 seqno=0.12
ssb:       actual_seqno=0.0
headers:   0 240
configs:   count=1 len=51360
logs:      count=1 len=4096
Defined regions:
 config   priv 000048-000239[000192]: copy=01 offset=000000 enabled
 config   priv 000256-051423[051168]: copy=01 offset=000192 enabled
 log      priv 051424-055519[004096]: copy=01 offset=000000 enabled
 lockrgn  priv 055520-055663[000144]: part=00 offset=000000
Multipathing information:
numpaths:   4
sdd     state=enabled   type=primary
sdc     state=enabled   type=secondary
sdb     state=enabled   type=primary
sda     state=enabled   type=secondary

 

HYD-ICAPLN-S01:~ # vxprint -qth
Disk group: rootdg

dg rootdg       default      default  24000    1285450507.7.HYD-ICAPLN-S01

dm rootdg01     fas20400_2   auto     65536    836647600 -

v  data         -            ENABLED  ACTIVE   830472192 SELECT   -        fsgen
pl data-01      data         ENABLED  ACTIVE   830472192 CONCAT   -        RW
sd rootdg01-01  data-01      rootdg01 0        830472192 0        fas20400_2 ENA

 

fdisk -l /dev/sda

Disk /dev/sda (Sun disk label): 255 heads, 189 sectors, 17361 cylinders
Units = cylinders of 48195 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sda3  u          0     17361 418356697+   5  Whole disk
/dev/sda8  u          0     17361 418356697+   f  Unknown
HYD-ICAPLN-S01:~ # fdisk -l /dev/sdb

Disk /dev/sdb (Sun disk label): 255 heads, 189 sectors, 17361 cylinders
Units = cylinders of 48195 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sdb3  u          0     17361 418356697+   5  Whole disk
/dev/sdb8  u          0     17361 418356697+   f  Unknown
HYD-ICAPLN-S01:~ # fdisk -l /dev/sdc

Disk /dev/sdc (Sun disk label): 255 heads, 189 sectors, 17361 cylinders
Units = cylinders of 48195 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sdc3  u          0     17361 418356697+   5  Whole disk
/dev/sdc8  u          0     17361 418356697+   f  Unknown
HYD-ICAPLN-S01:~ # fdisk -l /dev/sdd

Disk /dev/sdd (Sun disk label): 255 heads, 189 sectors, 17361 cylinders
Units = cylinders of 48195 * 512 bytes

   Device Flag    Start       End    Blocks   Id  System
/dev/sdd3  u          0     17361 418356697+   5  Whole disk
/dev/sdd8  u          0     17361 418356697+   f  Unknown
HYD-ICAPLN-S01:~ #

Just to show I am not going mad here, below is the multipath output.

multipath -ll
360a98000503368656d345a4768503430 dm-0 NETAPP,LUN
[size=688G][features=1 queue_if_no_path][hwhandler=0][rw]
\_ round-robin 0 [prio=8][active]
 \_ 0:0:1:0 sdb        8:16  [active][ready]
 \_ 1:0:1:0 sdd        8:48  [active][ready]
\_ round-robin 0 [prio=2][enabled]
 \_ 0:0:0:0 sda        8:0   [active][ready]
 \_ 1:0:0:0 sdc        8:32  [active][ready]

 

As you can see, I do have 688GB of usable space on that LUN. Just as an extra bit of info, the error we get when we try to expand is...


var/VRTSralus # vxdisk -f -g rootdg resize rootdg01

VxVM vxdisk ERROR V-5-1-8643 Device rootdg01: resize failed: Configuration daemon error -6

 

Thanks

Jason
 

Gaurav_S
Moderator
   VIP    Certified

One simple thing to check, from the content I pasted above:

Resizing of LUNs that are not part of a disk group is not supported. It is not possible to resize LUNs that are in the boot disk group (aliased as bootdg), in a deported disk group, or that are offline, uninitialized, being reinitialized, or in an error state.

 

Did you set up any alias for bootdg? If bootdg is aliased to rootdg then the resize won't be allowed.

Can you paste the output of:

# vxdg bootdg

 

If the output comes back as "nodg" then you can rule out this possibility; if it comes back as rootdg, you can change the bootdg alias for the time being & retry the resize operation..

# vxdctl bootdg <any-diskgroup-name>

 

retry resize..
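As a sketch, that check and the conditional re-pointing could look like this (guarded so it is harmless on a host without VxVM; the fallback value is only a stand-in, not real command output):

```shell
#!/bin/sh
# Check whether bootdg is aliased to the disk group we want to resize.
# vxdg only exists on Storage Foundation hosts, so we fall back to a
# stand-in value elsewhere.
if command -v vxdg >/dev/null 2>&1; then
    bdg=$(vxdg bootdg)
else
    bdg=nodg               # stand-in for hosts without VxVM
fi

if [ "$bdg" = "nodg" ]; then
    msg="bootdg is not set - it cannot be blocking the resize"
else
    msg="bootdg is aliased to $bdg - re-point it (vxdctl bootdg <other-dg>) and retry"
fi
echo "$msg"
```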

 

Gaurav

Gaurav_S
Moderator
   VIP    Certified

Can you also update us on which VxVM version you are using?

 

Gaurav

Jason_Nichols
Level 3

Thanks for the quick reply.

We do not have bootdg in use; it reports nodg.

 

HYD-ICAPLN-S01:/ # vxprint -Ath

Disk group: rootdg

DG NAME         NCONFIG      NLOG     MINORS   GROUP-ID
ST NAME         STATE        DM_CNT   SPARE_CNT         APPVOL_CNT
DM NAME         DEVICE       TYPE     PRIVLEN  PUBLEN   STATE
RV NAME         RLINK_CNT    KSTATE   STATE    PRIMARY  DATAVOLS  SRL
RL NAME         RVG          KSTATE   STATE    REM_HOST REM_DG    REM_RLNK
CO NAME         CACHEVOL     KSTATE   STATE
VT NAME         RVG          KSTATE   STATE    NVOLUME
V  NAME         RVG/VSET/CO  KSTATE   STATE    LENGTH   READPOL   PREFPLEX UTYPE
PL NAME         VOLUME       KSTATE   STATE    LENGTH   LAYOUT    NCOL/WID MODE
SD NAME         PLEX         DISK     DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH   [COL/]OFF AM/NM    MODE
SC NAME         PLEX         CACHE    DISKOFFS LENGTH   [COL/]OFF DEVICE   MODE
DC NAME         PARENTVOL    LOGVOL
SP NAME         SNAPVOL      DCO
EX NAME         ASSOC        VC                       PERMS    MODE     STATE
SR NAME         KSTATE

dg rootdg       default      default  24000    1285450507.7.HYD-ICAPLN-S01

dm rootdg01     fas20400_2   auto     65536    836647600 -

v  data         -            ENABLED  ACTIVE   830472192 SELECT   -        fsgen
pl data-01      data         ENABLED  ACTIVE   830472192 CONCAT   -        RW
sd rootdg01-01  data-01      rootdg01 0        830472192 0        fas20400_2 ENA

 

One thing I have just seen is that we have installed the VOM client (5.1), but it seems to be reporting version 4.1 for VxVM. Would that cause any issues?

J

Gaurav_S
Moderator
   VIP    Certified

No.... VOM 5.1 shouldn't cause any issues with your resize, as you triggered it manually via the command line...

Can you also tell us which product you have installed? Is it just VxVM, or SF, SFCFS, or SFRAC?

Do you have any maintenance packs installed? Can you paste the output of:

 

# rpm -qa | grep -i VRTSvxvm

 

Why I'm asking is that I was having a look at the release notes for a recent VxVM product, and I saw the 4.1 version had some bugs...

One more thing: does the license you have installed support Dynamic LUN expansion?

# vxlicrep -e |grep -i expansion

 

Just for your interest, found this tech article, explains DLE with example of Linux

http://www.symantec.com/business/support/index?page=content&id=TECH37221

 

Gaurav

Jason_Nichols
Level 3

Hi G,

Thanks so much for your prompt replies. You are a great help.

rpm -qa | grep -i VRTSvxvm
VRTSvxvm-5.1.00.00-GA_SLES11

vxlicrep -e |grep -i expansion
   Dynamic Lun Expansion#VERITAS Volume Manager = Enabled
   Dynamic Lun Expansion               = Enabled
   Dynamic Lun Expansion               = Enabled
HYD-ICAPLN-S01:~ #
 

I did try to check this with my very limited knowledge and it looked OK.

After a quick read of the document you pointed me to, this is my output from /proc/partitions:

more /proc/partitions
major minor  #blocks  name

 104     0  286683544 cciss/c0d0
 104     1    2104483 cciss/c0d0p1
 104     2  284575410 cciss/c0d0p2
   8     0  721437696 sda
   8     8  418356697 sda8
   8    16  721437696 sdb
   8    24  418356697 sdb8
   8    32  721437696 sdc
   8    40  418356697 sdc8
   8    48  721437696 sdd
   8    56  418356697 sdd8
 201     0  286683544 VxDMP1
 201     1    2104483 VxDMP1p1
 201     2  284575410 VxDMP1p2
 201    16  721437696 VxDMP2
 201    19  418356697 VxDMP2p3
 199  24000  415236096 VxVM24000
 253     0  721437696 dm-0
 253     1  418356697 dm-1


I think from this the "real" disks can see the extra size, but the resize commands refuse to see it!
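A quick sanity check of those /proc/partitions figures (the #blocks column is in 1 KiB units) supports that reading; here is a small conversion sketch using the sda numbers from the listing above:

```shell
#!/bin/sh
# /proc/partitions reports sizes in 1 KiB blocks; convert to whole GiB.
kib_to_gib() {
    echo $(( $1 / 1048576 ))    # 1 GiB = 1048576 KiB
}

echo "sda  (whole LUN):  $(kib_to_gib 721437696) GiB"   # kernel sees the grown LUN
echo "sda8 (VxVM slice): $(kib_to_gib 418356697) GiB"   # slice VxVM is still using
```

So the kernel and multipath both agree on 688 GiB for the whole device, while the slice VxVM is using is still at the old size.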

Thanks again

Jason

Gaurav_S
Moderator
   VIP    Certified

Hi Jason,

The error you are facing, "configuration daemon error", is to me a generic error; it can appear for multiple reasons..

What I can see from the above, and would suggest accordingly, is:

-- If this is a standalone server, can you try restarting the vxconfigd daemon & see if you can resize?

# vxconfigd -k

Please note, if you are using Veritas CFS/CVM/RAC, do not restart vxconfigd (unless this is a test box) as it may initiate failovers.

-- I see the disks are thin-reclaim capable, see below:

HYD-ICAPLN-S01:~ # vxdisk list sda
Device:    fas20400_2
devicetag: fas20400_2
type:      auto
hostid:    HYD-ICAPLN-S01
disk:      name=rootdg01 id=1285450506.5.HYD-ICAPLN-S01
group:     name=rootdg id=1285450507.7.HYD-ICAPLN-S01
info:      format=cdsdisk,privoffset=256,pubslice=3,privslice=3
flags:     online ready private autoconfig autoimport imported thinrclm  <<<<<<<<<<<<<<<<<<<<<<<

 

Since thin reclamation is a fairly new feature, I am not sure whether it may place any constraint on DLE..... For this you will need to consult Symantec support; open a Symantec support case & confirm.

 

Gaurav

Marianne
Level 6
Partner    VIP    Accredited Certified

I agree with Gaurav - please open a support call.

I have found documentation saying that a vxconfigd core dump when doing 'vxdisk resize' is supposed to be fixed in VxVM 5.1:

http://sfdoccentral.symantec.com/sf/5.1/linux/html/sfcfs_notes/ch01s07s02.htm

Incident 1797540:  VxVM: vxdisk resize intermittently causes vxconfigd to dump core.

Jason_Nichols
Level 3

Thanks - I have logged a support call. I will update here when fixed.

One more question that may be an issue: we have 5.1 installed, but we only have 4.1 licenses installed.

Would this cause a problem, or would the resize option simply be unavailable if the license were incorrect?

Jason