
Replace disks in a VCS-managed disk group that is online and shared

ciwei2103
Level 2

I'm looking for some advice on how best to approach this.

Environment: Solaris 8, Veritas VxVM 3.5 (3.5,REV=06.21.2002.23.1), VCS 3.5 with CFS/CVM, and Oracle 9i RAC.

We need to migrate and replace the disks currently in vxfencoorddg and oradg. These disks are currently online and shared, as shown below.

bash-2.03# vxdisk list | grep c4 | sort +3

c4t0d2s2 sliced - (vxfencoorddg) online

c4t0d3s2 sliced - (vxfencoorddg) online

c4t0d4s2 sliced - (vxfencoorddg) online

c4t0d0s2 sliced c4t0d0s2 oradg online shared

 

I would like to replace the coordinator disks with the following new disks.

bash-2.03# vxdisk list | grep c3t20

c3t20d23s2 sliced - - online

c3t20d24s2 sliced - - online

c3t20d25s2 sliced - - online

c3t20d26s2 sliced - - online
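
The new c3t20 disks already show as online with no disk group, so they look ready to use. Just for completeness, a brand-new replacement disk would first need a VxVM label before it could join a disk group; a minimal sketch (only needed for a disk that has no VxVM label yet, disk names taken from the listing above):

# write a VxVM label so the disk can be added to a disk group
vxdisksetup -i c3t20d23

# confirm it now shows as online and not yet in any disk group
vxdisk list c3t20d23s2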

 

Here is the procedure I proposed.

Step 1: replace the disks in vxfencoorddg (a consolidated command sketch follows this list).

- Shut down the cluster on both nodes.

- Stop the fencing driver (vxfen) on both nodes:

/etc/init.d/vxfen stop

On node1:

- vxdg -t import vxfencoorddg

- Add the new disks and remove the old coordinator disks, using vxdg adddisk and then vxdg rmdisk.

- Restart fencing:

/etc/init.d/vxfen start

- Check that /etc/vxfentab contains the new disks;

verify fencing membership with: gabconfig -a | grep 'Port b'

On node2:

- Start fencing and check /etc/vxfentab.
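
For clarity, here is step 1 as a single command sketch. It follows the list above; the deport before restarting fencing and the disk media names are my assumptions (run vxdisk list after the import to see the real media names), so please treat it as a sketch rather than a tested procedure.

# stop VCS cluster-wide (run once from either node), then stop fencing on each node
hastop -all
/etc/init.d/vxfen stop

# node1: temporarily import the coordinator disk group
vxdg -t import vxfencoorddg

# add the new coordinator disks, then remove the old ones
# (media names assumed equal to the access names; check with vxdisk list)
vxdg -g vxfencoorddg adddisk c3t20d23s2=c3t20d23s2 c3t20d24s2=c3t20d24s2 c3t20d25s2=c3t20d25s2
vxdg -g vxfencoorddg rmdisk c4t0d2s2 c4t0d3s2 c4t0d4s2

# deport the group again so fencing finds it in its usual deported state
vxdg deport vxfencoorddg

# on each node: restart fencing and verify the new disks are picked up
/etc/init.d/vxfen start
cat /etc/vxfentab
gabconfig -a | grep 'Port b'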

 

Step 2: replace the disk in oradg.

I was planning to do:

- vxdctl -c mode (to find the CVM master node)

- vxdg -g oradg adddisk c3t20d23s2=c3t20d23s2

then mirror oravol with:

vxassist -g oradg mirror oravol c3t20d23s2

However, this didn't work; the vxdg adddisk just hangs (maybe a bug? see the check sketch below).
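
Before abandoning the online approach, a couple of quick checks on the node where adddisk hung may be worth doing; these are standard VxVM status commands:

# shared-disk-group configuration changes only succeed on the CVM master
vxdctl -c mode

# confirm vxconfigd itself is enabled
vxdctl mode

# confirm the new disk is still visible, online and not claimed by any group
vxdisk list c3t20d23s2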

So I'm now planning this instead (sketched in full after the list):

- After shutting down the cluster and fencing,

on the CVM master node:

- vxdg -t import oradg

- vxdg adddisk .. to add new disks

- vxassist -g oradg mirror oravol <new disks>

- vxdg rmdisk .. to remove the old disks
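
Here is how I picture step 2 on the CVM master, again as a sketch. The plex name is a placeholder to be read from vxprint output, and the extra vxplex step is there because vxdg rmdisk refuses to remove a disk that still holds subdisks:

# on the CVM master, with the cluster and fencing stopped
vxdg -t import oradg

# add the new disk and mirror the volume onto it
vxdg -g oradg adddisk c3t20d23s2=c3t20d23s2
vxassist -g oradg mirror oravol c3t20d23s2

# wait for the new plex to finish syncing, then find the plex on the old disk
vxprint -g oradg -ht oravol

# dissociate and remove the old plex (<old-plex> is a placeholder from vxprint)
vxplex -g oradg -o rm dis <old-plex>

# the old disk is now empty and can be removed from the disk group
vxdg -g oradg rmdisk c4t0d0s2

Another option that might be worth a look is vxevac -g oradg c4t0d0s2 c3t20d23s2, which moves the subdisks off the old disk onto the new one in a single operation; the old disk can then be removed with vxdg rmdisk as above.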

 

Step 3: after all this is done, reboot both nodes to let VCS pick up and manage these disk groups again (verification sketch below).
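
As a final check once both nodes are back up, something along these lines (all standard VCS/VxVM status commands; which ports and groups to expect depends on your configuration):

# GAB membership: port a (GAB), port b (fencing) and port h (VCS) should list both nodes
gabconfig -a

# CVM master/slave roles
vxdctl -c mode

# disk groups should show the new disks, online and shared
vxdisk list

# VCS service groups back online
hastatus -sum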

Will this procedure work as expected? Are there any better ways of doing this? Thanks,

Jason
