
Moving disks from one EMC array to another array with VxVM DGs

Hi All,

There is a data-migration activity to our new EMC Symmetrix, so the task is:

We have EMC disks used with VxVM in the existing cluster, and we are migrating these disks to the new EMC array, so I need some fine-tuning of my work plan:

1. Take a complete backup of the VxVM configuration.
2. Deport all the DGs using vxdg deport.
3. EMC will move/replicate the disks and present them to the host.
4. Scan for the disks and import them using vxdg import.
5. Run fsck and mount.
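As a hedged sketch, steps 1-2 might look like the following on the host; the disk group name datadg and the mount point /data are hypothetical examples, not taken from the thread.

```shell
# Sketch of steps 1-2 (backup, then deport). "datadg" and /data are examples.
# 1. Save the VxVM configuration so the layout can be rebuilt if anything fails:
vxprint -g datadg -hmQq > /var/tmp/datadg.vxprint   # full object records
vxdisk -o alldgs list   > /var/tmp/vxdisk.list      # disk-to-DG mapping
vxdg list               > /var/tmp/vxdg.list        # DG names and IDs
# 2. Unmount, stop the volumes, then deport the group:
umount /data
vxvol -g datadg stopall
vxdg deport datadg
```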

Please let me know if there is anything else that needs to be taken care of.

Just to provide more info: the cluster is an Oracle 10g OPS parallel cluster.

Thanks & regards,
Govinda.

1 Solution

Accepted Solution!

Hi Govinda,

Your procedure looks good; it should work OK.
If you are using SF 5.0, you may get a UDID mismatch in the vxdisk list output, which is OK, as the new LUNs' serial numbers will differ from the original ones.

Refer to the SF 5.0 VxVM user guide for UDID details. Also, before running vxdisk updateudid to fix the udid mismatch error, check that you have the latest patches installed.
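For completeness, here is a hedged sketch of how the UDID mismatch might be spotted and cleared on SF 5.0; the disk and DG names are illustrative, not from the thread.

```shell
# Spotting and fixing a udid_mismatch flag after the new LUNs are presented.
# Disk/DG names (emc1_0042, datadg) are examples only.
vxdisk list | grep udid_mismatch    # disks whose stored UDID differs from the LUN's
vxdisk updateudid emc1_0042         # rewrite the on-disk UDID for one disk
# or refresh UDIDs for the whole group at import time:
vxdg -o updateid import datadg
```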

Also, if you are using our SFRAC product, you may have to reboot all the nodes in the cluster after the fencing disks are migrated. There are a few technotes on this topic on our support site, http://support.veritas.com, as well.

Good Luck.

Regards
Srini


6 Replies

Hi Govinda,

Stupid question: why don't you just zone the new LUNs to the 10g RAC cluster nodes and add a VxVM mirror while the database remains online? After the mirror syncs, you would remove the mirror plexes on the old LUNs and keep running on the new EMC array.
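A minimal sketch of the online approach described here, assuming the new LUNs are already zoned and initialized into the DG; the DG, volume, disk, and plex names are hypothetical.

```shell
# Mirror a volume onto a new-array disk, then drop the plex on the old array.
# Names (racdg, datavol, emcnew01, datavol-01) are examples only.
vxassist -g racdg mirror datavol emcnew01   # create and sync a plex on the new LUN
vxprint -g racdg -htq datavol               # check that both plexes are ACTIVE
vxplex -g racdg -o rm dis datavol-01        # dissociate and remove the old plex
```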

Regards

Manuel

Hi Manuel,

Thanks for the reply,

The situation is: EMC will copy/replicate the data disks for us, so we have nothing to do on the data side. The only question is, when they present the LUNs from the new array, will we be able to import the DGs (expecting that the disks will have the whole metadata structure in place, as per EMC)?

We are expecting changes to the CTD devices, but they should carry the same metadata as the current disks.

So, will we be able to import the DGs using vxdg?

Hope this clarifies, let me know if you still need more insight.

Thanks & regards,
Govinda.


Hi Govinda,

I do not really understand why somebody would go through the described procedure if both arrays are attached to the same SAN.

VxVM even allows migrating to a different storage vendor without ANY application downtime. It seems EMC did not propose the easier and less risky approach.

> Just to provide more info. the cluste is oracle10g OPS parellel cluster.

You did not mention which OS and RAC(?) software solution you are using. Could you please provide some details?

Thanks

Manuel








Thanks for your replies Srini and Manuel,

Sorry for the delay; I was away from my desk for a while.

The servers are HP-UX 11.23,
VxVM 4.1,
Serviceguard CFS cluster (shared).

The DGs are:
 vxdg list
NAME         STATE           ID
dg04         enabled,shared,cds   1243932129.119.acomp30
archdg       enabled,shared,cds   1243926131.85.acomp30
dg01         enabled,shared,cds   1242648047.87.acomp30
dg02         enabled,shared,cds   1243927944.115.acomp30
dg03         enabled,shared,cds   1243928726.117.acomp30
racdg        enabled,shared,cds   1242258259.44.acomp30
-------------------------

CLUSTER        STATUS
prod      up

  NODE           STATUS       STATE
  comp30        up           running
  comp31        up           running

MULTI_NODE_PACKAGES

  PACKAGE        STATUS        STATE         AUTO_RUN     SYSTEM
  SG-CFS-pkg     up            running       enabled      yes
  SG-CFS-DG-1    up            running       enabled      no
  SG-CFS-MP-1    up            running       enabled      no
  SG-CFS-MP-2    up            running       enabled      no
  SG-CFS-DG-2    up            running       enabled      no
  SG-CFS-MP-3    up            running       enabled      no
  SG-CFS-MP-4    up            running       enabled      no

--------------------
My plan would be:

1. Shut down the cluster/packages on both nodes.
2. Deport the DGs shown above.
3. Ask EMC to migrate and present the LUNs.
4. Scan the LUNs and confirm they are seen by VxVM (vxdctl enable, vxdisk list, etc.).
5. Run vxdg import to bring the DGs online (and, if required, vxvol startall).
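A hedged sketch of how this plan might map onto Serviceguard CFS commands, using the package names from the cmviewcl output above; the exact package ordering and rescan steps should be checked against HP's documentation.

```shell
# 1-2. Halt the CFS mount packages, then the DG packages (unmounts and deports):
cmhaltpkg SG-CFS-MP-1 SG-CFS-MP-2 SG-CFS-MP-3 SG-CFS-MP-4
cmhaltpkg SG-CFS-DG-1 SG-CFS-DG-2
# 3. ...EMC migrates and presents the new LUNs...
# 4. Rediscover devices on HP-UX, then have VxVM rescan:
ioscan -fnC disk
insf -e
vxdctl enable
vxdisk -o alldgs list        # the DGs should show against the new devices
# 5. Restart the packages, which re-imports the shared DGs and remounts:
cmrunpkg SG-CFS-DG-1 SG-CFS-DG-2
cmrunpkg SG-CFS-MP-1 SG-CFS-MP-2 SG-CFS-MP-3 SG-CFS-MP-4
```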

Please let me know if this works, and whether I have missed any steps.

Thanks and regards,
Govinda.





Hi Govinda,

You can perform this activity in two ways:

1. EMC replicates the data at the array end (array-based migration).

Plan:
1. EMC copies the remote/local data at the array.
2. Once replication is completed on the array side, they split the relationship.
3. On the server side, you deport the disk groups.
4. EMC unmasks the old devices and masks the new devices to the server.
5. You scan the newly allocated devices and import the disk groups.
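One way steps 3-5 could be sanity-checked before the import; the device name and DG name are illustrative.

```shell
# After EMC splits the replica and presents the new LUNs:
vxdctl enable                              # rescan devices
vxdisk -o alldgs list                      # new devices should list the old DG name
vxdisk list c4t0d1 | grep -E 'group|udid'  # private region should still name the DG
vxdg import datadg                         # shared DGs on CVM: vxdg -s import datadg
vxvol -g datadg startall
```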


2. You can perform the same activity from the server side if you have VxVM installed on the server (host-based migration).

Plan:
1. EMC allocates same-size or larger devices to the server.
2. You attach mirrors to the existing plexes.
3. Once the sync is completed, break the mirror and remove the old devices.
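The host-based plan might look like the following for a single volume; the disk media and device names are hypothetical, and the application stays online throughout.

```shell
# Host-based migration of one volume; names are examples only.
vxdg -g datadg adddisk newdisk01=emc2_0010      # add a new-array LUN to the DG
vxassist -g datadg mirror datavol newdisk01     # attach a mirror plex on it
vxtask list                                     # wait for the resync task to finish
vxassist -g datadg remove mirror datavol \!olddisk01   # drop the plex on the old disk
vxdg -g datadg rmdisk olddisk01                 # remove the emptied old disk
```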