
VCS and CVM

tanislavm
Level 6

Hi,

I would like to verify the online migration steps for CVM with an I/O fencing device, for VCS with I/O fencing, and for CVM/VCS without I/O fencing.

The goal is to migrate the data disks and the root disks, and also the I/O fencing device when I/O fencing is configured.

CVM with I/O fencing device:

-Migrate the data disks online:

 On the master node the online migration is done by mirroring.

-Migrate the root disks and the I/O fencing device:

 From any node we shut down VCS cluster-wide (hastop -all) and unload the I/O fencing driver on every node. The purpose is to avoid an I/O fencing timeout during the mirroring of the fencing device while VCS and fencing are up; such a timeout could lead to a service group failover or a node crash.

VCS is now down, but at the file system and VxVM level we still have GLM and the master/slave architecture, so processes from different nodes will not corrupt the data.

 

After the mirroring is done we reload the I/O fencing driver on every node and restart VCS (run hastart on each node). A command sketch of this sequence is shown below.
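A minimal sketch of that sequence, assuming SF/VCS on Linux (the vxfen init script path is the usual one for that platform; adjust for yours):

hastop -all              # from one node: stop VCS cluster-wide
/etc/init.d/vxfen stop   # on every node: stop and unload the fencing driver
# ... mirror the root disks and the coordinator (fencing) disks here ...
/etc/init.d/vxfen start  # on every node: reload fencing
hastart                  # on every node: restart VCS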

 

 

VCS with I/O fencing device:

-On any node the data disks can be mirrored.

The procedure is the same as for CVM with I/O fencing above; a mirroring sketch follows.
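For reference, a typical mirror-based online migration of a data volume looks roughly like this (the disk group, volume, disk and plex names are examples only):

vxdg -g datadg adddisk newdisk01=sdc          # add the new LUN to the disk group
vxassist -g datadg mirror datavol newdisk01   # attach a mirror plex on the new disk
vxtask list                                   # monitor the resynchronisation
vxplex -g datadg -o rm dis datavol-01         # once in sync, remove the plex on the old disk
vxdg -g datadg rmdisk olddisk01               # remove the old disk from the disk group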

 

CVM or VCS without I/O fencing:

-On any node the data disks can be migrated.

-On every node we mirror the root disk (a sketch follows this list).
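A minimal sketch of mirroring an encapsulated root disk, assuming the root volumes live in bootdg (disk and volume names are examples; for a bootable mirror the vxdiskadm "Mirror volumes on a disk" option is the usual route, and the vxassist calls below only illustrate the mirroring itself):

vxdg -g bootdg adddisk rootmir01=sdb          # add the new disk to the boot disk group
vxassist -g bootdg mirror rootvol rootmir01   # mirror the root volume
vxassist -g bootdg mirror swapvol rootmir01   # mirror the swap volume
vxprint -g bootdg -ht                         # verify the plexes are attached and synced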

 

 

Thanks, any comments are very much appreciated.

5 REPLIES

RiaanBadenhorst
Level 6
Partner    VIP    Accredited Certified

Hi,

 

-Migrate the root disks and the I/O fencing device:

 From any node we shut down VCS cluster-wide (hastop -all) and unload the I/O fencing driver on every node. The purpose is to avoid an I/O fencing timeout during the mirroring of the fencing device while VCS and fencing are up; such a timeout could lead to a service group failover or a node crash.

VCS is now down, but at the file system and VxVM level we still have GLM and the master/slave architecture, so processes from different nodes will not corrupt the data.

 

Once you run hastop -all you bring down the had process on all nodes, which includes GAB ports h, f, v and w. That basically means that CVM will be down and you'll have no CFS file systems mounted that are part of service groups.
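If you want to verify which ports are still up at that point, GAB membership can be checked from any node:

gabconfig -a    # lists current GAB port memberships; ports h, f, v and w should no longer appear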

 

You can then stop the remaining ports, o, d, b, and a. After that I would just run through the procedure to reconfigure fencing using your new disks:

 

https://sort.symantec.com/public/documents/sf/5.0/linux/html/sf_rac_install/sfrac_install23.html#1159142
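As a rough sketch of that sequence, assuming SF/VCS 5.x on Linux (the init script paths are the usual ones for that release, and the coordinator disk group name is just an example):

hastop -all                 # stop VCS on all nodes (closes ports h, f, v, w)
# on each node, stop the remaining stack components
# (in an SF RAC stack, also stop the components behind ports o and d first, per the linked doc):
/etc/init.d/vxfen stop      # I/O fencing (port b)
/etc/init.d/gab stop        # GAB (port a)
/etc/init.d/llt stop        # LLT
# replace the coordinator disks in the fencing disk group (e.g. vxfencoorddg)
# and update /etc/vxfendg with the new coordinator disk group as per the linked
# procedure, then bring the stack back up in reverse order on each node:
/etc/init.d/llt start
/etc/init.d/gab start
/etc/init.d/vxfen start
hastart                     # restart VCS on each node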

 

tanislavm
Level 6

Hi,

In the CVM case with an I/O fencing device, how should I perform the online migration?

Should I take the I/O fencing device offline and then just perform the mirroring of the system disk and data disks?

In CVM, will the new disks created on the master node, with file systems on them, be visible on the other nodes with df -h?

Thanks a lot.

RiaanBadenhorst
Level 6
Partner    VIP    Accredited Certified

For this operation you can't do a fully online migration; you'll need downtime.

You can do the mirroring of the system and data disks while online, but then you'll need to shut down the service groups and the cluster to reconfigure fencing.

That's how I would do it.

tanislavm
Level 6

Hi,

In CVM, will the new disks created on the master node, with file systems on them, be visible on the other nodes with df -h?

RiaanBadenhorst
Level 6
Partner    VIP    Accredited Certified

Yes, just make sure the disk/LUN is presented to all nodes. Run vxdctl enable and vxdisk scandisks, and you'll see it in the disk group, in vxprint, and in df -h.
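As a quick check once the new LUN is visible on every node (the disk group name below is an example):

vxdctl enable            # refresh VxVM device discovery
vxdisk scandisks         # scan for the newly presented disks
vxdisk -o alldgs list    # the new disk should show up under the shared disk group
vxprint -g datadg -ht    # verify the volume and plex layout
df -h                    # CFS mounts on the volume are visible on all nodes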