
Array Migration using VxVM on Solaris VCS cluster

sarahjones
Level 2

Hi,

We lost our Unix admin a few months ago, who usually administered VxVM for us, and now I'm in a position where I need to migrate volumes between arrays. Unfortunately no documentation of how this was successfully achieved in the past was kept, so I'm looking for some help.

I've seen a number of posts that relate to this, but I'm posting a series of questions again as I'm new to Veritas.

The cluster is:

- Solaris 9
- VCS 5.0 and VxVM 5.0 MP1 two-node stretched cluster
- Each node has its own storage array and zoning to the EVA and DMX in each data centre
- QLogic HBAs and native Sun driver
- Current Array: HP EVA
- Target Array: EMC DMX
- Current SAN: Brocade (HP-badged)
- Target SAN: Brocade

Migration Plan (with loads of questions) is:

- EMC PowerPath was installed a few weeks back for multipathing to the DMX
- Freeze the cluster in the main data centre - this node will be used for the migration
- Take the first channel out of the current SAN fabric 1 and plug it into the new SAN fabric 1 in the main data centre on the active, frozen node
- Leave both channels from the standby node in the 2nd data centre in the EVA fabrics for now
- Zone and mask the target LUNs from data centres 1 and 2 on the single HBA in SAN fabric 1
- Discover LUNs (cfgadm)
- DMX storage is managed by PowerPath, so list devices using powermt display dev=all to map them to the actual array/LUN
- Initialise disk in VxVM (vxdisksetup -i emcpower56) - repeat for all new LUNs
- Add DMX LUNs to disk groups (vxdg -g testdg adddisk testdgdmx=emcpower56) - repeat for all new LUNs
- Add plexes and mirror (vxassist -g testdg mirror testvol emcpower56)

The existing volumes have two plexes, one from each data centre, each with one subdisk. Will vxassist automatically create the plex, attach it to the volume, and start mirroring? Am I OK to repeat this command twice with different objects to get both new mirrors syncing at the same time?
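For what it's worth, this is roughly what I had in mind for kicking off two resyncs together, assuming a second volume called testvol2 and a second new LUN emcpower57 (both names are just placeholders):

# vxassist -b -g testdg mirror testvol emcpower56
# vxassist -b -g testdg mirror testvol2 emcpower57
# vxtask list

My understanding is that -b pushes the attach into the background so both syncs can run at once, but please correct me if that's wrong. Continuing the plan: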

- Check the two new plexes are attached to testvol (vxprint -qthg testdg testvol)
- Check the sync has completed (vxtask list)
- Disassociate the EVA plex when the sync has completed (vxmend -g testdg off testvol-01; vxplex -g testdg dis testvol-01)
- Delete the EVA plex (vxedit -g testdg -rf rm testvol-01)
- Unmask the EVA storage and clean up using cfgadm on both nodes
- Take the second channel from the active node and plug it into SAN fabric 2
- Rescan using the QLogic driver to pick up the second leg to each LUN
- Verify with powermt display dev=all
- Cable the 2nd node in the second data centre to both new fabrics and scan using the QLogic driver
- Check the 2nd node using powermt display dev=all (rough sketch of the rescan/verify below)
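The rescan/verify I have pencilled in for each recabled HBA is roughly as follows; c2 is just a placeholder for whatever controller cfgadm -al reports on the day:

# cfgadm -al -o show_FCP_dev
# cfgadm -c configure c2
# devfsadm
# powermt config
# powermt display dev=all

Please shout if that ordering looks wrong.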

Can the VEA GUI be used to carry out the same steps as the commands I've researched above?

Thanks in advance,
Sarah


Gaurav_S
Moderator
   VIP    Certified

Answers inline:

The existing volumes have two plexes, one from each data centre, each with one subdisk. Will vxassist automatically create the plex, attach it to the volume, and start mirroring? Am I OK to repeat this command twice with different objects to get both new mirrors syncing at the same time?

--- Yes, VxVM will automatically create the plex and start mirroring; you can view the progress in the "vxtask list" output.

-- Yes, multiple commands are OK, but the more you trigger, the more work you give vxconfigd, so expect some degradation in mirroring and performance. I usually keep a maximum of 5 syncs running at the same time (my view).
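For example, a quick way to see what is still resyncing before you kick off the next batch:

# vxtask list
# vxtask monitor <task-id>

vxtask monitor follows a single task's progress until it completes, so you can tell when it is safe to start more.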

For the rest of the steps:

- Check the two new plexes are attached to testvol (vxprint -qthg testdg testvol)
- Check the sync has completed (vxtask list)
- Disassociate the EVA plex when the sync has completed (vxmend -g testdg off testvol-01; vxplex -g testdg dis testvol-01)

- Delete the EVA plex (vxedit -g testdg -rf rm testvol-01)

------------->> For safety, take a diskgroup configuration backup here (copy the /etc/vx/cbr/bk/ directory to some other location)

---------------->> Add a step here to remove the disk from the diskgroup (rough sequence sketched after these quoted steps):

# vxdg -g <diskgroup> rmdisk <disk>

- Unmask the EVA storage and clean up using cfgadm on both nodes
- Take the second channel from the active node and plug it into SAN fabric 2
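A rough sequence for that, per disk group (the disk media name is a placeholder):

# cp -rp /etc/vx/cbr/bk /var/tmp/cbr_bk_backup
# vxdg -g <diskgroup> rmdisk <eva_disk_name>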

 

 

VEA can do most of the job, but it would be difficult to explain here; I would prefer to stick to the command line.

good luck

 

Gaurav

sarahjones
Level 2

Thanks. I've been able to get an export of some of the vxprint output from one of the clusters, and it seems the volumes may have been configured as layered volumes.

What do I need to check about this layout, and which volume do I add the new plexes to?

For example, in the VEA GUI I see only one volume relating to oarch01vol, but from a vxprint I see what I think is a layered volume.

vxprint
TY NAME            ASSOC          KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg db01dg          db01dg         -        -        -        -        -       -
dm dm1oarch01      c0t2d0s2       -        52360448 -        -        -       -
dm dm2oarch01      c0t3d0s2       -        52360448 -        -        -       -

v  oarch01vol      fsgen          ENABLED  52359168 -        ACTIVE   -       -
pl oarch01vol-03   oarch01vol     ENABLED  52359168 -        ACTIVE   -       -
sv oarch01vol-S01  oarch01vol-03  ENABLED  52359168 0        -        -       -

v  oarch01vol-L01  fsgen          ENABLED  52359168 -        ACTIVE   -       -
pl oarch01vol-P01  oarch01vol-L01 ENABLED  52359168 -        ACTIVE   -       -
sd dm1oarch01-02   oarch01vol-P01 ENABLED  52359168 0        -        -       -
pl oarch01vol-P02  oarch01vol-L01 ENABLED  52359168 -        ACTIVE   -       -
sd dm2oarch01-02   oarch01vol-P02 ENABLED  52359168 0        -        -       -

Thanks in advance.

 

Gaurav_S
Moderator
   VIP    Certified

In the above case, the actual volume is oarch01vol only; the rest are subvolumes and their components (the -L01 / -P01 objects etc.).

I would expect that this volume has been played around with in the past. If I look at the sizes above, it is effectively just a mirror: each and every component is the same size.

Also, interestingly, only one disk is used in the above structure, i.e. dm1oarch01; the other disk, dm1oarch02, is not used at all.

You could consider an alternative: relayout this volume into a simple structure and then proceed with your plan. See the man page for vxrelayout:

https://sort.symantec.com/public/documents/sf/5.0/solaris/manpages/vxvm/vxrelayout_1m.html

 

G

sarahjones
Level 2

Without having to go through a relayout before the migration, could I use the same process and add the new plexes to the oarch01vol-L01 volume, sync them up, and drop the old ones? I'm under pressure to carry this out at the weekend, so I want to avoid another step if possible.

What would I issue as a relayout?

Ta,

Sarah.

DJLivingston
Not applicable

After you remove the old disks from Veritas control (some suggest inserting "vxdiskunsetup -C emcpowerxx" before the vxdisk rm) and unmask them so the server no longer sees them, you will need to run "powermt check". It will report the dead paths and ask whether you want to remove them from PowerPath. Say yes, of course. If it tells you it can't remove the paths because the device is in use, there is probably some leftover Veritas association. This is not a showstopper and, if you're trying to get this done in an outage window, I would simply move on. It will, however, prevent you from cleaning up the old entries via cfgadm, devfsadm, or luxadm, so your messages file will be a little more crowded going forward.
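Roughly, the order would be something like this (emcpowerxx stands for each old device):

# /etc/vx/bin/vxdiskunsetup -C emcpowerxx
# vxdisk rm emcpowerxx
# powermt check
# devfsadm -Cv
# cfgadm -al -o show_FCP_dev

powermt check is the interactive step; devfsadm -Cv cleans up stale device links and the cfgadm listing lets you confirm what is left.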

Also, you say you are using native Sun HBA drivers, and then you mention rescanning with the QLogic driver. If you mean you're using the Leadville stack, which I assume you are since you make extensive use of cfgadm, you shouldn't need to rescan with any QLogic utility.

Oh, if you have any trouble finding the vxdiskunsetup command, it's under /etc/vx/bin.

Hope this helps!

David

Gaurav_S
Moderator
   VIP    Certified

Logically, attaching a plex to oarch01vol-L01 should work, but to be honest I haven't tried it, and I'm sure it wouldn't be recommended either, as we would be playing around with the layout of a layered volume, so I really won't make a strong call here.

If you have a test setup, try it there first.

For the relayout, you may want to convert the volume into a simple format; below is an example for a stripe:

# vxassist -g <diskgroup> relayout <volume> layout=stripe ncol=<number>

PS: this operation will take time to complete
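If you do go down the relayout route, you can keep an eye on progress with:

# vxrelayout -g <diskgroup> status <volume>
# vxtask list

and if it ever gets interrupted, vxrelayout also has start and reverse operations (see the man page linked earlier).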

 

Gaurav