Migrating VxVM volumes from a V240 to a T4-2 Ldom

javashak
Level 3

Hi all, I have a V240 running Solaris 10U08 with VxVM 6.0.000 that I need to migrate to a T4-2 Ldom (LDM 3.1). I have loaded the same software as the source onto the LDOM and presented the matching SAN LUNs to the guest LDOM, but when I try to set up a disk (LUN) in VxVM using vxdisksetup I get:

"ERROR V-5-2-43 c0d3: Invalid disk device for vxdisksetup".

I have tried using vxdisk init c0d3s2 and get:

"Disk sector size is not supported"

and tried vxdisk init c0d3s2 format=sliced and get:

"Disk VTOC does not list private partition"

Don't know what else to try!

8 Replies

Marianne
Moderator
Partner    VIP    Accredited Certified

Wait a second... 

Do you need to move storage that was previously used on V240 to T4-2 Ldom?

If that is the case, you simply need to deport the diskgroup(s) on the V240, ensure the disks are visible on the T4-2 Ldom both at OS level and in 'vxdisk -o alldgs list', and then import the diskgroup(s).

If you need to configure brand-new storage on the ldom, you need to use the OS 'format' command to ensure that disk is labeled.
Once disk is labeled and s2 represents the entire disk, you will be able to initialize the disk in VxVM.
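The deport/import flow Marianne describes might look roughly like this (the diskgroup name "mydg" is a placeholder; the rescan commands are standard Solaris/VxVM):

```shell
# On the V240 (source): stop applications using the volumes, then deport
vxdg deport mydg                 # "mydg" is a placeholder diskgroup name

# On the T4-2 Ldom (target): rescan so the OS and VxVM see the LUNs
devfsadm -C                      # rebuild the Solaris device tree
vxdctl enable                    # have VxVM rediscover devices
vxdisk -o alldgs list            # the deported diskgroup should show in (brackets)

# Import the diskgroup and start its volumes
vxdg import mydg
vxvol -g mydg startall
```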

mikebounds
Level 6
Partner Accredited

I am guessing you have read Virtualisation guide, but just in case you haven't:

http://www.symantec.com/business/support/resources/sites/BUSINESS/content/live/DOCUMENTATION/5000/DO...

Mike

Gaurav_S
Moderator
   VIP    Certified

Can you give the "vxdisk list" & "vxdisk -e list" output?

Did you present the same VxVM LUNs (as on the primary server) to the secondary server, or have you presented new LUNs?

Are you able to read the partition table of the Solaris disks? Without a valid partition table, VxVM can't read the disks.
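On Solaris, a quick way to check whether a disk has a readable label/VTOC (device name taken from the thread):

```shell
# Print the VTOC; this fails if the disk has no valid label
prtvtoc /dev/rdsk/c0d3s2

# If the disk is unlabeled, label it interactively with format(1M):
# select the disk from the menu, then run the "label" command
format
```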

 

G

javashak
Level 3
Hi Marianne, slice 2 is configured as the whole disk, but I still get these results.

Hi Mike, thanks for the link. Having started reading through it, it looks like it is saying that the SF software should be installed in both the primary ldom and the guest (I take it that is what is meant by co-located). Is that right? And therefore the LUNs are presented to the guest domain as /dev/vx/dmp/c0d3...

Adrian

javashak
Level 3
Hi Gaurav, here's the output:

flartrans# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
c0d0s2       auto:SVM        -            -            SVM
c0d1s2       auto:SVM        -            -            SVM
c0d2s2       auto:none       -            -            online invalid
c0d3s2       auto:none       -            -            online invalid

flartrans# vxdisk -e list
DEVICE       TYPE            DISK    GROUP    STATUS           OS_NATIVE_NAME   ATTR
c0d0s2       auto:SVM        -       -        SVM              c0d0s2           -
c0d1s2       auto:SVM        -       -        SVM              c0d1s2           -
c0d2s2       auto:none       -       -        online invalid   c0d2s2           -
c0d3s2       auto:none       -       -        online invalid   c0d3s2           -

c0d3s2 is the LUN/disk in question.

mikebounds
Level 6
Partner Accredited

There are 2 ways of presenting storage:

  1. Present LUNs (all paths to LUN) to guest VM and install SF in guest VM (SF is not required in control domain)
  2. Present LUNs to the control domain, install SF, create a volume and present the volume to the guest VM, which you can format in the guest VM with ufs, or with vxfs if you install vxfs in the guest VM
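Option 2 might look roughly like this from the control domain (the diskgroup, volume, service, and guest names below are placeholders; the volume size is illustrative):

```shell
# In the control domain: initialize the LUN and create a VxVM volume
vxdisksetup -i c0d3                       # device name from the thread
vxdg init mydg mydg01=c0d3                # "mydg" is a placeholder diskgroup
vxassist -g mydg make vol01 100g          # size is illustrative

# Export the volume to the guest as a virtual disk
ldm add-vdsdev /dev/vx/dsk/mydg/vol01 vol01@primary-vds0
ldm add-vdisk vdisk1 vol01@primary-vds0 guest1
```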

Mike

javashak
Level 3
The guest domain is not set up as an IO domain, so all LUNs are presented to the control domain and then passed to the guest via "ldm add-vdisk". It looks like I am left with option 2 as my way forward. The problem with that is that my migration software (TDMF) is on the guest, and the resulting copy will be "snap mirrored" (NetApp) to another guest domain. Sounds like I need to rethink the solution for migration and make the guest domains IO domains.

Adrian
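For reference, making a guest an I/O domain means assigning it a PCIe root complex (or, on LDM 3.1, an SR-IOV virtual function) so it owns the HBA directly instead of receiving virtual disks. Very roughly (the bus and domain names are placeholders, and the assignment typically requires a delayed reconfiguration and reboot of the affected domains):

```shell
# List PCIe buses/devices and their current owners
ldm list-io

# Assign a whole root complex to the guest so it sees the HBA directly
ldm add-io pci_1 guest1    # pci_1 and guest1 are placeholders
```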

javashak
Level 3
Thanks all for your help