
How to recover a vxdg with only one side of the mirrors

Steffen_Soerens
Level 3

Hi

I had a mirror between two SAN arrays, A + B, on an older Solaris 7 (32-bit) + VxVM 3.1 host, which needs to migrate to a new VxDMP version.

I shut Solaris 7 down and moved the SAN B LUNs to a new Solaris 10 + VxVM 5.0 host, which now fails to import the vxdgs, even after running vxdisk clearimport on all LUNs.

Any hints appreciated, TIA.

ims-v3:/> vxdisk -o alldgs list
DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0s2     auto:sliced     rootdisk     rootdg       online
c0t1d0s2     auto:sliced     rootdisk2    rootdg       online
c6t0d0s2     auto:sliced     -            -            online
c6t0d1s2     auto:sliced     -            (arr1imsdg)  online
c6t0d2s2     auto:sliced     -            (arr2imsdg)  online
c6t0d3s2     auto:sliced     -            (arr2imsdg)  online
c6t0d4s2     auto:sliced     -            (arr2imsdg)  online
c6t0d5s2     auto:sliced     -            (arr2imsdg)  online
c6t0d6s2     auto:sliced     -            (arr2imsdg)  online
c6t0d7s2     auto:sliced     -            (arr2imsdg)  online
c6t0d8s2     auto:sliced     -            (arr2imsdg)  online
c6t0d9s2     auto:sliced     -            (arr2imsdg)  online
c6t0d10s2    auto:sliced     -            (arr2imsdg)  online
c6t0d11s2    auto:sliced     -            (arr2imsdg)  online
c6t0d12s2    auto:sliced     -            (arr2imsdg)  online
c6t0d13s2    auto:sliced     -            (arr2imsdg)  online
c6t0d14s2    auto:sliced     -            (arr2imsdg)  online
c6t0d15s2    auto:sliced     -            (backupdg)   online
c6t0d16s2    auto:sliced     -            (backupdg)   online
c6t0d17s2    auto:sliced     -            (backupdg)   online
c6t0d18s2    auto:sliced     -            (backupdg)   online
c6t0d19s2    auto:sliced     -            (backupdg)   online
c6t0d20s2    auto:sliced     -            (backupdg)   online
c6t0d21s2    auto:sliced     -            (backupdg)   online
c6t0d22s2    auto:sliced     -            (backupdg)   online
c6t0d23s2    auto:sliced     -            (backupdg)   online
c6t0d24s2    auto:sliced     -            (backupdg)   online
c6t0d25s2    auto:sliced     -            (backupdg)   online
c6t0d26s2    auto:sliced     -            (backupdg)   online
c6t0d27s2    auto:sliced     -            (backupdg)   online
c6t0d28s2    auto:sliced     -            (backupdg)   online
c6t0d29s2    auto:sliced     -            (backupdg)   online
c6t0d30s2    auto:sliced     -            (backupdg)   online
c6t0d31s2    auto:sliced     -            (backupdg)   online
c6t0d32s2    auto:sliced     -            (backupdg)   online
c6t0d33s2    auto:sliced     -            (backupdg)   online
c6t0d34s2    auto:sliced     -            (backupdg)   online
c6t0d35s2    auto:sliced     -            (backupdg)   online
c6t0d36s2    auto:sliced     -            (backupdg)   online
c6t0d37s2    auto:sliced     -            (backupdg)   online

 

ims-v3:/> vxdisk list c6t0d1s2
Device:    c6t0d1s2
devicetag: c6t0d1
type:      auto
hostid:
disk:      name= id=1434971136.7192.ims
group:     name=arr1imsdg id=985687944.1167.ims
info:      format=sliced,privoffset=1,pubslice=4,privslice=3
flags:     online ready private autoconfig
pubpaths:  block=/dev/vx/dmp/c6t0d1s4 char=/dev/vx/rdmp/c6t0d1s4
privpaths: block=/dev/vx/dmp/c6t0d1s3 char=/dev/vx/rdmp/c6t0d1s3
guid:      -
udid:      IBM%5F2145%5FDISKS%5F60050768019201DD480000000000064D
site:      -
version:   2.1
iosize:    min=512 (bytes) max=2048 (blocks)
public:    slice=4 offset=0 len=62908416 disk_offset=2048
private:   slice=3 offset=1 len=2047 disk_offset=0
update:    time=1435240653 seqno=0.54
ssb:       actual_seqno=0.0
headers:   0 248
configs:   count=1 len=1486
logs:      count=1 len=225
Defined regions:
 config   priv 000017-000247[000231]: copy=01 offset=000000 enabled
 config   priv 000249-001503[001255]: copy=01 offset=000231 enabled
 log      priv 001504-001728[000225]: copy=01 offset=000000 enabled
Multipathing information:
numpaths:   4
c6t0d1s2        state=enabled
c5t0d1s2        state=enabled
c5t1d1s2        state=enabled
c6t1d1s2        state=enabled

 

ims-v3:/> vxdg import arr1imsdg
VxVM vxdg ERROR V-5-1-10978 Disk group arr1imsdg: import failed:
Disk for disk group not found

ims-v3:/> vxdg -C import arr1imsdg
VxVM vxdg ERROR V-5-1-10978 Disk group arr1imsdg: import failed:
Disk for disk group not found

ims-v3:/> vxdg -f import arr1imsdg
VxVM vxdg ERROR V-5-1-10978 Disk group arr1imsdg: import failed:
Disk group version doesn't support feature; see the vxdg upgrade command

ims-v3:/> vxdg upgrade arr1imsdg
VxVM vxdg ERROR V-5-1-2356 Disk group arr1imsdg: upgrade failed: No valid disk found containing disk group

Would it somehow be possible to import a non-deported DG from only half the disks (one side of the mirror)?

The old Solaris 7 host booted fine and imported the DGs, and I could then disassociate the plexes from SAN B, no problem.

(I have pre-4.1 manually scripted VxVM config backup data of the mirrored config.)


10 REPLIES

Gaurav_S
Moderator
   VIP    Certified

Pretty old version  :)

When you moved the Storage B LUNs to the Sol 10 machine, were they simply presented to the new Solaris box, or did you disassociate the disks from the VxVM view on the Sol 7 machine and then move them?

I am not very sure since it's a very old version, but it looks like VxVM is not happy with only half of the disks on the new node and is expecting all the disks in the DG to be available on the new Sol 10 node, and clearly force import is not an option in DG version 70.

I also believe the DG split & join feature will not be available in that version, which would have made life a little easier.

First I would suggest trying the "vxdg move" option; with this you can move objects between disk groups. I am not sure if this will work in 3.1, but it is worth a try:

# vxdg move <source dg > <target dg> <object>

If you use your SAN B disks as the objects, you should be able to move them (disks & volumes) out of the disk group shared with SAN A into a different DG. Once all objects are moved, the disk group information is updated with the correct number of disks, and you should then be able to move either the SAN A or the SAN B disk group to the new Sol 10 system.
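A dry-run sketch of the vxdg move approach, assuming hypothetical DG and disk media names (SRC_DG, DST_DG and the disk list are placeholders, not taken from any real config; substitute the names from your own vxprint output):

```shell
# Build, but do not run, one "vxdg move" command per SAN B disk, so the
# original DG is left holding only the SAN A side (or vice versa).
# All names below are hypothetical placeholders.
SRC_DG=arr1imsdg
DST_DG=arr1imsdg_sanB                       # hypothetical scratch DG name
SANB_DISKS="sanB_d01 sanB_d02 sanB_d03"     # hypothetical disk media names
PLAN=$(for dm in $SANB_DISKS; do
  echo "vxdg move $SRC_DG $DST_DG $dm"
done)
echo "$PLAN"    # review, then run each line by hand once it looks right
```

Generating the commands first and eyeballing them before execution is cheap insurance when every object name has to be exactly right.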

I can think of another solution, but it may be tricky:

1. Get all the disks back to Sol 7 machine.

2. Do a clean import of all the diskgroups

3. Take detailed output of vxprint -htg <diskgroup>, vxprint -mvphsr <diskgroup>, vxprivutil dumpconfig <disk>, vxdisk list <disk> for each disk.

4. Disassociate 1 plex of SAN A from all the volumes.

5. Take all SAN A disks out of the diskgroup so that the diskgroup config reflects the correct number of disks

6. Move the diskgroup to new sol 10 machine (with all SAN A disks out)

7. On Sol 7 machine, perform a rebuild of dg & all the volumes using vxmake.
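Step 3 of the list above is the one to get right before touching anything. A small sketch that writes the config-capture commands into a reviewable plan file (the three DG names are the ones from this thread; substitute the output of "vxdg list" on your host):

```shell
# Write the per-DG config-dump commands into a plan script, review it,
# then execute it on the Sol 7 host with: sh "$PLAN"
BACKUP=/var/tmp/vxconfig
PLAN="$BACKUP/capture.sh"
mkdir -p "$BACKUP"
: > "$PLAN"
for dg in arr1imsdg arr2imsdg backupdg; do
  echo "vxprint -htg $dg    > $BACKUP/$dg.vxprint-ht" >> "$PLAN"
  echo "vxprint -mvphsr $dg > $BACKUP/$dg.vxprint-m"  >> "$PLAN"
done
cat "$PLAN"   # review before running
```

The -m form is the important one: it is the machine-readable record format that vxmake can consume later if a rebuild is needed.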

 

G

Steffen_Soerens
Level 3

Yes, pretty old, but the App has been / is running 24x7, and now I need to upgrade the OS + VxDMP to support a new storage array. So, to get a cloned copy of the data without stopping the App on Sol 7, I attached new SAN LUNs and mirrored onto these LUNs. Quick shutdown of Sol 7, pulled the SAN B fibre from Sol 7, rebooted Sol 7 and removed the mirrored SAN B plexes on Sol 7; everything is happy on Sol 7, and the App is still running.

Now I planned to import the DGs from the SAN B LUNs on the new[er] host OS + VxDMP to verify the App will work there, only...

Yes, it's probably Sol 10 missing quorum in the DG that's making VxVM unhappy: the SAN B LUNs are 3x the size of the SAN A LUNs, and thus 3 x #B LUNs = #A LUNs, so quorum is probably lost, whereas on Sol 7 it was okay...

A forced import was tried on Sol 10 & DG v.140 (VxVM 5.0 :)

Any way to rebuild/reconstruct new DGs, SDs, plexes & vols on the SAN B LUNs without losing the data in the SDs?

Gaurav_S
Moderator
   VIP    Certified

I believe an import on the new node should be possible without losing data; however, as a safety step, take an appropriate backup of the data first.

Since you have mirrored data from SAN B (new storage) to SAN A (old storage), that means the disks were initialized from SAN B & plexes were later created which were mirrored to SAN A.

I believe that on the new node, if you reinitialize the disks with the exact same offsets & lengths, you should be able to recover them.

Do you have a vxprint -ht output of the server from when the disks were mirrored between SAN A & SAN B? Also, paste a sample output of "vxdisk list <disk_from_SAN_A>" & "vxdisk list <disk_from_SAN_B>".

 

G

Steffen_Soerens
Level 3

1. I've mirrored from A (old) to B (new) :)

I've got these outputs of the A+B mirrored config:

# $vxdisk, $vxprint and $prtvtoc hold the full paths to the commands
$vxdisk list > vxdisk_list
$vxdisk -q list | awk '$3!="-" && $5=="online"{print $1}' | xargs $vxdisk list > vxdisk_list.detail
$vxdisk -q list | awk '$2!="simple" && $3!="-" && $5=="online"{print "/dev/rdsk/"$1}' | xargs $prtvtoc > prtvtoc_all_disk
$vxprint -qG | awk '/^dg /{print $2}' - | while read dg; do
  $vxprint -g $dg -rhmvps > vxprint_config.$dg
done

Attaching outputs...

mikebounds
Level 6
Partner Accredited

You should be able to rebuild On Solaris 10 while app is running on Sol 7 as follows:

  1. On the Sol 7 system, for each diskgroup run "vxprint -mvpsg dg_name > dg_name.make"
  2. On the Sol 7 system, run "vxdisk list > vxdisk_list"
  3. On the Sol 7 system, disassociate the SAN B plexes and remove the SAN disks from the diskgroup
  4. Edit the files created above to:
    a. Remove the subdisk records for SAN A
    b. Remove the associated plex records for SAN A
    c. Edit the plex attribute of each volume record to remove the deleted SAN A plex

    Note you do not need to change the disk device names if these are different when the disks are presented on the Sol 10 system.

  5. On the Sol 10 system, create a diskgroup from the SAN B disks and give the disks their original names using the output of vxdisk_list. Note the private region on the disk should not have changed, as the private region is created when you run "vxdisksetup", which you do not need to re-run.
    You may need to use "cds=off" when you create the diskgroup, or it may be better to create the diskgroup on the Sol 7 system and then rename it on the Sol 10 system if you want to keep the same block device for mounts.
  6. On the Sol 10 system, for each diskgroup run "vxmake -g dg_name -d dg_name.make" and then "vxvol -g dg_name startall"
  7. On the Sol 10 system, you should now be able to mount the filesystems

You may want to script steps 4 and 5 if you have a lot of disks and volumes.
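The editing in step 4 can be scripted. A sketch using awk on a heavily simplified stand-in for the dg_name.make file: real vxprint -m records carry many more attributes, so check the record layout against your actual file first, and note the SAN A disk media name "sanA01" here is hypothetical.

```shell
# Drop every record (an unindented "type name" header plus its indented
# attribute lines) that references the SAN A disk media "sanA01".
# The sample file is a simplified stand-in for a real dg_name.make.
cat > /tmp/arr1imsdg.make <<'EOF'
sd sanA01-01
        disk=sanA01
        plex=archlv-01
sd svc1-01
        disk=svc1
        plex=archlv-02
EOF
awk '
  /^[^ \t]/ { if (buf != "" && !skip) printf "%s", buf   # flush kept record
              buf = ""; skip = 0 }
  { buf = buf $0 "\n" }                                  # accumulate record
  /disk=sanA01/ { skip = 1 }                             # record is on SAN A
  END { if (buf != "" && !skip) printf "%s", buf }
' /tmp/arr1imsdg.make > /tmp/arr1imsdg.make.sanB
cat /tmp/arr1imsdg.make.sanB
```

The buffering is needed because the disk= attribute that decides a record's fate arrives after the record header, so each record is held until the next header (or EOF) before being printed or dropped.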

Mike

 

Steffen_Soerens
Level 3

Thanks, only your step 5 gives me this:

# trying to recreate DG arr1imsdg

ims-v3:/var/tmp/backup> vxdg init arr1imsdg svc1=c6t0d1s2 cds=off
VxVM vxdg ERROR V-5-1-2349 Device c6t0d1s2 appears to be owned by disk group arr1imsdg.

ims-v3:/var/tmp/backup> vxdg -o override init arr1imsdg svc1=c6t0d1s2 cds=off
VxVM vxdg ERROR V-5-1-2349 Device c6t0d1s2 appears to be owned by disk group arr1imsdg.

Wondering how to overcome this...

 

Steffen_Soerens
Level 3

Should I run a forced vxdisksetup on the disks before step 5?

mikebounds
Level 6
Partner Accredited

So did you do step 3 

3. On Sol 7 system disassociate SAN B plexes and remove SAN disks from diskgroup 

 

where I have just realised I missed out "B", so it should say "remove SAN B disks from diskgroup".
So the SAN B disks should not be in a diskgroup. If SAN B is still connected to both hosts (there is no need to disconnect and reconnect the SANs during this), then you should remove the SAN B disks from the diskgroup on the Sol 7 system (no need to take down the App for this), but you may have to remove the disassociated plexes and subdisks to be able to remove the disks.
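A dry-run sketch of that cleanup on the Sol 7 host. The plex and disk media names are hypothetical placeholders; take the real ones from "vxprint -htg <dg>", and only run the generated lines once reviewed:

```shell
# Generate, but do not run, the commands that delete the disassociated
# SAN B plexes (and their subdisks, via -r) and then remove the SAN B
# disks from the DG. All object names are placeholders.
DG=arr1imsdg
PLAN=$(
  for plex in sanB_plex1 sanB_plex2; do
    echo "vxplex -g $DG dis $plex"      # skip if already disassociated
    echo "vxedit -g $DG -rf rm $plex"   # removes the plex and its subdisks
  done
  for dm in sanB_d01 sanB_d02; do
    echo "vxdg -g $DG rmdisk $dm"
  done
)
echo "$PLAN"
```

The ordering matters: a disk cannot be removed from the DG while any subdisk on it is still defined, which is why the plex/subdisk removal comes first.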

 

The other alternative is to use "vxdisksetup -f -i" if SAN B is disconnected from Sol 7, but you will need to specify privlen, as this is bigger in more current versions, so you will need to make sure the disk is created with the same privlen as it had originally.
First run, for example, "vxdisk list c6t0d1" to see what the private region size is, and save the output to a file so you can make sure it is the same afterwards.

Then, for example, for a 16384-sector (8 MB) private region, run:

 

vxdisksetup -fi c6t0d1 privlen=16384

 

Then run "vxdisk list c6t0d1" again to make sure the offsets and lengths are the same for the private and public regions.

If you run vxdisksetup -f and later present the LUNs back to Sol 7, make sure you destroy or rename the diskgroup there, as you don't want two diskgroups with the same name.
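The before/after comparison is easy to mechanise. A sketch using the public/private geometry lines from the vxdisk output earlier in the thread as stand-in file contents; in practice each file comes from running "vxdisk list c6t0d1" before and after vxdisksetup and keeping only the public:/private: lines:

```shell
# Stand-in "before" geometry (copied from the thread's vxdisk output).
cat > /tmp/geom.before <<'EOF'
public:    slice=4 offset=0 len=62908416 disk_offset=2048
private:   slice=3 offset=1 len=2047 disk_offset=0
EOF
# Stand-in "after" geometry; in reality, captured after vxdisksetup -fi.
cat > /tmp/geom.after <<'EOF'
public:    slice=4 offset=0 len=62908416 disk_offset=2048
private:   slice=3 offset=1 len=2047 disk_offset=0
EOF
if diff /tmp/geom.before /tmp/geom.after >/dev/null; then
  verdict="geometry unchanged - safe to proceed with vxmake"
else
  verdict="geometry differs - fix privlen and re-run vxdisksetup"
fi
echo "$verdict"
```

Any shift in the public region offset or length means the subdisk offsets in the saved config no longer point at the same blocks, so this check is worth doing per disk, not just once.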

 

Mike

Steffen_Soerens
Level 3

Tried to recreate the single-disk DG after "vxdisksetup -fi DA privlen=1777" to get an aligned public region. vxmake ran successfully and I can find all the vols, plexes and sds, only they are marked EMPTY and won't start...

ims-v3:/var/tmp/backup> vxvol -g arr1imsdg startall
VxVM vxvol ERROR V-5-1-11804 Volume archlv is empty and cannot be started
VxVM vxvol ERROR V-5-1-11804 Volume data5lv is empty and cannot be started
VxVM vxvol ERROR V-5-1-11804 Volume redo2lv is empty and cannot be started

 

ims-v3:/var/tmp/backup> vxprint -g arr1imsdg
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
dg arr1imsdg    arr1imsdg    -        -        -        -        -       -

dm svc1         c6t0d1s2     -        62908416 -        -        -       -

v  archlv       fsgen        DISABLED 40899584 -        EMPTY    -       -
pl archlv-02    archlv       DISABLED 40903680 -        EMPTY    -       -
sd svc1-01      archlv-02    ENABLED  20451840 0        -        -       -
sd svc1-02      archlv-02    ENABLED  20451840 20451840 -        -       -

v  data5lv      fsgen        DISABLED 10485760 -        EMPTY    -       -
pl data5lv-01   data5lv      DISABLED 10485760 -        EMPTY    -       -
sd svc1-03      data5lv-01   ENABLED  10485760 0        -        -       -

v  redo2lv      fsgen        DISABLED 2097152  -        EMPTY    -       -
pl redo2lv-02   redo2lv      DISABLED 2104320  -        EMPTY    -       -
sd svc1-04      redo2lv-02   ENABLED  2104320  0        -        -       -

 

but moving the volumes out of the EMPTY state was handled by "vxvol init clean":

 

ims-v3:/var/tmp/backup> vxvol -g arr1imsdg init clean redo2lv
ims-v3:/var/tmp/backup> vxvol -g arr1imsdg init clean archlv
ims-v3:/var/tmp/backup> vxvol -g arr1imsdg init clean data5lv

 

and fsck:

 

ims-v3:/var/tmp/backup> fsck -F vxfs /dev/vx/rdsk/arr1imsdg/archlv
file system is clean - log replay is not required
ims-v3:/var/tmp/backup> fsck -F vxfs /dev/vx/rdsk/arr1imsdg/redo2lv
file system is clean - log replay is not required
ims-v3:/var/tmp/backup> fsck -F vxfs /dev/vx/rdsk/arr1imsdg/data5lv
file system is clean - log replay is not required
ims-v3:/var/tmp/backup> mount -F vxfs /dev/vx/dsk/arr1imsdg/archlv /arch
ims-v3:/var/tmp/backup> mount -F vxfs /dev/vx/dsk/arr1imsdg/redo2lv /redo2
ims-v3:/var/tmp/backup> ls -l /redo2/oradata/I321/
total 1228896
-rw-r-----   1 oracle   dba      157286912 Jun 25 10:19 redolog0102.rdo
-rw-r-----   1 oracle   dba      157286912 Jun 25 08:41 redolog0104.rdo
-rw-r-----   1 oracle   dba      157286912 Jun 25 09:44 redolog0201.rdo
-rw-r-----   1 oracle   dba      157286912 Jun 25 07:41 redolog0203.rdo

 

and it seems I have my data, thanks all!

 

Will work through the other DGs as well...
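For the record, the per-DG recipe that worked here, condensed into a dry-run helper. The arguments (disk media name, DA name, privlen) are per-DG values you must look up; the .make path matches the vxprint_config.$dg files produced by the backup script earlier in the thread, and the <vol> placeholders are deliberately left unfilled:

```shell
# Print (not run) the recovery steps for one DG; privlen in particular
# must match the private region length the disk had originally.
recover_dg() {   # usage: recover_dg <dg> <dm_name> <da_name> <privlen>
  dg=$1; dm=$2; da=$3; privlen=$4
  echo "vxdisksetup -fi $da privlen=$privlen"
  echo "vxdg init $dg $dm=${da}s2 cds=off"
  echo "vxmake -g $dg -d /var/tmp/backup/vxprint_config.$dg"
  echo "# then: vxvol -g $dg init clean <vol> for each volume,"
  echo "# fsck -F vxfs /dev/vx/rdsk/$dg/<vol>, and mount"
}
recover_dg arr1imsdg svc1 c6t0d1 1777   # values taken from this thread
```

Keeping the steps in a function makes it trivial to repeat the same sequence for each remaining DG with only the four arguments changing.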

Steffen_Soerens
Level 3

Got the rest of the DGs recreated fine as well and reimported with all data. Thanks for all the hints!