Mirroring in VxVM

harivxvm
Level 3

Hi,

I created a RAID-1 (mirrored) configuration in a disk group with two disks.

Disk group: dg_dd0
 
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
sd sdg-01       testvol-01   ENABLED  204800   0        -        -       -
sd sdg01        -            ENABLED  1024     -        -        -       -
sd sdh-01       testvol-02   ENABLED  204800   0        -        -       -
sd sdh01        -            ENABLED  1024     -        -        -       -
pl testvol-01   testvol      ENABLED  204800   -        ACTIVE   -       -
pl testvol-02   testvol      ENABLED  204800   -        ACTIVE   -       -
 
 
Now, if one of the underlying disks goes offline, VxVM doesn't detect it until I try to write to the mirrored volume.
Is this expected behavior?
 
Once I try to write to the volume while disk "sdh" is offline, the plex and its subdisk become disabled:
 
Disk group: dg_dd0
 
TY NAME         ASSOC        KSTATE   LENGTH   PLOFFS   STATE    TUTIL0  PUTIL0
sd sdg-01       testvol-01   ENABLED  204800   0        -        -       -
sd sdg01        -            ENABLED  1024     -        -        -       -
sd sdh-01       testvol-02   DISABLED 204800   0        NODEVICE -       -
sd sdh01        -            ENABLED  1024     -        -        -       -
pl testvol-01   testvol      ENABLED  204800   -        ACTIVE   -       -
pl testvol-02   -            DISABLED 204800   -        NODEVICE -       -
 
 
If the disk comes back, again it is not detected automatically. I executed "vxdisk scandisks", which didn't help.
 
I can only get the plex back by deporting and importing the disk group. Is this expected behavior?
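For reference, this is roughly the sequence I run (a sketch of my workaround; the vxrecover step is what I use to resynchronize the mirror afterwards, though recovery may also start on its own at import):

vxdg deport dg_dd0               # deport the disk group
vxdg import dg_dd0               # re-import it; the disks are rescanned
vxrecover -g dg_dd0 -b testvol   # resynchronize the mirror in the background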
 
Is there some setting through which VxVM detects such scenarios automatically?
 
Regards,
Hari.

3 Replies

g_lee
Level 6

Failure is detected automatically: DMP probes disk paths periodically to check their status, and if it detects that they are offline, or doesn't get a response within the defined timeframe, it will fault the path (or the disk, if all paths have gone offline). To speed up the process you can send I/O to the disk so it will detect the failure sooner (as you've done by writing to the volume).
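For example, a single read against the failed device should be enough to surface the error without modifying any data (illustrative only; this assumes sdh is the offline disk as in your listing):

dd if=/dev/sdh of=/dev/null bs=512 count=1   # one read; the resulting i/o error lets DMP fault the path sooner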

Once you replace the disk (or re-enable it, in this case), VxVM will detect that the disk has been repaired/replaced. This can be automatic (eg: if vxesd is running), or you may need to trigger it with vxdisk scandisks - check by running vxdisk list after the scandisks; the repaired disk should be listed with online status.
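A minimal check sequence (a sketch, using the device name from your listing):

vxdisk scandisks         # rescan the OS device tree for the returned disk
vxdisk list | grep sdh   # status should now show online rather than error/failed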

The recovery of the mirror plexes is not automatic, because the disk could have been replaced with a new device; so after vxdisk scandisks has detected that the disk is back online, you need to perform additional steps to recover the volumes.

eg: vxreattach to reattach disk drives that have once again become accessible (your case above)

https://sort.symantec.com/public/documents/sfha/5.1sp1/linux/manualpages/html/manpages/volume_manager/html/man1m/vxreattach.1m.html

(vxreattach -c {accessname} to check if the device can be reattached, then vxreattach -r {accessname} to recover)
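Putting that together for your example (a sketch; sdh is the access name taken from your output):

vxreattach -c sdh    # check whether the device can be reattached to its disk group
vxreattach -r sdh    # reattach and recover the plexes that used it
vxtask list          # optionally, monitor the resynchronization progress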

see also: vxdiskadm for failed/replaced disks (ie: where the disk has physically been replaced with another device)

VxVM 5.1SP1 (Linux) Administrator's Guide -> Administering Disks -> Removing and replacing disks -> Replacing a failed or removed disk

https://sort.symantec.com/public/documents/sfha/5.1sp1/linux/productguides/html/vxvm_admin/ch03s19s01.htm

see https://sort.symantec.com/documents and select the relevant platform/version combination if your versions differ or you need more details.

harivxvm
Level 3

Thanks for the reply.

My requirement was to have the mirror plex reattached automatically once the underlying disk comes back.

But it looks like for mirror plexes it is not automatic and some manual intervention is required. I can monitor the disk and, once it comes back, reattach it (roughly as sketched below).
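Something like this rough polling loop is what I have in mind (illustrative only; the device name is from the example above):

while true; do
    if vxdisk list | grep -w sdh | grep -qw online; then
        vxreattach -r sdh    # reattach the mirror plex once the disk is back
        break
    fi
    sleep 60
done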

Another point, which I mentioned above: when the underlying disk becomes unavailable, the corresponding plex does not get disabled unless I perform some I/O on the volume. Is that expected behavior?

Regards,

Hari.

 

g_lee
Level 6

(Accepted solution)

As mentioned in the first paragraph of the original reply:

Failure is detected automatically: DMP probes disk paths periodically to check their status, and if it detects that they are offline, or doesn't get a response within the defined timeframe, it will fault the path (or the disk, if all paths have gone offline). To speed up the process you can send I/O to the disk so it will detect the failure sooner (as you've done by writing to the volume).

EDIT: further reading that may help

DMP 5.1SP1 (Linux) Administrator's Guide -> About the event source daemon

https://sort.symantec.com/public/documents/sfha/5.1sp1/linux/productguides/html/dmp_admin/ch06s01.htm

-> Fabric Monitoring and proactive error detection (note "previous releases" refers to pre-5.0 installs)

https://sort.symantec.com/public/documents/sfha/5.1sp1/linux/productguides/html/dmp_admin/ch06s02.htm

VxVM Administrator's Guide -> Performance monitoring and tuning -> Tuning VxVM -> DMP Tunable Parameters:

https://sort.symantec.com/public/documents/sfha/5.1sp1/linux/productguides/html/vxvm_admin/ch15s04s05.htm

(eg: dmp_monitor_fabric, dmp_monitor_osevent)
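To check or set these (a sketch; the tunable names come from the DMP admin guide linked above, and the values shown are just examples):

vxdmpadm gettune all                      # list current DMP tunable values and defaults
vxdmpadm settune dmp_monitor_fabric=on    # enable proactive fabric event monitoring
vxdmpadm settune dmp_monitor_osevent=on   # have vxesd act on OS device events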