Forum Discussion

tanislavm
Level 6
11 years ago

failed disk

Hi, I like to verify things. After the failed disk was replaced in the SAN, the OS now sees the disk, and vxdisk scandisks brings it under VxVM control. Is it still necessary to run vxdctl enable? Then I initialize this new disk and add it to the DG.

Supposing the faulted disk had 3 subdisks, and the volume has 2 plexes, each plex with a RAID-5 config (3 disk media per plex), is there a requirement now to remake the 2 plexes on this new disk? If not, when is this configuration applied to the new disk? Is it at the time of vxrecover -s?

At boot time, if a DG has a volume formed from 2 plexes and one plex is in the STALE state, will the volume be stopped?

If I have 2 plexes, plex 1 with 3 subdisks associated as RAID-5 and plex 2 with 1 subdisk associated, could I form a volume with plex 1 and plex 2 as a mirror?

What proactive actions are necessary besides vxconfigbackup? Thanks so much.
  • .... and the volume has 2 plexes and each plex has raid 5 config....

    Please show us vxprint -htr output.

    We need to see and understand your volume layout.
    '2 plexes' indicates that the volume is mirrored, but if memory serves me right, it is not possible to mirror a VxVM RAID-5 volume.

     

    If you show us your volume layout, we will be in a better position to provide advice/assistance.

  • After the failed disk was replaced in the SAN, the OS now sees the disk, and vxdisk scandisks brings it under VxVM control. Is it still necessary to run vxdctl enable?

    >>> vxdctl enable performs "vxdisk scandisks" internally, so it is not required to run it again.
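
    A minimal sketch of that rescan step (the DG and device names below are placeholders, not taken from this thread):

        vxdctl enable     # rescans the device tree and refreshes vxconfigd's view
        vxdisk list       # the replaced LUN should now appear, typically as "online invalid" until it is initialized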


    Then I initialize this new disk and add it to the DG. Supposing the faulted disk had 3 subdisks, and the volume has 2 plexes, each plex with a RAID-5 config (3 disk media per plex), is there a requirement now to remake the 2 plexes on this new disk? If not, when is this configuration applied to the new disk? Is it at the time of vxrecover -s?

    >>> If you replace the failed disk using "vxdiskadm" option 4 or 5, VxVM should take care of configuring the disk under the same disk media name. However, to your point, having 3 disk media in each plex may also mean it is a stripe and not a RAID-5.
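
    Roughly the same replacement done by hand looks like this (a sketch only; mydg, mydg01 and c1t2d0 are placeholder names):

        /etc/vx/bin/vxdisksetup -i c1t2d0         # initialize the new LUN for VxVM use
        vxdg -g mydg -k adddisk mydg01=c1t2d0     # reattach it under the old disk media name
        vxrecover -g mydg -s                      # start volumes and resynchronise stale plexes
        vxtask list                               # monitor the resync progress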

    At boot time, if a DG has a volume formed from 2 plexes and one plex is in the STALE state, will the volume be stopped?

    >>  If it's a mirrored volume, the volume should come up with 1 plex, provided that plex is in the "ENABLED ACTIVE" state.
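
    To check and then recover the stale side (a sketch; mydg and myvol are placeholder names):

        vxprint -g mydg -ht myvol       # shows the KSTATE/STATE of the volume and of each plex
        vxrecover -g mydg -s myvol      # starts the volume and resynchronises the STALE plex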

    If I have 2 plexes, plex 1 with 3 subdisks associated as RAID-5 and plex 2 with 1 subdisk associated, could I form a volume with plex 1 and plex 2 as a mirror?

    >>> This is a weird structure; why would you mirror a RAID-5? You can have stripe-mirror or mirror-stripe, and you can create mirrors of such plex structures, but RAID-5 is its own form of redundancy.
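
    For comparison, the layouts that are normally combined with mirroring are created like this with vxassist (a sketch; the DG, volume names and sizes are placeholders):

        vxassist -g mydg make stripevol 10g layout=mirror-stripe ncol=3   # striped plexes, mirrored
        vxassist -g mydg make r5vol 10g layout=raid5 ncol=4               # a single RAID-5 plex (plus a log plex by default)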

     

    G

    What proactive actions are necessary besides vxconfigbackup?

  • Missed answering the last question:

    What proactive actions are necessary besides vxconfigbackup?

    >>> vxconfigbackup (if the backup daemon is running) takes automatic backups whenever there is a change in the DG configuration. You may take a dump of the config once again before making the change, though it is not really required. I also suggest taking a "vxprint -g <dg> -mvphsr" output or a "vxprivutil dumpconfig" output; both are useful for recovering a DG in case you lose the DG and its backups are corrupted. Do remember, though, that it is best to take these outputs while the configuration is clean and good.
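
    A sketch of those proactive steps (mydg and the device name are placeholders; exact paths can differ between platforms and VxVM versions):

        vxconfigbackup mydg                                            # on-demand configuration backup
        vxprint -g mydg -mvphsr > /var/tmp/mydg.vxprint                # record-level dump of the DG configuration
        /etc/vx/diag.d/vxprivutil dumpconfig /dev/vx/rdmp/c1t2d0s2 > /var/tmp/mydg.privconfig   # dump from one disk's private region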

     

    G

  • Hi Gaurav,

    Thanks so much. Please also give me your comments:

    If I have a volume with one plex and this volume also contains a file system, then if I add another plex to this volume to make it a mirrored volume, the new plex will also have the file system on it and the same data as the original plex, right? So if I then use this volume with only the second plex, everything is fine.

    If I replace a VxVM root disk, at a certain point I need to make that disk bootable. I could make it bootable using installboot on SPARC or installgrub on x86, but there is also a VxVM command for this whose name escapes me now, right?

    In VCS, if we use the public LAN as a low-priority heartbeat path, LLT traffic will also go over this public LAN, right?

  • Please... one topic per forum discussion....

    A mirrored volume means everything is mirrored at block level - filesystem as well.

    That is why we add mirrors - in case one plex fails as a result of disk failure, the remaining plex will handle all I/O.
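
    For example, adding a second plex to an existing volume (a sketch; mydg, myvol and mydg02 are placeholder names):

        vxassist -g mydg mirror myvol mydg02    # creates the new plex on mydg02 and copies the data block for block
        vxprint -g mydg -ht myvol               # both plexes should show ENABLED ACTIVE once the sync completes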

    Steps to mirror a root disk or replace a failed disk of a root mirror are well documented. 
    You can bookmark this link:  https://sort.symantec.com/public/documents/sfha/5.1sp1/solaris/productguides/html/vxvm_tshoot/ch03.htm 

    Low-priority links carry LLT communications, but less frequently than high-priority links.
    See:   https://sort.symantec.com/public/documents/sf/5.0MP3/solaris/html/vcs_users/ch_vcs_communications4.html
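
    For reference, a low-priority link over the public LAN is declared in /etc/llttab with a "link-lowpri" directive, for example (the interface names are placeholders):

        link         ce0   /dev/ce:0   - ether - -     # private heartbeat link (high priority)
        link-lowpri  bge0  /dev/bge:0  - ether - -     # public LAN used as low-priority heartbeat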

  • Hi,

    My last question:

    If I have a volume with 2 plexes and both plexes are STALE, what should I do to bring the volume online?

  • Hi,

    As Marianne said, please keep to one topic per discussion; it will help build a better knowledge base and will be reusable by others.

    If both plexes are stale, you first need to judge which plex would be the best one to start from (depending on the series of failures).

    You may need to run various recovery options. Usually a "DISABLED STALE" plex can be brought online using "vxplex -g <dg> att <vol> <plex>" followed by "vxvol -g <dg> start <vol>". Once you run "vxplex att", check for any tasks in "vxtask list".
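
    One common sequence when every plex of a stopped volume is DISABLED STALE (a sketch; mydg, myvol and the plex names are placeholders, and the vxmend step is only needed if the volume refuses to start):

        vxprint -g mydg -ht myvol             # identify the plex names and their states
        vxmend -g mydg fix clean myvol-01     # declare the plex you judge healthiest to be CLEAN
        vxvol -g mydg start myvol             # start the volume from that plex
        vxrecover -g mydg myvol               # resynchronise the remaining STALE plex
        vxtask list                           # monitor the resync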

     

    G
