Forum Discussion

symsonu
Level 6
13 years ago

vxdisk list showing dg disabled

Hello Friends, I issued the vxdctl enable command in a two-node VCS cluster and it made all the DGs disabled, as shown below: [root@ruabon1 ~]# /opt/scripts/stgmgr/qlogic_scan_v23.sh Issuing L...
  • NathanBuck_Symc
    13 years ago

    When the "dgdisabled" flag is displayed like that for your diskgroup, it means that vxconfigd lost access to all enabled configuration copies for the disk group.  Despite the loss of access to the configuration copies, file systems remain enabled because no IO resulted in fatal errors.

    This is typical of a brief SAN outage, on the order of a few seconds, which allows file system IO to complete after retrying.

    The condition itself means that the in-memory disk group is no longer connected to the underlying configuration copies on disk.  VxVM will not allow any further modification to the disk group; this is intended to let you sanely bring down file systems and applications.
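
    For example, before taking any corrective action you can confirm which disk groups are affected and whether their volumes still look healthy (illustrative commands; "oradg" is a placeholder disk group name):

    # vxdg list

    # vxdisk -o alldgs list

    # vxprint -htg oradg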

    If you are highly confident that your disks are all in a good state and no corruption has occurred, you can attempt to restart vxconfigd.  When vxconfigd starts back up, it will scan the underlying disks and, if everything is clean and correct, reattach those devices to the disk group.

    NOTE, however, that this procedure will further degrade your environment if it fails.

    1. Freeze all Service Groups with VM resources (see the sketch after these steps for freezing every group in one pass):

    # hagrp -freeze <group>

    2. Restart vxconfigd

    # vxconfigd -k -x syslog

    3. Confirm resulting status:

    # vxdg list

    # vxdisk list

    4. Unfreeze the Service Groups if the DG is now corrected:

    # hagrp -unfreeze <group>
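
    For steps 1 and 4, if every service group is to be frozen and later unfrozen, a rough sketch (this assumes hagrp -list prints group/system pairs and that all groups should be treated the same; trim the list if only some groups contain VM resources):

    # for grp in $(hagrp -list | awk '{print $1}' | sort -u); do hagrp -freeze $grp; done

    # for grp in $(hagrp -list | awk '{print $1}' | sort -u); do hagrp -unfreeze $grp; done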

     

    After correcting the disk group condition, you will need to look at your cluster configuration for the "oraappl_nfs_fs" resource and determine its mount point and block device.  From the block device path you can determine the disk group and volume name.
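
    A hedged example of pulling those attributes from VCS (the resource name comes from the original post; MountPoint and BlockDevice are the standard Mount agent attributes):

    # hares -value oraappl_nfs_fs MountPoint

    # hares -value oraappl_nfs_fs BlockDevice

    The BlockDevice value is typically of the form /dev/vx/dsk/<diskgroup>/<volume>, which gives you both names.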

    Unmount the mount point and remount it.
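
    For instance (the mount point, disk group, and volume names below are placeholders):

    # umount /oraappl

    # mount -t vxfs /dev/vx/dsk/oradg/oraapplvol /oraappl

    If the mount is under VCS control, you may prefer to let the Mount resource bring it back online once the group is unfrozen.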

    Verify that the volume is in a good state with vxprint -htg <diskgroup>.

     

    A node reboot could potentially correct all of this automatically as well.