Forum Discussion

Home_224
Level 6
4 years ago

Failed to mount the service group

Dear All,

I encountered a problem during a normal VCS startup: CVM could not start up. I gave up on using VCS to control it and tried to mount the file system manually, but it returned the error below.

UX:vxfs mount: ERROR: V-3-20003: Cannot open /dev/vx/dsk/iuat_app_dg_01/iuat_app_vol_01: No such device or address
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version 

I checked vxdg list; the disk group is already imported on the master node. I have no idea how to mount it manually.

 

  • hares -state
    #Resource Attribute System Value
    cfsmount1 State devuaeapp31 OFFLINE
    cfsmount1 State devuaeapp32 OFFLINE
    cfsmount2 State devuaeapp31 OFFLINE
    cfsmount2 State devuaeapp32 OFFLINE
    cvm_clus State devuaeapp31 ONLINE
    cvm_clus State devuaeapp32 OFFLINE
    cvm_vxconfigd State devuaeapp31 ONLINE
    cvm_vxconfigd State devuaeapp32 OFFLINE
    cvmvoldg1 State devuaeapp31 FAULTED
    cvmvoldg1 State devuaeapp32 OFFLINE
    cvmvoldg2 State devuaeapp31 FAULTED
    cvmvoldg2 State devuaeapp32 OFFLINE
    qlogckd State devuaeapp31 ONLINE
    qlogckd State devuaeapp32 OFFLINE
    vxfsckd State devuaeapp31 ONLINE
    vxfsckd State devuaeapp32 OFFLINE

  • Most CVM-related issues are "shared devices" related.

    The resource states below

    cvm_clus State devuaeapp32 OFFLINE
    cvm_vxconfigd State devuaeapp32 OFFLINE
    cvmvoldg1 State devuaeapp31 FAULTED
    cvmvoldg1 State devuaeapp32 OFFLINE
    cvmvoldg2 State devuaeapp31 FAULTED

    show that some CVM-related resources are not online on the cluster nodes.

    You should check engine_A.log for CVM startup-related errors and take action to rectify the problem (if you do not know how to troubleshoot the issue, a cluster reboot is the way to go, if possible).
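
    For example (assuming the default VCS log location of /var/VRTSvcs/log), something like this pulls out the recent CVM-related entries:

    grep -iE 'cvm|vxconfigd|vxfsckd' /var/VRTSvcs/log/engine_A.log | tail -50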

    Running the commands below should give you a bit more information about the issue and the state the CVM cluster is in:

    hastatus -sum

    gabconfig -a

    vxdctl -c mode

    vxclustadm -v nodestate
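
    For reference, on a healthy CVM node the last two commands report roughly the following (exact wording varies by version; this is an illustration, not your output):

    # vxdctl -c mode
    mode: enabled: cluster active - MASTER
    master: devuaeapp31

    # vxclustadm -v nodestate
    state: cluster member

    In the gabconfig -a output, look for ports v and w (CVM) and port f (CFS) with both nodes in the membership.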

     

    PS - make sure that all nodes in the cluster have the same access (r/w) to all the shared storage.
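
    A quick sanity check on each node (a sketch; any disk reported in an "error" state points at an access/zoning problem):

    vxdisk list | grep -iE 'error|offline'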


  • In addition to the commands mentioned yesterday, you can also run the commands below to troubleshoot the issue further:

    On both nodes, run

    vxdisk -o alldgs list | grep -i shared | wc -l

    // the numbers returned should be the same
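
    If the counts differ, comparing the actual disk names shows which LUN is missing; a sketch (the temp-file paths are arbitrary):

    # on each node, dump the shared-disk names
    vxdisk -o alldgs list | grep -i shared | awk '{print $1}' | sort > /tmp/shared.$(hostname)
    # copy one node's file across and diff to find the missing device(s)
    diff /tmp/shared.devuaeapp31 /tmp/shared.devuaeapp32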

    On the node where CVM is not online (vxconfigd is not in cluster-enabled mode), run

    vxclustadm -m vcs -t gab startnode

    This command kicks vxconfigd on the node into cluster-enabled mode; however, it is likely to fail. From the error returned, you should be able to identify the issue and troubleshoot it further.
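
    A minimal sequence on the offline node (devuaeapp32 here), to capture the state and the error together:

    vxclustadm -v nodestate              # likely "state: out of cluster" at this point
    vxclustadm -m vcs -t gab startnode   # note the exact error message it prints
    vxclustadm -v nodestate              # "state: cluster member" means the join worked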

     

    • Home_224
      Level 6

      frankgfan 

      Thank you for your reply!

      Both nodes use CVM and CFS mounts; node 31 is the master and node 32 is the slave. Node 32 cannot mount because of a zoning issue, so I am skipping it for now and focusing on starting the service group on node 31.

       

      • frankgfan
        Moderator

        Since node 32 has the zoning issue and you only want to start up the service groups on node 31, you can run the commands below on node 31:

        1. vxdg -fC import <dg_name>              <<< this command imports the dg as a local (non-shared) dg

        2. vxvol -g <dgname> startall

        3. run the mount command to mount the file systems of the volumes in the dg just imported

        4. start up applications

        Please be advised that you must not attempt to mount any vols on node 32: the disk group imported by the vxdg command shown in step 1 is non-CVM/CFS, meaning that manually mounting the file systems of its vols on another node will result in data corruption - BE VERY CAREFUL.
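
        Putting steps 1-3 together with the dg/vol names from the original error message (the mount point and the Linux-style mount invocation are assumptions; on Solaris use mount -F vxfs):

        vxdg -fC import iuat_app_dg_01      # force-import as a local dg, clearing import locks
        vxvol -g iuat_app_dg_01 startall    # start all volumes in the dg
        mkdir -p /iuat_app                  # hypothetical mount point
        mount -t vxfs /dev/vx/dsk/iuat_app_dg_01/iuat_app_vol_01 /iuat_app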