Forum Discussion

brucemcgill_n1
14 years ago

CFSMount Error

Hi All,

I am using the CFSTypes.cf and CVMTypes.cf type definition files and am not able to mount the disk group named "oracledg" on the VCS nodes. I am enclosing the main.cf file used in the cluster configuration. Can anyone please help?

I get the following error message:

2010/09/17 07:46:02 VCS ERROR V-16-20007-1010 (node1) CVMVolDg:oracle_cvmvoldg:online:online_change_activation: can not change activation of dg oracledg to shared-write
2010/09/17 07:46:02 VCS WARNING V-16-20007-1025 (node1) CVMVolDg:oracle_cvmvoldg:online:Can not set diskgroup oracledg activation to sw
2010/09/17 07:46:02 VCS ERROR V-16-20007-1045 (node1) CVMVolDg:oracle_cvmvoldg:online:Initial check failed
2010/09/17 07:46:03 VCS INFO V-16-2-13716 (node1) Resource(oracle_cvmvoldg): Output of the completed operation (online)
==============================================
VxVM vxdg ERROR V-5-1-3268  activation failed: Disk group oracledg: shared-write: Invalid mode for non-shared disk group
==============================================

VCS ERROR V-16-2-13066 (node2) Agent is calling clean for resource(oracle_cvmvoldg) because the resource is not up even after online completed.
2010/09/17 07:48:01 VCS INFO V-16-2-13716 (node2) Resource(oracle_cvmvoldg): Output of the completed operation (clean)
==============================================
/var/VRTSvcs/lock/oracle_cvmvoldg_oracledg_stat: No such file or directory
==============================================

2010/09/17 07:48:01 VCS INFO V-16-2-13068 (node2) Resource(oracle_cvmvoldg) - clean completed successfully.
2010/09/17 07:48:01 VCS INFO V-16-2-13071 (node2) Resource(oracle_cvmvoldg): reached OnlineRetryLimit(2).
2010/09/17 07:48:02 VCS ERROR V-16-1-10303 Resource oracle_cvmvoldg (Owner: unknown, Group: cfs) is FAULTED (timed out) on sys node2
2010/09/17 07:48:02 VCS NOTICE V-16-1-10300 Initiating Offline of Resource cfs_online_phantom (Owner: unknown, Group: cfs) on System node2
2010/09/17 07:48:04 VCS INFO V-16-1-10305 Resource cfs_online_phantom (Owner: unknown, Group: cfs) is offline on node2 (VCS initiated)
2010/09/17 07:48:04 VCS ERROR V-16-1-10205 Group cfs is faulted on system node2
2010/09/17 07:48:04 VCS NOTICE V-16-1-10446 Group cfs is offline on system node2
2010/09/17 07:48:04 VCS INFO V-16-1-10493 Evaluating node1 as potential target node for group cfs
2010/09/17 07:48:04 VCS INFO V-16-1-50010 Group cfs is online or faulted on system node1
2010/09/17 07:48:04 VCS INFO V-16-1-10493 Evaluating node2 as potential target node for group cfs
2010/09/17 07:48:04 VCS INFO V-16-1-50010 Group cfs is online or faulted on system node2
2010/09/17 07:48:04 VCS ERROR V-16-2-13066 (node1) Agent is calling clean for resource(oracle_cvmvoldg) because the resource is not up even after online completed.
2010/09/17 07:48:05 VCS INFO V-16-6-0 (node2) postoffline:Invoked with arg0=node2, arg1=cfs
2010/09/17 07:48:05 VCS INFO V-16-6-15002 (node2) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postoffline node2 cfs   successfully
2010/09/17 07:48:05 VCS INFO V-16-2-13716 (node1) Resource(oracle_cvmvoldg): Output of the completed operation (clean)
==============================================
/var/VRTSvcs/lock/oracle_cvmvoldg_oracledg_stat: No such file or directory
==============================================

2010/09/17 07:48:05 VCS INFO V-16-2-13068 (node1) Resource(oracle_cvmvoldg) - clean completed successfully.
2010/09/17 07:48:05 VCS INFO V-16-2-13071 (node1) Resource(oracle_cvmvoldg): reached OnlineRetryLimit(2).
2010/09/17 07:48:07 VCS ERROR V-16-1-10303 Resource oracle_cvmvoldg (Owner: unknown, Group: cfs) is FAULTED (timed out) on sys node1
2010/09/17 07:48:07 VCS NOTICE V-16-1-10300 Initiating Offline of Resource cfs_online_phantom (Owner: unknown, Group: cfs) on System node1
2010/09/17 07:48:08 VCS INFO V-16-1-10305 Resource cfs_online_phantom (Owner: unknown, Group: cfs) is offline on node1 (VCS initiated)
2010/09/17 07:48:08 VCS ERROR V-16-1-10205 Group cfs is faulted on system node1
2010/09/17 07:48:08 VCS NOTICE V-16-1-10446 Group cfs is offline on system node1
2010/09/17 07:48:08 VCS INFO V-16-1-10493 Evaluating node1 as potential target node for group cfs
2010/09/17 07:48:08 VCS INFO V-16-1-50010 Group cfs is online or faulted on system node1
2010/09/17 07:48:08 VCS INFO V-16-1-10493 Evaluating node2 as potential target node for group cfs
2010/09/17 07:48:08 VCS INFO V-16-1-50010 Group cfs is online or faulted on system node2
2010/09/17 07:48:10 VCS INFO V-16-6-0 (node1) postoffline:Invoked with arg0=node1, arg1=cfs
2010/09/17 07:48:10 VCS INFO V-16-6-15002 (node1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/triggers/postoffline node1 cfs   successfully
 

 

Best Regards,

Bruce

 

 

 

6 Replies

  • What are the SF and OS versions?

     

    Is the disk group oracledg imported? (Check with # vxdg list.)

     

    Point to note is, the CVMVolDg agent doesn't import the disk group; as part of bringing the resource online, it just changes the activation mode of the disk group. See here:

    http://sfdoccentral.symantec.com/sf/5.1/solaris/html/sfcfs_admin/ch05s09.htm

    In case the disk group is not imported, I would suggest manually importing it from the CVM master node:

    --Freeze the cluster group

    # hagrp -freeze cfs -persistent

    -- identify master node

    # vxdctl -c mode

     

    -- from master node

    # vxdg -s import oracledg

    -- unfreeze group

    # hagrp -unfreeze cfs -persistent

    -- probe resource

    # hares -probe oracle_cvmvoldg

    # hagrp -online cfs -sys <node1/node2>   (select proper node)
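    For reference, a rough sketch of the master check (node names here are only examples; exact output can differ by SF version):

    # vxdctl -c mode
    mode: enabled: cluster active - MASTER
    master: node2

    Run the import on whichever node is reported as master. After onlining, # hastatus -sum should show the cfs group ONLINE on both nodes.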

     

     

    Gaurav

  • I also checked the online script for the CVMVolDg agent in the /opt/VRTSvcs/bin/CVMVolDg directory. The error you are getting is:

     

    2010/09/17 07:46:02 VCS ERROR V-16-20007-1045 (node1) CVMVolDg:oracle_cvmvoldg:online:Initial check failed

     

    The error code above is 1045; from the online script:

    # check if everything is imported and enabled etc.
    cvmvoldg_check_all "ONLINE" _cvm_res
    if [ $_cvm_res -ne 0 ] ; then
            VCSAG_LOG_MSG "E" "Initial check failed" 1045
            exit $CVMVOLDG_FAILURE
    fi
     

    Seeing the above, I think my first comment is on the right track. Check whether the disk group is imported and, if yes, whether it is imported with the shared flag:

    # vxdg list
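    If it is imported as shared, the STATE column should include the shared flag, something like this (the ID shown is just an example):

    NAME         STATE                ID
    oracledg     enabled,shared,cds   1284744418.19.node2

    If "shared" is missing from the STATE, the activation change to shared-write will fail exactly as in your log.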

     

    Gaurav

  • Hi,

    I made the changes in the cluster and have created the shared disk group. I just want to mount a cluster file system on the 2-node cluster but am unable to get the desired result.

    node1 * / # vxdisk -o alldgs list


    DEVICE       TYPE            DISK         GROUP        STATUS
    c0t0d0s2     auto:SVM        -            -            SVM
    disk_0       auto:cdsdisk    ora1         oracledg     online shared
    disk_1       auto:none       -            -            online invalid
    disk_2       auto:none       -            -            online invalid
    disk_3       auto:none       -            -            online invalid
    disk_4       auto:none       -            -            online invalid
     

    node1 * / # cfscluster status

      Node             :  node1
      Cluster Manager  :  running
      CVM state        :  running
      MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS
      /oracle        oravol         oracledg          NOT MOUNTED


      Node             :  node2
      Cluster Manager  :  running
      CVM state        :  running
      MOUNT POINT    SHARED VOLUME  DISK GROUP        STATUS
      /oracle        oravol         oracledg          NOT MOUNTED
     

    I am getting the following error:

    node1 * / # cfsmount /oracle
      Mounting...
      Error: V-35-50: Could not mount [/dev/vx/dsk/oracledg/oravol] at /oracle on node1
    UX:vxfs mount: ERROR: V-3-24706: /dev/vx/dsk/oracledg/oravol no such device or filesystem on it missing one or more devices

      Error: V-35-50: Could not mount [/dev/vx/dsk/oracledg/oravol] at /oracle on node2
      Look at VCS engine_A.log on node2 for possible errors for resource cfsmount1
     
    node1 * / # vxdg list
    NAME         STATE           ID
    oracledg     enabled,shared,cds   1284744418.19.node2
     
    node1 * / # vxdg list oracledg
    Group:     oracledg
    dgid:      1284744418.19.node2
    import-id: 33792.9
    flags:     shared cds
    version:   150
    alignment: 8192 (bytes)
    local-activation: shared-write
    cluster-actv-modes: node2=sw node1=sw
    ssb:            on
    autotagging:    on
    detach-policy: global
    dg-fail-policy: dgdisable
    copies:    nconfig=default nlog=default
    config:    seqno=0.1047 permlen=48144 free=48140 templen=3 loglen=7296
    config disk disk_0 copy 1 len=48144 state=clean online
    log disk disk_0 copy 1 len=7296

    node1 * / # vxdg -s list
     
    NAME         STATE           ID
    oracledg     enabled,shared,cds   1284744418.19.node2
     
    Regards,
    Bruce
  • OK, so:

    -- Is disk_0 a shared disk, visible across both cluster nodes? You can verify this with the serial number of the disk:

    # /etc/vx/diag.d/vxdmpinq /dev/vx/rdmp/disk_0

    OR

    If you run vxdisk -o alldgs list on node2, you should see disk ora1 imported on the second node as well.

     

    -- Secondly, the error appearing in engine_A.log says "no such device or address", which means the volume or the file system is not ready.

    Since you imported the disk group manually, did you start the volumes manually? Can you see whether the volumes inside the disk group are ENABLED ACTIVE?

    # vxprint -qthg oracledg             (all the volumes should be ENABLED ACTIVE; any other state won't work)
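    A rough idea of what a healthy volume looks like in that output (volume/plex names and sizes are only illustrative):

    v  oravol       -            ENABLED  ACTIVE   20971520 SELECT   -        fsgen
    pl oravol-01    oravol       ENABLED  ACTIVE   20971520 CONCAT   -        RW
    sd ora1-01      oravol-01    ora1     0        20971520 0        disk_0   ENA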

    To start volumes

    # vxvol -g oracledg startall

     

    Once the volumes are ENABLED ACTIVE, can you see a file system on the volume?

    # fstyp -v /dev/vx/rdsk/oracledg/oravol

    If you see the file system, it should be able to mount.
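    If instead fstyp reports something like unknown_fstyp, there is no file system on the volume yet; a hedged example of creating one (destructive, so only if the volume really is meant to be empty):

    # mkfs -F vxfs /dev/vx/rdsk/oracledg/oravol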

     

    Since you are using the cfs commands, I believe you have already added the mounts using cfsmntadm. If not, have a look at the man page of cfsmntadm.
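    For reference, adding a cluster mount usually looks roughly like this (names and options are only an example; check the cfsmntadm man page for the exact syntax on your version):

    # cfsmntadm add oracledg oravol /oracle all=rw
    # cfsmount /oracle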

     

    Gaurav

  • Hi Gaurav,

    I recreated the shared disk group, followed by the cluster file system, and it is working fine now. I guess I must have created the shared disk group from the "slave" cluster node earlier. Thank you for your help.

    Regards,

    Bruce

  • Good to know it's working.

    Just an FYI: all CVM configuration changes can happen from the master node only, so you might have missed something else :-)
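    For the record, creating the shared disk group from the master would look roughly like this (disk, group and size values are taken loosely from your outputs; exact steps may differ by version):

    # vxdctl -c mode                            (confirm you are on the master)
    # vxdg -s init oracledg ora1=disk_0
    # vxassist -g oracledg make oravol 20g      (size is only an example)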

     

    Gaurav