CFS cluster disks question

Sven_2020
Level 3

Hi, I'm pretty new to this so apologies if this is a dumb question.

I'm running on RHEL 6.10 with Symantec Storage Foundation Cluster File System HA 6.2 installed on 4 servers. SAN storage has been presented to them. My goal is to mount a shared CFS filesystem on them called /infa.

I can see the disks but am not able to mount them yet. The output of vxdisk list is:

DEVICE TYPE DISK GROUP STATUS
disk_0 auto:LVM - - online invalid
hus_1500_10 auto:cdsdisk infa01dg01 infa01dg online thinrclm
hus_1500_11 auto:cdsdisk infa01dg02 infa01dg online thinrclm
hus_1500_12 auto:cdsdisk infa01dg03 infa01dg online thinrclm
hus_1500_13 auto:cdsdisk infa01dg04 infa01dg online thinrclm
hus_1500_14 auto:cdsdisk infa01dg05 infa01dg online thinrclm
hus_1500_15 auto:cdsdisk infa01dg06 infa01dg online thinrclm
hus_1500_16 auto:cdsdisk infa01dg07 infa01dg online thinrclm
hus_1500_17 auto:cdsdisk infa01dg08 infa01dg online thinrclm
hus_1500_20 auto:none - - online invalid thinrclm
hus_1500_21 auto:none - - online invalid thinrclm
hus_1500_22 auto:none - - online invalid thinrclm

I believe the disks should say shared instead of thinrclm. How can I do that? If I try running:

vxdg deport infa01dg

vxdg -s import infa01dg

It says:  VxVM vxdg ERROR V-5-1-10978 Disk group infa01dg: import failed: Operation must be executed on master

I've tried it on all 4 nodes.
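For what it's worth, this is the sequence I expected to work, based on my reading of the docs (a rough sketch; as I understand it, CVM has to be active for a shared import to succeed):

# identify the current CVM master node
vxdctl -c mode
# on the master: deport, then re-import the disk group as shared
vxdg deport infa01dg
vxdg -s import infa01dg
# the disk group flags should then include "shared"
vxdg list infa01dg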


RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified
Hi,

Is this a new install or an existing one? If new, which steps have you completed?

Is the CFS cluster running? Output from the command below should give some idea.

cfscluster status
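If that hangs or comes back empty, a few other quick checks show whether the underlying cluster stack is up at all (a rough sketch; output varies by setup):

# GAB port membership (roughly: a = GAB, b = fencing, h = VCS engine, v/w = CVM)
gabconfig -a
# LLT link status between the nodes
lltstat -nvv
# overall VCS status
hastatus -sum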

frankgfan
Moderator
   VIP   

First things first: is vxconfigd in CVM (cluster) mode? To find out, run the command below:

vxdctl -c mode

Most likely vxconfigd is up and running in enabled mode, but its cluster mode is inactive.

 

Please consult this technote https://www.veritas.com/support/en_US/article.100000548 for a bit more detail on vxconfigd cluster mode.

Please do not hesitate to share any progress made or any issues encountered.

 

This cluster actually used to be running Solaris 10, using different disks on the SAN. Since then I've put RHEL on the cluster nodes and am trying to use new disks. Aside from installing and configuring the software, I've run vxdiskadm to try to set up the disks. The command cfscluster status hangs indefinitely.

It shows  mode: enabled: cluster inactive

Also:

$ vxlicrep -e | grep CVM
CVM_LITE_OPS = Disabled
CVM_FULL = Enabled
CVM_LITE = Disabled
CVM_FULL#VERITAS Volume Manager = Enabled
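In case it helps, these are the other checks I'm running to see whether the cluster pieces are up (as I understand the commands; "cvm" is the service group name the CFS configuration normally creates):

# is the VCS engine running, and are any service groups online?
hastatus -sum
# does a cvm service group exist, and what state is it in on each node?
hagrp -state cvm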

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified
I guess you've not set up the cluster yet. CVM/CFS requires the Veritas cluster (VCS) to be up and running; that's what allows CVM to work.

Easiest way to configure:

From any node in the cluster:

# ./installer -configure

Make sure to select the option to configure Cluster File System. If the cluster is already configured, many of the answers will be pre-populated. When done, the cluster will contain 4 resources (I think) that are the daemons that need to run for CFS.
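If the base VCS cluster is already running and only the CVM/CFS pieces are missing, there is also (as far as I recall) a dedicated command for just that part:

# configure the CVM/CFS portion on an already-running VCS cluster
cfscluster config
# verify afterwards
cfscluster status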

Hope that helps.

_______________________________
My customers spend the weekends at home with family, not in the datacenter.

frankgfan
Moderator
   VIP   

Your license is OK for CVM.

From what you said, you need to reconfigure CVM/CFS.

you may want to download this admin guide at https://origin-download.veritas.com/resources/content/live/DOCUMENTATION/7000/DOC7818/en_US/sfcfs_ad...

Read and follow chapter 15, titled "Administering Storage Foundation Cluster File System High Availability and its components", in particular both sections "Administering CFS" and "Administering CVM".

 

frankgfan
Moderator
   VIP   

Since you mentioned that "This cluster actually used to be running Solaris 10, using different disks on the SAN", it's not clear to me whether this cluster was a failover cluster configuration or a parallel (CVM/CFS) configuration. If it was a parallel cluster configuration and you saved a copy of the VxExplorer output, you should be able to review the saved VCS configuration (main.cf) as well as some of the VxVM output, and use that information as a "template" to rebuild the cluster with CVM/CFS. If it was a failover cluster, you will need to follow the admin guide linked above to build up CVM/CFS.

Here is another technote for your reference: https://sort.veritas.com/public/documents/sf/5.0/aix/html/sf_rac_install/sfrac_ora9i_add_rem_nd8.htm...

 

Although it was prepared for an old version (5.0, for a RAC cluster) on AIX, the steps involved are the same.

Hello, it was a parallel cluster.  I do have a copy of main.cf from its prior state.

I'll check that link out.  Thanks!

frankgfan
Moderator
   VIP   

It should not be too difficult for you to re-set up CVM/CFS then.

First, you need to make sure that the new SAN storage is SCSI-3 PR compliant, which most decent storage deployed these days is. Also make sure all the nodes in the cluster have the same read/write access to the shared storage.
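To check SCSI-3 PR compliance you can run the fencing test utility against a couple of the new LUNs (a rough outline; it prompts for node and disk names, and be careful, by default it overwrites data on the disks it tests):

/opt/VRTSvcs/vxfen/bin/vxfentsthdw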

Second, fix the main.cf by putting back at least the cvm_clus resource, so that you can test kicking vxconfigd into cluster mode by running the command below on one of the nodes:

vxclustadm -m vcs -t gab startnode

If the command runs successfully, that node will become the current CVM master (check with vxdctl -c mode).

Once CVM is up, fixing CFS is easy.
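For the CFS part, once CVM is active, the cfs commands can register and mount the cluster file system (a sketch only; the disk group and mount point are from your earlier posts, and the volume name infavol here is just an example that should match your actual shared volume):

# register the cluster mount with VCS (creates/updates the CFSMount resource)
cfsmntadm add infa01dg infavol /infa all=
# mount it on all nodes
cfsmount /infa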

 

Good luck.

 

It feels like I'm getting closer, but I currently have two resources (vxfen and CFSMount) that are faulted, and I can't clear the faults. I was (finally) able to get vxclustadm -m vcs -t gab startnode to work (if I run it again, it says "Node already initialized"). The output of vxdctl -c mode is now:

mode: enabled: cluster active - MASTER
master: infaapp01-n1

My main.cf looks like:

include "OracleASMTypes.cf"
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"

cluster infaapp01 (
        UserNames = { admin = <censored> }
        ClusterAddress = "10.1.8.247"
        Administrators = { admin }
        UseFence = SCSI3
        )

system infaapp01-n1 (
        )

system infaapp01-n2 (
        )

system infaapp01-n3 (
        )

system infaapp01-n4 (
        )

group ClusterService (
        SystemList = { infaapp01-n1 = 0, infaapp01-n2 = 1, infaapp01-n3 = 2,
                 infaapp01-n4 = 3 }
        AutoStartList = { infaapp01-n1, infaapp01-n2, infaapp01-n3, infaapp01-n4 }
        OnlineRetryLimit = 3
        OnlineRetryInterval = 120
        )

        IP webip (
                Device = eth0
                Address = "10.1.8.247"
                NetMask = "255.255.192.0"
                )

        NIC csgnic (
                Device = eth0
                )

        NotifierMngr ntfr (
                SmtpServer = "<censored>"
                SmtpRecipients = { "<censored>" = Error }
                )

        ntfr requires csgnic
        webip requires csgnic


        // resource dependency tree
        //
        //      group ClusterService
        //      {
        //      NotifierMngr ntfr
        //          {
        //          NIC csgnic
        //          }
        //      IP webip
        //          {
        //          NIC csgnic
        //          }
        //      }


group cvm (
        SystemList = { infaapp01-n4 = 0, infaapp01-n3 = 1, infaapp01-n2 = 2,
                 infaapp01-n1 = 3 }
        Parallel = 1
        AutoStartList = { infaapp01-n1, infaapp01-n2, infaapp01-n3, infaapp01-n4 }
        )

        CFSMount cfsmount1 (
                Critical = 0
                ResourceRecipients = { "<censored>" = Warning }
                MountPoint = "/infa"
                MountType = "cluster"
                BlockDevice = "/dev/vx/dsk/infa01dg/infavol"
                SetPrimary = infaapp01-n1
                )

        CFSfsckd vxfsckd (
                )

        CVMCluster cvm_clus (
                ResContainerInfo = { Type = Name, Enabled = "" }
                Critical = 0
                CVMTransport = gab
                CVMClustName = infaapp01
                CVMTimeout = 200
                CVMNodeId = { infaapp01-n1 = 0, infaapp01-n2 = 1,
                         infaapp01-n3 = 2,
                         infaapp01-n4 = 3 }
                )



        // resource dependency tree
        //
        //      group cvm
        //      {
        //      CFSMount cfsmount1
        //      CVMCluster cvm_clus
        //      CFSfsckd vxfsckd
        //      }


group vxfen (
        SystemList = { infaapp01-n1 = 0, infaapp01-n2 = 1, infaapp01-n3 = 2,
                 infaapp01-n4 = 3 }
        AutoFailOver = 0
        Parallel = 1
        )

        CoordPoint coordpoint (
                LevelTwoMonitorFreq = 10
                )

        Phantom RES_phantom_vxfen (
                )



        // resource dependency tree
        //
        //      group vxfen
        //      {
        //      Phantom RES_phantom_vxfen
        //      CoordPoint coordpoint
        //      }

Thanks for any advice...
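In case it matters, this is how I've been trying to clear the faults (my understanding of the commands; resource, group, and node names are from the main.cf above):

# see which resources are faulted and where
hastatus -sum
# clear the fault on a specific resource on a specific node
hares -clear cfsmount1 -sys infaapp01-n1
# or clear every faulted resource in a group on that node
hagrp -clear cvm -sys infaapp01-n1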

I did find a clue in the logs:

2020/06/17 09:27:48 VCS WARNING V-16-20011-5508 CFSMount:cfsmount1:online:Mount Error : UX:vxfs mount.vxfs: ERROR: V-3-20012: not a valid vxfs file system
UX:vxfs mount.vxfs: ERROR: V-3-24996: Unable to get disk layout version
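My reading of that message is that the volume has never had a VxFS file system created on it, so there is nothing for CFSMount to mount yet. If that's right, something like the following would presumably be needed first, run on the CVM master (a sketch only, assuming the volume is new and empty, since mkfs destroys any existing data):

# create a VxFS file system on the shared volume
mkfs -t vxfs /dev/vx/rdsk/infa01dg/infavol
# then mount it cluster-wide
cfsmount /infa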

I called tech support and was able to get it worked out.  Thanks for the input, everyone!