Forum Discussion

Sven_2020
Level 3
5 years ago

CFS cluster disks question

Hi, I'm pretty new to this so apologies if this is a dumb question.

I'm running RHEL 6.10 with Symantec Storage Foundation Cluster File System HA 6.2 installed on 4 servers, and SAN storage has been presented to them. My goal is to mount a shared CFS filesystem called /infa on all of them.

I can see the disks but am not able to mount them yet. The output of vxdisk list is:

DEVICE        TYPE          DISK        GROUP     STATUS
disk_0        auto:LVM      -           -         online invalid
hus_1500_10   auto:cdsdisk  infa01dg01  infa01dg  online thinrclm
hus_1500_11   auto:cdsdisk  infa01dg02  infa01dg  online thinrclm
hus_1500_12   auto:cdsdisk  infa01dg03  infa01dg  online thinrclm
hus_1500_13   auto:cdsdisk  infa01dg04  infa01dg  online thinrclm
hus_1500_14   auto:cdsdisk  infa01dg05  infa01dg  online thinrclm
hus_1500_15   auto:cdsdisk  infa01dg06  infa01dg  online thinrclm
hus_1500_16   auto:cdsdisk  infa01dg07  infa01dg  online thinrclm
hus_1500_17   auto:cdsdisk  infa01dg08  infa01dg  online thinrclm
hus_1500_20   auto:none     -           -         online invalid thinrclm
hus_1500_21   auto:none     -           -         online invalid thinrclm
hus_1500_22   auto:none     -           -         online invalid thinrclm

I believe the disks' status should also show shared. How can I make that happen? If I try running:

vxdg deport infa01dg

vxdg -s import infa01dg

it fails with: VxVM vxdg ERROR V-5-1-10978 Disk group infa01dg: import failed: Operation must be executed on master

I've tried it on all 4 nodes.
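
For reference, that error means a shared (-s) import has to be run on the CVM master node, not on an arbitrary cluster member. A sketch of how to find the master and retry, assuming the Storage Foundation binaries are in root's PATH; if no node reports MASTER, CVM itself is not running yet:

```shell
# Run on any node; only meaningful once CVM is active
vxdctl -c mode
# The master reports something like:  mode: enabled: cluster active - MASTER
# Slaves report:                      mode: enabled: cluster active - SLAVE

# Then, on the master node only, deport and re-import the group as shared
vxdg deport infa01dg
vxdg -s import infa01dg
```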

  • I called tech support and was able to get it worked out.  Thanks for the input, everyone!

  • Hi,

    Is this a new install or an existing one? If it's new, which steps have you completed?

    Is the CFS cluster running? The output of the command below should give some idea:

    cfscluster status
    • Sven_2020
      Level 3

      This cluster actually used to run Solaris 10, using different disks on the SAN. Since then I've put RHEL on the cluster nodes and am trying to use new disks. Aside from installing and configuring the software, I've run vxdiskadm to try to set up the disks. The command cfscluster status hangs indefinitely.
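
      A hang there usually means the underlying cluster stack is not up. A quick way to check each layer, sketched from the standard SFCFS commands (run as root on each node; exact output varies by version):

      ```shell
      # LLT: each node and its links should be listed and OPEN
      lltstat -nvv | head

      # GAB: membership ports -- a (GAB), b (fencing), h (VCS),
      # v/w (CVM), f (CFS) -- should each list all four node IDs
      gabconfig -a

      # VCS engine and service-group summary
      hastatus -sum
      ```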

      • frankgfan
        Moderator

        Since you mentioned that "this cluster actually used to be running Solaris 10, using different disks on the SAN", it's not clear to me whether this cluster was a failover configuration or a parallel (CVM/CFS) configuration. If it was a parallel cluster and you saved a copy of the VxExplorer output, you should be able to review the saved VCS configuration (main.cf) along with some VxVM output, and use that information as a "template" to rebuild the cluster with CVM/CFS. If it was a failover cluster, you will need to follow the user guide that was emailed to you to build up CVM/CFS.

        Here is another technote for your reference: https://sort.veritas.com/public/documents/sf/5.0/aix/html/sf_rac_install/sfrac_ora9i_add_rem_nd8.html

        Although it was prepared for an old version (5.0, for a RAC cluster on AIX), the steps involved are the same.

  • First things first: is vxconfigd in CVM (cluster) mode? To find out, run the command below:

    vxdctl -c mode

    Most likely vxconfigd is up and running in enabled mode, but its cluster state is inactive.

    Please consult this technote for a bit more detail on vxconfigd cluster mode: https://www.veritas.com/support/en_US/article.100000548
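
    To illustrate the distinction, a sketch (exact output wording varies by release, and "cvm" is the default name of the CVM service group in SFCFS):

    ```shell
    # Local daemon state -- this can be "enabled" even when CVM is down
    vxdctl mode

    # Cluster state -- reports "cluster inactive" until the node joins CVM,
    # then "cluster active - MASTER" or "cluster active - SLAVE"
    vxdctl -c mode

    # If inactive, bring the CVM service group online through VCS
    hagrp -online cvm -sys <nodename>
    ```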

    Please do not hesitate to share any progress made or any issues encountered.