06-05-2020 04:27 PM
Hi, I'm pretty new to this so apologies if this is a dumb question.
I'm running RHEL 6.10 with Symantec Storage Foundation Cluster File System HA 6.2 installed on 4 servers, and SAN storage has been presented to them. My goal is to mount a shared CFS filesystem called /infa on all of them.
I can see the disks but am not able to mount them yet. The output of vxdisk list is:
DEVICE       TYPE          DISK        GROUP     STATUS
disk_0       auto:LVM      -           -         online invalid
hus_1500_10  auto:cdsdisk  infa01dg01  infa01dg  online thinrclm
hus_1500_11  auto:cdsdisk  infa01dg02  infa01dg  online thinrclm
hus_1500_12  auto:cdsdisk  infa01dg03  infa01dg  online thinrclm
hus_1500_13  auto:cdsdisk  infa01dg04  infa01dg  online thinrclm
hus_1500_14  auto:cdsdisk  infa01dg05  infa01dg  online thinrclm
hus_1500_15  auto:cdsdisk  infa01dg06  infa01dg  online thinrclm
hus_1500_16  auto:cdsdisk  infa01dg07  infa01dg  online thinrclm
hus_1500_17  auto:cdsdisk  infa01dg08  infa01dg  online thinrclm
hus_1500_20  auto:none     -           -         online invalid thinrclm
hus_1500_21  auto:none     -           -         online invalid thinrclm
hus_1500_22  auto:none     -           -         online invalid thinrclm
I believe the STATUS for these disks should include "shared" rather than just "thinrclm". How can I do that? If I try running:
vxdg deport infa01dg
vxdg -s import infa01dg
it says:
VxVM vxdg ERROR V-5-1-10978 Disk group infa01dg: import failed: Operation must be executed on master
I've tried it on all 4 nodes.
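For context, a shared (`-s`) import has to run on the CVM master node, so a common first check is to find out which node (if any) is the master. A minimal sketch, assuming standard VxVM command locations:

```shell
# A shared disk group import must be run on the CVM master.
# If vxconfigd reports "cluster inactive", CVM is not running, no node
# is master, and the import will fail on every node with this error.
vxdctl -c mode          # look for "cluster active - MASTER" or "SLAVE"

# On the node reported as MASTER:
vxdg deport infa01dg
vxdg -s import infa01dg
vxdg list infa01dg      # the flags line should now include "shared"
```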
06-08-2020 04:56 AM
First things first: is vxconfigd in CVM (cluster) mode? To find out, run the command below:
vxdctl -c mode
Most likely vxconfigd is up and running in enabled mode; however, its cluster state is inactive.
Please consult this technote for a bit more detail on vxconfigd cluster mode: https://www.veritas.com/support/en_US/article.100000548
Please do not hesitate to share any progress made or any issues encountered.
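Alongside `vxdctl -c mode`, a few other standard commands help confirm what state the cluster stack is in. A minimal sketch (standard VCS/VxVM tools; output will of course vary by cluster):

```shell
# Quick health checks for the SFCFS stack, bottom to top:
gabconfig -a      # GAB port membership; CVM/CFS register their own ports
vxdctl -c mode    # CVM mode of vxconfigd (enabled / cluster active / inactive)
hastatus -sum     # VCS summary of service groups and resources on each node
```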
06-08-2020 08:23 AM
This cluster actually used to run Solaris 10, using different disks on the SAN. Since then I've put RHEL on the cluster nodes and am trying to use new disks. Aside from installing and configuring the software, I've run vxdiskadm to try to set up the disks. The command cfscluster status hangs indefinitely.
06-08-2020 08:24 AM
It shows mode: enabled: cluster inactive
06-08-2020 08:27 AM
Also:
$ vxlicrep -e | grep CVM
CVM_LITE_OPS = Disabled
CVM_FULL = Enabled
CVM_LITE = Disabled
CVM_FULL#VERITAS Volume Manager = Enabled
06-09-2020 04:56 AM
Your license is OK for CVM.
From what you said, you need to reconfigure CVM/CFS.
You may want to download this admin guide at https://origin-download.veritas.com/resources/content/live/DOCUMENTATION/7000/DOC7818/en_US/sfcfs_ad...
Read and follow Chapter 15, titled "Administering Storage Foundation Cluster File System High Availability and its components", in particular the sections "Administering CFS" and "Administering CVM".
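The chapter referenced above walks through the cfs* administration commands. As a rough sketch of the sequence it describes (the disk group, volume, and mount point names here are taken from this thread; consult the guide for exact options on your version):

```shell
# Configure CVM/CFS in VCS, then bring the shared filesystem under it:
cfscluster config                          # set up the cvm service group
cfscluster status                          # verify nodes show as running

cfsdgadm add infa01dg all=sw               # make the disk group a shared CFS dg
cfsmntadm add infa01dg infavol /infa all=rw  # register the cluster mount
cfsmount /infa                             # mount on all configured nodes
```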
06-09-2020 11:49 PM
Since you mentioned that "This cluster actually used to be running Solaris 10, using different disks on the SAN", it's not clear to me whether this cluster was a failover cluster configuration or a parallel (CVM/CFS) configuration. If it was a parallel cluster configuration and you saved a copy of the VxExplorer output, you should be able to review the saved VCS configuration (main.cf) as well as some VxVM output. You can then use that information as a "template" to rebuild the cluster with CVM/CFS. If it was a failover cluster, you need to follow the admin guide linked above to build up CVM/CFS.
Here is another technote for your reference: https://sort.veritas.com/public/documents/sf/5.0/aix/html/sf_rac_install/sfrac_ora9i_add_rem_nd8.htm...
Although it was prepared for an old version (5.0, for a RAC cluster) on AIX, the steps involved are the same.
06-10-2020 08:44 AM
Hello, it was a parallel cluster. I do have a copy of main.cf from its prior state.
I'll check that link out. Thanks!
06-10-2020 09:47 PM
It should not be too difficult for you to re-set up CVM/CFS then.
First, make sure the new SAN storage is SCSI-3 PR compliant, which most decent arrays deployed these days are. Also make sure all the nodes in the cluster have the same read/write permission to the shared storage.
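Veritas ships a utility for checking SCSI-3 PR support on a LUN. A hedged sketch (the path below is the usual install location; check your installation, and note the default mode of this tool overwrites data on the disk under test):

```shell
# Test a disk for SCSI-3 Persistent Reservation support.
# WARNING: vxfentsthdw can be data-destructive; -r requests the
# non-destructive read-only test. Run it only against a disk you can spare.
/opt/VRTSvcs/vxfen/bin/vxfentsthdw -r
```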
Second, fix main.cf by putting back at least the cvm_clus resource so that you can try kicking vxconfigd into cluster mode by running the command below on one of the nodes:
vxclustadm -m vcs -t gab startnode
If the command runs successfully, that node will become the current CVM master (check with vxdctl -c mode).
Once CVM is up, fixing CFS is easy.
Good luck
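Once the first node is up as master, the remaining nodes join the same way. A minimal sketch of the join-and-verify loop:

```shell
# On each remaining node, after the first node has started CVM:
vxclustadm -m vcs -t gab startnode

# Verify membership and roles:
vxclustadm nidmap    # node-ID map of cluster members
vxdctl -c mode       # should report "cluster active - SLAVE" on joiners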
06-17-2020 11:52 AM
It feels like I'm getting closer but currently have 2 resources (vxfen and CFSMount) which are faulted, and I can't clear the faults. I was (finally) able to get vxclustadm -m vcs -t gab startnode to work (if I run it again it says Node already initialized.) The output of vxdctl -c mode is now:
mode: enabled: cluster active - MASTER
master: infaapp01-n1
My main.cf looks like:
include "OracleASMTypes.cf"
include "types.cf"
include "CFSTypes.cf"
include "CVMTypes.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"

cluster infaapp01 (
    UserNames = { admin = <censored> }
    ClusterAddress = "10.1.8.247"
    Administrators = { admin }
    UseFence = SCSI3
)

system infaapp01-n1 (
)

system infaapp01-n2 (
)

system infaapp01-n3 (
)

system infaapp01-n4 (
)

group ClusterService (
    SystemList = { infaapp01-n1 = 0, infaapp01-n2 = 1, infaapp01-n3 = 2, infaapp01-n4 = 3 }
    AutoStartList = { infaapp01-n1, infaapp01-n2, infaapp01-n3, infaapp01-n4 }
    OnlineRetryLimit = 3
    OnlineRetryInterval = 120
)

IP webip (
    Device = eth0
    Address = "10.1.8.247"
    NetMask = "255.255.192.0"
)

NIC csgnic (
    Device = eth0
)

NotifierMngr ntfr (
    SmtpServer = "<censored>"
    SmtpRecipients = { "<censored>" = Error }
)

ntfr requires csgnic
webip requires csgnic

// resource dependency tree
//
// group ClusterService
// {
// NotifierMngr ntfr
//     {
//     NIC csgnic
//     }
// IP webip
//     {
//     NIC csgnic
//     }
// }

group cvm (
    SystemList = { infaapp01-n4 = 0, infaapp01-n3 = 1, infaapp01-n2 = 2, infaapp01-n1 = 3 }
    Parallel = 1
    AutoStartList = { infaapp01-n1, infaapp01-n2, infaapp01-n3, infaapp01-n4 }
)

CFSMount cfsmount1 (
    Critical = 0
    ResourceRecipients = { "<censored>" = Warning }
    MountPoint = "/infa"
    MountType = "cluster"
    BlockDevice = "/dev/vx/dsk/infa01dg/infavol"
    SetPrimary = infaapp01-n1
)

CFSfsckd vxfsckd (
)

CVMCluster cvm_clus (
    ResContainerInfo = { Type = Name, Enabled = "" }
    Critical = 0
    CVMTransport = gab
    CVMClustName = infaapp01
    CVMTimeout = 200
    CVMNodeId = { infaapp01-n1 = 0, infaapp01-n2 = 1, infaapp01-n3 = 2, infaapp01-n4 = 3 }
)

// resource dependency tree
//
// group cvm
// {
// CFSMount cfsmount1
// CVMCluster cvm_clus
// CFSfsckd vxfsckd
// }

group vxfen (
    SystemList = { infaapp01-n1 = 0, infaapp01-n2 = 1, infaapp01-n3 = 2, infaapp01-n4 = 3 }
    AutoFailOver = 0
    Parallel = 1
)

CoordPoint coordpoint (
    LevelTwoMonitorFreq = 10
)

Phantom RES_phantom_vxfen (
)

// resource dependency tree
//
// group vxfen
// {
// Phantom RES_phantom_vxfen
// CoordPoint coordpoint
// }
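Whenever main.cf has been hand-edited like this, it is worth validating the syntax and then retrying the faulted resources. A minimal sketch using standard VCS commands (the resource and node names match those in the main.cf above):

```shell
# Validate main.cf syntax in the standard VCS config directory:
hacf -verify /etc/VRTSvcs/conf/config

# Clear the faulted resources and retry them:
hares -clear cfsmount1
hagrp -clear vxfen
hagrp -online cvm -sys infaapp01-n1
```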
Thanks for any advice...
06-17-2020 02:10 PM
I did find a clue in the logs:
2020/06/17 09:27:48 VCS WARNING V-16-20011-5508 CFSMount:cfsmount1:online:Mount Error : UX:vxfs mount.vxfs: ERROR: V-3-20012: not a valid vxfs file system
UX:vxfs mount.vxfs: ERROR: V-3-24996: Unable to get disk layout version
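That error usually means the volume exists but no VxFS file system has ever been created on it, so the mount agent finds nothing to mount. A hedged sketch of the likely fix, using the volume name from the main.cf above (this destroys any data on the volume):

```shell
# Create a VxFS file system on the raw shared volume (run once, on one node):
mkfs -t vxfs /dev/vx/rdsk/infa01dg/infavol

# Then clear the faulted mount resource and bring the group online:
hares -clear cfsmount1
hagrp -online cvm -sys infaapp01-n1
```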
06-17-2020 06:24 PM
I called tech support and was able to get it worked out. Thanks for the input, everyone!