
VCS status shows STARTING

Home_224
Level 6

Dear All,

The cluster is active/active. I tried to start up VCS with hastart, but the status shows STARTING. Checking the log, I found the issue below:

2020/12/24 15:19:18 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp41
2020/12/24 15:19:18 VCS NOTICE V-16-1-10460 Clearing start attribute for resource cfsmount1 of group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:18 VCS NOTICE V-16-1-10460 Clearing start attribute for resource cvmvoldg1 of group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:18 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:18 VCS WARNING V-16-1-10294 Faulted resource cvmvoldg1 is part of the online dependency;clear the faulted resource
2020/12/24 15:19:30 VCS INFO V-16-1-50135 User root fired command: hagrp -online vrts_vea_cfs_int_cfsmount1 devuaeapp32 from localhost
2020/12/24 15:19:30 VCS NOTICE V-16-1-10166 Initiating manual online of group vrts_vea_cfs_int_cfsmount1 on system oemapp42
2020/12/24 15:19:30 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp41
2020/12/24 15:19:30 VCS NOTICE V-16-1-10232 Clearing Restart attribute for group vrts_vea_cfs_int_cfsmount1 on node oemapp42
2020/12/24 15:19:30 VCS WARNING V-16-1-10294 Faulted resource cvmvoldg1 is part of the online dependency;clear the faulted resource

I tried to stop and start VCS, and even rebooted the server, but the status stayed the same as below. I then stopped VCS with hastop and tried to mount the file system manually, without VCS control, but it returned the error "unable to get disk layout version". I can see the disk group and its disks on the node. I really don't know what happened. May I know if there is any solution to fix the problem? Thank you.
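For reference, a quick way to confirm what VxVM can actually see on the node (a rough sketch; mydg is a placeholder for the real dg name):

```shell
# Hedged sketch: confirm the disks and disk group are visible to VxVM.
vxdisk -o alldgs list    # list all disks and the dgs they belong to
vxdg list                # list disk groups currently imported on this node
vxprint -g mydg -ht      # show volume/plex/subdisk state (mydg is a placeholder)
```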

1 ACCEPTED SOLUTION

frankgfan
Level 6
   VIP   

Here is what you can do to determine the root cause of the issue.

1. Stop VCS, then manually import the dg as a local dg by running:

vxdg -Cf import <dg_name>   (run the command on one node only)

2. Start up the volumes by running:

vxvol -g <dg_name> startall 

3. Manually mount the file system in question with the mount command:

mount -v /dev/vx/dsk/dg/vol /mnt_point     (https://sort.veritas.com/public/documents/vie/7.1/linux/productguides/html/sfcfs_admin/ch09s07.htm)

4. If the mount command fails, run the command below:

fstyp -v /dev/vx/dsk/dg/vol | head -50   (post the output here)

5. If the output of step 4 looks OK but the file system still cannot be mounted, create a new temporary mount point and mount the file system there.

 

PS - make sure the disk layout version of the file systems on the volumes is supported by the Veritas version running on the cluster (that is, the storage was not newly allocated to this cluster from another system running a newer version of Veritas).

For the disk layout version and Storage Foundation version support matrix, please search the Veritas online knowledge base.
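The steps above can be collected into one rough command sequence (a sketch, not verified on this cluster; mydg, vol01, and the mount points are placeholders):

```shell
# Hedged sketch of steps 1-5 above; run on one node only, with VCS stopped.
hastop -local                                 # stop VCS on this node
vxdg -Cf import mydg                          # 1. force-import the dg locally
vxvol -g mydg startall                        # 2. start all volumes in the dg
mount -v /dev/vx/dsk/mydg/vol01 /mnt_point    # 3. mount the VxFS file system
fstyp -v /dev/vx/dsk/mydg/vol01 | head -50    # 4. if the mount failed, inspect the superblock
mkdir -p /mnt_tmp                             # 5. if the superblock looks OK,
mount -v /dev/vx/dsk/mydg/vol01 /mnt_tmp      #    try a fresh mount point
```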

 


7 REPLIES

CliffordB
Level 4
Employee
In general, for anything VCS, run all commands manually first. If they don't work at the command line, don't bother adding them to the VCS config.

In this case, import the disks at the command line.

It looks like you are trying to mount a CFS disk group. Using the commands from the Admin Guide for Cluster File System, manually start each of the CFS daemons on each node, then import the disk group on one server. That will help isolate any problems.
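A rough sketch of the checks described above, using the CFS administration commands from the SFCFS admin guide (exact command availability depends on the installed Storage Foundation version; /mount_point is a placeholder):

```shell
# Hedged sketch: verify CVM/CFS are up before importing the shared dg.
vxdctl -c mode         # confirm CVM cluster mode and which node is master
cfscluster status      # per-node CFS/CVM status
cfsdgadm display       # shared disk groups under CFS control
cfsmount /mount_point  # attempt the CFS mount once the daemons are running
```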

Cheers




_______________________________
My customers spend the weekends at home with family, not in the datacenter.

Also, see this topic for commands for testing your configuration:



https://vox.veritas.com/t5/Cluster-Server/CFSMount-Error/td-p/335296


@CliffordB 

I checked and the disk group is already on the server, but I have no idea why VCS cannot start up on the node; it stays in STARTING.

 

root@devuaeapp31 # hastatus
attempting to connect....connected

group resource system message
--------------- -------------------- -------------------- --------------------
devuaeapp31 RUNNING
cvm devuaeapp31 ONLINE
iuat_app_sg_02 devuaeapp31 STARTING OFFLINE
vrts_vea_cfs_int_cfsmount1 devuaeapp31 OFFLINE
-------------------------------------------------------------------------
qlogckd devuaeapp31 ONLINE
vxfsckd devuaeapp31 ONLINE
cvm_clus devuaeapp31 ONLINE
cvm_vxconfigd devuaeapp31 ONLINE
cfsmount2 devuaeapp31 OFFLINE
-------------------------------------------------------------------------
cfsmount2 devuaeapp31 WAITING FOR CHILDREN ONLINE
cvmvoldg2 devuaeapp31 *FAULTED*
cfsmount1 devuaeapp31 OFFLINE
cvmvoldg1 devuaeapp31 *FAULTED*

 

2020/12/25 20:04:51 VCS INFO V-16-2-13071 (devuaeapp31) Resource(cvmvoldg1): reached OnlineRetryLimit(0).
2020/12/25 20:04:51 VCS INFO V-16-2-13071 (devuaeapp31) Resource(cvmvoldg2): reached OnlineRetryLimit(0).
2020/12/25 20:04:52 VCS ERROR V-16-1-10303 Resource cvmvoldg1 (Owner: unknown, Group: vrts_vea_cfs_int_cfsmount1) is FAULTED (timed out) on sys devuaeapp31
2020/12/25 20:04:52 VCS NOTICE V-16-1-10446 Group vrts_vea_cfs_int_cfsmount1 is offline on system devuaeapp31
2020/12/25 20:04:52 VCS ERROR V-16-1-10303 Resource cvmvoldg2 (Owner: unknown, Group: iuat_app_sg_02) is FAULTED (timed out) on sys devuaeapp31
2020/12/25 20:04:52 VCS NOTICE V-16-1-10446 Group iuat_app_sg_02 is offline on system devuaeapp31

I have no idea how to fix it.
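If it helps, faulted resources usually need to be cleared before VCS will retry bringing them online — a hedged sketch using the resource and system names from the hastatus output above:

```shell
# Hedged sketch: clear the FAULTED CVMVolDg resources, then retry the groups.
hares -clear cvmvoldg1 -sys devuaeapp31
hares -clear cvmvoldg2 -sys devuaeapp31
hagrp -clear vrts_vea_cfs_int_cfsmount1 -sys devuaeapp31
hagrp -online vrts_vea_cfs_int_cfsmount1 -sys devuaeapp31
```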


I manually mounted the file system for the application team to use.

 

Thank you