Failed to mount the service group

Home_224
Level 6

Dear All,

I ran into a problem when starting VCS normally: the CVM service group could not start. I gave up on letting VCS control it and tried to mount the file system manually, but it returned the errors below.

UX:vxfs mount: ERROR: V-3-20003: Cannot open /dev/vx/dsk/iuat_app_dg_01/iuat_app_vol_01: No such device or address
UX:vxfs mount: ERROR: V-3-24996: Unable to get disk layout version 

I checked vxdg list; the disk group is already imported on the master node, but I have no idea how to mount it manually.

 

9 REPLIES

Home_224
Level 6

hares -state
#Resource Attribute System Value
cfsmount1 State devuaeapp31 OFFLINE
cfsmount1 State devuaeapp32 OFFLINE
cfsmount2 State devuaeapp31 OFFLINE
cfsmount2 State devuaeapp32 OFFLINE
cvm_clus State devuaeapp31 ONLINE
cvm_clus State devuaeapp32 OFFLINE
cvm_vxconfigd State devuaeapp31 ONLINE
cvm_vxconfigd State devuaeapp32 OFFLINE
cvmvoldg1 State devuaeapp31 FAULTED
cvmvoldg1 State devuaeapp32 OFFLINE
cvmvoldg2 State devuaeapp31 FAULTED
cvmvoldg2 State devuaeapp32 OFFLINE
qlogckd State devuaeapp31 ONLINE
qlogckd State devuaeapp32 OFFLINE
vxfsckd State devuaeapp31 ONLINE
vxfsckd State devuaeapp32 OFFLINE

frankgfan
Moderator

Most CVM-related issues are "shared devices" related.

The resource states below

cvm_clus State devuaeapp32 OFFLINE
cvm_vxconfigd State devuaeapp32 OFFLINE
cvmvoldg1 State devuaeapp31 FAULTED
cvmvoldg1 State devuaeapp32 OFFLINE
cvmvoldg2 State devuaeapp31 FAULTED

show that some CVM-related resources are not online on the cluster nodes.

You should check engine_A.log for CVM start-up errors and take action to rectify the problem (if you do not know how to troubleshoot the issue, a cluster reboot is the way to go, if possible).

Running the commands below should give you a bit more information about the issue and the state the CVM cluster is in:

hastatus -sum

gabconfig -a

vxdctl -c mode

vxclustadm -v nodestate

 

PS - make sure that all nodes in the cluster have the same access (r/w) to all the shared storage.
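If it helps, a quick sketch like the one below gathers all of that output into a single file per node so it can be reviewed or posted back here (the output file path is only a placeholder):

# collect the basic cluster/CVM state into one file for review
(
  hastatus -sum
  gabconfig -a
  vxdctl -c mode
  vxclustadm -v nodestate
) > /tmp/cvm_state_$(hostname).txt 2>&1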

frankgfan
Moderator

In addition to the commands mentioned yesterday, you can also run the commands below to troubleshoot the issue further:

On both nodes, run

vxdisk -o alldgs list | grep -i shared | wc -l

// the counts returned should be the same on both nodes

On the node where CVM is not online (vxconfigd is not in cluster-enabled mode), run

vxclustadm -m vcs -t gab startnode

This command kicks vxconfigd on that node into cluster-enabled mode; however, it is likely to fail. From the error returned, you should be able to identify the issue and troubleshoot it further.
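As a quick sanity check across both nodes, a loop like the sketch below compares the shared-disk counts in one go (this assumes passwordless ssh between the nodes; the node names are taken from the hares output above):

# count the shared disks visible on each node - the two numbers should match
for node in devuaeapp31 devuaeapp32; do
  printf "%s: " "$node"
  ssh "$node" "vxdisk -o alldgs list | grep -i shared | wc -l"
done

If the counts differ, the node with the lower count is missing some of the shared LUNs (zoning/masking), which would explain why CVM cannot join on that node.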

 

@frankgfan 

Thank you for your reply!

Both nodes use CVM and CFS mounts; node 31 is the master and node 32 is the slave. Node 32 cannot mount because of a zoning issue, so I am skipping it for now and focusing on node 31 to start up the service group.

 

frankgfan
Moderator

Since node 32 has the zoning issue and you only want to start up the service groups on node 31, you can run the commands below on node 31:

1. vxdg -fC import <dg_name>              <<< this command imports the dg as a local dg

2. vxvol -g <dg_name> startall

3. run the mount command to mount the volumes in the dg just imported

4. start up the applications

Please be advised that you must not attempt to mount any volumes on node 32: the disk group imported by the vxdg command shown in step 1 is now a local (non-CVM/CFS) disk group, so manually mounting the file systems of its volumes on another node will result in data corruption - BE VERY CAREFUL.
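For the dg and volume named in the original error, the sequence would look roughly like the sketch below. The mount point /iuat_app_01 is only a placeholder (use whatever mount point the cfsmount resource normally uses), and on Linux the mount syntax is mount -t vxfs rather than -F vxfs:

# on node 31 only - import the dg as a local dg and start its volumes
vxdg -fC import iuat_app_dg_01
vxvol -g iuat_app_dg_01 startall

# mount the VxFS file system (Solaris/HP-UX style; mount point is a placeholder)
mount -F vxfs /dev/vx/dsk/iuat_app_dg_01/iuat_app_vol_01 /iuat_app_01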

 

@frankgfan 

Thank you !

I can mount the file system manually right now.

But I still have no idea why VCS is not starting up the service group.

frankgfan
Moderator

Check engine_A.log for the service group/resource start-up errors, fix any issues identified, and then start the service group again (a maintenance window is needed, as you will have to offline the applications and deport the dgs you imported manually).
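Roughly, handing the dg back to VCS during that window would look like the sketch below (the mount point and service group name are placeholders; substitute your own):

# on node 31, stop the applications first, then release the manually imported dg
umount /iuat_app_01                      # placeholder mount point
vxvol -g iuat_app_dg_01 stopall
vxdg deport iuat_app_dg_01

# then let VCS bring the CVM/CFS service group back online
hagrp -online <service_group_name> -sys devuaeapp31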

@frankgfan   

Thank you .  

I checked the log and found nothing abnormal, so I rebooted the server. Then it auto-resumed.

frankgfan
Moderator

Veritas software is developed in such a way that all abnormal behaviour is logged.

 

for example, the resource issue below:

cvmvoldg1 State devuaeapp31 FAULTED

Shared dg import failures, and hence the file system mount errors, are most often storage or registration (reg) key related.

A system reboot is the easiest way to troubleshoot and rectify the problem.