After upgrade cvmvoldg fails for all diskgroups

AlexBP
Level 2
Hi, after upgrading my SFHA 6.1 cluster from RHEL 6.5 to RHEL 6.7 and SFHA 6.1.0.200 to 6.1.1-300, CVM is online, but the CVMVolDg resources fail to start for all disk groups (the installer ended with a "...successful..." message).

Information in /var/log/messages:

CVMVolDg:cvmvoldg9:online:Diskgroup 0 is not imported from CVM Master. Failing online.

or on the master:

CVMVolDg:cvmvoldg9:online:could not find diskgroup 0 imported. If it was previously deported, it will have to be manually imported

Starting the agent in debug mode reveals the following:

(0) entering main
(1) entering cvmvoldg_make_vollist
(1) exiting cvmvoldg_make_vollist 0
(1) entering cvmvoldg_make_voliolist
(1) exiting cvmvoldg_make_voliolist 0
(1) debug :: monitor: activation requested is sw
(1) debug :: monitor: nvols is 1
(1) debug :: monitor: dg is 0
(1) debug :: monitor: resource name is cvmvoldg9
(1) debug :: monitor: volume list is < vlv_some >
(1) debug :: monitor: volume io list is < >
(1) entering cvmvoldg_monitor_setup
(2) entering cvmvoldg_res_online
(2) exiting cvmvoldg_res_online
(1) exiting cvmvoldg_monitor_setup
(0) exiting main 100

In a properly working cluster, the line

(1) debug :: monitor: dg is 0

should look more or less like this:

(1) debug :: monitor: dg is dgdisk_1

Digging a bit deeper into the startup, I can see there is a return value of 3 from the function VCSAG_GET_ATTR_INDEX, which seems to be a VCSAG_ATTR_NOTFOUND exception:

+ VCSAG_GET_ATTR_INDEX CVMDiskGroup -1 CVMActivation 1 sw CVMVolume 1 vlv_bgh CVMVolumeIoTest 0 CVMDGAction 1 CVMDeportOnOffline 1 0 CVMDeactivateOnOffline 1 0 State 1 0 ClearClone 1 0 OpenStatus 1 0
+ VCSAG_RETVAL=3
+ [ 3 -ne 0 ]
+ return 3

That may be why there is no disk group name available, so dg = 0 ...
The vxprint output seems to be normal for that disk group:

Disk group: dg_somename

TY NAME                 ASSOC            KSTATE  LENGTH     PLOFFS     STATE  TUTIL0 PUTIL0
dg dg_some              dg_some          -       -          -          -      -      -
dm emc_clariion0_47     emc_clariion0_47 -       1048469696 -          -      -      -
dm emc_clariion0_48     emc_clariion0_48 -       1048469696 -          -      -      -
v  vlv_some             fsgen            ENABLED 2096939008 -          ACTIVE -      -
pl vlv_some-01          vlv_some         ENABLED 2096939008 -          ACTIVE -      -
sd emc_clariion0_47-01  vlv_some-01      ENABLED 1048469696 0          -      -      -
sd emc_clariion0_48-01  vlv_some-01      ENABLED 1048469312 1048469696 -      -      -

Any help or hint on what may cause this strange behavior would be appreciated. Thanks.
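(For anyone debugging something similar: the agent's view of the type and of the failing resource can be checked with the standard VCS commands below. cvmvoldg9 is the resource name from the log above; the output will of course depend on your configuration.)

```shell
# Show the ArgList the running cluster has registered for the CVMVolDg type.
# CVMDiskGroup and CVMVolume must appear here for the agent to find them.
hatype -display CVMVolDg -attribute ArgList

# Show the disk group name configured on the failing resource.
hares -display cvmvoldg9 -attribute CVMDiskGroup

# Confirm the disk group is actually imported on this node.
vxdg list
```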
1 ACCEPTED SOLUTION


AlexBP
Level 2

Strange: after the upgrade, the file /etc/VRTSvcs/conf/config/CVMTypes.cf was inconsistent. The entries for CVMDiskGroup and CVMVolume were missing from the ArgList[] attribute of the CVMVolDg type.

After adding both, everything starts up and the filesystems are mounted...
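In case it helps anyone else, here is a sketch of what the repaired CVMVolDg entry in CVMTypes.cf should roughly look like. The ArgList order below is taken from the attribute names visible in the VCSAG_GET_ATTR_INDEX trace in my first post; the attribute declarations and defaults are illustrative, so compare against a stock 6.1.1 CVMTypes.cf rather than copying this verbatim:

```
type CVMVolDg (
        // ArgList must carry CVMDiskGroup and CVMVolume; otherwise the
        // agent's VCSAG_GET_ATTR_INDEX lookup fails with 3 (ATTR_NOTFOUND)
        static str ArgList[] = { CVMDiskGroup, CVMActivation, CVMVolume,
                                 CVMVolumeIoTest, CVMDGAction,
                                 CVMDeportOnOffline, CVMDeactivateOnOffline,
                                 State, ClearClone, OpenStatus }
        str CVMDiskGroup
        str CVMActivation = sw
        keylist CVMVolume
        keylist CVMVolumeIoTest
        str CVMDGAction
        int CVMDeportOnOffline = 0
        int CVMDeactivateOnOffline = 0
        int ClearClone = 0
)
```

After editing, the syntax can be checked with hacf -verify /etc/VRTSvcs/conf/config before restarting VCS.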

 
