Forum Discussion

Sathish_Poojary · 18 years ago

VCS linux issue with LVM Disk Group

Hi

I have set up a Linux failover cluster with LVMVolumeGroup and LVMLogicalVolume resources. The setup works fine for some time, but then the volume group goes offline on its own, and vgdisplay reports that it is exported. A second issue: once the VG resource goes offline, the LVMLogicalVolume resource stays ONLINE and there is no way to take it offline. I have seen this with both SAN and iSCSI setups, using a NetApp filer for storage.
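
For reference, this is roughly what I check on each node when the DG drops (standard LVM2 commands; the VG name matches the config below):

    # Full VG status; an exported VG shows "VG Status  exported/resizable"
    vgdisplay vcs_lun_SdDg

    # Compact view; the third character of the Attr field is 'x'
    # when the VG is exported
    vgs -o vg_name,vg_attr

    # Check whether the logical volume is still active on this node
    lvs -o lv_name,lv_attr vcs_lun_SdDg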

Here are the main.cf entries for the volume group, logical volume, and mount resources:

        LVMLogicalVolume dfm_vol (
                LogicalVolume = vcs_lun_SdHv
                VolumeGroup = vcs_lun_SdDg
                )

        LVMVolumeGroup dfm_dg (
                VolumeGroup = vcs_lun_SdDg
                )

        Mount data_mount (
                MountPoint = "/vcs_lun"
                BlockDevice = "/dev/mapper/vcs_lun_SdDg-vcs_lun_SdHv"
                FSType = ext3
                FsckOpt = "-y"
                )
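
For the record, these resources are linked in the usual mount -> volume -> volume group order. The sketch below shows that dependency stack; the group name dfm_sg comes from the log in the replies, but the system names are placeholders, not copied from my main.cf:

        group dfm_sg (
                SystemList = { nodeA = 0, nodeB = 1 }
                AutoStartList = { nodeA }
                )

                // resource definitions as shown above ...

                data_mount requires dfm_vol
                dfm_vol requires dfm_dg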

Is there any special configuration to avoid this problem?

3 Replies

  • /var/log/messages entries from around the time the DG went offline:

    Dec 18 21:04:06 scissors Had[9266]: VCS ERROR V-16-1-10214 Concurrency Violation:CurrentCount increased above 1 for failover group dfm_sg
    Dec 18 21:04:40 scissors AgentFramework[9281]: VCS ERROR V-16-1-13067 Thread(3073498032) Agent is calling clean for resource(dfm_dg) because the resource became OFFLINE unexpectedly, on its own.
    Dec 18 21:04:40 scissors Had[9266]: VCS ERROR V-16-1-13067 (scissors) Agent is calling clean for resource(dfm_dg) because the resource became OFFLINE unexpectedly, on its own.
    Dec 18 21:04:41 scissors AgentFramework[9281]: VCS ERROR V-16-1-13068 Thread(3073498032) Resource(dfm_dg) - clean completed successfully.
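
    The concurrency violation line means VCS saw the failover group online on more than one node at the same time. A quick way to confirm this from either node (standard VCS CLI, nothing specific to this setup):

        # Cluster-wide summary of group and resource states
        hastatus -sum

        # State of the failover group on every system
        hagrp -state dfm_sg

        # State of the VG resource itself
        hares -state dfm_dg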


  • It worked after removing the LVMVolumeGroup resource entry from main.cf:

            LVMVolumeGroup dfm_dg (
                    VolumeGroup = vcs_lun_SdDg
                    )

    For some reason vgdisplay shows the VG as available on both nodes. I guess this is because the LUN is never unmapped from the passive host, and I couldn't find any bundled agent that does this LUN unmap.
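
    One way to keep the passive node from auto-activating the shared VG at boot is plain LVM2 filtering rather than a VCS feature. This is only a sketch; it assumes a reasonably recent LVM2 that supports auto_activation_volume_list, and that the local root VG is named rootvg (adjust for your hosts). Edit /etc/lvm/lvm.conf on both nodes:

        activation {
                # Only VGs listed here are auto-activated at boot
                # (vgchange -aay); leaving vcs_lun_SdDg off the list
                # means only an explicit activation (vgchange -ay,
                # as the cluster agent does) brings it up.
                auto_activation_volume_list = [ "rootvg" ]
        }

        # Manual cleanup on the passive node if the VG is already active there:
        vgchange -a n vcs_lun_SdDg
        vgexport vcs_lun_SdDg

    Note that on some distros the initramfs embeds lvm.conf, so it may need to be rebuilt for the filter to take effect at boot.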