Forum Discussion

gmathew
Level 3
15 years ago

Cannot remove last disk group configuration copy


Hi,

I have a disk group with 6 EMC SAN disks in it. I got 6 new SAN LUNs, added them to the same disk group, and mirrored the volume. This host is running CentOS 4.8.

After mirroring I removed the old plex. When I try to remove the last disk from the old SAN array using "vxdg -g dg rmdisk <disk_name>", it throws the error below.

# vxdg -g dg01 rmdisk disk06
VxVM vxdg ERROR V-5-1-10127 disassociating disk-media disk06:
        Cannot remove last disk group configuration copy

I would like to remove this last disk from the Disk group, because the volume is running on the new disks. How can I remove this disk from the disk group?
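For context, the sequence described above corresponds roughly to the following (a sketch; the volume name vol01, plex name vol01-01, and device names are illustrative, not taken from the actual system):

```
# vxdg -g dg01 adddisk disk07=emcpowerX      (repeat for each new LUN)
# vxassist -g dg01 mirror vol01 disk07 disk08 disk09 disk10 disk11 disk12
# vxplex -g dg01 -o rm dis vol01-01          (disassociate and remove the old plex)
# vxdg -g dg01 rmdisk disk06                 (fails with the error shown above)
```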

Thanks for the help in advance.


7 Replies

Replies have been turned off for this discussion
  • Hello,
    I think you want to remove the last disk from one type of storage without destroying the disk group, because the volumes are on the other storage.
    In that case you can temporarily increase the number of configuration copies in the disk group.

    Note the current number of copies (the nconfig attribute):
    vxdg list yourdg

    Increase this number:
    vxedit set nconfig=5 yourdg

    Remove the disk from the dg:
    vxdg -g yourdg rmdisk yourdisk

    Set the number of copies back to the default:
    vxedit set nconfig=default yourdg

    Regards,
    Herve
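    Put together as a single transcript (yourdg and yourdisk are placeholders; the nconfig value would depend on how many disks should carry copies):

    ```
    # vxdg list yourdg | grep nconfig
    # vxedit set nconfig=5 yourdg
    # vxdg list yourdg                  (check that copies now exist on the new disks)
    # vxdg -g yourdg rmdisk yourdisk
    # vxedit set nconfig=default yourdg
    ```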


  • If you are trying to remove the last remaining disk from the disk group, then you are effectively trying to delete the disk group; if that is the goal, you should instead use the command "vxdg destroy dg01".

    Also paste the output of the following command, if deletion of the DG is not your purpose:

    vxdisk -o alldgs -e list
    vxprint -qhtrg dg01


    Regards,
    Dev
    • aaru
      Level 0

      Yes, correct. I hit the same issue and it is resolved now. Thanks!

       

  • As Herve mentioned, changing the number of configuration copies may help

    If you have 5.0 or above the following technote may also assist
    TN 303352: How to force VM to keep configuration database on a specific disk
    http://seer.entsupport.symantec.com/docs/303352.htm

    although ... in an earlier post you mentioned this was 4.1MP3, so perhaps not. If the keepmeta attribute is not available in 4.1, then increasing the number of config copies is probably the best bet (to put copies on the new disks so the old EMC disk can be removed).

    By default 4 (or 5, depending on version) copies are kept, so if you already had 6 disks before, this could be why additional copies were not placed on the new disks.

    If still not sure / wish to look further before proceeding, the output Dev suggested would be helpful, as well as the following for each disk:
    # vxdisk list <disk>
    (ie: for all 7 disks)
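    To run that for every disk in a group in one go, something along these lines could be used (a sketch; yourdg is a placeholder for the actual disk group name):

    ```
    # vxdisk list | awk '$4 == "yourdg" { print $1 }' | while read d; do vxdisk list $d; done
    ```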
  • Thanks everyone for the valuable feedback. I am running VxVM 4.1MP3 on CentOS 4.8.

    Just to clarify: I need the disk group and it should not be destroyed. Below is the vxdisk list output. The DG in question is dumpdg; it currently has 7 disks, of which dumpdg06 (the old disk) is the one I need to remove from dumpdg.

    # vxdisk list
    DEVICE       TYPE            DISK         GROUP        STATUS
    emcpowerf    auto:cdsdisk    dumpdg06     dumpdg       online
    emcpowerg    auto:cdsdisk    datadev01    datadg       online
    emcpowerh    auto:cdsdisk    datadev02    datadg       online
    emcpoweri    auto:cdsdisk    datadev03    datadg       online
    emcpowerj    auto:cdsdisk    datadev04    datadg       online
    emcpowerk    auto:cdsdisk    datadev05    datadg       online
    emcpowerl    auto:cdsdisk    datadev06    datadg       online
    emcpowerm    auto:cdsdisk    datadev07    datadg       online
    emcpowern    auto:cdsdisk    datadev08    datadg       online
    emcpowero    auto:cdsdisk    logdev01     datadg       online
    emcpowerp    auto:cdsdisk    miscdev01    datadg       online
    emcpowerq    auto:cdsdisk    dumpdg07     dumpdg       online
    emcpowerr    auto:cdsdisk    dumpdg08     dumpdg       online
    emcpowers    auto:cdsdisk    dumpdg09     dumpdg       online
    emcpowert    auto:cdsdisk    dumpdg10     dumpdg       online
    emcpoweru    auto:cdsdisk    dumpdg11     dumpdg       online
    emcpowerv    auto:cdsdisk    dumpdg12     dumpdg       online
    sdb          auto:none       -            -            online invalid
    sdc          auto            -            -            error



    # vxdg list dumpdg
    Group:     dumpdg
    dgid:      1211488732.22.hostname.com
    import-id: 1024.29
    flags:     cds
    version:   120
    alignment: 8192 (bytes)
    ssb:            on
    detach-policy: global
    dg-fail-policy: dgdisable
    copies:    nconfig=default nlog=default
    config:    seqno=0.1113 permlen=1280 free=1267 templen=7 loglen=192
    config disk emcpowerf copy 1 len=1280 state=clean online
    config disk emcpowerq copy 1 len=1280 state=iofail failed
           config-tid=0.0 pending-tid=0.0
           Error: error=Disk read failure
    config disk emcpowerr copy 1 len=1280 state=iofail failed
           config-tid=0.0 pending-tid=0.0
           Error: error=Disk read failure
    config disk emcpowers copy 1 len=1280 state=iofail failed
           config-tid=0.0 pending-tid=0.0
           Error: error=Disk read failure
    config disk emcpowert copy 1 len=1280 state=iofail failed
           config-tid=0.0 pending-tid=0.0
           Error: error=Disk read failure
    config disk emcpoweru copy 1 len=1280 state=iofail failed
           config-tid=0.0 pending-tid=0.0
           Error: error=Disk read failure
    config disk emcpowerv copy 1 len=1280 state=iofail failed
           config-tid=0.0 pending-tid=0.0
           Error: error=Disk read failure
    log disk emcpowerf copy 1 len=192
    log disk emcpowerq copy 1 len=192 disabled
    log disk emcpowerr copy 1 len=192 disabled
    log disk emcpowers copy 1 len=192
    log disk emcpowert copy 1 len=192
    log disk emcpoweru copy 1 len=192
    log disk emcpowerv copy 1 len=192
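    The per-disk config-copy states can be pulled out of output like the above with plain awk (a sketch; nothing VxVM-specific, and the temp file path is illustrative):

```shell
# A shortened sample of the `vxdg list dumpdg` config lines shown above,
# saved to a temp file for illustration.
cat > /tmp/dumpdg.list <<'EOF'
config disk emcpowerf copy 1 len=1280 state=clean online
config disk emcpowerq copy 1 len=1280 state=iofail failed
config disk emcpowerr copy 1 len=1280 state=iofail failed
EOF

# Print each config-copy disk and its state field.
awk '/config disk/ { print $3, $7 }' /tmp/dumpdg.list
```

    This prints one line per copy (e.g. `emcpowerq state=iofail`), making failed copies easy to spot at a glance.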


  • At the risk of stating the obvious, that dg doesn't look healthy!

    e.g. the config copies on the 6 "new" disks (emcpower[q-v]) show state=iofail failed, Error: error=Disk read failure.
    That state tends to be seen when the dg has lost access to the disk and has gone into a dgdisabled state, e.g. as in this technote:

    TN 229701: What to do when "vxdisk list" shows status of 'online dgdisabled'
    http://seer.entsupport.symantec.com/docs/229701.htm

    The dg probably hasn't gone into a disabled state (yet) as you still have a good copy on the "old" disk emcpowerf (dumpdg06) -- are there any issues on the volumes themselves?

    Would you be able to provide the output of vxprint -qhtrg <dg> as Dev requested? i.e.:
    # vxprint -qhtrg dumpdg

    From the currently available output it looks like the system lost access to those disks at some point (possibly temporarily, or at least long enough that it could not reach its config copies), OR something has happened to the label/header area of the disks where the config copies are stored.

    The disks and dg are still showing online, which is why it would be good to check the vxprint output before proceeding with anything to see what is going on with the actual volumes - also, are the volumes in use at the moment?
  • Thanks everyone. I was able to resolve the issue. It was not a problem with the disk. There seems to be a bug or issue with VxVM 4.1MP3 on Linux: if you add disks to a disk group and do not reboot the Linux box, you cannot remove the now-unused disk, because the configuration copies are not created on the new disks after they are added to the disk group.

    Rebooting the Linux box solved the issue. The config copies were created on 5 of the new disks after the reboot.

    Thanks again for all your valuable help!
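    Once the box has been rebooted, the state of the configuration copies can be re-checked before retrying the removal, e.g.:

    ```
    # vxdg list dumpdg | grep 'config disk'
    # vxdg -g dumpdg rmdisk dumpdg06
    ```

    Copies reported as state=clean online are the active ones; the rmdisk should succeed once enough clean copies exist on the new disks.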