Forum Discussion

mhab11's avatar
mhab11
Level 5
13 years ago

Snapshot will not snap back with VCS.

I have been working on a more redundant solution for SFW HA 5.1 than what we currently have, and I am testing this layout.

 

I have 2 disk arrays and present 6 LUNs from each to Veritas. In VEA I have one diskgroup that contains these 12 drives. 2 of the drives hold the data and are mirrored; the other 10 are split into groups of 2, one from each disk array for redundancy. With these 5 groups I am covered for each day of the week. I take a snapshot onto one of the drives, then mirror it to the other drive so that if one of my disk arrays goes down my service group does not fail. This is working great: I can remove the mirror, snap back, and take a new snap with no problems.

Where the problem lies is failover: with all these extra volumes, my second server does not know what to do with them when I do a switch. I solved this by adding the volumes to my VCS service group, and now I can fail all the volumes over with no problem. The new problem is that I can remove the mirror from the snap, but when I then do the snap back I get "Operation not allowed. Volume is configured as a VCS resource. Please change the VCS configuration and retry the operation".

I could remove the disks/volumes from the disk group and not fail them over, but I find that doing a snapback and re-snap then takes a very long time because the drives have to resync; basically it is like doing a Prepare all over again. I have checked the manuals but can't find an answer. Does anyone have an idea of how to configure this to do what I am looking for?

  • Firstly I want to clarify what you have - I think you have a diskgroup containing one "real" mirrored volume and 5 mirrored snapshot volumes, one for each day of the week. This being the case:

    What do you mean by "my second server does not know what to do with them when I do a switch"? You say that to solve this you added the snap volumes to VCS - why do you want the snap volumes to be mounted (i.e. have a drive letter/folder mount assigned to them)? Is this because you allow users to manually recover files from the snapshots? If you keep the snapshots only so that you can do a snap restore, then you don't need the volumes to be mounted, and you can remove them from VCS.

    If you do want the snap volumes mounted, then I would temporarily remove the resource from VCS while you do the snapback. I assume the snapshots and snapbacks are automated, as you do them every day, so just run something like:

    haconf -makerw
    hares -delete snap-monday
    
    vxassist -g diskgroup snapback snap-monday-vol
    
    hares -add snap-monday Volume servicegroup
    hares -modify snap-monday attributes (for all attributes)
    hares -link (add dependencies - like diskgroup)
    haconf -dump -makero
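
    Before deleting the resource, it can help to capture its current configuration so you know exactly which attributes to set again with "hares -modify" and which links to re-create with "hares -link". A quick sketch only - "snap-monday" is just the example resource name used above:

    rem Record the resource's attribute values and dependencies before deleting it
    hares -display snap-monday
    hares -dep snap-monday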

     

    Mike

     

  • Yes, you got the idea correct. When I failed over to the second server, the volumes would not attach a drive letter, so the service group failed and the diskgroup was moved back to the first server. What is "have a drive letter/folder mount assigned to them"? That sounds like what I want. I don't want VCS managing the snap volumes; I just needed the service group to move to the second server without failing.

    I am the only one accessing Veritas on these servers.

    I am not good with the CLI, is there a way to do this in the GUI?

    haconf -makerw
    hares -delete snap-monday
    
    vxassist -g diskgroup snapback snap-monday-vol
    
    hares -add snap-monday Volume servicegroup
    hares -modify snap-monday attributes (for all attributes)
    hares -link (add dependencies - like diskgroup)
    haconf -dump -makero
  • I don't understand why VCS is failing. Suppose your data volume is called "data" with drive letter D; when VCS onlines the service group, it imports the diskgroup and assigns D to volume "data". Whether or not other volumes exist which are snapshots of "data" SHOULD make no difference to VCS. When you do the snapshot in the VEA GUI you do not need to give the snapshot a drive letter, but if you do, this should still be OK as long as the drive letter is not in use when the diskgroup deports (if the group tries to fail over), which is why I would not assign drive letters to your snap volumes. If you do need drive letters assigned to your snap volumes, then it is probably better to add them to VCS to make sure VCS can deport the service group.

    You can do the VCS commands in the VCS Java GUI, and the easiest way to do this would be to:

    In VCS, right-click on the snap-volume resource and choose Copy

     

    In VCS, right-click on the snap-volume resource and choose Delete (say yes to open the config)

    Perform snapback in VEA

    In VCS, right-click on the service group (or blank space in the resource dependency diagram) and choose Paste (this adds the resource and configures all its attributes)

    In VCS, add a dependency from the snap-volume resource to the diskgroup resource.
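
    If you prefer, that last dependency step can also be done from the command line with a single "hares -link" - the resource names below are only placeholders, so use your actual snap volume and diskgroup resource names:

    rem Make the snap volume resource (parent) depend on the diskgroup resource (child)
    rem Resource names are placeholders; the config must already be open (haconf -makerw)
    hares -link snap-monday AvMail-DiskGroup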

    Mike

     
  • So I removed the snap volumes from VCS, was able to manually add the drive letters, and everything is working as it should.

    One other question: once I have my snapshot, I mirror it to my second drive. In order to do a snapback I have to "remove" the mirror, snap back, take a new snap, then re-add the mirror. Currently this is all manual, but I would love to be able to schedule the process. Can I do that with the mirroring of the drive?

    Thanks for all the help.

  • Yes, you can schedule this by putting all the commands in a script and then calling it from a scheduling tool like "at" or "schtasks". The script would look something like:

     

    haconf -makerw
    hares -delete snap-monday
    
    vxassist -g diskgroup remove mirror snap-monday-vol
    vxassist -g diskgroup snapback snap-monday-vol
    vxassist -g diskgroup snapshot data plex=snap-monday-plex DriveLetter=W snap-monday-vol
    vxassist -g diskgroup mirror snap-monday-vol Harddisk3
    
    hares -add snap-monday Volume servicegroup 
    hares -modify snap-monday attributes (for all attributes)
    hares -link (add dependencies - like diskgroup)
    haconf -dump -makero
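
    To schedule the script, save it as a batch file and register it with schtasks (or "at"). The task name, script path, day and time below are just made-up examples, and the exact schtasks switches vary a little between Windows versions:

    rem Run the Monday snapshot script every Monday evening (example path and time)
    schtasks /create /tn "SnapMonday" /tr "C:\Scripts\snap-monday.cmd" /sc weekly /d MON /st 22:00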

    Your plexes at the moment probably have default names like data-03, data-04, etc. - you can use these names, but I would recommend renaming them using vxedit (for example, "vxedit -g diskgroup rename data-04 snap-monday-plex").

    Note that an alternative to mirroring the snapshot is to just take 2 snapshots per day, which could be taken at the same time or staggered to give you more range when restoring. At the moment I presume the snapshots are for protection against logical corruption (someone deletes a file, rather than you losing a disk), and it is a bit unlikely you would have logical corruption plus physical loss of an array at the same time. So suppose you took one snapshot Tuesday am on array A and one Tuesday pm on array B: you then have 2 points in time to choose from when restoring a file, and if you were unlucky enough to lose array A as well, you would still have 2 snapshots 24 hrs apart to restore from, Mon pm and Tue pm.

    Mike

     

  • Is there a Veritas guide I can use to learn and understand the CLI? I see what the commands are going to do, but to get them right I really want to know what I am typing.

     

    Here is what I have. I know it is wrong because I get the error "Invalid argument". What am I missing?

     

    vxassist -g AvMailDiskGroup remove mirror snap-Monday1-vol
    vxassist -g AvMailDiskGroup snapback snap-Monday1-vol
    vxassist -g AvMailDiskGroup snapshot AvMailVolume plex=snap-monday1-plex DriveLetter=E snap-monday1-vol
    vxassist -g AvMailDiskGroup mirror snap-monday1-vol Harddisk8

     

    I also tried running just one command and changing it:

    >vxassist remove e: plex=harddisk2
    Failed to complete the operation...
    V-76-58645-304: Remove/Delete mirror is not supported on snap mirror.

  • The SFW_Admin guide has a "Command line interface" section with syntax and examples.

    For removing a mirror you need the keyword "mirror". I have never tried to add/remove a mirror on a snapshot (only on a normal volume), but if it works from the GUI, then it should work from the CLI.

    I would run vxprint -g AvMailDiskGroup first. This will give you the volume names (lines starting "v ") and the plex names (lines starting "pl "), and the subdisk lines (starting "sd ") will show you which disk each plex is on.
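
    Once vxprint shows you the default plex name, you can rename it as mentioned earlier so your script always has a predictable name to work with. A sketch only - the old plex name below is hypothetical, so substitute whatever name vxprint actually reports:

    rem List volumes (v), plexes (pl) and subdisks (sd) in the disk group
    vxprint -g AvMailDiskGroup

    rem Rename the snapshot plex to a predictable name (old name here is hypothetical)
    vxedit -g AvMailDiskGroup rename AvMailVolume-03 snap-monday1-plex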

    Mike

     
  • vxprint helped me out. I have my batch file built and it works perfectly. Thanks for the help.