Forum Discussion

Tiger09's avatar
Tiger09
Level 4
11 years ago

Uninstall Veritas VxVM & VCS from Linux 5.8

Hi,

We have a 2-node cluster. We now have to shut down one node and keep both the DB and CI on a single node. For this activity I need to remove Veritas (VxVM + VCS) from the servers and create LVM file systems for both the DB and CI.

I need help to uninstall Veritas from both nodes. Can you tell me the steps to perform this activity smoothly?

 

 

 

  • Please follow the steps below to remove the node from the cluster, after taking a backup of all the configuration files.

    Then remove the packages for VxVM and VCS.

     

    Step 1: Remove the node from the VCS cluster

    The tasks involved in removing a node are:
    ■ Back up the configuration file and check the status of the nodes and the service groups ("Verify the status of nodes and service groups").
    ■ Switch or remove any VCS service groups on the node leaving the cluster, and delete the node from the VCS configuration ("Deleting the leaving node from VCS configuration").
    ■ Modify the llthosts and gabtab files on the remaining nodes to reflect the change ("Modifying configuration files on each remaining node").
    ■ On the node leaving the cluster: modify the startup scripts for LLT, GAB, and VCS so the node can reboot without affecting the cluster, unconfigure and unload the LLT and GAB utilities, and remove the VCS RPMs ("Unloading LLT and GAB and removing VCS on the leaving node").

    Verify the status of nodes and service groups
    Start by issuing the following commands from one of the nodes that will remain, node A or node B.
    To verify the status of the nodes and the service groups
    1 Make a backup copy of the current configuration file, main.cf.
    # cp -p /etc/VRTSvcs/conf/config/main.cf \
    /etc/VRTSvcs/conf/config/main.cf.goodcopy
    2 Check the status of the systems and the service groups.
    # hastatus -summary
    -- SYSTEM STATE
    -- System State Frozen
    A A RUNNING 0
    A B RUNNING 0
    A C RUNNING 0
    -- GROUP STATE
    -- Group System Probed AutoDisabled State
    B grp1 A Y N ONLINE
    B grp1 B Y N OFFLINE
    B grp2 A Y N ONLINE
    B grp3 B Y N OFFLINE
    B grp3 C Y N ONLINE
    B grp4 C Y N ONLINE
    The example output from the hastatus command shows that nodes A, B, and C are the nodes in the cluster. Also, service group grp3 is configured to run on node B and node C, the leaving node. Service group grp4 runs only on node C. Service groups grp1 and grp2 do not run on node C.
    Deleting the leaving node from VCS configuration
    Before removing a node from the cluster, you must remove or switch from the
    leaving node the service groups on which other service groups depend.
    To remove or switch service groups from the leaving node
    1 Switch failover service groups from the leaving node. You can switch grp3
    from node C to node B.
    # hagrp -switch grp3 -to B
    2 Check for any dependencies involving any service groups that run on the
    leaving node; for example, grp4 runs only on the leaving node.
    # hagrp -dep
    3 If the service group on the leaving node requires other service groups, that
    is, if it is a parent to service groups on other nodes, then unlink the service
    groups.
    # haconf -makerw
    # hagrp -unlink grp4 grp1
    These commands enable you to edit the configuration and to remove the
    requirement grp4 has for grp1.
    4 Stop VCS on the leaving node:
    # hastop -sys C
    5 Check the status again. The state of the leaving node should be EXITED. Also,
    any service groups set up for failover should be online on other nodes:
    # hastatus -summary
    -- SYSTEM STATE
    -- System State Frozen
    A A RUNNING 0
    A B RUNNING 0
    A C EXITED 0
    -- GROUP STATE
    -- Group System Probed AutoDisabled State
    B grp1 A Y N ONLINE
    B grp1 B Y N OFFLINE
    B grp2 A Y N ONLINE
    B grp3 B Y N ONLINE
    B grp3 C Y Y OFFLINE
    B grp4 C Y N OFFLINE
    6 Delete the leaving node from the SystemList of service groups grp3 and
    grp4.
    # hagrp -modify grp3 SystemList -delete C
    # hagrp -modify grp4 SystemList -delete C
    7 For service groups that run only on the leaving node, delete the resources
    from the group before deleting the group.
    # hagrp -resources grp4
    processx_grp4
    processy_grp4
    # hares -delete processx_grp4
    # hares -delete processy_grp4
    8 Delete the service group configured to run on the leaving node.
    # hagrp -delete grp4
    9 Check the status.
    # hastatus -summary
    -- SYSTEM STATE
    -- System State Frozen
    A A RUNNING 0
    A B RUNNING 0
    A C EXITED 0
    -- GROUP STATE
    -- Group System Probed AutoDisabled State
    B grp1 A Y N ONLINE
    B grp1 B Y N OFFLINE
    B grp2 A Y N ONLINE
    B grp3 B Y N ONLINE
    10 Delete the node from the cluster.
    # hasys -delete C
    11 Save the configuration, making it read only.
    # haconf -dump -makero
    Modifying configuration files on each remaining node
    Perform the following tasks on each of the remaining nodes of the cluster.
    To modify the configuration files on a remaining node
    1 If necessary, modify the /etc/gabtab file (an example follows this procedure).
    No change is required to this file if the /sbin/gabconfig command has only
    the argument -c, although Symantec recommends using the -nN option,
    where N is the number of cluster systems.
    If the command has the form /sbin/gabconfig -c -nN, where N is the
    number of cluster systems, then make sure that N is not greater than the
    actual number of nodes in the cluster, or GAB does not automatically seed.
    Note: Symantec does not recommend the use of the -c -x option for /sbin/gabconfig. The Gigabit Ethernet controller does not support the use of -c -x.
    2 Modify the /etc/llthosts file on each remaining node to remove the entry for the leaving node.
    For example, change:
    0 A
    1 B
    2 C
    to:
    0 A
    1 B
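
    For example, if the cluster originally had three nodes and two remain, /etc/gabtab on each remaining node might change as follows (a sketch; the -n value in your file may differ):

    Before:  /sbin/gabconfig -c -n3
    After:   /sbin/gabconfig -c -n2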
    Unloading LLT and GAB and removing VCS on the leaving node
    Perform the tasks on the node leaving the cluster.
    To stop LLT and GAB and remove VCS
    1 Stop GAB and LLT:
    # /etc/init.d/gab stop
    # /etc/init.d/llt stop
    2 To determine the RPMs to remove, enter:
    # rpm -qa | grep VRTS
    3 To permanently remove the VCS RPMs from the system, use the rpm -e
    command.
    # rpm -e VRTScmccc
    # rpm -e VRTScmcs
    # rpm -e VRTScssim
    # rpm -e VRTScscm
    # rpm -e VRTSvcsdc
    # rpm -e VRTSvcsmn
    # rpm -e VRTScutil
    # rpm -e VRTSweb
    # rpm -e VRTScscw
    # rpm -e VRTSjre15
    # rpm -e VRTSjre
    # rpm -e VRTSvcsdr
    # rpm -e VRTSvcsag
    # rpm -e VRTSacclib
    # rpm -e VRTSvcsmg
    # rpm -e VRTSvcs
    # rpm -e VRTSvxfen
    # rpm -e VRTSgab
    # rpm -e VRTSllt
    # rpm -e VRTSvlic
    # rpm -e VRTSspt
    # rpm -e VRTSsmf
    # rpm -e VRTSperl
    # rpm -e VRTSpbx
    # rpm -e VRTSicsco
    # rpm -e VRTSatServer
    # rpm -e VRTSatClient
    # rpm -e SYMClma
    4 Remove the LLT and GAB configuration files.
    # rm /etc/llttab
    # rm /etc/gabtab
    # rm /etc/llthosts
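
    The task list above also mentions unconfiguring and unloading LLT and GAB. If the init scripts do not do that for you, something along these lines can be used, followed by a check that no Veritas RPMs remain (a sketch; if I/O fencing is configured, stop it via its init script before GAB):

    # gabconfig -U                       # unconfigure GAB
    # lltconfig -U                       # unconfigure LLT
    # modprobe -r gab                    # unload the GAB kernel module
    # modprobe -r llt                    # unload the LLT kernel module
    # rpm -qa | grep -e VRTS -e SYMC     # should return nothing once the rpm -e steps are done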

     

    Step 2: Remove the disks from VxVM control

    https://sort.symantec.com/public/documents/sf/5.1/linux/html/vxvm_admin/ch03s18.htm
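
    As a rough sketch of what the linked procedure covers, assuming the data on the Veritas volumes has already been migrated or backed up (the disk group and device names below are placeholders):

    # vxdisk list                        # identify the VxVM disks and their disk group
    # umount /veritas/mount/point        # unmount any file systems on the disk group's volumes
    # vxvol -g <diskgroup> stopall       # stop all volumes in the disk group
    # vxdg destroy <diskgroup>           # destroy the disk group (the data on it is lost)
    # /etc/vx/bin/vxdiskunsetup <disk>   # return the disk to OS control (see the linked guide for options)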

     

    To list the installed VxVM RPMs:

    # rpm -qa | grep -i vx

    To permanently remove the VxVM RPMs from the system, use the rpm -e command:

    # rpm -e <VxVM package name>

    for each package reported by the rpm -qa | grep -i vx output.
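
    For illustration only, the listing and removal might look like this (package names and versions vary by release; remove whatever the query actually returns):

    # rpm -qa | grep -i vx
    VRTSvxvm-5.0.30.00-MP3_RHEL5     <- example output only
    VRTSvxfs-5.0.30.00-MP3_RHEL5     <- example output only
    # rpm -e VRTSvxfs
    # rpm -e VRTSvxvm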

     

    Step 3: Once the node has been removed from the cluster and the disks have been removed from VxVM control, relabel the disks for OS use with the fdisk command.

    Then follow the usual procedure to create the LVM file systems (a command sketch follows this list):

    (1) pvcreate the disks or partitions

    (2) vgcreate the volume group

    (3) lvcreate the logical volumes

    (4) create the file systems

    (5) create the mount points

    (6) mount the file systems

    (7) make the entries in /etc/fstab
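
    A minimal command sketch for steps (1)-(7), assuming /dev/sdb is a disk freed from VxVM, vg_sap / lv_sapdata / /sapdata are placeholder names, and ext3 as the file system (adjust names, sizes, and file system type to your environment):

    # fdisk /dev/sdb                          # create a partition of type 8e (Linux LVM), e.g. /dev/sdb1
    # pvcreate /dev/sdb1                      # initialize the partition as an LVM physical volume
    # vgcreate vg_sap /dev/sdb1               # create the volume group
    # lvcreate -L 100G -n lv_sapdata vg_sap   # create a logical volume (size is an example)
    # mkfs.ext3 /dev/vg_sap/lv_sapdata        # create the file system
    # mkdir -p /sapdata                       # create the mount point
    # mount /dev/vg_sap/lv_sapdata /sapdata   # mount the file system
    # vi /etc/fstab                           # add: /dev/vg_sap/lv_sapdata /sapdata ext3 defaults 1 2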

     

     

  • I am a little curious about your query: are you saying that you are shutting down only one node and then want to keep a single-node cluster running, or are you looking for a complete uninstallation? Later in your query you mention that you want to remove the Veritas software from both nodes.

    Naveen has mentioned detailed steps above. You can also refer to the steps in the uninstall section of the guide. Please note that the guide has various sections you need to refer to, depending on which components you are using. For example, if you are using VxFS file systems, you need to remove the Veritas file systems, which is explained in the first section of the link below:

    https://sort.symantec.com/public/documents/sfha/6.1/linux/productguides/html/sfha_install/ch26.htm
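
    As a quick sketch of that first part (the mount point is a placeholder): unmount any VxFS file systems and remove their /etc/fstab entries before uninstalling the packages:

    # mount | grep vxfs           # list mounted VxFS file systems
    # umount /placeholder/mount   # unmount each one
    # vi /etc/fstab               # remove or comment out the vxfs entries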

     

    G

     

     

  • Hi G,

    Yes, we are looking for a complete uninstallation of Veritas (VxVM + VCS) from both nodes. We will shut down one server, restore the data from backup on the remaining node, and then start SAP and the DB; the new file systems will be on LVM.

    Hi,

    I am still waiting for a response. Kindly suggest ASAP.

  • I see detailed steps in Naveen's post.

    Gaurav has also pointed to the uninstall section in the Installation Guide.

    Not sure what else you are waiting for?