SF CFS for RAC on RHEL with Oracle 10G R2

shahfar
Level 5
Accredited Certified

Hi Everyone,

I currently have Oracle 10g R2 running on SF CFS for RAC on RHEL 5. Given that I want to move to SF RAC (the full suite, where CRS is handled by VCS) without migrating to Oracle 11g, what options do I have?

1 ACCEPTED SOLUTION

joseph_dangelo
Level 6
Employee Accredited

Please refer to the following support matrix for which versions of Oracle are supported with each version of Storage Foundation (CFS and SFRAC):

http://www.symantec.com/business/support/index?page=content&id=TECH44807

Unfortunately, Oracle 10gR2 is not supported with SFRAC for Linux. The only supported versions of Oracle on Linux with SFRAC are 11gR2 (0.1 and 0.2). I am unaware of any plans to add support for 10g.

Joe D


2 REPLIES


Tmy_70
Level 5
Partner Accredited Certified

Network Configurations
 
IP Address
 
    * SUNSRV01
          o 10.10.231.130 sunsrv01 Host IP
          o 10.11.196.192 sunsrv01-DRN E3/DR Link IP
 
    * SUNSRV02
          o 10.10.231.131 sunsrv02 Host IP
          o 10.11.196.193 sunsrv02-DRN E3/DR Link IP
 
    * SUNSRV01-DR
          o 10.10.231.130 sunsrv01-dr Host IP
          o 10.12.94.192 sunsrv01dr-DRN E3/DR Link IP
 
    * SUNSRV02-DR
          o 10.10.231.131 sunsrv02-dr Host IP
          o 10.12.94.193 sunsrv02dr-DRN E3/DR Link IP
 
 
 
VVR Config
 
    * RVG db2dg_rvg
    * SRL db2dg_srl
    * RLINKs rlk_db2instdr-vipvr_db2dg_rvg (E3)
      rlk_db2inste3-vipvr_db2dg_rvg (DR)
 
 
 
Virtual IPs
 
    * SUNSRV01/SUNSRV02
    * Application 10.10.231.191
    * E3/DR Link 10.11.196.191
    * SUNSRV01-DR/SUNSRV02-DR
    * Application 10.10.231.191
    * E3/DR Link 10.12.94.191
 
 
 
VCS Heartbeats
 
    * SUNSRV01 ce0/ce4
    * SUNSRV02 ce0/ce4
    * SUNSRV01-DR ce4
    * SUNSRV02-DR ce4
 
 
 
Essential Terminology
 
Data Change Map (DCM) - An object containing a bitmap that can optionally be associated with a data volume on the Primary RVG. The bits represent regions of data that differ between the Primary and the Secondary. DCMs mark sections of Primary volumes that change during extended network outages, minimizing the amount of data that must be resynchronized to the Secondary site once the link is restored.
 
Disk Change Object (DCO) - DCO volumes are used by Volume Manager FastResync (FR). FR is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FR works by copying only changes to the newly reattached volume using FR logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.
 
Data Volume - Volumes that are associated with an RVG and contain application data.
 
Primary Node - The node on which the primary RVG resides.
 
Replication Link (RLINK) - RLINKs represent the communication link to the counterpart of the RVG on another node. On the Primary node a replicated volume object has one RLINK for each of its network mirrors. On the Secondary node a replicated volume has a single RLINK object that links it to its Primary. Each RLINK on a Primary RVG represents the communication link from the Primary RVG to a corresponding Secondary RVG, via an IP connection.
 
Replicated Data Set (RDS) - The group consisting of an RVG on a Primary host and its corresponding RVGs on the Secondary hosts.
 
Replicated Volume Group (RVG) - A component of VVR that is made up of a set of data volumes, one or more RLINKs and an SRL. An RVG is a subset of volumes within a given VxVM disk group configured for replication to one or more secondary systems. Data is replicated from a primary RVG to a secondary RVG. The primary RVG is in use by an application, while the secondary RVG receives replication and writes to local disk. The concept of primary and secondary is per RVG, not per system. A system can simultaneously be a primary RVG for some RVGs and secondary RVG for others. The RVG also contains the storage replicator log (SRL) and replication link (RLINK).
 
Storage Replication Log (SRL) - Writes to the Primary RVG are saved in the SRL on the Primary side. The SRL is used to aid in recovery, as well as to buffer writes when the system operates in asynchronous mode. Each write to a data volume in the RVG generates two write requests: one to the SRL and another to the data volume itself.
 
Secondary Node - The node to which the primary RVG replicates.
 
 
Volumes to be Replicated
 
These are the volumes that will be configured for replication. These volumes belong to the same diskgroup db2dg.
 
v  bcp          -            ENABLED  ACTIVE    585105408 SELECT   -        fsgen
v  db           -            ENABLED  ACTIVE   1172343808 SELECT   -        fsgen
v  dba          -            ENABLED  ACTIVE     98566144 SELECT   -        fsgen
v  db2          -            ENABLED  ACTIVE      8388608 SELECT   -        fsgen
v  lg1          -            ENABLED  ACTIVE    396361728 SELECT   -        fsgen
v  tp01         -            ENABLED  ACTIVE    192937984 SELECT   -        fsgen
 
 
 
Storage Replication Log (SRL)
 
All data writes destined for volumes configured for replication are first persistently queued in the Storage Replicator Log. VVR implements the SRL on the primary side to store all changes for transmission to the secondary(s). The SRL is a VxVM volume configured as part of an RVG. The SRL gives VVR the ability to apply writes to specific volumes within the replicated configuration in a specific order, maintaining write-order fidelity at the secondary. All writes sent to the VxVM volume layer, whether from an application such as a database writing directly to storage or from an application accessing storage via a file system, are faithfully replicated in application write order to the secondary.
 
v  db2dg_srl  -            ENABLED  ACTIVE    856350720 SELECT   -        fsgen
 
 
 
Data Change Map (DCM)
 
Data Change Maps (DCM) are used to mark sections of volumes on the primary that have changed during extended network outages, in order to minimize the amount of data that must be synchronized to the secondary site once the link is restored. Only one disk will be used for DCM logs at this time.
 
In both primary and secondary sites, the same disk device name is used.
 
GROUP      DISK       DEVICE       TAG          OFFSET    LENGTH    FLAGS
db2dg      db2dg95    EMC0_94      EMC0_94      0         35681280  -
 
 
 
Disk Change Object (DCO)
 
Disk Change Object (DCO) volumes are used by Volume Manager FastResync (FR). FR is used to quickly resynchronize mirrored volumes that have been temporarily split and rejoined. FR works by copying only changes to the newly reattached volume using FR logging. This process reduces the time required to rejoin a split mirror, and requires less processing power than full mirror resynchronization without logging.
 
GROUP      DISK       DEVICE       TAG          OFFSET    LENGTH    FLAGS
db2dg      db2dg95    EMC0_94      EMC0_94      0         35681280  -
 
 
 
Technical Implementation Steps/Procedures
 
    * 1. Install VVR License on all nodes for both sites
    * 2. Install VVR Packages on all nodes for both sites
    * 3. Create a Diskgroup and volumes in the secondary site
    * 4. Update the .rdg file on both nodes in the secondary site
    * 5. Stop all applications – both clustered and non-clustered apps
    * 6. Bring up the virtual IPs on all nodes on both sites
    * 7. Create the SRL Volume in the primary site
    * 8. Add DCM logs to all the primary site volumes that will be replicated
    * 9. Update the .rdg file on both nodes in the primary site
    * 10. Ensure that VVRTypes.cf exists in /etc/VRTSvcs/conf
    * 11. Ensure that other application type files, e.g. Db2udbTypes.cf, exist in /etc/VRTSvcs/conf
    * 12. Create the Primary RVG
    * 13. Create the Secondary RVG
    * 14. Configure VCS to integrate VVR objects – Primary Site
    * 15. Configure VCS to integrate VVR objects – Secondary Site
    * 16. Bring up VCS engine and bring up the vvr service group in the Secondary Site
    * 17. Bring up VCS engine and bring up the vvr service group in the Primary Site
    * 18. Check the Rlink and RVG status
    * 19. Start the replication
    * 20. Check the Replication status
    * 21. Mount the Replicated Volumes
    * 22. Prepare the Replicated Volumes for Snapshot in the secondary site
    * 23. Create the SNAP Volumes in the secondary site
    * 24. Prepare the SNAP Volumes
    * 25. Run a point-in-time snapshot
    * 26. Mount the SNAP Volumes
 
 
 
Detailed Implementation
 
Install VVR License on all nodes for both sites
 
# /opt/VRTSvlic/sbin/vxlicinst
 
 
 
Install VVR packages on all nodes for both sites
 
List of Required and Optional Packages for VVR
 
      Required software packages for VVR:
 
            VRTSvlic VERITAS Licensing Utilities.
            VRTSvxvm VERITAS Volume Manager and Volume Replicator.
            VRTSob VERITAS Enterprise Administrator Service.
            VRTSvmpro VERITAS Volume Manager Management Services Provider.
            VRTSvrpro VERITAS Volume Replicator Management Services Provider.
            VRTSvcsvr VERITAS Cluster Server Agents for VERITAS Volume Replicator.
 
      Optional software packages for VVR:
 
            VRTSjre VERITAS JRE Redistribution.
            VRTSweb VERITAS Java Web Server.
            VRTSvrw VERITAS Volume Replicator Web Console.
            VRTSobgui VERITAS Enterprise Administrator.
            VRTSvmdoc VERITAS Volume Manager documentation.
            VRTSvrdoc VERITAS Volume Replicator documentation.
            VRTSvmman VERITAS Volume Manager manual pages.
            VRTSap VERITAS Action Provider.
            VRTStep VERITAS Task Execution Provider.
 
 
Installing the VVR Packages Using the pkgadd Command
 
1. Log in as root.
 
2. Mount from the software repository:
 
# mount software:/repos /mnt
# cd /mnt/software/os/SunOS/middleware/Veritas4.1/volume_replicator
 
 
3. Alternatively, you may run the install script from the same directory; refer to the VVR Installation Guide for more information.
 
# ./installvvr
 
 
4. This document, however, follows the manual process:
 
# cd /mnt/software/os/SunOS/middleware/Veritas4.1/volume_replicator
 
 
Note: Install the packages in the order specified below to ensure proper installation.
 
5. Use the following command to install the required software packages in the specified order. Some of these may have already been installed.
 
# pkgadd -d . VRTSvlic VRTSvxvm VRTSob VRTSvcsvr
 
 
6. Install the following patch:
 
# patchadd 115209-<latest>
 
 
7. Install the following required packages in the specified order:
 
# pkgadd -d . VRTSvmpro VRTSvrpro
 
 
8. Use the following command to install the optional software packages:
 
# pkgadd -d . VRTSobgui VRTSjre VRTSweb VRTSvrw VRTSvmdoc \
              VRTSvrdoc VRTSvmman VRTSap VRTStep
 
 
The system prints out a series of status messages as the installation progresses and prompts you for any required information, such as the license key.
 
 
Create a Diskgroup and volumes in the secondary site
 
Initialize all LUNs and assign them to the diskgroup db2dg:
 
sunsrv01-dr:# vxdiskadm
 
Create volumes with the same sizes as the primary site’s volumes.
 
sunsrv01-dr:# vxassist -g db2dg make bcp  585105408 layout=concat
sunsrv01-dr:# vxassist -g db2dg make db  1172343808 layout=concat
sunsrv01-dr:# vxassist -g db2dg make dba   98566144 layout=concat
sunsrv01-dr:# vxassist -g db2dg make db2    8388608 layout=concat
sunsrv01-dr:# vxassist -g db2dg make lg1  396361728 layout=concat
sunsrv01-dr:# vxassist -g db2dg make tp01 192937984 layout=concat
 
 
Create the SRL Volume. Allocate disks dedicated to the SRL volume only; no other volumes should use those disks.
 
sunsrv01-dr:# vxassist -g db2dg make db2dg_srl 856350720 layout=concat \
> db2dg71 db2dg72 db2dg73 db2dg74 db2dg75 db2dg76 db2dg77 db2dg78 \
> db2dg79 db2dg80 db2dg81 db2dg82 db2dg83 db2dg84 db2dg85 db2dg86 \
> db2dg87 db2dg88 db2dg89 db2dg90 db2dg91 db2dg92 db2dg93 db2dg94
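As a sanity check on the requested size: assuming each of the 24 SRL disks (db2dg71..db2dg94) contributes the same usable length as the 35681280-sector public region shown for db2dg95 in the disk listings above (an assumption, since the listings do not show the SRL disks themselves), the numbers line up exactly:

```shell
# Assumption: each of the 24 dedicated SRL disks offers 35681280 sectors,
# the same length shown for db2dg95 above. 24 such disks account exactly
# for the SRL length requested in the vxassist command.
echo $((24 * 35681280))
```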
 
 
Add DCM logs to the volumes. Use the disk that’s been assigned only for logs.
 
sunsrv01-dr:# vxassist -g db2dg addlog bcp  logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog db   logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog dba  logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog db2  logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog lg1  logtype=dcm nlog=1 db2dg95
sunsrv01-dr:# vxassist -g db2dg addlog tp01 logtype=dcm nlog=1 db2dg95
 
 
 
Update the .rdg file on both nodes in the secondary site
 
The Secondary Node must be given permission to manage the disk group created on the Primary Node. To do this, add the diskgroup ID to /etc/vx/vras/.rdg. The diskgroup ID is the value of dgid in the output of vxprint -l db2dg on the Primary Node.
 
sunsrv01:# vxprint -l db2dg | grep dgid
info:     dgid=1138140445.1393.sunsrv01 noautoimport
 
sunsrv01-dr:# echo "1138140445.1393.sunsrv01" > /etc/vx/vras/.rdg
sunsrv02-dr:# echo "1138140445.1393.sunsrv01" > /etc/vx/vras/.rdg
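The dgid can also be extracted programmatically rather than copied by hand. A minimal sketch, run here against the sample vxprint line shown above rather than live vxprint output:

```shell
# Pull the dgid= value out of a `vxprint -l` info line. The sample string
# mimics the output shown above; in practice, pipe real `vxprint -l db2dg`
# output instead of the printf.
sample='info:     dgid=1138140445.1393.sunsrv01 noautoimport'
dgid=$(printf '%s\n' "$sample" | sed -n 's/.*dgid=\([^ ]*\).*/\1/p')
echo "$dgid"
```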
 
 
 
Stop all applications – both clustered and non-clustered apps
 
Stop all clustered and non-clustered applications that are using the volumes that will be replicated.
 
Note: You may leave the applications running while replicating.
 
# su - db2inst -c "db2stop"
 
 
Unmount all filesystems whose underlying volumes will be replicated.
 
# umount `mount | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'`
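The backquoted pipeline selects the mount points of every db2dg volume. A quick illustration against canned Solaris-style mount output (mount point first, device third), since real `mount` output is not available here:

```shell
# Canned sample in Solaris `mount` format: "<mountpoint> on <device> options".
# The same grep/awk filter used above picks out only the db2dg mount points,
# which umount then receives as arguments.
mount_out='/db/db2inst/PEMMP00P/NODE0000 on /dev/vx/dsk/db2dg/db read/write
/var on /dev/dsk/c0t0d0s5 read/write'
printf '%s\n' "$mount_out" | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'
```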
 
 
Forcibly stop VCS so that it leaves all running resources online, especially the volumes and the diskgroup.
 
# hastop -all -force
 
 
 
Bring up the virtual IPs on all nodes on both sites
 
These are the VVR Replication IPs (R-Link IPs). They must be unique: one at the primary site and one at the secondary site.
 
Primary (sunsrv01 or sunsrv02):
 
ce3:1 - 10.11.196.191 netmask fffffe00 broadcast 10.11.197.255
 
 
Secondary (sunsrv01-dr or sunsrv02-dr):
 
ce5:1 - 10.12.94.191 netmask fffffe00 broadcast 10.12.95.255
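The hex netmask fffffe00 used above is the dotted-decimal 255.255.254.0 configured as NetMask in the VCS files later in this document. A quick conversion sketch:

```shell
# Convert an ifconfig-style hex netmask to dotted decimal.
hex=fffffe00
set -- $(echo "$hex" | sed 's/../& /g')   # split into byte pairs: ff ff fe 00
printf '%d.%d.%d.%d\n' "0x$1" "0x$2" "0x$3" "0x$4"
```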
 
 
 
Create the SRL Volume in the primary site
 
Allocate disks dedicated to the SRL volume only; no other volumes should use those disks.
 
sunsrv01:# vxassist -g db2dg make db2dg_srl 856350720 layout=concat \
> db2dg71 db2dg72 db2dg73 db2dg74 db2dg75 db2dg76 db2dg77 db2dg78 \
> db2dg79 db2dg80 db2dg81 db2dg82 db2dg83 db2dg84 db2dg85 db2dg86 \
> db2dg87 db2dg88 db2dg89 db2dg90 db2dg91 db2dg92 db2dg93 db2dg94
 
 
 
Add DCM logs to all the primary site volumes that will be replicated
 
Use the disk that is dedicated for logs only.
 
sunsrv01:# vxassist -g db2dg addlog bcp  logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog db   logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog dba  logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog db2  logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog lg1  logtype=dcm nlog=1 db2dg95
sunsrv01:# vxassist -g db2dg addlog tp01 logtype=dcm nlog=1 db2dg95
 
 
 
Update the .rdg file on both nodes in the primary site
 
The diskgroup ID is the value of dgid in the output of vxprint -l db2dg on the Secondary Node.
 
sunsrv02-dr:# vxprint -l db2dg | grep dgid
info:     dgid=1182230506.2373.sunsrv01-dr noautoimport  
 
sunsrv01:# echo "1182230506.2373.sunsrv01-dr" > /etc/vx/vras/.rdg
sunsrv02:# echo "1182230506.2373.sunsrv01-dr" > /etc/vx/vras/.rdg
 
 
 
Ensure that VVRTypes.cf exists in /etc/VRTSvcs/conf
 
This types file must exist on all nodes in the cluster.
 
sunsrv01:# ls -l /etc/VRTSvcs/conf/VVRTypes.cf
-rw-rw-r--   1 root     sys    1811 Jan 20  2005 /etc/VRTSvcs/conf/VVRTypes.cf
 
 
 
Ensure that other application type files, e.g. Db2udbTypes.cf, exist in /etc/VRTSvcs/conf
 
This types file must exist on all nodes in the cluster.
 
sunsrv01:# ls -l /etc/VRTSvcs/conf/Db2udbTypes.cf
-rw-rw-r--   1 root     sys    1080 Jun 26  2006 /etc/VRTSvcs/conf/Db2udbTypes.cf
 
 
 
Create the Primary RVG
 
First, ensure that /usr/sbin/vradmind is running. If not, start it with this command:
 
sunsrv01:# /etc/init.d/vras-vradmind.sh start
 
 
 
Run this from the primary site.
 
sunsrv01:# vradmin -g db2dg createpri db2dg_rvg bcp,db,dba,db2,lg1,tp01 \
           db2dg_srl
 
 
Usage:
 
vradmin -g <diskgroup> createpri <RVGname> <vol1,vol2...> <SRLname>
 
 
Where:
 
<diskgroup>    is the VxVM diskgroup name
<RVGname>    is the name for the RVG, usually <diskgroupname>_rvg
<vol1,vol2...>    is a comma-separated list of all volumes in the diskgroup to be replicated.
<SRLname>    is the name of the SRL volume
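The usage pattern above can be assembled mechanically. A purely illustrative helper (build_createpri is a hypothetical name, and it only echoes the command line rather than executing it, since vradmin requires VVR to be installed):

```shell
# Hypothetical helper: assemble (not execute) a createpri command line
# from the usage pattern documented above.
build_createpri() {
    dg=$1; rvg=$2; vols=$3; srl=$4
    echo "vradmin -g $dg createpri $rvg $vols $srl"
}

# Reproduces the command used earlier in this document.
build_createpri db2dg db2dg_rvg bcp,db,dba,db2,lg1,tp01 db2dg_srl
```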
 
 
Create the Secondary RVG
 
After creating the Primary RVG, add a Secondary with the vradmin addsec command. The VIP hostnames are in the /etc/hosts file.
 
sunsrv01:# vradmin -g db2dg addsec db2dg_rvg db2inste3-vipvr db2instdr-vipvr
 
 
 
Usage:
 
vradmin -g <diskgroup> addsec <RVGname> <primaryhost> <secondaryhost>
 
 
Where:
 
<diskgroup>    is the VxVM diskgroup name
<RVGname>    is the name of the RVG created on the Primary Node
<primaryhost>    is the hostname of the Primary Node, this could be a VCS ServiceGroup name
<secondaryhost>    is the hostname of the Secondary Node, this could be a VCS ServiceGroup name
 
 
Note: The vradmin addsec command performs the following operations:
 
    * Creates and adds a Secondary RVG of the same name as the Primary RVG to the specified RDS on the Secondary host. By default, the Secondary RVG is added to the disk group with the same name as the Primary disk group. Use the -sdg option with the vradmin addsec command to specify a different disk group on the Secondary.
    * Automatically adds DCMs to the Primary and Secondary data volumes if they do not have them.
    * Associates existing data volumes of the same names and sizes as the Primary data volumes with the Secondary RVG; it also associates an existing volume with the same name as the Primary SRL as the Secondary SRL.
    * Creates and associates the Primary and Secondary RLINKs with the Primary and Secondary RVGs respectively, using the default RLINK names rlk_remotehost_rvgname.
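That default naming scheme, rlk_remotehost_rvgname, is exactly how the RLINKs listed in the VVR Config section at the top of this post were named. A sketch of the convention using this document's VIP hostname and RVG:

```shell
# Default RLINK naming: rlk_<remotehost>_<rvgname>. With the Secondary VIP
# hostname and RVG name used in this document, this reproduces the RLINK
# shown in the VVR Config section above.
remotehost=db2instdr-vipvr
rvgname=db2dg_rvg
echo "rlk_${remotehost}_${rvgname}"
```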
 
 
 
Configure VCS to integrate VVR objects – Primary Site
 
There are at least two VCS service groups: a replication group and an application (DB) group. The application group contains the RVGPrimary resource; the replication group contains the IP, RVG, and DiskGroup resources.
 
In this particular environment, the VVR service group and the application service group must be online on the same node, because the diskgroup belongs to the VVR service group.
 
The VVR service group must be started before the application service group. The dependencies are configured in the VCS configuration file.
 
The VVR service group maintains synchronization between the primary and secondary sites.
 
VCS Service Groups:
 
db2inst_vvr VVR Service Group
 
            Resources:
      db2dgAgent (RVG)
      vvrnic (NIC)
      vvrip (VIP for VVR communications)
      db2dg_dg (Diskgroup)
 
 
 
db2inst_grp Application Service Group
 
            Resources:
      db2dg_rvg_primary (RVG Primary)
      db2inst_db (DB2 Database)
      db2inst_ce1 (NIC)
      db2inst_ip (VIP for the application)
      db2inst__mnt (Filesystems)
      adsm_db2inst (TSM Backup)
 
 
 
Primary Site VCS Configuration File:
 
include "types.cf"
include "Db2udbTypes.cf"
include "VVRTypes.cf"
 
cluster e3clus177 (
        UserNames = { admin = ajkCjeJgkFkkIskEjh }
        ClusterAddress = "10.10.231.191"
        Administrators = { admin }
        CredRenewFrequency = 0
        CounterInterval = 5
        )
 
system sunsrv01 (
        )
 
system sunsrv02 (
        )
 
group db2inst_grp (
        SystemList = { sunsrv01 = 0, sunsrv02 = 1 }
        AutoStart = 0
        )
 
        Application adsm_db2inst (
                Critical = 0
                User = root
                StartProgram = "/bcp/db2inst/tsm/adsmcad.db start"
                StopProgram = "/bcp/db2inst/tsm/adsmcad.db stop"
                MonitorProcesses = {
                         "/usr/bin/dsmcad -optfile=/bcp/db2inst/tsm/dsm.opt.db2inst" }
                )
 
        Db2udb db2inst_db (
                Critical = 0
                DB2InstOwner = db2inst
                DB2InstHome = "/db2/db2inst"
                MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl"
                )
 
        DiskGroup db2dgB_dg (
                DiskGroup = db2dgB
                )
 
        IP db2inst_ip (
                Device = ce1
                Address = "10.10.231.191"
                )
 
        Mount db2dgB_bkup_mnt (
                MountPoint = "/backup/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dgB/bkp"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount db2inst_bcp_mnt (
                MountPoint = "/bcp/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/bcp"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_db2_mnt (
                MountPoint = "/db2/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/db2"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_db_mnt (
                MountPoint = "/db/db2inst/PEMMP00P/NODE0000"
                BlockDevice = "/dev/vx/dsk/db2dg/db"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_dba_mnt (
                MountPoint = "/dba/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/dba"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_lg1_mnt (
                MountPoint = "/db/db2inst/log1"
                BlockDevice = "/dev/vx/dsk/db2dg/lg1"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_tp01_mnt (
                MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000"
                BlockDevice = "/dev/vx/dsk/db2dg/tp01"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        NIC db2inst_ce1 (
                Device = ce1
                NetworkType = ether
                )
 
        RVGPrimary db2dg_rvg_primary (
                Critical = 0
                RvgResourceName = db2dgAgent
                )
 
        requires group db2inst_vvr online local hard
        adsm_db2inst requires db2inst_bcp_mnt
        db2dgB_bkup_mnt requires db2dgB_dg
        db2inst_bcp_mnt requires db2dg_rvg_primary
        db2inst_db requires db2inst_bcp_mnt
        db2inst_db requires db2inst_db2_mnt
        db2inst_db requires db2inst_db_mnt
        db2inst_db requires db2inst_dba_mnt
        db2inst_db requires db2inst_ip
        db2inst_db requires db2inst_lg1_mnt
        db2inst_db requires db2inst_tp01_mnt
        db2inst_db2_mnt requires db2dg_rvg_primary
        db2inst_db_mnt requires db2dg_rvg_primary
        db2inst_dba_mnt requires db2dg_rvg_primary
        db2inst_ip requires db2inst_ce1
        db2inst_lg1_mnt requires db2dg_rvg_primary
        db2inst_tp01_mnt requires db2dg_rvg_primary
 
 
 
group db2inst_vvr (
        SystemList = { sunsrv01 = 0, sunsrv02 = 1 }
        )
 
        DiskGroup db2dg_dg (
                DiskGroup = db2dg
                )
 
        IP vvrip (
                Device = ce3
                Address = "10.11.196.191"
                NetMask = "255.255.254.0"
                )
 
        NIC vvrnic (
                Device = ce3
                NetworkType = ether
                )
 
        RVG db2dgAgent (
                Critical = 0
                RVG = db2dg_rvg
                DiskGroup = db2dg
                )
 
        db2dgAgent requires db2dg_dg
        vvrip requires vvrnic
 
 
 
Configure VCS to integrate VVR objects – Secondary Site
 
In this particular environment there are four VCS service groups: a CCA group, a replication group, an application (DB) group, and a SNAP group. The CCA group is for remote cluster administration. The application group contains the RVGPrimary resource; the replication group contains the IP, RVG, and DiskGroup resources; the SNAP group contains the resources for the SNAP copy. This document does not discuss Veritas CCA.
 
In this particular environment, the VVR, application, and SNAP service groups must be online on the same node, because the diskgroup belongs to the VVR service group and the SNAP volumes are in the same diskgroup as the application/DB volumes.
 
SNAP Configuration is discussed later in this document.
 
VVR Service group must be started first before the Application Service Group. The dependencies are configured in the VCS configuration file.
 
VVR Service group maintains the synchronization between the primary and secondary sites.
 
VCS Service Groups:
 
db2inst_vvr VVR Service Group
Resources:
· db2dgAgent (RVG)
· vvrnic (NIC)
· vvrip (VIP for VVR communications)
· db2dg_dg (Diskgroup)
 
db2inst_grp Application Service Group
Resources:
· db2dg_rvg_primary (RVG Primary)
· db2inst_db (DB2 Database)
· db2inst_ce1 (NIC)
· db2inst_ip (VIP for the application)
· db2inst__mnt (Filesystems)
· adsm_db2inst (TSM Backup)
 
snap_db2inst_grp Snap Copy Service Group
Resources:
· snap_db2inst_db (DB2 Database)
· snap_db2inst_ce1 (NIC)
· snap_db2inst_ip (VIP for the application)
· snap_db2inst__mnt (Filesystems)
 
Secondary Site VCS Configuration File:
 
include "types.cf"
include "ClusterMonitorConfigType.cf"
include "Db2udbTypes.cf"
include "VVRTypes.cf"
 
cluster e3clus177 (
        UserNames = { admin = ajkCjeJgkFkkIskEjh }
        ClusterAddress = "10.10.231.191"
        Administrators = { admin }
        CredRenewFrequency = 0
        CounterInterval = 5
        )
 
system sunsrv01-dr (
        )
 
system sunsrv02-dr (
        )
 
group CCAvail (
        SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 }
        AutoStartList = { sunsrv01-dr, sunsrv02-dr }
        )
 
        ClusterMonitorConfig CCAvail_ClusterConfig (
                MSAddress = "10.11.198.53"
                ClusterId = 1183482234
                VCSLoggingLevel = TAG_A
                Logging = "/opt/VRTSccacm/conf/k2_logging.properties"
                ClusterMonitorVersion = "4.1.2272.1"
                )
 
        Process CCAvail_ClusterMonitor (
                PathName = "/opt/VRTSccacm/bin/ClusterMonitor"
                Arguments = "-config"
                )
 
        CCAvail_ClusterMonitor requires CCAvail_ClusterConfig
 
 
group db2inst_grp (
        SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 }
        Enabled @sunsrv01-dr = 0
        Enabled @sunsrv02-dr = 0
        AutoStart = 0
        )
 
        Application adsm_db2inst (
                Enabled = 0
                Critical = 0
                User = root
                StartProgram = "/bcp/db2inst/tsm/adsmcad.db start"
                StopProgram = "/bcp/db2inst/tsm/adsmcad.db stop"
                MonitorProcesses = {
                         "/usr/bin/dsmcad -optfile=/bcp/db2inst/tsm/dsm.opt.db2inst" }
                )
 
        Db2udb db2inst_db (
                Enabled = 0
                Critical = 0
                DB2InstOwner = db2inst
                DB2InstHome = "/db2/db2inst"
                MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl"
                )
 
        DiskGroup db2dgB_dg (
                Enabled = 0
                DiskGroup = db2dgB
                )
 
        IP db2inst_ip (
                Enabled = 0
                Device = ce1
                Address = "10.10.231.191"
                )
 
        Mount db2dgB_bkup_mnt (
                Enabled = 0
                MountPoint = "/backup/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dgB/bkp"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount db2inst_bcp_mnt (
                Enabled = 0
                MountPoint = "/bcp/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/bcp"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_db2_mnt (
                Enabled = 0
                MountPoint = "/db2/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/db2"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_db_mnt (
                Enabled = 0
                MountPoint = "/db/db2inst/PEMMP00P/NODE0000"
                BlockDevice = "/dev/vx/dsk/db2dg/db"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_dba_mnt (
                Enabled = 0
                MountPoint = "/dba/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/dba"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_lg1_mnt (
                Enabled = 0
                MountPoint = "/db/db2inst/log1"
                BlockDevice = "/dev/vx/dsk/db2dg/lg1"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        Mount db2inst_tp01_mnt (
                Enabled = 0
                MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000"
                BlockDevice = "/dev/vx/dsk/db2dg/tp01"
                FSType = vxfs
                MountOpt = rw
                FsckOpt = "-y"
                )
 
        NIC db2inst_ce1 (
                Enabled = 0
                Device = ce1
                NetworkType = ether
                )
 
        RVGPrimary db2dg_rvg_primary (
                Enabled = 0
                Critical = 0
                RvgResourceName = db2dgAgent
                )
 
        requires group db2inst_vvr online local firm
        adsm_db2inst requires db2inst_bcp_mnt
        db2dgB_bkup_mnt requires db2dgB_dg
        db2inst_bcp_mnt requires db2dg_rvg_primary
        db2inst_db requires db2inst_bcp_mnt
        db2inst_db requires db2inst_db2_mnt
        db2inst_db requires db2inst_db_mnt
        db2inst_db requires db2inst_dba_mnt
        db2inst_db requires db2inst_ip
        db2inst_db requires db2inst_lg1_mnt
        db2inst_db requires db2inst_tp01_mnt
        db2inst_db2_mnt requires db2dg_rvg_primary
        db2inst_db_mnt requires db2dg_rvg_primary
        db2inst_dba_mnt requires db2dg_rvg_primary
        db2inst_ip requires db2inst_ce1
        db2inst_lg1_mnt requires db2dg_rvg_primary
        db2inst_tp01_mnt requires db2dg_rvg_primary
 
 
group db2inst_vvr (
        SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 }
        )
 
        DiskGroup db2dg_dg (
                DiskGroup = db2dg
                )
 
        IP vvrip (
                Device = ce5
                Address = "10.12.94.191"
                NetMask = "255.255.254.0"
                )
 
        NIC vvrnic (
                Device = ce5
                NetworkType = ether
                )
 
        RVG db2dgAgent (
                Critical = 0
                RVG = db2dg_rvg
                DiskGroup = db2dg
                )
 
        db2dgAgent requires db2dg_dg
        vvrip requires vvrnic
 
 
group snap_db2inst_grp (
        SystemList = { sunsrv01-dr = 0, sunsrv02-dr = 1 }
        AutoStart = 0
        )
 
        Db2udb snap_db2inst_db (
                Critical = 0
                DB2InstOwner = db2inst
                DB2InstHome = "/db2/db2inst"
                MonScript = "/opt/VRTSvcs/bin/Db2udb/SqlTest.pl"
                )
 
        DiskGroup snap_db2dgB_dg (
                DiskGroup = db2dgB
                )
 
        IP snap_db2inst_ip (
                Device = ce1
                Address = "10.10.231.191"
                )
 
        Mount snap_db2dgB_bkup_mnt (
                MountPoint = "/backup/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dgB/bkp"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount snap_db2inst_bcp_mnt (
                MountPoint = "/bcp/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/bcp_snapvol"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount snap_db2inst_db2_mnt (
                MountPoint = "/db2/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/db2_snapvol"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount snap_db2inst_db_mnt (
                MountPoint = "/db/db2inst/PEMMP00P/NODE0000"
                BlockDevice = "/dev/vx/dsk/db2dg/db_snapvol"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount snap_db2inst_dba_mnt (
                MountPoint = "/dba/db2inst"
                BlockDevice = "/dev/vx/dsk/db2dg/dba_snapvol"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount snap_db2inst_lg1_mnt (
                MountPoint = "/db/db2inst/log1"
                BlockDevice = "/dev/vx/dsk/db2dg/lg1_snapvol"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        Mount snap_db2inst_tp01_mnt (
                MountPoint = "/db/db2inst/PEMMP00P/tempspace01/NODE0000"
                BlockDevice = "/dev/vx/dsk/db2dg/tp01_snapvol"
                FSType = vxfs
                FsckOpt = "-y"
                )
 
        NIC snap_db2inst_ce1 (
                Device = ce1
                NetworkType = ether
                )
 
        requires group db2inst_vvr online local firm
        snap_db2dgB_bkup_mnt requires snap_db2dgB_dg
        snap_db2inst_db requires snap_db2inst_bcp_mnt
        snap_db2inst_db requires snap_db2inst_db2_mnt
        snap_db2inst_db requires snap_db2inst_db_mnt
        snap_db2inst_db requires snap_db2inst_dba_mnt
        snap_db2inst_db requires snap_db2inst_lg1_mnt
        snap_db2inst_db requires snap_db2inst_tp01_mnt
        snap_db2inst_ip requires snap_db2inst_ce1
 
 
 
Bring up the VCS engine and the VVR service group at the Secondary Site
 
Start VCS on both nodes
 
sunsrv01-dr:# hastart
sunsrv02-dr:# hastart
 
 
Bring up the VVR Service Group on one node
 
sunsrv01-dr:# hagrp -online db2inst_vvr -sys sunsrv01-dr
 
 
 
Bring up the VCS engine and the VVR service group at the Primary Site
 
Start VCS on both nodes
 
sunsrv01:# hastart
sunsrv02:# hastart
 
 
Bring up the VVR Service Group on one node
 
sunsrv01:# hagrp -online db2inst_vvr -sys sunsrv01
 
 
Bring up the Application Service Group
 
sunsrv01:# hagrp -online db2inst_grp -sys sunsrv01
 
 
 
Check the Rlink and RVG status
 
Make sure that the flags show attached and connected. If the links are detached or disconnected, verify that communications between the primary and secondary sites are working: you should be able to ping, ssh, or telnet between the sites through the VIPs. If communications are good, restart VVR (see the next step for restarting the VVR engine).
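The flag check can be scripted so an operator does not have to eyeball the vxprint output. A minimal sketch, assuming you capture the flags line from vxprint -Pl; the sample line below is copied from the output that follows and the helper name is hypothetical:

```shell
# Hypothetical helper: classify an Rlink flags line from `vxprint -Pl`.
check_rlink_flags() {
  case "$1" in
    *attached*connected*) echo "OK" ;;            # both flags present
    *attached*)           echo "NOT-CONNECTED" ;; # attached but link down
    *)                    echo "NOT-ATTACHED" ;;  # rlink detached
  esac
}

# Sample flags line, as captured from the primary below.
flags='write enabled attached consistent cant_sync connected asynchronous autosync'
check_rlink_flags "$flags"
```

In practice you would feed it the real line, e.g. `check_rlink_flags "$(vxprint -Pl | grep '^flags:')"`.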
 
sunsrv01:# vxprint -Pl
Disk group: db2dg
 
Rlink:    rlk_db2instdr-vipvr_db2dg_rvg
info:     timeout=500 packet_size=8400 rid=0.2007
          latency_high_mark=10000 latency_low_mark=9950
          bandwidth_limit=none
state:    state=ACTIVE
          synchronous=off latencyprot=off srlprot=autodcm
assoc:    rvg=db2dg_rvg
          remote_host=db2instdr-vipvr IP_addr=10.12.94.191 port=4145
          remote_dg=db2dg
          remote_dg_dgid=1182230506.2373.sunsrv01-dr
          remote_rvg_version=21
          remote_rlink=rlk_db2inste3-vipvr_db2dg_rvg
          remote_rlink_rid=0.2127
          local_host=db2inste3-vipvr IP_addr=10.11.196.191 port=4145
protocol: UDP/IP
flags:    write enabled attached consistent cant_sync connected asynchronous autosync
 
sunsrv01:# vradmin -l printrvg
Replicated Data Set: db2dg_rvg
Primary:
        HostName: db2inste3-vipvr        
        RvgName: db2dg_rvg
        DgName: db2dg
        datavol_cnt: 6
        srl: db2dg_srl
        RLinks:
            name=rlk_db2instdr-vipvr_db2dg_rvg, detached=off, synchronous=off
Secondary:
        HostName: db2instdr-vipvr
        RvgName: db2dg_rvg
        DgName: db2dg
        datavol_cnt: 6
        srl: db2dg_srl
        RLinks:
            name=rlk_db2inste3-vipvr_db2dg_rvg, detached=off, synchronous=off
 
 
 
Restart VVR engine (if the link is detached or disconnected)
 
This is only required if you see the links between primary and secondary as detached or disconnected. If you think that the communication between the two is working fine, then run the following commands in the primary site:
 
sunsrv01:# vxstart_vvr stop
sunsrv01:# vxstart_vvr start
 
 
Then, check the status again.
 
sunsrv01:# vxprint -Pl
 
 
 
Start the replication
 
Initiate the command from the primary site.
 
sunsrv01:# vradmin -g db2dg -a startrep db2dg_rvg db2instdr-vipvr
Message from Primary:
VxVM VVR vxrlink WARNING V-5-1-3359 Attaching rlink to non-empty rvg.
                                    Autosync will be performed.
 
VxVM VVR vxrlink INFO V-5-1-3614 Secondary data volumes detected with rvg db2dg_rvg as parent:
 
VxVM VVR vxrlink INFO V-5-1-6183 bcp:    len=585105408     primary_datavol=bcp
VxVM VVR vxrlink INFO V-5-1-6183 db:     len=1172343808    primary_datavol=db
VxVM VVR vxrlink INFO V-5-1-6183 db2:    len=8388608       primary_datavol=db2
VxVM VVR vxrlink INFO V-5-1-6183 dba:    len=98566144      primary_datavol=dba
VxVM VVR vxrlink INFO V-5-1-6183 lg1:    len=396361728     primary_datavol=lg1
VxVM VVR vxrlink INFO V-5-1-6183 tp01:   len=192937984     primary_datavol=tp01
 
VxVM VVR vxrlink INFO V-5-1-3365 Autosync operation has started
 
 
Usage:
 
vradmin -g <diskgroup> -a startrep <RVGname> <secondaryhost>
 
 
Where:
 
<diskgroup>    is the VxVM diskgroup name
<RVGname>    is the name for the RVG, usually <diskgroupname>_rvg
<secondaryhost>    is the hostname of the Secondary Site. Check the /etc/hosts file.
 
 
 
Check the Replication status
 
Initiate the command from the primary site. You can tell the sync is progressing by watching the Kbytes remaining: if the number decreases between intervals, replication is working.
 
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg
Fri Jun 22 00:01:37 MST 2007
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226649984 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226644224 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226638464 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226632128 Kbytes remaining.
VxVM VVR vxrlink INFO V-5-1-4464 Rlink rlk_db2instdr-vipvr_db2dg_rvg is in AUTOSYNC. 1226626368 Kbytes remaining.
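Two successive readings also give a rough throughput and time-to-completion estimate. A sketch using the first two samples above; the 2-second interval matches the -i 2 flag, and all values are taken from the captured output:

```shell
# Two successive "Kbytes remaining" readings, 2 seconds apart (from -i 2).
r1=1226649984
r2=1226644224
interval=2

rate=$(( (r1 - r2) / interval ))   # KB/s drained by autosync
eta_sec=$(( r2 / rate ))           # seconds until the Rlink is up to date
echo "rate=${rate} KB/s, eta=$(( eta_sec / 3600 )) hours"
```

At this sample rate the initial autosync of ~1.2 TB takes several days, which is why the runbook waits for "up to date" before mounting anything.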
 
 
 
Another way to check is to use the vradmin repstatus command. The key bits of information here are Data status and Replication status.
 
sunsrv01:# vradmin -g db2dg repstatus db2dg_rvg
Replicated Data Set: db2dg_rvg
Primary:
  Host name:                  db2inste3-vipvr
  RVG name:                   db2dg_rvg
  DG name:                    db2dg
  RVG state:                  enabled for I/O
  Data volumes:               6
  SRL name:                   db2dg_srl
  SRL size:                   952.84 G
  Total secondaries:          1
 
Secondary:
  Host name:                  db2instdr-vipvr
  RVG name:                   db2dg_rvg
  DG name:                    db2dg
  Data status:                consistent, up-to-date
  Replication status:         replicating (connected)
  Current mode:               asynchronous
  Logging to:                 SRL
  Timestamp Information:      N/A
 
 
 
Mount the Replicated Volumes
 
When replication is up to date, you can verify the volumes on the secondary site by mounting the filesystems. First, confirm the replication status.
 
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg
Thu Jun 28 08:54:42 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_db2instdr-vipvr_db2dg_rvg is up to date
 
 
 
Use VCS to mount the replicated Volumes
 
sunsrv01-dr:# hagrp -online db2inst_grp -sys sunsrv01-dr
 
 
Display the filesystems and compare them with the primary.
 
sunsrv01-dr:# df -k | grep db2dg
 
/dev/vx/dsk/db2dg/db2     4194304   78059   3859030 2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp   292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba    49283072   28555  46176114 1% /dba/db2inst
/dev/vx/dsk/db2dg/db    586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01   96468992   40176  90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1   198180864 1779853 184126012 1% /db/db2inst/log1
 
sunsrv01:# df -k | grep db2dg
 
/dev/vx/dsk/db2dg/db2     4194304   78059   3859030 2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp   292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba    49283072   28555  46176114 1% /dba/db2inst
/dev/vx/dsk/db2dg/db    586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01   96468992   40176  90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1   198180864 1779853 184126012 1% /db/db2inst/log1
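Comparing the two listings by hand is error-prone. A sketch that normalizes each df line down to the size and mount point before comparing (one sample line shown; the helper name norm is hypothetical, and usage counters are deliberately ignored since they can drift slightly):

```shell
# Keep only the total size and the mount point from a `df -k` line;
# used/avail/capacity may differ slightly between sites and are ignored.
norm() { echo "$1" | awk '{ print $2, $NF }'; }

# Sample lines, one from each site (identical here, as in the output above).
primary_df='/dev/vx/dsk/db2dg/db2     4194304   78059   3859030 2% /db2/db2inst'
dr_df='/dev/vx/dsk/db2dg/db2     4194304   78059   3859030 2% /db2/db2inst'

if [ "$(norm "$primary_df")" = "$(norm "$dr_df")" ]; then
  echo "MATCH"
else
  echo "MISMATCH"
fi
```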
 
 
This is a good time to test switching the service group with VCS.
 
sunsrv01-dr:# hagrp -switch db2inst_grp -to sunsrv02-dr
 
 
 
 
Prepare the Replicated Volumes for Snapshot in the secondary site
 
First, display the volume information
 
sunsrv02-dr:# vxprint -ht bcp
Disk group: db2dg
 
v  bcp           db2dg_rvg   ENABLED  ACTIVE 585105408 SELECT    -        fsgen
pl bcp-01        bcp         ENABLED  ACTIVE 585108480 CONCAT    -        RW
sd db2dg214-01   bcp-03      db2dg214 5376   285473280 0         EMC0_219 ENA
sd db2dg215-01   bcp-03      db2dg215 5376   285473280 285473280 EMC0_220 ENA
sd db2dg219-01   bcp-03      db2dg219 5376   14161920  570946560 EMC0_224 ENA
pl bcp-02        bcp         ENABLED  ACTIVE LOGONLY   CONCAT    -        RW
sd db2dg95-06    bcp-02      db2dg95  0      512       LOG       EMC0_154 ENA
 
 
Prepare the volume for snapshots by adding a DCO log. The same disk is used for the DCO and DCM logs.
 
sunsrv02-dr:# vxsnap -g db2dg prepare bcp ndcomirs=1 alloc=db2dg95
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
 
 
Display the volume information to see the changes. Note that the DCO volume has been added.
 
sunsrv02-dr:# vxprint -ht bcp
Disk group: db2dg
 
v  bcp           db2dg_rvg   ENABLED  ACTIVE 585105408 SELECT    -        fsgen
pl bcp-01        bcp         ENABLED  ACTIVE 585108480 CONCAT    -        RW
sd db2dg214-01   bcp-03      db2dg214 5376   285473280 0         EMC0_219 ENA
sd db2dg215-01   bcp-03      db2dg215 5376   285473280 285473280 EMC0_220 ENA
sd db2dg219-01   bcp-03      db2dg219 5376   14161920  570946560 EMC0_224 ENA
pl bcp-02        bcp         ENABLED  ACTIVE LOGONLY   CONCAT    -        RW
sd db2dg95-06    bcp-02      db2dg95  0      512       LOG       EMC0_154 ENA
dc bcp_dco       bcp         bcp_dcl      
v  bcp_dcl       -           ENABLED  ACTIVE 40368     SELECT    -        gen
pl bcp_dcl-01    bcp_dcl     ENABLED  ACTIVE 40368     CONCAT    -        RW
sd db2dg95-07    bcp_dcl-01  db2dg95  960    40368     0         EMC0_154 ENA
 
 
Run the same steps on the rest of the volumes
 
sunsrv02-dr:# vxsnap -g db2dg prepare db ndcomirs=1 alloc=db2dg95
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
 
sunsrv02-dr:# vxsnap -g db2dg prepare dba ndcomirs=1 alloc=db2dg95
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
 
sunsrv02-dr:# vxsnap -g db2dg prepare db2 ndcomirs=1 alloc=db2dg95
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
 
sunsrv02-dr:# vxsnap -g db2dg prepare lg1 ndcomirs=1 alloc=db2dg95
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
 
sunsrv02-dr:# vxsnap -g db2dg prepare tp01 ndcomirs=1 alloc=db2dg95
VxVM vxsnap INFO V-5-1-9270 Volume is under RVG, setting drl=no.
 
 
 
Create the SNAP Volumes in the secondary site
 
sunsrv02-dr:# vxassist -g db2dg make bcp_snapvol  585105408 layout=concat
sunsrv02-dr:# vxassist -g db2dg make db_snapvol  1172343808 layout=concat
sunsrv02-dr:# vxassist -g db2dg make dba_snapvol   98566144 layout=concat
sunsrv02-dr:# vxassist -g db2dg make db2_snapvol    8388608 layout=concat
sunsrv02-dr:# vxassist -g db2dg make lg1_snapvol  396361728 layout=concat
sunsrv02-dr:# vxassist -g db2dg make tp01_snapvol 192937984 layout=concat
 
 
 
Prepare the SNAP Volumes
 
Identify the region size of the main volumes. The region size must be the same for both the main volume and the snap volume.
 
sunsrv02-dr:# for DCONAME in `vxprint -tg db2dg | grep "^dc" | awk '{ print $2 }'`
> do
>   echo "$DCONAME\t\c"
>   vxprint -g db2dg -F%regionsz $DCONAME
> done
bcp_dco   128
db_dco    128
dba_dco   128
db2_dco   128
lg1_dco   128
tp01_dco  128
 
 
 
Display the volume information
 
sunsrv02-dr:# vxprint -ht bcp_snapvol
Disk group: db2dg
 
v  bcp_snapvol  fsgen        ENABLED  585105408 -       ACTIVE   -       -
pl bcp_snapvol-01 bcp_snapvol ENABLED 585105600 -       ACTIVE   -       -
sd db2dg114-01 bcp_snapvol-01 ENABLED 35726400 0      -        -       -
sd db2dg106-01 bcp_snapvol-01 ENABLED 35681280 35726400 -      -       -
sd db2dg101-01 bcp_snapvol-01 ENABLED 35681280 71407680 -      -       -
sd db2dg102-01 bcp_snapvol-01 ENABLED 35681280 107088960 -     -       -
sd db2dg103-01 bcp_snapvol-01 ENABLED 35681280 142770240 -     -       -
sd db2dg104-01 bcp_snapvol-01 ENABLED 35681280 178451520 -     -       -
sd db2dg105-01 bcp_snapvol-01 ENABLED 35681280 214132800 -     -       -
sd db2dg107-01 bcp_snapvol-01 ENABLED 35681280 249814080 -     -       -
sd db2dg108-01 bcp_snapvol-01 ENABLED 35681280 285495360 -     -       -
sd db2dg109-01 bcp_snapvol-01 ENABLED 13844160 321176640 -     -       -
sd db2dg110-01 bcp_snapvol-01 ENABLED 35726400 335020800 -     -       -
sd db2dg111-01 bcp_snapvol-01 ENABLED 35726400 370747200 -     -       -
sd db2dg112-01 bcp_snapvol-01 ENABLED 35726400 406473600 -     -       -
sd db2dg113-01 bcp_snapvol-01 ENABLED 35726400 442200000 -     -       -
sd db2dg115-01 bcp_snapvol-01 ENABLED 35726400 477926400 -     -       -
sd db2dg116-01 bcp_snapvol-01 ENABLED 35726400 513652800 -     -       -
sd db2dg117-01 bcp_snapvol-01 ENABLED 35726400 549379200 -     -       -
 
 
Prepare the snapshot volume by adding a DCO log with the same region size as the main volume. The same disk is used for the DCO and DCM logs.
 
sunsrv02-dr:# vxsnap -g db2dg prepare bcp_snapvol ndcomirs=1 regionsize=128 \
              alloc=db2dg95
 
 
Display the volume information to see the changes. Note that the DCO volume has been added.
 
sunsrv02-dr:# vxprint -ht bcp_snapvol
Disk group: db2dg
 
v  bcp_snapvol  fsgen        ENABLED  585105408 -       ACTIVE   -       -
pl bcp_snapvol-01 bcp_snapvol ENABLED 585105600 -       ACTIVE   -       -
sd db2dg114-01 bcp_snapvol-01 ENABLED 35726400 0      -        -       -
sd db2dg106-01 bcp_snapvol-01 ENABLED 35681280 35726400 -      -       -
sd db2dg101-01 bcp_snapvol-01 ENABLED 35681280 71407680 -      -       -
sd db2dg102-01 bcp_snapvol-01 ENABLED 35681280 107088960 -     -       -
sd db2dg103-01 bcp_snapvol-01 ENABLED 35681280 142770240 -     -       -
sd db2dg104-01 bcp_snapvol-01 ENABLED 35681280 178451520 -     -       -
sd db2dg105-01 bcp_snapvol-01 ENABLED 35681280 214132800 -     -       -
sd db2dg107-01 bcp_snapvol-01 ENABLED 35681280 249814080 -     -       -
sd db2dg108-01 bcp_snapvol-01 ENABLED 35681280 285495360 -     -       -
sd db2dg109-01 bcp_snapvol-01 ENABLED 13844160 321176640 -     -       -
sd db2dg110-01 bcp_snapvol-01 ENABLED 35726400 335020800 -     -       -
sd db2dg111-01 bcp_snapvol-01 ENABLED 35726400 370747200 -     -       -
sd db2dg112-01 bcp_snapvol-01 ENABLED 35726400 406473600 -     -       -
sd db2dg113-01 bcp_snapvol-01 ENABLED 35726400 442200000 -     -       -
sd db2dg115-01 bcp_snapvol-01 ENABLED 35726400 477926400 -     -       -
sd db2dg116-01 bcp_snapvol-01 ENABLED 35726400 513652800 -     -       -
sd db2dg117-01 bcp_snapvol-01 ENABLED 35726400 549379200 -     -       -
dc bcp_snapvol_dco bcp_snapvol -      -        -        -        -       -
v  bcp_snapvol_dcl gen       ENABLED  40368    -        ACTIVE   -       -
pl bcp_snapvol_dcl-01 bcp_snapvol_dcl ENABLED 40368 -   ACTIVE   -       -
sd db2dg95-01 bcp_snapvol_dcl-01 ENABLED 40368 0     -        -       -
 
 
 
Run the same steps on the rest of the volumes
 
sunsrv02-dr:# vxsnap -g db2dg prepare db_snapvol ndcomirs=1 regionsize=128 \
              alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare dba_snapvol ndcomirs=1 regionsize=128 \
              alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare db2_snapvol ndcomirs=1 regionsize=128 \
              alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare lg1_snapvol ndcomirs=1 regionsize=128 \
              alloc=db2dg95
sunsrv02-dr:# vxsnap -g db2dg prepare tp01_snapvol ndcomirs=1 regionsize=128 \
              alloc=db2dg95
 
 
Verify the region sizes are the same
 
sunsrv02-dr:# for DCONAME in `vxprint -tg db2dg | grep "^dc" | awk '{ print $2 }'`
> do
>   echo "$DCONAME\t\c"
>   vxprint -g db2dg -F%regionsz $DCONAME
> done
bcp_dco           128
bcp_snapvol_dco   128
db_dco            128
db_snapvol_dco    128
dba_dco           128
dba_snapvol_dco   128
db2_dco           128
db2_snapvol_dco   128
lg1_dco           128
lg1_snapvol_dco   128
tp01_dco          128
tp01_snapvol_dco  128
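The main-versus-snap comparison can be scripted rather than read by eye. A sketch with the sizes hardcoded as sample data; in practice you would populate the list from the vxprint loop above:

```shell
# volume  main_dco_regionsz  snapvol_dco_regionsz (sample data, all matching)
pairs='bcp 128 128
db 128 128
dba 128 128
db2 128 128
lg1 128 128
tp01 128 128'

# Count rows where the main DCO region size differs from the snap DCO's.
mismatches=$(echo "$pairs" | awk '$2 != $3 { n++ } END { print n+0 }')

if [ "$mismatches" -eq 0 ]; then
  echo "all region sizes match"
else
  echo "$mismatches region size mismatch(es) - recreate those snap DCOs"
fi
```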
 
 
 
 
Run a point-in-time snapshot
 
Verify the RLINK and RVG are active and up to date. Run this command from the primary site.
 
sunsrv01:# vxrlink -g db2dg -i 2 status rlk_db2instdr-vipvr_db2dg_rvg
Thu Jun 28 08:54:42 MST 2007
VxVM VVR vxrlink INFO V-5-1-4467 Rlink rlk_db2instdr-vipvr_db2dg_rvg is up to date
 
sunsrv02-dr:# vxprint -Pl
Disk group: db2dg
 
Rlink:    rlk_db2inste3-vipvr_db2dg_rvg
info:     timeout=500 packet_size=8400 rid=0.2127
          latency_high_mark=10000 latency_low_mark=9950
          bandwidth_limit=none
state:    state=ACTIVE
          synchronous=off latencyprot=off srlprot=autodcm
assoc:    rvg=db2dg_rvg
          remote_host=db2inste3-vipvr IP_addr=10.11.196.191 port=4145
          remote_dg=db2dg
          remote_dg_dgid=1138140445.1393.sunsrv01
          remote_rvg_version=21
          remote_rlink=rlk_db2instdr-vipvr_db2dg_rvg
          remote_rlink_rid=0.2007
          local_host=db2instdr-vipvr IP_addr=10.12.94.191 port=4145
protocol: UDP/IP
flags:    write enabled attached consistent connected
 
 
Start the Snap Copy
 
sunsrv02-dr:# vxsnap -g db2dg make source=bcp/snapvol=bcp_snapvol \
              source=db/snapvol=db_snapvol source=dba/snapvol=dba_snapvol \
              source=db2/snapvol=db2_snapvol source=lg1/snapvol=lg1_snapvol \
              source=tp01/snapvol=tp01_snapvol
 
 
Display the sync status
 
sunsrv02-dr:# vxtask list
TASKID  PTID TYPE/STATE    PCT   PROGRESS
   168         SNAPSYNC/R 00.07% 0/585105408/430080  SNAPSYNC bcp_snapvol  db2dg
   170         SNAPSYNC/R 00.05% 0/1172343808/567296 SNAPSYNC db_snapvol   db2dg
   172         SNAPSYNC/R 00.46% 0/98566144/452608   SNAPSYNC dba_snapvol  db2dg
   174         SNAPSYNC/R 04.30% 0/8388608/360448    SNAPSYNC db2_snapvol  db2dg
   176         SNAPSYNC/R 00.16% 0/396361728/641024  SNAPSYNC lg1_snapvol  db2dg
   178         SNAPSYNC/R 00.18% 0/192937984/346112  SNAPSYNC tp01_snapvol db2dg
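The PCT column is simply done/total taken from the PROGRESS triple (offset/total/done, in blocks). A small sketch using the bcp_snapvol row above, with the sample value hardcoded:

```shell
# PROGRESS field from `vxtask list`: offset/total/done, in blocks.
progress='0/585105408/430080'

total=$(echo "$progress" | cut -d/ -f2)
done_blocks=$(echo "$progress" | cut -d/ -f3)

# Integer math in hundredths of a percent, then format as NN.NN%.
pct_hundredths=$(( done_blocks * 10000 / total ))
printf '%d.%02d%%\n' $(( pct_hundredths / 100 )) $(( pct_hundredths % 100 ))
```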
 
sunsrv02-dr:# vxsnap -g db2dg print
 
NAME          SNAPOBJECT        TYPE     PARENT   SNAPSHOT    %DIRTY    %VALID
 
bcp           --                volume   --       --           --       100.00
              bcp_snapvol_snp   volume   --       bcp_snapvol  0.00     --
 
db            --                volume   --       --           --       100.00
              db_snapvol_snp    volume   --       db_snapvol   0.00     --
 
dba           --                volume   --       --           --       100.00
              dba_snapvol_snp   volume   --       dba_snapvol  0.00     --
 
db2           --                volume   --       --           --       100.00
              db2_snapvol_snp   volume   --       db2_snapvol  0.00     --
 
lg1           --                volume   --       --           --       100.00
              lg1_snapvol_snp   volume   --       lg1_snapvol  0.00     --
 
tp01          --                volume   --       --           --       100.00
              tp01_snapvol_snp  volume   --       tp01_snapvol 0.00     --
 
 
bcp_snapvol   bcp_snp           volume   bcp      --           0.00     0.11
 
db_snapvol    db_snp            volume   db       --           0.00     0.02
 
dba_snapvol   dba_snp           volume   dba      --           0.00     0.77
 
db2_snapvol   db2_snp           volume   db2      --           0.00     9.13
 
lg1_snapvol   lg1_snp           volume   lg1      --           0.00     0.09
 
tp01_snapvol  tp01_snp          volume   tp01     --           0.00     0.10
 
 
 
Mount the SNAP Volumes
 
When the sync is complete, bring up the snap service group in VCS to verify the changes. But first, unmount the filesystems on the replicated volumes. On this particular server, the replicated volumes and the snap volumes use the same mountpoints.
 
sunsrv02-dr:# umount `mount | grep '/dev/vx/dsk/db2dg/' | awk '{ print $1 }'`
 
sunsrv02-dr:# vxtask list
TASKID  PTID TYPE/STATE    PCT   PROGRESS
 
 
sunsrv02-dr:# hastatus -sum
 
-- SYSTEM STATE
-- System               State                Frozen
 
A  sunsrv01-dr          RUNNING              0
A  sunsrv02-dr          RUNNING              0
 
-- GROUP STATE
-- Group            System               Probed     AutoDisabled    State
 
B  CCAvail          sunsrv01-dr          Y          N               ONLINE
B  CCAvail          sunsrv02-dr          Y          N               OFFLINE
B  db2inst_grp      sunsrv01-dr          Y          N               OFFLINE
B  db2inst_grp      sunsrv02-dr          Y          N               OFFLINE
B  db2inst_vvr      sunsrv01-dr          Y          N               OFFLINE
B  db2inst_vvr      sunsrv02-dr          Y          N               ONLINE
B  snap_db2inst_grp sunsrv01-dr          Y          N               OFFLINE
B  snap_db2inst_grp sunsrv02-dr          Y          N               OFFLINE
 
 
sunsrv02-dr:# hagrp -online snap_db2inst_grp -sys sunsrv02-dr
 
 
sunsrv02-dr:# hastatus -sum
 
-- SYSTEM STATE
-- System               State                Frozen
 
A  sunsrv01-dr          RUNNING              0
A  sunsrv02-dr          RUNNING              0
 
-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State
 
B  CCAvail          sunsrv01-dr          Y          N               ONLINE
B  CCAvail          sunsrv02-dr          Y          N               OFFLINE
B  db2inst_grp      sunsrv01-dr          Y          N               OFFLINE
B  db2inst_grp      sunsrv02-dr          Y          N               OFFLINE
B  db2inst_vvr      sunsrv01-dr          Y          N               OFFLINE
B  db2inst_vvr      sunsrv02-dr          Y          N               ONLINE
B  snap_db2inst_grp sunsrv01-dr          Y          N               OFFLINE
B  snap_db2inst_grp sunsrv02-dr          Y          N               ONLINE
 
 
 
Verify the filesystems and compare the sizes with the primary site.
 
sunsrv02-dr:# df -k | grep snapvol
 
/dev/vx/dsk/db2dg/db2_snapvol    4194304   78059 3859030   2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp_snapvol  292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba_snapvol   49283072   28555 46176114  1% /dba/db2inst
/dev/vx/dsk/db2dg/db_snapvol   586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01_snapvol  96468992   40176  90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1_snapvol  198180864 1779853 184126012 1% /db/db2inst/log1
 
 
 
sunsrv01:# df -k | grep db2dg
 
/dev/vx/dsk/db2dg/db2            4194304   78059 3859030   2% /db2/db2inst
/dev/vx/dsk/db2dg/bcp          292552704 1527508 272836173 1% /bcp/db2inst
/dev/vx/dsk/db2dg/dba           49283072   28555 46176114  1% /dba/db2inst
/dev/vx/dsk/db2dg/db           586171904 1733538 547911037 1% /db/db2inst/PEMMP00P/NODE0000
/dev/vx/dsk/db2dg/tp01          96468992   40176  90402086 1% /db/db2inst/PEMMP00P/tempspace01/NODE0000
/dev/vx/dsk/db2dg/lg1          198180864 1779853 184126012 1% /db/db2inst/log1
 
###############################################################################################
1. On the Volume Replicator Primary, create a vset called v1 consisting of component volumes v1a and v1b:
 
root@primary# vxassist -g vvrdg make v1a 1g
root@primary# vxassist -g vvrdg make v1b 2g
root@primary# vxvset -g vvrdg make v1 v1a
root@primary# vxvset -g vvrdg addvol v1 v1b
 
2. On the Volume Replicator Primary, create a multi-device VERITAS file system on the vset and mount it:
 
root@primary# mkfs -F vxfs /dev/vx/rdsk/vvrdg/v1
root@primary# mount -F vxfs /dev/vx/dsk/vvrdg/v1 /v1
 
3. On the Volume Replicator Secondary, create a matching vset called v1 consisting of component volumes v1a and v1b:
 
root@secondary# vxassist -g vvrdg make v1a 1g
root@secondary# vxassist -g vvrdg make v1b 2g
root@secondary# vxvset -g vvrdg make v1 v1a
root@secondary# vxvset -g vvrdg addvol v1 v1b
 
4. On both the Volume Replicator Primary and Secondary, verify the indices of the vset match:
 
root@primary# vxvset -g vvrdg list v1
VOLUME           INDEX        LENGTH     KSTATE  CONTEXT
v1a                  0       2097152    ENABLED  -
v1b                  1       4194304    ENABLED  -
root@secondary# vxvset -g vvrdg list v1
VOLUME           INDEX        LENGTH     KSTATE  CONTEXT
v1a                  0       2097152    ENABLED  -
v1b                  1       4194304    ENABLED  -
 
If the indices do not match, the vset must be recreated with matching indices. For details on creating or recreating a vset, refer to the VERITAS Volume Manager 4.0 Administrator's Guide.
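The index comparison can be automated by capturing the VOLUME and INDEX columns from both sites. A sketch with hypothetical captures; in practice each string would come from something like `vxvset -g vvrdg list v1 | awk 'NR>1 { print $1, $2 }'` run on the respective host:

```shell
# Hypothetical VOLUME/INDEX captures from the primary and secondary sites.
primary_idx='v1a 0
v1b 1'
secondary_idx='v1a 0
v1b 1'

if [ "$primary_idx" = "$secondary_idx" ]; then
  echo "indices match"
else
  echo "indices differ - recreate the vset with matching indices"
fi
```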
 
5. On both the Volume Replicator Primary and Secondary, create a volume to use for the storage replicator log (SRL) volume:
 
root@primary# vxassist -g vvrdg make v1_srl 3g
root@secondary# vxassist -g vvrdg make v1_srl 3g
 
6. Create the Primary RVG specifying the component volumes of the v1 vset as the data volumes:
 
root@primary# vradmin -g vvrdg createpri v1_rvg v1a,v1b v1_srl
 
7. Create the Secondary RVG, again specifying the component volumes of the v1 vset as the data volumes:
 
root@primary# vradmin -g vvrdg addsec v1_rvg primary secondary
 
8. Start replication between the Volume Replicator Primary and Secondary:
 
root@primary# vradmin -g vvrdg -a startrep v1_rvg secondary
 
Note: The vradmin syncvol command cannot be used to synchronize volumes that are components of a volume set. See TechNote 274870 in the Related Documents section below for further information.
 
Note: The vradmin syncrvg command cannot be used to synchronize a replicated data set (RDS) that has a volume set associated with it. See TechNote 274871 in the Related Documents section below for further information.
 
 
 
############################################################################################
Remove and recreate the DCM logs:
 
# vxassist -g vrdg remove log data nlog=0
# vxassist -g vrdg addlog data logtype=dcm
 
and then synchronize the data volumes:
 
# vradmin -g vrdg -c ckpt1 syncrvg rvg1 clerks
Message from Primary:
vxrsync: INFO: Starting differences volume synchronization to remote
...
vxrsync: INFO: VxRSync operation completed.
vxrsync: INFO: Total elapsed time: 1:00:15
 
# vradmin -g vrdg -c ckpt1 startrep rvg1
Message from Primary:
vxvm:vxrlink: INFO: Secondary data volumes detected with rvg rvg1 as parent:
vxvm:vxrlink: INFO: data:         len=409600               primary_datavol=data
 
 
 
##########################################################################################
 
On Primary:
 
Create volumes (the disk group and volumes must be created on both the primary and the secondary):
        vxassist -g DG1_acb-ds1-p make ora01 5G
        vxassist -g DG1_acb-ds1-p make u01 1G
 
Create filesystems if needed:
        mkfs -F vxfs /dev/vx/rdsk/DG1_acb-ds1-p/ora01  
        mkfs -F vxfs /dev/vx/rdsk/DG1_acb-ds1-p/u01  
 
Need to create a dcm log for each volume to be replicated:
        vxassist -g DG1_acb-ds1-p addlog ora01 logtype=dcm nlog=1
        vxassist -g DG1_acb-ds1-p addlog u01 logtype=dcm nlog=1
 
Create the SRL (Storage Replicator Log) in the DiskGroup:
        vxassist -g DG1_acb-ds1-p make DG1_acb-ds1-srl 1g
        ** Size depends on how much data is being replicated **
        ** SRL MUST BE ON DIFFERENT DISKS THAN DATA **
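The SRL size note above is usually worked out as peak write rate times the longest outage you want to ride through, plus headroom. A back-of-the-envelope sketch, with all figures hypothetical:

```shell
# Hypothetical sizing inputs: measure the real peak write rate on your
# application (e.g. with vxstat) before sizing the SRL.
peak_write_mb_per_sec=5
outage_sec=$(( 4 * 3600 ))   # ride out a 4-hour link or secondary outage
headroom_pct=20              # safety margin on top

srl_mb=$(( peak_write_mb_per_sec * outage_sec ))
srl_mb=$(( srl_mb + srl_mb * headroom_pct / 100 ))
echo "SRL >= ${srl_mb} MB (~$(( srl_mb / 1024 )) GB)"
```

If the SRL overflows, srlprot (autodcm in the configs above) falls back to DCM tracking and a full resync of changed regions is needed, so err on the large side.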
 
Create the Primary RVG:
 
(Map ora01 and u01 to RVG and to srl)
        vradmin -g DG1_acb-ds1-p createpri DG1_acb-ds1-rvg \
                   ora01,u01 DG1_acb-ds1-srl
 
Add the Secondary to RVG:
        vradmin -g DG1_acb-ds1-p addsec DG1_acb-ds1-rvg acb-ds1-p acb-ds1-b
 
Start Replication:
        vradmin -g DG1_acb-ds1-p -f startrep DG1_acb-ds1-rvg acb-ds1-b
 
Print RVG info:
        vradmin -g DG1_acb-ds1-p printrvg
 
Adding a volume to an existing RVG:
        vradmin -g DG1_acb-ds1-p addvol DG1_acb-ds1-rvg ora02
 
Remove a volume from Replication:
        vradmin -g DG1_acb-ds1-p delvol DG1_acb-ds1-rvg ora02
 
To Stop Replication to a secondary:
        vradmin -g DG1_acb-ds1-p stoprep DG1_acb-ds1-rvg acb-ds1-b
 
To Pause Replication to a secondary:
        vradmin -g DG1_acb-ds1-p pauserep DG1_acb-ds1-rvg acb-ds1-b
Resume: vradmin -g DG1_acb-ds1-p resumerep DG1_acb-ds1-rvg acb-ds1-b
 
To Check Replication Status:
    vxrlink -g DG1_acb-ds1-p -i 5 status rlink_name
 
 
Change synchronous mode:
        vxedit -g DG1_acb-ds1-p set synchronous=off rlink_name # Get name from vradmin -g DG1_acb-ds1-p printrvg
 
 
To migrate the Primary role to the secondary (planned switchover):
        vradmin -g DG1_acb-ds1-p migrate DG1_acb-ds1-rvg acb-ds1-b
 
Start replication back to the new secondary:
        vradmin -f startrep DG1_acb-ds1-rvg acb-ds1-p
 
To takeover from a failed primary:
        vradmin -g DG1_acb-ds1-p takeover DG1_acb-ds1-rvg
 
Convert a failed primary to a secondary:
        vradmin -g DG1_acb-ds1-p makesec DG1_acb-ds1-rvg acb-ds1-b
 
To failback to original or failed primary:
        vradmin makesec DG1_acb-ds1-rvg acb-ds1-b
        vradmin -c checkpt_presync syncrvg DG1_acb-ds1-rvg acb-ds1-p
        vradmin -c checkpt_presync startrep DG1_acb-ds1-rvg acb-ds1-p
        vradmin migrate DG1_acb-ds1-rvg acb-ds1-p
        vradmin -f startrep DG1_acb-ds1-rvg acb-ds1-b
#
#
A volume cannot be mounted while it is part of an RVG.
 
Volumes must be synced before addvol is run.
 
For VVR 3.1.1, the disk group ID must be placed in /etc/VRTSvras/.rdg
on the secondary before syncing can occur with a new data volume.
 
Get DiskGroup ID from Primary to place on secondary:
    vxdg list
        NAME         STATE           ID
        rootdg       enabled  1020701296.1025.acb-apps22-p
        DG1_dsas-ds1-p enabled  1024399495.1543.acb-apps21-p
 
# Sync a single volume to the secondary
vradmin -g DG1_acb-ds1-p syncvol ora01 acb-ds1-b