Forum Discussion

allaboutunix
10 years ago

Creation of a new filesystem in a VCS environment.

Hi Team,

We need to create a new filesystem on a 2-node cluster.

/RMPRDR01/oradata5 ---> add this new mount point of 252 GB

We have a request to create the new filesystem on this VCS 5.0 cluster.

For reference, the main.cf is attached as an enclosed document.

 

df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/md/dsk/d0          15G   9.3G   5.4G    64%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    34G   1.7M    34G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
fd                       0K     0K     0K     0%    /dev/fd
/dev/md/dsk/d6         9.9G   7.5G   2.3G    77%    /var
swap                    36G   2.2G    34G     7%    /tmp
swap                    34G   104K    34G     1%    /var/run
swap                    34G     0K    34G     0%    /dev/vx/dmp
swap                    34G     0K    34G     0%    /dev/vx/rdmp
/dev/md/dsk/d9          32G   1.9G    29G     7%    /var/crash
/dev/odm                 0K     0K     0K     0%    /dev/odm
/dev/vx/dsk/RMPRDR01_ora_data_dg/oratemp
                       135G   127G   7.2G    95%    /RMPRDR01/oratemp
/dev/vx/dsk/RMPRDR01_ora_data_dg/oradata4
                       252G   244G   7.0G    98%    /RMPRDR01/oradata4
/dev/vx/dsk/RMPRDR01_ora_data_dg/oradata2
                       252G   251G   310M   100%    /RMPRDR01/oradata2
/dev/vx/dsk/RMPRDR01_ora_data_dg/oradata3
                       252G   243G   8.3G    97%    /RMPRDR01/oradata3
/dev/vx/dsk/RMPRDR01_ora_data_dg/oradata1
                       252G   251G   882M   100%    /RMPRDR01/oradata1
/dev/vx/dsk/RMPRDR01_ora_arch_dg/orafra
                       107G   5.0G    96G     5%    /RMPRDR01/orafra
/dev/vx/dsk/RMPRDR01_ora_arch_dg/oradmp
                        67G    17G    48G    26%    /RMPRDR01/oradmp
/dev/vx/dsk/RMPRDR01_ora_bin_dg/oraredo2
                        10G   7.9G   1.9G    81%    /RMPRDR01/oraredo2
/dev/vx/dsk/RMPRDR01_ora_bin_dg/oraredo1
                        10G   7.9G   1.9G    81%    /RMPRDR01/oraredo1
/dev/vx/dsk/RMPRDR01_ora_bin_dg/oractl3
                       2.0G    38M   1.8G     2%    /RMPRDR01/oractl3
/dev/vx/dsk/RMPRDR01_ora_bin_dg/oractl2
                       2.0G    38M   1.8G     2%    /RMPRDR01/oractl2
/dev/vx/dsk/RMPRDR01_ora_bin_dg/oractl1
                       2.0G    38M   1.8G     2%    /RMPRDR01/oractl1
/dev/vx/dsk/RMPRDR01_ora_bin_dg/oracle
                        20G    12G   7.5G    62%    /RMPRDR01/oracle

====================================================================

vxdisk -o alldgs list output:


DEVICE       TYPE            DISK         GROUP        STATUS
c0t0d0s2     auto:none       -            -            online invalid
c0t1d0s2     auto:none       -            -            online invalid
emcpower0s2  auto:cdsdisk    emcpowerN17  RMPRDR01_ora_arch_dg online
emcpower1s2  auto:cdsdisk    emcpowerN27  RMPRDR01_ora_data_dg online
emcpower2s2  auto:cdsdisk    emcpowerN28  RMPRDR01_ora_data_dg online
emcpower3s2  auto:cdsdisk    emcpowerN31  RMPRDR01_ora_arch_dg online
emcpower4s2  auto:cdsdisk    emcpowerN34  RMPRDR01_ora_data_dg online
emcpower5s2  auto:cdsdisk    RMPRDR01_ora_bin_dg01  RMPRDR01_ora_bin_dg online
emcpower6s2  auto:cdsdisk    emcpowerN32  RMPRDR01_ora_data_dg online
emcpower7s2  auto:cdsdisk    emcpowerN24  RMPRDR01_ora_data_dg online
emcpower8s2  auto:cdsdisk    emcpowerN26  RMPRDR01_ora_data_dg online
emcpower9s2  auto:cdsdisk    emcpowerN29  RMPRDR01_ora_data_dg online
emcpower10s2 auto:cdsdisk    emcpowerN16  RMPRDR01_ora_arch_dg online
emcpower11s2 auto:cdsdisk    emcpowerN30  RMPRDR01_ora_arch_dg online
emcpower12s2 auto:cdsdisk    emcpowerN33  RMPRDR01_ora_data_dg online
emcpower15s2 auto:cdsdisk    emcpowerN35  RMPRDR01_ora_data_dg online
emcpower16s2 auto:cdsdisk    emcpowerN23  RMPRDR01_ora_data_dg online
emcpower44s2 auto:cdsdisk    emcpowerN19  RMPRDR01_ora_data_dg online
emcpower45s2 auto:cdsdisk    emcpowerN20  RMPRDR01_ora_data_dg online
emcpower46s2 auto:cdsdisk    emcpowerN21  RMPRDR01_ora_data_dg online
emcpower47s2 auto:cdsdisk    emcpowerN22  RMPRDR01_ora_data_dg online
emcpower50s2 auto:cdsdisk    emcpowerN25  RMPRDR01_ora_data_dg online

===========================================================================
vxassist -g RMPRDR01_ora_data_dg maxsize
Maximum volume size: 963203072 (470314Mb)
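
(Assuming the first figure is in VxVM's default 512-byte sector units: 963203072 / 2048 = 470314 MB, and 470314 / 1024 ≈ 459.29 GB.)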

We have 459.29 GB of free space available in RMPRDR01_ora_data_dg, and the system is in a cluster.

============================================================================

hastatus -summ

-- SYSTEM STATE
-- System               State                Frozen

A  francona             RUNNING              0
A  lackey               RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  MODELN_RPT      francona             Y          N               OFFLINE
B  MODELN_RPT      lackey               Y          N               ONLINE


 

Kindly advise the steps to create the new filesystem, as per this request:

/RMPRDR01/oradata5 ---> add this new mount point of 252 GB.

Let me know if any other information is required.

  • Hi,

    I replied to your other post:

    https://www-secure.symantec.com/connect/forums/new-filesystem-creation-vxvm#comment-11241041

    Refer to the FS creation steps in the above thread.
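
    For completeness, here is a minimal sketch of those steps, assuming the new volume is named oradata5 in RMPRDR01_ora_data_dg (the volume name and size are taken from your request; adjust to your standards). Run these on the node where the diskgroup is imported:

    # vxassist -g RMPRDR01_ora_data_dg make oradata5 252g

    # mkfs -F vxfs /dev/vx/rdsk/RMPRDR01_ora_data_dg/oradata5

    # mkdir -p /RMPRDR01/oradata5      (create the mount point on both nodes)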

    Additionally, for VCS you need to create a Mount resource for this new filesystem and set its dependencies (link resources).

    To add the resource in VCS, open the VCS Java GUI or VOM if you have it; to add it via the command line, see below:

    # haconf -makerw

    # hares -add <resource_name> Mount MODELN_RPT

    # hares -modify <resource_name> BlockDevice /dev/vx/dsk/<diskgroup>/<volume_name>

    # hares -modify <resource_name> MountPoint <Mount_Point>

    # hares -modify <resource_name> FSType <FS_Type>   (vxfs or ufs)

    # hares -modify <resource_name> FsckOpt <option>   (-y or -n)

    # hares -link <parent_res> <child_res>       

    If the Mount resource depends on another resource, then Mount is the parent of that resource; if other resources depend on the Mount resource, then Mount is the child.

    Once linking is done, save the config

    # haconf -dump -makero
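
    As a worked example with hypothetical names (Mount resource oradata5_mnt in service group MODELN_RPT; substitute your real names and the diskgroup resource from your main.cf):

    # haconf -makerw
    # hares -add oradata5_mnt Mount MODELN_RPT
    # hares -modify oradata5_mnt BlockDevice /dev/vx/dsk/RMPRDR01_ora_data_dg/oradata5
    # hares -modify oradata5_mnt MountPoint /RMPRDR01/oradata5
    # hares -modify oradata5_mnt FSType vxfs
    # hares -modify oradata5_mnt FsckOpt %-y      (the % stops the CLI from parsing -y as an option)
    # hares -link oradata5_mnt <diskgroup_resource>
    # haconf -dump -makero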

    You can set further optional attributes if you need them; refer here for the options:

    https://sort.symantec.com/public/documents/sfha/6.2/solaris/productguides/html/vcs_bundled_agents/ch02s07s05.htm

     

    G

     

  • Hi Team,

     

    I am proceeding with the activity for the filesystem creation in VCS, but do we require downtime for it, e.g. for checking the failover and switchover tasks?

    FYI: Class A apps are hosted on it.

  • You don't need downtime to add a filesystem, and there is no need to check failover, especially as there is existing space in the diskgroup. (If you had needed to add a new disk, you would just have to check that the disk could be seen by both systems, but even then it wouldn't really be necessary to test failover.)

    Make sure you create the mount point on both nodes in the cluster.

    The safest way to add it to the cluster is in the following order, so that typos in the hares commands (or in the GUIs) do not make the cluster fail over (a command sketch follows the list):

    1. Add the filesystem in Volume Manager.
    2. Create the mount point on both nodes.
    3. Create the resource in VCS, but don't enable or link it.
    4. Make the resource non-critical.
    5. Enable the resource.
    6. Online the resource.
    7. Once the resource has onlined successfully, make it critical and create the resource dependencies.
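
    A command sketch of steps 3-7, using the hypothetical resource name oradata5_mnt (set BlockDevice, MountPoint, FSType and FsckOpt as in the earlier reply):

    # haconf -makerw
    # hares -add oradata5_mnt Mount MODELN_RPT
    # hares -modify oradata5_mnt Critical 0      (step 4: a fault can't fail the group over)
    # hares -modify oradata5_mnt Enabled 1       (step 5)
    # hares -online oradata5_mnt -sys lackey     (step 6: MODELN_RPT is currently online on lackey)
    # hares -modify oradata5_mnt Critical 1      (step 7)
    # hares -link oradata5_mnt <child_res>
    # haconf -dump -makero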

    Mike