
VCS: new volumes created. Need to add them under an existing Resource Group on a 2-node clustered Solaris 10 server. Important, please assist

hirinkus1
Level 3
Hi,
I am a newbie to VCS. OS is Solaris 10. Two-node cluster environment.

A new filesystem is already created and mounted with a dedicated virtual IP, and it is already under VCS.
We have added two more volumes (Vol03 & Vol04) to the existing Resource Group, say 'node1_rg'.

Guys, please help me with the sequential, step-by-step procedure (with commands) to bring the newly created volumes under the existing Resource Group, and with how to carry out the failover test. This is a test server on which we need to do the task. An Oracle database is also there, and we can coordinate with the App & DBA teams to do this activity.

Please help. This is the first time I have come to this forum seeking help...

Thanks,
Rinku,
Pune- India
Accepted Solution

Gaurav_S
Moderator
VIP Certified
Hello Rinku,

I hope you have added the volumes to a Veritas Volume Manager diskgroup first; if not, the following command will do that:

# vxassist -g <diskgroup> make <volume> <size>

Disks of this diskgroup must be located on shared storage.

Create a filesystem
# mkfs -F <fstype> <raw_device>

Create mount points on both the cluster nodes

# mkdir <dir>
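For example, with hypothetical names (diskgroup appdg, a 1g volume vol03, mount point /opt/app/APP/test01), the three steps above could look like this sketch:

```shell
# Create the volume in the shared diskgroup (all names are assumptions)
vxassist -g appdg make vol03 1g
# Create a VxFS filesystem on the volume's raw device
mkfs -F vxfs /dev/vx/rdsk/appdg/vol03
# Create the mount point -- repeat this mkdir on BOTH cluster nodes
mkdir -p /opt/app/APP/test01
```
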

Once the volumes are created, you need to put them under VCS control, so you add them as resources to a VCS service group:

# hares -add <resource> <resource_type> <service_group>

Add both volumes with the above step. Your resource type here would be Volume (capital V).

You will need to modify some mandatory attributes of your volume resource:

# hares -modify <resource> Enabled 1  (enable the resource)
# hares -modify <resource> Critical 0  (assuming you don't want the volume to be a critical resource)
# hares -modify <resource> BlockDevice <block_device>
# hares -modify <resource> DiskGroup <diskgroup>

Finally, you have to link the volume resource to the diskgroup and/or mount resources:

# hares -link <parent> <child>

Please note that in VCS the parent resource depends on the child resource, so be careful while linking. For example, in a group where a volume depends on a diskgroup resource, the diskgroup is the child and the volume is the parent.
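Putting the steps above together as one hedged sketch (all names here, service group app_sg, resource app_vol03, diskgroup resource app_dg, diskgroup appdg and volume vol03, are hypothetical; the Volume attribute is included since the Volume agent needs it):

```shell
haconf -makerw                           # open the cluster config read-write
hares -add app_vol03 Volume app_sg       # new Volume resource in the group
hares -modify app_vol03 DiskGroup appdg  # diskgroup the volume lives in
hares -modify app_vol03 Volume vol03     # VxVM volume name (required attribute)
hares -modify app_vol03 Critical 0       # treat as non-critical (assumption)
hares -modify app_vol03 Enabled 1        # let the agent monitor it
hares -link app_vol03 app_dg             # parent app_vol03 depends on child app_dg
haconf -dump -makero                     # save and close the config
```

A basic failover test afterwards could be `hagrp -switch app_sg -to <other_node>`, then a switch back once everything comes online cleanly.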

hope this helps

Gaurav


9 Replies


hirinkus1
Level 3
Hi Gaurav,

I appreciate you taking care of my request, and the prompt reply. This was important for me. It's good to have guys like you on this forum.

In addition to the above, someone also advised me to use the GUI of Veritas Cluster Manager to bring the new additional volumes under the Resource Group already active under VCS.

I also found the information below, where the resources were taken offline and the group frozen to carry out the task. Please refer to the following link.

http://www.symantec.com/connect/forums/veritas-cluster-steps-beginner

If you have any additional input on the Change Control implementation steps to do this on the Solaris 10 cluster servers, please let me know, guys.

Gaurav_S
Moderator
VIP Certified
Hi Rinku,

Using the Java-based GUI would be best for you, considering you are a beginner.

Also, I would recommend getting hands-on with the Veritas simulator; with it you can learn all these steps, and it will also help you learn the command line.

You can download the simulator from here:

http://www.symantec.com/business/products/utilities.jsp?pcid=2247&pvid=20_1

hope it helps..

Gaurav

hirinkus1
Level 3
Thank you Gaurav.

Request: can anyone share the full steps, carried out through the 'Veritas Cluster Manager' GUI, to bring the two newly created filesystems/volumes under VCS control in an existing SG?

for e.g.:- Node1:Node2(clustered)

- Service Group, say 'APP_sg', already exists
and is already under VCS control:

/opt/app/APP is filesystem already created and in use.

Two new filesystems created & mounted on Node1 as below in addition to existing ones:-
/opt/app/APP/test01   ..................(APP_vol03)
/opt/app/APP/test02   ..................(APP_vol04)

We need to bring these two volumes, say APP_vol03 & APP_vol04, under VCS control, either via the GUI or the command line, step by step, and thereafter carry out the failover test. There is also a QA Oracle database on the 2-node cluster test servers.

Please help with the GUI method steps, and also the method right from scratch through to failover, in addition to what the two guys above have already assisted with. For a beginner like me, it is difficult to anticipate the complications or difficulties expected during the maintenance.

Thank you in advance.

jayess
Level 3
 Hi Rinku, 

Can you clarify a few more things from your post?

/opt/app/APP is filesystem already created and in use.

Two new filesystems created & mounted on Node1 as below in addition to existing ones:-
/opt/app/APP/test01   ..................(APP_vol03)
/opt/app/APP/test02   ..................(APP_vol04)

1) Do you have APP_vol01 and APP_vol02 filesystems?  are those 2 mounted on Node1 and under VCS control as well?
2) Is /opt/app/APP already a filesystem under VCS control and mounted on node1?
3) What version of VCS are you running?

hoping to help you once you answer,
 


hirinkus1
Level 3
Hi,


With root privileges, one needs to execute the following commands in sequence to bring the newly created volume resources (Vol3 & Vol4, for the two newly created filesystems) under Veritas control. Please see the previous posts for the scenario and pre-existing environment.


for e.g.

/opt/app/<app name>/ggs
/opt/app/<app name>/ggsarch01

We are using APP as <app name> in our example; in fact it would be in lower case.


Commands carried out in sequence on the primary node (host) of the two-node clustered environment:

# haconf -makerw
# hagrp -freeze APP_sg

# hares -add APP_vol4 Volume APP_sg
# hares -add APP_vol3 Volume APP_sg
# hares -add APP_ggsarch01_mnt Mount APP_sg
# hares -add APP_ggs_mnt Mount APP_sg

# hares -link APP_vol4 APP_dg
# hares -link APP_ggs_mnt APP_vol4
# hares -link APP_ggs_mnt APP_mnt

# hares -link APP_vol3 APP_dg
# hares -link APP_ggsarch01_mnt APP_vol3
# hares -link APP_ggsarch01_mnt APP_mnt

# hares -modify APP_vol4 Enabled 1
# hares -modify APP_vol3 Critical 0
# hares -modify APP_vol4_mnt BlockDevice /dev/vx/dsk/APP_dg/APP_vol5
# hares -modify APP_vol4 DiskGroup APP_dg

# hares -modify APP_vol3 Enabled 1
# hares -modify APP_vol3 Critical 0
# hares -modify APP_vol3_mnt BlockDevice /dev/vx/dsk/APP_dg/APP_vol3
# hares -modify APP_vol3 DiskGroup APP_dg

# hagrp -unfreeze APP_sg
# haconf -dump -makero

Now, viewing from the Veritas Cluster Manager GUI, we can see that the resources Vol3 & Vol4 have been added. We tried to bring the newly added resources online after probing from the command line; it gives warnings and they do not come fully online. So we decided to take the Service Group offline as a whole and then bring it online as a whole, so that all dependencies would come up recursively. We still got the probing warnings. We then tried unmounting the filesystems of the newly created Vol3 & Vol4 volumes on the primary node, but that gave locked errors. The server then needed a reboot to unlock them, as the unmount was not successful even with the force umount options.

We added the resources using our steps, but the service group was not coming online.
We had to remove the resources to start again, but at this point the mount points are locked under VCS control and are not getting unlocked.
 
root@NODE1:/etc# /usr/lib/fs/vxfs/umount -o mntunlock=VCS /opt/app/qAPP/ggs
UX:vxfs umount: ERROR: V-3-26366: cannot mntunlock /opt/app/APP/ggs: Invalid argument
 
root@NODE1:/etc# fuser -k /opt/app/APP/ggs
/opt/app/APP/ggs:
 
root@NODE1:/etc# /usr/lib/fs/vxfs/umount -o mntunlock=VCS /opt/app/APP/ggs
UX:vxfs umount: ERROR: V-3-26366: cannot mntunlock /opt/app/APP/ggs: Invalid argument
 
root@hstst63:/etc# umount -o mntunlock=VCS /opt/app/APP/ggs
UX:vxfs umount: ERROR: V-3-26366: cannot mntunlock /opt/app/APP/ggs: Invalid argument
 
Also, to update you: the newly created volume resources Vol3 & Vol4 for the two filesystems /opt/app/APP/ggs and /opt/app/APP/ggsarch01 were showing a red '?' icon in the GUI.

Before this, we had rolled back by undoing the steps in reverse order, since the newly created resources were not coming fully online. The hastatus -sum output stayed in the following state for a very long time, hours or so:

root@NODE1:/root# hastatus -sum | grep APP

B  app_sg     NODE1              Y          N               ONLINE

B  app_sg     NODE2              Y          N               OFFLINE

F  app_sg     DiskGroup       APP_dg          NODE1              W_OFFLINE

We have now reverted to the original state.
Was anything missing in the above steps?

Thanks..
Rinku
 



g_lee
Level 6
Rinku,

According to the steps you provided above, the only attribute you set for the Mount resources was BlockDevice:
#hares -modify APP_vol4_mnt BlockDevice /dev/vx/dsk/APP_dg/APP_vol5
#hares -modify APP_vol3_mnt BlockDevice /dev/vx/dsk/APP_dg/APP_vol3

The reason VCS would not have been able to online the Mount resources is that you did not set the following required attributes:
FsckOpt
FSType
MountPoint
VxFSMountLock

NB: did you mount the filesystems manually? According to your steps above, you didn't set the MountPoint attribute, so VCS wouldn't have known where to mount the filesystems!

Reference from VCS 5.0MP3 (Solaris) Bundled Agents Reference Guide:
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/html/vcs_bundled_agents/ch_sol_storage_agents43.html

Additionally, your Volume resource is missing the Volume attribute according to your steps above.
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/html/vcs_bundled_agents/ch_sol_storage_agents32.html

Once these are set the resources should be able to online/monitor/offline properly.
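For illustration, setting those attributes on one of the Mount resources from the steps above might look like this (the resource and device names come from the earlier posts; the attribute values are assumptions, so check them against your environment):

```shell
# Required Mount attributes (values here are assumptions)
hares -modify APP_ggs_mnt MountPoint /opt/app/APP/ggs
hares -modify APP_ggs_mnt BlockDevice /dev/vx/dsk/APP_dg/APP_vol4
hares -modify APP_ggs_mnt FSType vxfs
hares -modify APP_ggs_mnt FsckOpt %-y      # % escapes the dash; -y answers fsck prompts
hares -modify APP_ggs_mnt VxFSMountLock 1  # lock the mount against manual umount
# And the missing Volume attribute on the Volume resource
hares -modify APP_vol4 Volume APP_vol4
```
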

For future reference, suggest you take a look at the Bundled Agents Reference Guide and User's Guide in terms of how these fit together:

VCS 5.0MP3 Bundled Agent's Reference Guide (full document in pdf)
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/pdf/vcs_bundled_agents.pdf

VCS 5.0MP3 User's Guide
http://sfdoccentral.symantec.com/sf/5.0MP3/solaris/pdf/vcs_users.pdf

g_lee
Level 6
In the thread you linked to above, the group was frozen and resources were taken offline because, in that case, the resources had already been added but were not coming online properly and could not fail over, due to the resources not being configured properly (in that case, also because of missing required attributes!).

The freeze was to prevent further failover/online attempts while troubleshooting was taking place. That is, it's not necessary to freeze the group to add new resources, but if you anticipate problems or don't want VCS to take any action while you're adding them, it won't hurt.
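For reference, a minimal freeze/unfreeze sequence around such a change could look like this (group name APP_sg is from this thread; -persistent is shown as an assumption, omit it for a temporary freeze):

```shell
haconf -makerw                     # a persistent freeze needs a writable config
hagrp -freeze APP_sg -persistent   # stop VCS onlining/failing over the group
# ... add or modify resources here ...
hagrp -unfreeze APP_sg -persistent
haconf -dump -makero               # write the config and make it read-only again
```
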