I have configured two databases on two systems: one production database and one test database. You can find the configuration details in the attached main.cf files. Because of the two listener entries, the database can only start up on one system.
[root@srv-cui-db-01 ~]# hagrp -state
#Group                Attribute  System         Value
ClusterService        State      srv-cui-db-01  |ONLINE|
ClusterService        State      srv-cui-db-02  |OFFLINE|
ORACLE-ASM-VXVM       State      srv-cui-db-01  |ONLINE|
ORACLE-ASM-VXVM       State      srv-cui-db-02  |OFFLINE|FAULTED|
ORACLE-ASM-VXVM-TEST  State      srv-cui-db-01  |ONLINE|
ORACLE-ASM-VXVM-TEST  State      srv-cui-db-02  |OFFLINE|FAULTED|
cvm                   State      srv-cui-db-01  |ONLINE|
cvm                   State      srv-cui-db-02  |ONLINE|
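For what it's worth, a FAULTED state has to be cleared before VCS will attempt the group on that node again. A minimal sketch, assuming the group and node names shown in the output above:

```shell
# Clear the FAULTED flag for both Oracle groups on the second node
hagrp -clear ORACLE-ASM-VXVM -sys srv-cui-db-02
hagrp -clear ORACLE-ASM-VXVM-TEST -sys srv-cui-db-02

# Then try bringing a group online there and watch its state
hagrp -online ORACLE-ASM-VXVM -sys srv-cui-db-02
hagrp -state ORACLE-ASM-VXVM
```

If the fault immediately returns, the underlying resource problem is still there and the log will say which resource faulted.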
At first glance the main.cf looks OK.
Can you attach the engine_A.log file? That is where you'll find the details of what is happening in the cluster. One common reason for failure is forgetting to register DNS entries for the listener, or not making sure the IP is valid on both hosts (i.e., each host's network card is configured to work on that subnet).
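The DNS and subnet checks mentioned above can be done from the shell on each node; `cui-listener` here is a hypothetical virtual hostname for the listener, so substitute whatever name is in your configuration:

```shell
# Does the listener's virtual hostname resolve? (run on both nodes)
nslookup cui-listener

# Is an interface on this node configured for that IP's subnet?
ip addr show
```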
Sorry for the late reply. Darn day-job getting in the way.
In the meantime, here is what I do when troubleshooting clusters; it should help you narrow down the issue yourself:
1. Save the configuration (main.cf).
2. Clear the "Critical" attribute on each resource. You will be starting and stopping things, so you don't want that flag set until everything works properly.
3. Assuming everything is down, manually online each resource within the service group. Don't start the next one until you have verified the current resource is Online.
4. Continue with the next resource. If a resource fails, check the end of engine_A.log and/or Application.log to see the error. This makes it very easy to isolate the problem.
5. Once everything is fixed, don't forget to re-enable the Critical flag on the appropriate resources.
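The steps above map onto VCS commands roughly like this; `oradb-res` is a placeholder resource name, so use the actual resource names from your main.cf:

```shell
# 1. Save the running configuration out to main.cf
haconf -dump

# 2. Open the config for writing and clear the Critical flag
haconf -makerw
hares -modify oradb-res Critical 0

# 3./4. Online one resource at a time, verify, then check logs on failure
hares -online oradb-res -sys srv-cui-db-02
hares -state oradb-res
tail -50 /var/VRTSvcs/log/engine_A.log

# 5. Once everything works, restore Critical and close the config
hares -modify oradb-res Critical 1
haconf -dump -makero
```

Repeat the online/verify pair for each resource in dependency order before moving to the next.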
If I have time this evening I'll look at the log.