Problem putting IP resource online

Hi Guys,

I'm very new to this. I have set up Veritas Cluster Server 5.1 for Windows on a two-node cluster (it should be an active/passive environment).

I hope you can take a look at the attached screenshots.

Looking forward to your help. Thanks in advance, guys!

 

Regards,

Casper

1 Solution


13 Replies

Have you confirmed that the MAC address entered is correct?

Is your IP agent running on Node 1? The error message clearly says that the IP resource is not probed on Node 1. Also, did you go through any software update for the cluster?

If your IP agent is running, I would suggest comparing the types.cf file on both nodes.

 

Gaurav


Sorry to ask, but how do I install/run the said IP agent? What do I need to make it work? Thanks.

 

*Very new to this field.


If you have scripts or commands I can type, I would appreciate it.

Below are the details I have:

Node1:

Public: 192.168.1.11 / 255.255.255.0 / 00-0C-29-42-7B-1D

Private Heartbeat1: Disabled TCP/IP

Private Heartbeat2: Disabled TCP/IP

 

Node2:

Public: 192.168.1.22 / 255.255.255.0 / 00-0C-29-AE-E6-53

 

Private Heartbeat1: Disabled TCP/IP

Private Heartbeat2: Disabled TCP/IP

 
ClusterName: Cluster
 
 
Database: SQL 2005
 
LUNs: Drive L and Drive M
 
Thanks...

The IP resource is not probed on either node, so the issue is probably not that the agent has stopped running. You can confirm this by running "hastatus -sum" from the command line. If the agent is the issue, there will be a line saying something like "IP Agent failed", in which case you can use "haagent -start IP -sys NODE1" to start it.

I would look at the logs: click on the yellow exclamation mark and check the main engine log and the agent logs, or look directly at the files in something like "Program Files\Veritas\Cluster Server\log".

If you are still having issues, then post extracts from these logs, the main.cf (from something like "Program Files\Veritas\Cluster Server\conf\config"), and the output of "ipconfig /all".

Mike


Hi Amcasperforu,

Since you are clustering SQL 2005, why don't you create the service group using the SQL Service Group Configuration Wizard?

It will set the same IP address for both nodes but you can change it once it is created.

Thanks,

Wally


Hi Amcasperforu,

From the screenshot, the configuration looks OK.

You can also check the IP_A.txt log to see if it records what the problem is. Sometimes it will mention what it sees as wrong with the configuration.

Thanks,

Wally


IP Agent

Casper,

Can you run the following command to confirm you have the correct MAC addresses:

getmac -v

Also, please verify that you do not have DHCP enabled for the particular NICs in question. 

 

Joe D

Accepted Solution!


Thanks, guys! Looks like I found what's wrong. I changed the IP address to a different one; it seems it should be a virtual IP address, not the IP address configured as the public IP that I assigned on the nodes.
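For anyone who hits the same problem, here is a rough sketch of what the relevant main.cf entries might look like once a proper virtual IP is used. The resource names, the node names NODE1/NODE2, and the virtual address 192.168.1.100 are illustrative assumptions (the MAC addresses are the ones posted earlier in this thread); the key point is that Address must be a spare IP on the public subnet, not a node's own base address:

```
NIC csg_nic (
    MACAddress @NODE1 = "00-0C-29-42-7B-1D"
    MACAddress @NODE2 = "00-0C-29-AE-E6-53"
    )

IP csg_ip (
    Address = "192.168.1.100"      // virtual IP, not 192.168.1.11 or .22
    SubNetMask = "255.255.255.0"
    MACAddress @NODE1 = "00-0C-29-42-7B-1D"
    MACAddress @NODE2 = "00-0C-29-AE-E6-53"
    )

csg_ip requires csg_nic
```

Double-check the attribute names against the VCS for Windows 5.1 Bundled Agents Reference Guide before copying anything.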

Below are my next concerns:

1. How do I configure Heartbeat1 and Heartbeat2? According to the Symantec documentation, heartbeat NICs should have no TCP/IP, so I disabled it under the NIC Properties menu. Now, how would I configure a heartbeat resource so we know if one fails? I believe under MS Clustering the heartbeat is also monitored.

2. My next problem: how do I add a resource for my shared disk? What resources are needed, and what parameters should be set?

I'm trying all of this on VMware Workstation, where the LUNs (shared disks) are presented by StarWind and attached using the MS iSCSI initiator.

 

Thanks, guys!



Resource Configuration

Casper,

VCS heartbeats are not monitored by a resource type but by the VCS engine (HAD) itself. MS Clustering uses IP addresses for its heartbeat, and as such requires that those IPs be monitored. LLT and GAB sit below the TCP/IP stack and therefore require only a connection on the same switch/VLAN as the corresponding cluster nodes.

In terms of a disk resource, you are now getting into a combination of Storage Foundation for Windows and VCS. You will have to create a cluster dynamic disk group using the VEA console and create the associated cluster resource of type VMDg.
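As a hedged sketch only (the resource names, the disk group name SQL_DG, and the volume name SQL_Data are assumptions; drive L: comes from earlier in the thread), the storage part of the service group in main.cf would look roughly like:

```
VMDg csg_vmdg (
    DiskGroupName = SQL_DG
    )

MountV csg_mountv_L (
    MountPath = "L:"
    VolumeName = SQL_Data
    VMDGResName = csg_vmdg
    )

csg_mountv_L requires csg_vmdg
```

A second MountV resource would be added the same way for drive M:. Verify the attribute names against the SFW HA Bundled Agents Reference for your version before using this.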

I would highly recommend referring to the documentation for SFWHA for more details on this process. 

https://sort.symantec.com/public/documents/sfha/5.1sp2/windows/productguides/pdf/SFW_Admin_51SP2.pdf

Page 794

Additionally there are online "self paced" courses available at http://education.symantec.com

Hope this helps,

 

Joe D


Thanks Joe...

 

Quick question: since I have two Windows 2003 nodes in active/passive, should I create the cluster disk group and volumes separately on the shared disk under SFW (VEA GUI), using the same cluster disk group name and drive letters on both nodes?

 

Meaning, after doing it on Node1, I would deport it first before moving to Node2 and doing the same thing?

 

Thanks!


Storage Mapping

When you create a cluster dynamic disk group with Storage Foundation, as long as the underlying LUNs are mapped/zoned to both the active and passive nodes, all you will have to do is deport the disk group from Node 1 and import it on Node 2. There is no need for any additional configuration on the second node, as all of the metadata required is stored in the private regions within the disk group itself.

Please note that a cluster dynamic disk group requires VCS to import/online and deport/offline the disk groups. For testing purposes between the two nodes, you may want to create the disk group as dynamic first (do not check the Cluster Group option) to make sure that all the storage mappings are correct.

At that point you can use VEA to import/deport the disk groups. Once you've established that it is working correctly, I would suggest adding the VMDg resource and testing VCS failover. To set the cluster flag for the disk group, simply deport it and then import it with the Cluster option checked. The icon next to the disk group name in VEA will then appear in a different color (indicating that it is a cluster dynamic disk group).
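If you prefer a command line over VEA for the deport/import test, SFW also ships a vxdg utility. A sketch, assuming a disk group named SQL_DG (on Windows the group name is concatenated with -g; confirm the exact syntax against the SFW CLI reference for your version):

```
vxdg -gSQL_DG deport
vxdg -gSQL_DG import
```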

Let us know if you have any more questions.

Joe D


I will test this and will let you know. Thank you.

A question pops into my mind: let's say I have established the disk groups in VCS. The next thing for me, I guess, is to install SQL 2005.

Do I need to install the SQL agent first, then SQL 2005?

Node1 - SQL agent -> SQL 2005 (using drives L and M)

then after successful installation

Node1 - Stop SQL services

Then what shall I do with my shared drives L and M? Should I fail them over to Node2 and then install the SQL agent and SQL 2005 there?

Thanks...


SQL Install

Casper,

This topic may be better served as a secondary thread. That being said, complete steps for installing SQL in a VCS environment and configuring the appropriate storage are available as part of the Solutions Guide for SQL 2005.

 https://sort.symantec.com/public/documents/sfha/5.1sp2/windows/productguides/pdf/SFW_HA_DR_SQL_Solut...

At a "very" high level:

1. Install SFW HA and configure the base cluster.

2. Build the necessary storage volumes: System, UserDB, Logs, RegRep, etc.

3. Install SQL 2005 on the first node as stand-alone (named instance), placing the data on the shared volumes from above, except the system program volumes (those need to be local).

4. Move the storage to the second node and remove or rename the system data files.

5. Install SQL 2005 on the second node (using the same instance name as above) to the shared volumes, the same as on the first node.

6. Run the SQL Server 2005 Configuration Wizard from the first node. This will build the service groups and bring SQL online.
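To give an idea of what the wizard produces, the dependency links in the generated main.cf look roughly like the sketch below. All resource names here are illustrative assumptions, not actual wizard output; the real names derive from the instance and service group names you choose:

```
// dependency links only; resource definitions omitted
sql_res requires sql_regrep       // SQL depends on registry replication
sql_res requires sql_lanman       // and on the virtual computer name
sql_regrep requires sql_mountv    // RegRep data lives on the shared volume
sql_mountv requires sql_vmdg      // volume mounts from the cluster disk group
sql_lanman requires sql_ip        // virtual name bound to the virtual IP
sql_ip requires sql_nic
```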

There are additional considerations; however, I highly recommend going over the documentation first so that you have a better understanding of the necessary steps.

Good Luck,

Joe D