nbproxy.exe trying to connect to non existent IP addresses

Slartybardfast
Level 5

Good evening NetBackup Gurus,

I have been plagued by nbproxy.exe trying to connect to a number of non-existent IP addresses. Using Process Explorer, on the TCP/IP tab I can see it try port 1556 to the rogue IP address, then try vnetd to the same address. I had to add a DWORD value (TcpTimedWaitDelay) to change the default TCP/IP parameter from 120 down to 30.

Does anyone know where these addresses are cached or written? For the life of me I cannot find a reference anywhere. Clearing the host cache does not get rid of them, and there are no hosts file entries either. Truly at a loss. They might be from a legacy IP subnet that has long since gone. I have searched through the registry to see if they are tattooed in there, but no banana. I hope someone can shed some light on where these little critters are hiding.
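For anyone wanting to reproduce, these are roughly the commands involved (paths assume a default Windows install, so double-check before running anything; the registry change needs a reboot to take effect as far as I know):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters" /v TcpTimedWaitDelay /t REG_DWORD /d 30 /f

"C:\Program Files\Veritas\NetBackup\bin\bpclntcmd" -clear_host_cache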

Thanks in advance

 

8 REPLIES

sdo
Moderator
Partner    VIP    Certified

In your opinion should we all be worried?

Is it always the same address?

Is it an internet address?  APIPA address?  Private address?

Can you tell us the address(es)?

Does full level 5 logging for nbproxy reveal any related details?

I wonder if it is embedded in the code as a kind of "test whether networking can be established" check, and isn't really expected to be used anyway.

 

In answer to your points:

No, I don't think everyone needs to be worried; this is something to do with our environment. It has caused the master server to exhaust its ephemeral ports.

Is it the same address? No, I have observed different addresses, with the last two octets changing.

No, they are not APIPA addresses.

I'm unable to reveal the IP addresses. I believe this is an old subnet that has long since been changed - a legacy hangover.

I have a day off today, but I will turn up logging for the nbproxy process to see if it reveals anything further.
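If it helps anyone, my plan is to use unified logging for this, along the lines of the commands below. I believe nbproxy's originator ID is 163, but check the NetBackup Logging Reference Guide for your version before relying on that, and remember to drop DebugLevel back down afterwards:

vxlogcfg -a -p 51216 -o 163 -s DebugLevel=6 -s DiagnosticLevel=6

vxlogview -o 163 -t 01:00:00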

Embedded code? I don't think it is. I could be wrong here, but this is what I found in the OpsCenter Administrator's Guide.

So I think the process is retrieving data from the EMM database for clients that were added to policies via IP addresses.

Most of the nbproxy processes are started, managed, and removed by NBSL. This section talks about the nbproxy processes that NBSL manages.

Not all nbproxy processes on the master server are managed by NBSL. For example, some of the nbproxy processes are managed by nbjm and nbpem.

An nbproxy process runs to retrieve the following NetBackup data for OpsCenter:

•Policies

•Catalogs

•Storage lifecycle policies

•LiveUpdate

•Client details

A very curious problem - like all problems.

sdo
Moderator
Partner    VIP    Certified

I seem to remember, in at least one older version of NetBackup, a situation/condition whereby NetBackup would attempt to pre-resolve the IP addresses of all client names for which backup images existed, or where client names were still listed in dormant (no schedules) or deactivated policies, or in client attributes (on the master), or were names of VMs returned in the lists from hypervisors. So maybe you have client names (that are no longer really clients in active policies) which also still exist in DNS or a hosts file, and NetBackup is therefore able to resolve these old names to IP addresses.

I know that the ephemeral port range can be extended quite easily in Windows.  Maybe it can in Linux / Solaris easily too.
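For example, something along these lines (the numbers are just illustrative - size the range to your environment):

Windows (elevated prompt):
netsh int ipv4 set dynamicport tcp start=10000 num=55536

Linux equivalent (Solaris has its own tunables via ipadm/ndd):
sysctl -w net.ipv4.ip_local_port_range="10240 65535"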

Two places I can think of to check for the rogue IPs:

1. EMM - run a "nbemmcmd -listhosts -verbose" and see if anything pops up.
2. Could those IPs exist (as an IP or hostname) in a policy (in particular a VMware policy)?

And some more questions:
Can any of the IPs be resolved to a hostname?
Are these random, or just a small set of IPs?
You say just the last two octets are changing - do the IPs reside within your internal network?
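Back on point 1, something like this dumps the EMM host list so you can search it for the rogue range (the subnet shown is just a placeholder - substitute your own; nbemmcmd lives under install_path\NetBackup\bin\admincmd on Windows):

nbemmcmd -listhosts -verbose > emm_hosts.txt
findstr "10.20." emm_hosts.txt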


@davidmoline wrote:

Two places I can think of to check for the rogue IPs:

1. EMM - run a "nbemmcmd -listhosts -verbose" and see if anything pops up.
2. Could those IPs exist (as an IP or hostname) in a policy (in particular a VMware policy)?

And some more questions:
Can any of the IPs be resolved to a hostname?
Are these random, or just a small set of IPs?
You say just the last two octets are changing - do the IPs reside within your internal network?


Thanks for the input. I have dug a bit deeper and these addresses are in our cloud environment, which is not part of the domain. The IP addresses are from the VMs being backed up. The puzzling thing is that we have multiple policies using VIP (VMware Intelligent Policy) queries to select the VMs, yet this appears to affect only a subset of those policies. I would have expected all the VIP policies to behave the same, and I have checked that they are configured the same; I remember when they were created we copied from a master policy and then just changed the query. Nbproxy.exe tries to connect on 1556, then falls back to 13724. There is no DNS for these environments, so it will never be able to resolve or connect, as this is an isolated network environment. We only have access via vCenter and LUNs mapped to the media servers. The media servers have IP resolution disabled to stop the VIP queries failing.
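For what it's worth, this is how I've been watching the connection attempts pile up on the master (1556 is PBX, 13724 is vnetd):

netstat -ano | findstr ":1556"
netstat -ano | find /c "TIME_WAIT"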

The place to check now is the VMware policy - check/change the configured VMware backup host to be a "Backup Media Server" rather than a specific host.  

If it already is set this way, all I can suggest is logging a support call - this is a known issue and there is an internal article referencing it. The "solution" is as above. If this doesn't fix the problem, then when you log the call, reference article 100046901.


@davidmoline wrote:

The place to check now is the VMware policy - check/change the configured VMware backup host to be a "Backup Media Server" rather than a specific host.  

If it already is set this way, all I can suggest is logging a support call - this is a known issue and there is an internal article referencing it. The "solution" is as above. If this doesn't fix the problem, then when you log the call, reference article 100046901.


Many thanks David. I have changed the "VMware backup host:" on the VMware tab to "Backup Media Server" for the policy that is responsible. Is the article available to peruse? I would like to understand what has been going on - an interesting issue. I hope this works. On a side note, we have a lot of VIP-based policies; should they all be set to "Backup Media Server"?

Unfortunately no - the content is restricted (not entirely sure why as there is very little to show).

If this doesn't help - log the support call as it will inevitably help Veritas understand the problem better.

And I would only apply the "solution" to those policies that are relevant (there is usually a good reason to specify a specific media server as the backup host).
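If it helps to work out which policies actually name a specific backup host, one rough approach is to dump every policy and search the output - the exact field label differs between versions, so treat this as a sketch and adjust the search string to whatever your output shows:

bppllist -allpolicies -U > all_policies.txt
findstr /i /c:"backup host" all_policies.txt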

Do let everyone know whether this resolves the strange behaviour.