Faulting application beserver.exe, version 11.0.7170.32, stamp 475fe521, faulting module bemsdk.dll

Bruce_Rouzie
Not applicable
After applying the latest hotfixes to Backup Exec for Windows version 11d SP2 (plus hotfixes 31, 32, 33, 34, and 35), the beserver.exe process crashes every 2-3 days with the following error:
 
Faulting application beserver.exe, version 11.0.7170.32, stamp 475fe521, faulting module bemsdk.dll, version 11.0.7170.32, stamp 475fc1de, debug? 0, fault address 0x000dc3dd.
 
This is very disruptive and creates some interesting side effects. For any job that is active when the crash occurs, the output from the job is lost and any dependent duplication jobs do not run and fail when started manually.
 
I've seen other posts reporting this same issue. Any progress on a fix?
20 REPLIES

garg
Not applicable
I upgraded to SP2 and applied hotfixes 31, 32, 33, 34, and 35 yesterday and last night during a backup I received this exact same error too.

Event Type:    Error
Event Source:    .NET Runtime 2.0 Error Reporting
Event Category:    None
Event ID:    1000
Date:        2/19/2008
Time:        10:35:13 PM
User:        N/A
Computer:    BACKUP
Description:
Faulting application beserver.exe, version 11.0.7170.32, stamp 475fe521, faulting module bemsdk.dll, version 11.0.7170.32, stamp 475fc1de, debug? 0, fault address 0x000dc3dd.

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.


Running on Windows Server 2003. Please help or advise as to what we can do to resolve this.

Almost at the same time I also got:

Event Type:    Error
Event Source:    Service Control Manager
Event Category:    None
Event ID:    7034
Date:        2/19/2008
Time:        10:42:33 PM
User:        N/A
Computer:    BACKUP
Description:
The Backup Exec Server service terminated unexpectedly.  It has done this 1 time(s).

For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.


Thanks



sahorvat
Level 3
I upgraded to 11D on 1/15/08 and applied SP2 and the latest hotfixes as mentioned by others. If I'm lucky I will go two complete backup cycles before I get the .NET 2.0 runtime error. Once this occurs, the links between arbitrary duplicate jobs and backup jobs from the same policy become corrupted. In the next cycle, different job sets will become corrupt. In other words, it is not the same duplicates that fail every night. The only way to rectify it is to delete all the jobs from that policy and then re-create them. This is good for one or two cycles until the next .NET 2.0 runtime error pokes its head up!
 
I have an open ticket on this issue and the last thing I heard from "advanced" tech support was "It looks like you have a defect." Issue is still open.
 
As an addendum, we also reboot our Windows servers once a week. This obviously involves shutting down the Backup Exec services automatically during the reboot. The same corruption in duplicate job links occurs. As a matter of fact, it occurs anytime the Backup Exec services are shut down and restarted, even when it is done politely and not as the result of a .NET 2.0 runtime error! Has this been noticed by anyone else?
 
This seems to corrupt the duplicate job links arbitrarily even when there are NO backup jobs running at the time the services are restarted. This is a serious defect in the product and needs to be resolved!
 
One good thing about 11D is that you can re-run the duplicate jobs fairly easily by editing the selection list. The downside is that it could add an extra half hour to an operator's daily duties. It is quite inefficient. It is completely unacceptable if Symantec thinks it is ok to have a defect as long as they provide a way to re-run the failed jobs. This defect needs to be fixed!

Kevin_Hammond
Level 3
We are experiencing the exact same problems with SP2 and the latest hotfixes.  We get the same .NET errors, the same Backup Exec crashes, and the same issues with policy-based duplicates.

Has anyone received any feedback as to what the problem might be and when a fix will be available?  Can we remove the hotfixes to get back to a stable environment?

Also for your reference, read through a previous thread of ours for some alternative ways to correct policy based duplicates that get "confused".
https://forums.symantec.com/syment/board/message?board.id=be11dOther&thread.id=1216

sahorvat
Level 3
In my case, I was convinced by "advanced" tech support that the disconnect may be a result of corruption in the 10D database that was migrated to 11D at the time. In response, I installed 11D on another server and rebuilt all the selection lists and jobs in a fresh database. I then transferred that database to the original server. I needed to osql into the database in order to change the server name in the database back to the production server. I was also able to reconcile all my catalogs by running catrebuildindex.exe through the command prompt. The little detail that "advanced" tech support left out is that there is no tool to migrate just the media/media set information. So, although my catalogs were salvaged, they would be virtually useless because Backup Exec thinks all my media is overwritable.
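For anyone who wants to try the osql and catrebuildindex steps above, the general shape is roughly this. Treat it as a sketch: the BEDB database name and the -r switch are from memory, the actual UPDATE for the server name depends on the schema so I am not spelling it out, and the BKUPEXEC instance name is the one the SQL services further down this thread point at.

rem Open an interactive osql session against the Backup Exec SQL instance
rem (Windows authentication; the BEDB database name is an assumption)
osql -E -S %COMPUTERNAME%\BKUPEXEC -d BEDB
rem ...run the UPDATE that points the database back at the production
rem server name here; the exact table and column depend on the schema...
rem Then rebuild the catalog indexes from the Backup Exec directory
rem (the -r switch is from memory, verify it before running)
cd /d "C:\Program Files\Symantec\Backup Exec\"
catrebuildindex -r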
 
My solution, outside of recommendation of "advanced" support was to dive into the database through SQL Server Management Studio. I exported the dbo.Media, dbo.MediaSet, and dbo.ForiegnMedia tables from the old database to Excel. I then turned on the new database and imported the data into temp tables. I ended up deleting the contents of dbo.Media in the new database. I made some value changes to the fields in the new dbo.MediaSet table and then re-populated dbo.Media in the new database with the imported data from the old dbo.Media.
 
I ended up deleting the contents of table dbo.ForiegnMedia in the new database before I could clear dbo.Media in the new database.
 
I don't recommend this unorthodox procedure unless you are really comfortable with databases and understanding the relationships between tables. I needed to change data types on the old dbo.Media table imported from Excel to match the new dbo.Media. I also needed to change a ReservationId value in dbo.Device so that two of my Backup-To-Disk folders would inventory properly. A few inventories later, and some deleting of duplicate records with the same MediaLabel in dbo.Media, and I was up and running with a fresh database and all my catalogs and devices properly inventoried. I was even able to run a duplicate job that I had postponed from the morning using the old catalog data.
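For the duplicate MediaLabel cleanup mentioned above, a query along these lines gives a sanity check before deleting anything; it lists the MediaLabel values that appear more than once in dbo.Media (again, BEDB is an assumed database name and BKUPEXEC is the instance the services point at):

rem List duplicate MediaLabel values in dbo.Media before cleaning them up
osql -E -S %COMPUTERNAME%\BKUPEXEC -d BEDB -Q "SELECT MediaLabel, COUNT(*) AS Copies FROM dbo.Media GROUP BY MediaLabel HAVING COUNT(*) > 1"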
 
The jobs ran flawlessly last night, 100% success. This tells me that at least I didn't foobar anything major with my backdoor database antics. Of course, it is always that way for at least the first or second cycle after rebuilding jobs, even in the old database. The true test will come tonight when the server is rebooted before the cycle begins as part of a weekly reboot of Windows servers. This will tell me if the issue of restarting services outside of jobs running is still causing a problem. If so, then this is a defect in the Symantec product!
 
 
My next idea is to use a combination of the Backup Exec command line applet (bemcmd) and VBScript to create all the duplicate jobs once the disk backup cycle is complete. I'm thinking I can schedule it as a task. I'll use VBScript to build the Backup Exec script that will create and launch the jobs. Sounds good, but I'll have to see if there is enough flexibility in the command line applet.
 
I have no more ideas, other than convincing some mogul to buy enough stock in the parent company to get these guys interested in making their product respectable. The corruption of policy-based jobs has always been an issue with this product, and it even extends into version 12 according to the latest hotfix for that version.

Kevin_Hammond
Level 3
Yes, version numbers, service packs, and hotfixes do not seem to have much impact on the bugs in the product.  It seems the same or similar bugs just keep coming back.
 
10, 11, 12 they all have the same issues.

Kevin_Hammond
Level 3
Out of curiosity, have any of you upgraded the remote agents yet?  We have not upgraded the remote agents, so ours are still 11D 7170 SP1.  I am speculating that some of these issues may go away after upgrading the remote agents.
 
Kevin

sahorvat
Level 3
We just upgraded in January 2008. At that time all the service packs and hotfixes were applied. I then deployed the remote agents after all the updates. That said, I have some older systems on which I was unable to upgrade the remote agents. Some of them are still running the 10D agents, but they just complete with exceptions. The backups are reliable and complete.
 
Otherwise, two days and no failures. Also a good sign is that I had no failures after the system reboot last night before the job cycle. The true test will be when the .NET runtime error occurs. I guess it is possible that database corruption could cause a fault in one of the BE DLLs that may be acting up. I have no idea what functions are in those DLLs, so it is just a guess. It would be nice if that were the case.
 
I could swallow the runtime error restarting services in the middle of jobs every so often and re-running the jobs, as long as the links are valid in the next cycle. I suspect that should be the case, since I think my problem was two-fold. I believe the .NET runtime error was a bothersome occurrence that corrupted jobs in the cycle in which it occurred, but I think the issue of jobs being corrupted permanently by services restarting was causing my long-term headaches. Hopefully, the database reconstruction will help that. Only time will tell.
 
I am in the process of trying to identify links between Job History identifiers and duplicate jobs, in the hope that I might be able to come up with a stored procedure that will repair jobs.
 
 

Kevin_Hammond
Level 3
I read another technote that stated there is a memory leak when you are not using the latest version of the remote agent, due to encryption not being available.  This issue applied to a previous version of Backup Exec, but maybe this bug or a related memory leak still exists.  I know our beserver.exe process typically uses 1.5+ GB of RAM.  We get the typical "your system is running out of virtual memory" errors, and I think this correlates to when we get the .NET Runtime errors.  I have not set up the monitoring yet to verify the exact RAM in use when the .NET Runtime error occurs, but we see most of these symptoms around the time the service crashes.
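For the monitoring, something as simple as a looping batch file that timestamps tasklist output should be enough to line the memory size up against the time of the .NET Runtime error. This is just a sketch; the log folder is a placeholder and the five-minute interval is arbitrary:

@echo off
rem Log beserver.exe memory usage every 5 minutes (log folder is a placeholder)
if not exist C:\belogs md C:\belogs
:loop
echo %date% %time% >> C:\belogs\beserver-mem.txt
tasklist /fi "imagename eq beserver.exe" >> C:\belogs\beserver-mem.txt
rem Wait roughly 300 seconds; Server 2003 has no sleep command, so ping is used
ping -n 301 127.0.0.1 > nul
goto loop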
 
I concur about the service crashing and not restarting.  That is unacceptable.  It still boggles my mind that Symantec is putting out such a poorly qa'd product.  These are issues you expect in alpha testing a product.
 
Kevin

Kevin_Hammond
Level 3
One other thing we have noticed is that frequently when we see a lot of RAM in use by beserver.exe, it will be during the loading media state of a duplicate job.
 
I am currently watching it loading media for a 1 GB duplicate job.  It has been loading media for 1.5 hours and beserver.exe is at 1.5 GB.  Once the job finishes, beserver will return to its normal memory usage.

sahorvat
Level 3
I also notice that beserver.exe has high memory usage. Mine does not go away; it grows over time. I strongly suspect that this is causing my .NET runtime error. I have implemented a daily regimen of stopping and re-starting the BE services and the SQL instance, in that order. I notice an excessive number of page faults and growing memory usage in both services. Since I believe the database rebuild resolved the issue of jobs being corrupted just by shutting down services, this is a good temporary solution.
 
I also ran across another message in the forum related to high memory usage in beserver.exe.
 
Here the solution is claimed to be "Reinstalling the Microsoft Report Viewer Redistributable 2005". I didn't re-install, but I did run a repair. I'll see what effect this has for me.

wmheid
Level 4
I am currently working with the top of Symantec's Tech Support food chain on this and the impression is that they just don't know what the problem is.
 
I upgraded to 12 a year ago this week and have endured much, so let me ask some basic questions:
 
1.  Did you upgrade from a previous version or just do a fresh install?
 
2.  Are you using B2D devices?  How many? What kind? i.e. SAN or Local Attached
 
3.  What else is on the server?
 
4.  Is 2-3 days a real number?  (I run about 2 weeks between crashes.)
 
5.  What kind of Server hardware?
 
 
 
 

sahorvat
Level 3
"I am currently working with the top of Symantec's Tech Support food chain on this and the impression is that they just don't know what the problem is." GOOD LUCK. That is my general impression. If someone didn't script out the answer for them, they can't help you, even at that level where the ability to think should be a requirement for the job.
 
"I upgraded to 12 a year ago this week and have endured much, so let me ask some basic questions:"
Tsk tsk. I saw the sales pitch for 12 and blew it out of my inbox immediately. Heck, it took me this long to attempt 11D. The only thing that made me decide was the ability to re-run the failed duplicate jobs when they occurred.
 
"1. Did you upgrade from a previous version or just do a fresh install?" Fresh install, but I migrated a 10D database. I have since scrapped the migrated database and re-created all jobs in a fresh database. However, I did transfer device information from the dbo.Media table in order to preserve my allocated/overwrite periods.
 
"2. Are you using B2D devices? How many? What kind, i.e. SAN or Local Attached?" Yes; 10 of them.
 
"3. What else is on the server?" Nada. Just Backup Exec 11D.
 
"4. Is 2-3 days a real number? (I run about 2 weeks between crashes.)" Too real. The beserver.exe process starts out using about 96,000 K, and within less than 24 hours it has grabbed almost 780,000 K and won't let go! I could sit with Task Manager open and watch beserver.exe eat memory even when nothing is going on!
 
"5. What kind of Server hardware?" Windows 2003 on an HP ProLiant server with 3 MSAs loaded with disks, plus an MSL6000 tape library with two drives using LTO-3 tapes.

I created the following .BAT file and have it scheduled to run every night before the backups start to free memory. No .NET runtime error yet. However, it has only been a week.

@echo off
rem Change to the Backup Exec install directory so bemcmd can be found
cd /d "C:\Program Files\Symantec\Backup Exec\"
rem Stop the Backup Exec services via bemcmd before touching SQL
bemcmd -o503
rem Stop the DLO administration service and both SQL Server instances
net stop DLOAdminSvcu
net stop MSSQL$BKUPEXECDLO
net stop MSSQL$BKUPEXEC
rem Bring everything back up in the reverse order
net start MSSQL$BKUPEXEC
net start MSSQL$BKUPEXECDLO
net start DLOAdminSvcu
rem Start the Backup Exec services again
bemcmd -o502
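In case anyone wants to schedule the same batch file, a one-liner like this creates the nightly run. The path, task name, and start time are placeholders, and depending on the schtasks build the start time may need to include seconds (e.g. 21:00:00):

rem Create a nightly scheduled task that runs the restart batch file
schtasks /create /tn "Restart BE services" /tr "C:\Scripts\restart_be.bat" /sc daily /st 21:00 /ru SYSTEM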

sahorvat
Level 3
I finally had the opportunity to uninstall BE 11D, SQL 2005, and .NET 2.0.  I rebuilt the server and guess what: still memory leaks on beserver.exe. The only thing that SGMON indicates is
 
BESERVER: [05/30/08 15:45:01] [5668] -1 CJobManagerBO::CheckCPSNotificationMonitor() Checking if CPS is installed
 
 
This is the only activity from BESERVER when nothing else is going on. Could this have anything to do with the problem? Hmmm.
 
 

bePete
Level 3
I'll add to this one.  I've gone through the whole thing - uninstalled BE, reinstalled, uninstalled .NET, reinstalled.  Nothing recommended has fixed this memory leak for me.  I'm running BE 11D on a W2k3 SP2 server with 2GB RAM.  Every time the beserver.exe starts, it continues to eat up RAM until it gets to 1.8GB or so.  Then backup jobs begin to fail with an 'out of memory' error.  Server is completely patched on the MS side and the Symantec side.
 
I may try the batch file to get by for now, but this is ridiculous!
 
If anyone has any suggestions, PLEASE post them...

spar1GreP_2
Level 3

Add another one to the list.  My Backup Exec crashes once a week, and the event log shows the .NET runtime error.  Using perfmon I have also noticed that the beserver.exe process consumes memory until nothing is left (W2003 SP2 + hotfixes; BE 11d 7170 + SP and hotfixes 32-37; all remote agents reinstalled following the hotfix installations).  The graph produced by perfmon shows a nice escalating ski slope for the beserver.exe memory used.  I control it also by restarting the services.

 

In addition, between 05:00 and 07:00 GMT the Available MBytes counter drops from 2.5 GB free to 307 MB free. It recovers within ten minutes, but I have seen it drop as low as 12 MB free (depending on the memory free at the time of the drop). Using the Private Bytes counter for processes, I see that at exactly the same time Available MBytes drops, the Private Bytes of the pvlsvr.exe process (another Backup Exec service) increases (it is the only process that jumps inexplicably). I have added the Working Set counter for this process to see whether the amount being lost (Available MBytes wise) correlates to the total amount used by this process, as measuring just Private Bytes only accounts for 13 MB of the roughly 1.88 GB being lost in ten minutes. The pvlsvr.exe process, I believe, handles the device and media operations in BE. The funny thing is that nothing is going on at this time to cause this drop in Available MBytes.
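For anyone who wants to capture the same counters without building a perfmon log set by hand, typeperf (included with Windows Server 2003) can write them straight to a CSV file; the output path below is a placeholder:

rem Sample the counters discussed above once a minute and append to a CSV
typeperf "\Process(beserver)\Private Bytes" "\Process(pvlsvr)\Private Bytes" "\Process(pvlsvr)\Working Set" "\Memory\Available MBytes" -si 60 -o C:\perflogs\be-counters.csv -f CSV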

 

Wish I had confidence that Symantec will resolve this.  Maybe the pvlsvr.exe process is the cause of the crash, consuming all the memory in that ten-minute period and preventing the beserver.exe process from grabbing any more memory.

 

It makes me so sad to see how bad this product has got. Being a hobby programmer, when I look at the graphs for the beserver and pvlsvr processes I feel shame for the programmers doing such a bad job of controlling the memory they use.

 

sahorvat
Level 3

Symantec claims to have resolved the beserver.exe memory leak with Service Pack 3. Has anyone attempted to install this yet?

 

http://seer.entsupport.symantec.com/docs/303202.htm

 

bePete
Level 3

I just applied SP3, then ran LiveUpdate, which came up with still one more 'fix': Hotfix 47 (geez, how many more patches?).  After a reboot, without launching anything or running any jobs, the memory usage for beserver.exe is still climbing.  Right now it's up to 750 MB and still going.  I don't think this fixed the problem...

 

I'd be interested to hear if SP3 fixes this problem for anyone else.

RobertITC
Not applicable

We ran into the same problem: the memory usage of beserver.exe was climbing and climbing.

After the backup was finished it didn't return to its normal state. Instead it crashed the BE services.

 

We first installed all the updates and service packs available, but even after this the problem persisted.

 

Finally we checked the beserver.exe process with procmon.exe and found out that this process was writing to a file called crf.log.

 

We opened this log and read that it was unable to start/write to the Microsoft Report Viewer 2005.

So we downloaded this program and installed it. We rebooted the server and afterwards started a test run.

 

The beserver.exe process was then stable and didn't consume that much memory anymore. So we applied this to all of our servers, and everything looks good so far.

spar1GreP_2
Level 3

Was the issue that the .log file could not be opened because you had the .log file extension associated with Microsoft Report Viewer (which had been uninstalled somehow)?  We have this application installed already on the Backup Exec server, so I doubt it is the cause of our memory escalation problem.

 

Checked out what the beserver.exe process is doing using Sysinternals Process Monitor also; the only odd thing to note is that it regularly tries to find the registry key HKLM\SOFTWARE\Symantec\Backup Exec\Server\SendHistoryUpdateInterval, which does not exist.
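If anyone else wants to see whether that value exists on their media server without firing up Process Monitor, a quick check from a command prompt is:

rem Check for the registry value that beserver.exe keeps querying
reg query "HKLM\SOFTWARE\Symantec\Backup Exec\Server" /v SendHistoryUpdateInterval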

 

Have now installed SP3 and the two post-SP3 hotfixes (47, 48).  Performance Monitor over the last three days shows that the beserver.exe process is no longer consuming memory consistently (we no longer have a ski slope when graphing the process's Private Bytes). Rather, the Private Bytes now sits stable for roughly 24 hours and then consumes an additional 5 MB.  Previously the average daily increase in Private Bytes was between 30 and 50 MB.  We will see how well things are handled this weekend when the full Domino server backups and the full synthetic backups execute; history tells me that if the Backup Exec server is going to fail, it does it on the weekends when our full backups kick off.