NIC Teaming Slow Throughput, any ideas?
Hey guys, we currently have 2 Gb set up through NIC teaming for our Backup Exec setup. When backups are running, we barely reach 35-40% of the bandwidth. I am using Backup Exec 2010. Has anyone else here set up NIC teaming before? What are some things that might cause slow performance? Are there any specific settings in Backup Exec that may help? Thanks!

SOLVED: 2010 BEUtility shows server paused, devices does not.
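A quick sanity check on the numbers in the NIC-teaming thread above, assuming the team is 2 x 1 GbE (an assumption, not stated in the post): 35-40% of 2 Gb/s works out to roughly what one saturated gigabit link can deliver, which is the classic symptom of switch-dependent teaming hashing a single backup stream onto one team member rather than spreading it across both.

```python
# Sanity check on the NIC-teaming numbers, assuming a 2 x 1 GbE team.
# 35-40% of 2 Gb/s is roughly a single saturated gigabit link, the
# classic sign that per-flow hashing pins one backup stream to one
# team member. (Illustrative arithmetic only, not Backup Exec output.)
team_gbps = 2.0
one_gbe_mb_s = 1.0 * 1000 / 8  # ~125 MB/s theoretical ceiling per link

for pct in (0.35, 0.40):
    mb_s = team_gbps * pct * 1000 / 8  # Gb/s -> MB/s
    print(f"{pct:.0%} of {team_gbps:g} Gb/s = {mb_s:.0f} MB/s")
```

If that is the cause, running several backup jobs (several TCP streams) concurrently, or testing raw throughput with multiple parallel streams, should push utilisation above the single-link ceiling.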
EDIT: I called Symantec support and they tracked it down to Symantec Hotfix 354913. In Windows Server 2008 R2, go to Programs and Features under Control Panel, click "View installed updates", then scroll down to the Symantec section and locate Hotfix 354913. Removing it will stop your BUE services, uninstall the hotfix, and then start the services again. Then open BUE, go to Devices, right-click your server name and select Pause again. This time I did not get any errors; it just paused. I then right-clicked my server name and unpaused it, went back into BEUtility, refreshed the view, and confirmed that the server status had changed from "server paused" to "running". We then ran an export, an import and a quick erase, all tasks that were not working with this issue, and they were once again functioning as desired.

*****************************************************************************

Out of the blue yesterday, my Backup Exec 2010 installation (all the latest updates, running on a Server 2008 R2 Standard OS) just stopped allowing me to import, export or erase tapes. Scheduled jobs seem to run without issue, but without being able to import, export or erase tapes, I'll be out of available tapes soon. If I remove my tape library from my backup server and manually do an import or export, it works, but if the library is connected via its SAS cables, it tells me that it cannot complete the task because there is a lock on it from another source. The robot is a Dell ML6000. I spent 3 hours with Dell support today and we have pretty well concluded that the issue is not the robot. He suggested checking whether the server status was paused under the Devices tab, and it does not show that it is. If I launch BEUtility, though, it does show "server paused" under Status.
As a number of posts out there suggest, I right-clicked the server name under Devices and clicked Pause, and it came back with a message stating "Unable to pause servername"; when I click OK, though, it shows the status of the server as paused. If I then right-click again to unpause it, it pops up an error stating "Unable to resume servername", and again if I click OK it does actually change the displayed status to unpaused, but any jobs that were in a hung state still don't run anyway. I've rebooted the server numerous times and verified all the services are started. As I said at the start, this server was working perfectly for a few months as-is, and to my knowledge nothing was changed or done that provoked this. I'm at a total loss. Is there any other way to try and force this server to unpause?

Can Backup Exec perform dual tape drive backups simultaneously?
I have a Sun/Oracle SL24 autoloader with one tape drive that we use for duplication of our backups. As we grow, the duplication of our backups is taking longer and longer. We are considering purchasing a second tape drive for our autoloader, but we don't know whether BE 2010 R3 will alternate which drive is used, or whether it is capable of performing dual backups (two tape-drive jobs at once), assuming the drives are put into a pool. Does anybody have experience with this?

HELP!!! ODBC access error. Possible lost connection to database or unsuccessful access to catalog index in the database.
We're having major issues with our backup solution. We have a single BE 12 server, and we have been receiving the following errors from all of our Windows, Linux and Mac servers, along with errors relating to "value out of range". When I try to restore, the backup sets whose set number ends in a negative value (i.e. -30577), with a catalog file name ending in the same negative value (i.e. ....._-30577.xml), are the ones that don't show the restore list even though the backup job completes successfully. We've tried everything from repairing the DB to renaming the Catalog folders, etc., and still no luck! Please help! We are not able to do any kind of backup at this point, and we cannot afford to start over and lose months of backups.

Backup Exec Alert: Catalog Error
ODBC access error. Possible lost connection to database or unsuccessful access to catalog index in the database.

From the application event log:

Access to catalog index (Catalog index database) failed.
Reason: [Microsoft][ODBC SQL Server Driver]Numeric value out of range
cat_RecordSet::Open() r:\catalina\1364r\becat\segodbc\seg_odbc.cpp(2404)
SELECT distinct CatMedia.*, ImageObjectView.FragmentState FROM CatMedia, ImageObjectView WHERE CatMedia.MediaFamilyGuid = ImageObjectView.MediaFamilyGuid AND CatMedia.MediaNumber = ImageObjectView.MediaNumber AND CatMedia.PartitionID = ImageObjectView.PartitionID AND ImageObjectView.ImageNumber = ? AND CatMedia.PartitionID = ? AND CatMedia.MediaFamilyGuid = ? AND CatMedia.Status & ? = 0
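One plausible reading of those symptoms (an assumption, not confirmed in the thread) is that a catalog counter stored in a signed 16-bit SQL `smallint` column has overflowed: once the counter passes 32767 it wraps negative, which would explain both the "Numeric value out of range" ODBC error and set numbers like -30577. The arithmetic fits, as this sketch shows:

```python
import ctypes

def as_int16(n: int) -> int:
    # Reinterpret an unsigned counter as a signed 16-bit value,
    # the way a SQL smallint column would wrap on overflow.
    return ctypes.c_int16(n).value

# A counter that has reached 34959 wraps to exactly the value seen
# in the thread: 34959 - 65536 = -30577.
print(as_int16(34959))  # -30577
print(as_int16(12345))  # 12345 (values <= 32767 are unaffected)
```

If that is what happened, the negative numbers are a symptom of catalog growth rather than corruption, which would be worth raising with support before resorting to a catalog rebuild.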
Backup Exec 2014 Backup-to-Dedupe Duplicate-to-Tape
Some other questions have touched on this topic, but they all seem to be just a little different from my scenario. I have a single Windows Server 2008 R2 server with Backup Exec 2014; I am not using CASO with an MMS. I run incrementals throughout the week and then a full backup to our dedupe storage on a SAN. I then run a duplicate of the previous full backup job (not using the scheduled option for the duplicate job) to duplicate to tape for off-site storage. The intention is to retain two weeks of data on-site and more off-site. However, I see instances where, if the duplicate-to-tape job fails, the previous full backup is no longer on the dedupe storage. This means we have to do the entire full backup over again and then duplicate to tape. It also makes me wonder whether all the other jobs we write to tape in this manner are being removed from dedupe storage after the job either completes successfully or fails; that concerns me because we want to retain some of those on-site. Is this by design? Is there a way around this?

Backup Exec 2012 with a tape library: a specific tape for a specific backup job
Is it possible to configure Backup Exec so that BE uses a specific tape for a specific backup job? At the moment BE picks an arbitrary tape from the defined temp pool. Since the tapes are already labelled, it would be an advantage if BE wrote, say, the Monday backup to the tape named Monday and not to one named Friday.

Backup Exec backup of clustered SQL Server with Microsoft Cluster
Hi,

My customer has a two-node clustered SQL Server (Microsoft cluster) with the databases on a SAN volume, and wants to take SQL database backups through Backup Exec to robotic libraries connected directly to the SAN switch. Do I need to deploy a media server on each cluster SQL server to take backups to the robotic library (2 drives) directly connected to the SAN switch, or do I only need to deploy the Remote Agent for SQL Server on the SQL servers?

Regards,
Rakesh

NDMP Library problem - NetApp FAS2040, Quantum i80, NDMP, BE2010 R2
Hi,

I'm installing a new Backup Exec 2010 R2 x64 onto a Windows 2008 R2 x64 server (a VMware ESX VM), to be used to back up a NetApp FAS2040C system running Data ONTAP 7.3.4. At present we are using the trial licence, as our current Backup Exec 12.5 is still in use on another server and the new equipment has not yet gone live. Both currently available hotfixes and the tape device drivers have been installed. The Library Expansion Option is enabled and installed, as is the NDMP Option. The FAS2040C has NDMP enabled (v4) and the credentials tested. Each FAS2040 controller can see one LTO-5 drive through the FC connection; one head can also see the library, as it shares the control path with the drive. The tape library is a new Quantum Scalar i80 with 2 x LTO-5 FC drives. The drives are individually and directly connected to each of the FAS2040C controller heads' 0a fibre ports. On the i80, a single partition has been created containing the two LTO-5 FC drives. When I add the NDMP storage servers within BE 2010 R2, the library and both drives are shown: one shows as standalone (HP 0001), the other (HP 0002) appears under the library, but there is also a device shown as "MISSING 0001". The "Configure Tape Devices" wizard should be where this can be fixed, however when I move the standalone drive onto the missing device under the library, they simply swap positions. Can someone confirm whether this is a bug and whether it should work OK? I'm sure I've seen this same configuration used on other sites with previous versions of BE.

Regards,
Mark

Using the NDMP option to back up very large NetApp volumes. I need a way to reduce my windows so I do not cause weekday performance issues.
We are currently running BE 2010 to back up our NetApp data filers using the NDMP agent. Our volumes and aggregates keep growing bigger and bigger, so my backups take longer and longer. An example is the job below: it takes nearly 3 days to back up 10.8 TB, which eats into most of Monday if the backup job is started near the end of the day on Friday. Down the road I see these volumes getting bigger still, since the next version of ONTAP allows larger aggregates. When I have a 16 TB volume/aggregate, the job will take about 4.3 days to complete. That will definitely cause performance issues for several days of the week, which is unacceptable. Deduplication will not benefit us at all, because we want a full copy every week and we are working with seismic data that is all unique. The library is a dual-LTO4-head library that is fibre-connected to the NetApp, so the network should not be the bottleneck. My best suggestion is a way for me to stream the backup from one job to many tape heads; I think that capability would cut my time in half, because I would be writing the data to two heads at the same time. I cannot create two separate jobs, because Backup Exec will only allow me to back up a whole NDMP volume; I am unable to split a volume into two separate jobs. Please let me know what you suggest.

Doug
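The window arithmetic in this last thread can be checked directly. At 10.8 TB in 3 days the job averages roughly 42 MB/s, well below the ~120 MB/s native rate of a single LTO-4 drive, so the single NDMP stream, not the library, is the limiting factor; at the same rate 16 TB does indeed take about 4.4 days, close to the 4.3 quoted. A rough sketch (assuming decimal TB and a constant effective rate, which real jobs won't hold exactly):

```python
# Back-of-envelope backup-window arithmetic for the NDMP job above.
# Assumes decimal TB (1 TB = 1e6 MB) and a constant effective rate.

def effective_rate_mb_s(tb: float, days: float) -> float:
    """Observed average throughput of a job: TB moved over N days."""
    return tb * 1e6 / (days * 86400)

def window_days(tb: float, rate_mb_s: float) -> float:
    """Days needed to move `tb` terabytes at a given MB/s rate."""
    return tb * 1e6 / rate_mb_s / 86400

rate = effective_rate_mb_s(10.8, 3.0)
print(f"observed rate: {rate:.1f} MB/s")            # ~41.7 MB/s
print(f"16 TB, one stream: {window_days(16, rate):.1f} days")      # ~4.4
print(f"16 TB, two streams: {window_days(16, rate * 2):.1f} days") # ~2.2
```

This supports Doug's own conclusion: splitting the job across both LTO-4 heads would roughly halve the window, so the fix has to come from parallelising the NDMP stream rather than from faster tape hardware.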