SharePoint archiving - re-indexing crawl in SharePoint
Hi everyone. I just wanted to pick everyone's brain about something. Say we archive a few TB worth of SharePoint documents and they all get replaced by shortcuts. My understanding is that during its re-indexing crawl, the SharePoint server will reach out to every archived document on EV and bring it over to be indexed (I am not talking about EV indexing but rather SharePoint indexing). I guess an incremental crawl is not a big deal, but every once in a while SharePoint needs to perform a full crawl, which means a few TB worth of data will be pulled from EV and dragged across the network back to the SharePoint server purely for indexing purposes. I can't see how this is feasible: the full crawl will take weeks and may kill the network.

Netbackup VMware Backup - Windows Change Journal
If I am backing up VMware VMs using the FLASHBACKUP-WINDOWS policy type, with an off-host VMware type of backup (VRAY), then I assume the use of the Windows Change Journal on the VM itself is of no benefit? Any comments? I am using CBT for the VMs, and performing FULL and DIFFERENTIAL backups. I have 'lots' of files on this server, and am looking to get the shortest backup window possible. I am using a VMware mount host which is SAN-attached to the datastore LUNs of the ESX5 servers. NBU 7104. Any comments? AJ

HOW TO install BackupExec 2010 agent on Debian (RALUS)
I hope this post will be useful to many people (please vote for it or mark it as a solution if it helps you).

Installing RALUS directly on Debian will not always work.
First problem: ../perl/Linux/bin/perl: No such file or directory
Second problem: at the end, "was not successfully installed" and "impossible to add VRTSralus to (server)"
And some others that will get solved by following my solution.

This is a simple way to install it and avoid these (and other) problems:

1. (optional) Create a folder to keep all RALUS files and copy the archive into it:
   mkdir /root/BE
   mkdir /root/BE/RALUS2010
   mv RALUS_RMALS_RAMS-2896.9.tar.gz /root/BE/RALUS2010/
   cd /root/BE/RALUS2010

2. Unpack the archive provided by Symantec:
   tar xzf RALUS_RMALS_RAMS-2896.9.tar.gz

3. Stop the RALUS service if it is already installed and running:
   /etc/init.d/VRTSralus.init stop

4. Very important: if you are under a 64-bit Linux you have to do this.
   Extract the Debian package:
   tar xzf RALUS64/pkgs/Linux/VRTSralus.tar.gz
   Install the Debian package:
   dpkg -i VRTSralus-13.0.2896-0.x86_64.deb
   Start the installation:
   ./RALUS64/installralus

5. But if you are under a 32-bit Linux you have to do this (I have not tested under 32 bits):
   Extract the Debian package:
   tar xzf pkgs/Linux/VRTSralus.tar.gz
   Install the Debian package:
   dpkg -i VRTSralus-13.0.2896-0.i386.deb
   Start the installation:
   ./RALUSx86/installralus or ./installralus

6. Be sure to answer all questions correctly, especially the one about the host server (XXX.XXX.XXX.XXX): you must give the IP of the Backup Exec server.

7. Restart the RALUS Backup Exec agent, and it should say "[ OK ]":
   /etc/init.d/VRTSralus.init start

I hope it will help! Send me questions if you have other problems...

Denis

P.S. Tested with Debian 5.0.3

P.P.S. If you still have some problems:

A) If you get "ERROR: VXIF_HOME is invalid. It must point to the root of VxIF. Exiting ...", simply edit ./RALUS64/installralus and change line 3:
   from: VXIF_HOME=../;export VXIF_HOME
   to:   VXIF_HOME=/root/BE/RALUS2010/;export VXIF_HOME

B) If you get "./RALUS64/installralus: line 50: ../perl/Linux/bin/perl: No such file or directory", simply edit ./RALUS64/installralus and change line 50:
   from: ../perl/$OS/bin/perl -I.. -I$PATH -I$VXIF_HOME -I../perl/$OS/lib/$PERL_VER ../installralus.pl $*
   to:   ../perl/$OS/bin/perl -I.. -I$PATH -I$VXIF_HOME -I../perl/$OS/lib/$PERL_VER ./installralus.pl $*
   or to: perl -I.. -I$PATH -I$VXIF_HOME ./installralus.pl $*
   (to be clear, remove one dot in front of "/installralus.pl", keeping only one dot instead of two)

C) If the installation is successful but VRTSralus refuses to start, launch
   /opt/VRTSralus/bin/beremote --log-console
   to see the error. If you get "error while loading shared libraries: libstdc++.so.5: cannot open shared object file: No such file or directory", you simply need to install the package. Under Debian 6.0.3:
   apt-get install libstdc++5
   (Thanks to RockwellMuseum)

Deleting Backup_ID Cartridge space not reclaimed.
Team, I have a quick question. I have one cartridge in pool XYZ, and the cartridge is now full. I have monthly, weekly and daily backups. I have deleted the older backups via

bpexpdate -backupid ABC01_A_Backup_1344457991 -d 0
bpexpdate -backupid ABC01_A_Backup_1344457990 -d 0

and many more, but I can see the cartridge is still showing FULL status and is not able to take new backups even after deleting the backup IDs. Please let me know how this deleted space will be reclaimed.

System Recovery Management Solution 2013
I was just on the MySupport portal to create a new case for SSR2011ms and I noticed "System Recovery Management Solution 2013" as an option for product version. Is there a new version out? I didn't hear anything about a new version. Or will it be out soon?

Server Agent out of date
I started seeing a LiveUpdate Information message: "The agent for Windows on at least one server does not have the most recent updates. To install the most recent updates, on the Backup and Restore tab, right-click the remote computer that is out of date, and then select Update."

1. There is no indication of which server agent is out of date.
2. After right-clicking on each server, on some I can see Update is grayed out and on some it is clickable.
3. Does this mean they are out of date and need to be updated?
4. What happens if I update the server? Will I have to reboot?
5. The server is Windows 2003 with Exchange 2003 running on a cluster with 2 nodes.
6. It seems each node might need the upgrade, and the cluster seems to need it too.
7. I am assuming that after I update the active node the cluster will be up to date.
8. Should I manually run LiveUpdate on the Backup Exec server monthly, and after an update should I update the clients?

Backup an Exchange 2013 DAG without CNO/CAAP
Hi All, This is my very first post and I really do hope you can help, since we have been at it for 12 hours already. Is there any way to back up an IP-less DAG? We have a new Windows Server 2012 R2 Exchange 2013 cluster without CAAP/CNO and BE 2014 SP2, but no joy. Any help would be appreciated. Tnx. T

NBU 5220 Performance
We recently deployed a 5220 appliance into our environment; it was to be the savior in our battle against a backup window we were no longer able to meet. When we finally got it online and into our NBU environment, the initial performance was great. The area where we stood to benefit most was VMware backups. The datastores mounted directly to the appliance would allow direct access to the snapshots for fast and efficient backups. Before this, we were performing client-side backups, so the impact on the hosts every night was significant as we tried backing up 600+ VMs. The plan was to move all dev and test off-host backups to the middle of the day, since the performance impact was minimal and the end result was a larger window in which to complete things.

As we started with noon-time backups, deduplication rates were high and so were the speeds. However, this performance gain was short-lived: as we began increasing the load, we suddenly saw performance drop to a point of concern. Backups were no longer speedy, 3,500 KB/sec to 10,000 KB/sec. Some might pop up to 24,000 KB/sec, but in a sample of 15 as I write this, only one is showing 24,000. Now, I do have a few ideas:

1. We have a 72 TB appliance and therefore 2 disk trays. During the backups, one disk tray is going crazy; all the lights are flashing and you can really see that it is working. However, the second tray is doing nothing. While you might see a blink here or there, it is almost nothing compared to the other disk tray. Is this to be expected? When we looked at the disk configuration, it shows concat. Is this normal?

2. Too much data at once, and we are simply burying the appliance. In reality, what sort of performance should I be able to expect from the appliance?

3. Related to number 2: since we only have the one appliance right now while we wait to get the remote appliance in place, we are duplicating off to tape. This runs at the same time as backups, which means that while the appliance is writing a lot of data, it is also reading it back out to tape.

4. We are overloading the datastore, so that read speed is bad from source to destination. We have fewer hosts; therefore, if we limit the jobs per host, we limit the number of machines backing up at once (obviously). This meant backups took way too long, so we removed the limit per host and just set a limit per datastore. As we are new to it all, I am not sure what impacts what, but again, I am trying to list any and all ideas from the start.

5. The appliance does not support multi-pathing, so we only have a single path to the disk.

Beyond that I am not sure, but this is something that doesn't help with showcasing the appliance to management at the moment. However, given the initial performance, I am confident we can get back there.

Veritas cluster issue
Hi. We have a two-node VCS cluster. Two file systems, built on VxVM volumes, are configured under VCS. One file system got 100% full. We rebooted the cluster node and the mount points started to show, but after some time that file system disappeared. We checked the associated disk group and found it had been disabled; we tried a manual import and succeeded. But after starting the volumes, when we tried to mount the file system under the specified mount point, it gave the error below:

# mount /dev/vx/dsk/bgw1dg/vol01 /var/opt/BGw/Server1
mount: /dev/vx/dsk/bgw1dg/vol01 is not this fstype
#

Kindly suggest what could be the cause.

regards...
Arup
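Not an answer from the thread itself, but a hedged sketch of the usual first checks for this error. On Solaris, "is not this fstype" typically means mount tried the default file system type (ufs) instead of VxFS, so the type has to be stated explicitly; after an unclean disk group disable, the file system may also need a check first. The disk group, volume, and mount point names below are taken from the post; the commands are standard VxVM/VxFS administration, but verify them against your own platform and documentation before running:

```shell
# 1. Confirm the disk group is imported/enabled and the volume is started.
vxdg list bgw1dg
vxprint -g bgw1dg -v

# 2. Check what file system type is actually on the volume.
fstyp /dev/vx/rdsk/bgw1dg/vol01

# 3. After an unclean disable, check/repair the file system first
#    (run against the raw device, not the block device).
fsck -F vxfs /dev/vx/rdsk/bgw1dg/vol01

# 4. Mount with the file system type stated explicitly
#    (Solaris syntax; on Linux the equivalent flag is "-t vxfs").
mount -F vxfs /dev/vx/dsk/bgw1dg/vol01 /var/opt/BGw/Server1
```

If fstyp does not report vxfs at step 2, stop there: the volume contents are not what VCS expects, and mounting by force could make things worse.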