
RALUS performance

clod
Level 3

has anyone managed to achieve "reasonable performance" with the ralus agent on linux?

I can't manage more than 6MB/s with this, no matter what I twiddle in ralus.cfg or elsewhere (5MB/s is closer to the truth).

the source is a couple of OES (suse 10) servers, amd64 (dual quad-core xeon DL380s), with no load (no users) at backup time, and 1000basetx to the BE server.

disk io on the source is not an issue here; the abysmal throughput remains the same whether I'm backing up 2GB images or smaller files.

network bandwidth to the BE server is fine: ttcp fills the links (>80%), smb copies hit 40MB/s, ftp >70MB/s.
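(for reference, the raw link test was nothing fancy, just the stock ttcp receiver/transmitter pair, roughly as below; the host name is only a placeholder)

on the BE server (receiver):  ttcp -r -s
on the OES server (sender):   ttcp -t -s be-server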

ralus version is 1324. the BE server is a DL185 (dual quad-core opteron with 4G) running 2k3 server, with 12*750G satas on a p400 (512M) used to stage the transfer to an LTO4 overland neo library.

 

so the question is: does anyone actually use BE to back up linux servers, and if so, what sort of performance are you seeing?

backing up >1TB with this is a completely unfunny joke as it stands right now.

if someone else has managed to make RALUS not-suck, I'll persevere; however, I get the distinct feeling I'm wasting my time right now.

 

many thanks

 

 


Ashutosh_Tamhan
Level 6

Clod,

 

Would you post the RALUS debug console output here? Stop RALUS, start it using beremote --log-console, run a backup, and post the output from the console while the backups are running slowly.
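On a typical default install the agent binary lives under /opt/VRTSralus/bin and is controlled by the VRTSralus init script, so the sequence would look roughly like this (the paths and script name here are assumed from a standard install; adjust to match yours):

/etc/init.d/VRTSralus.beremote stop
/opt/VRTSralus/bin/beremote --log-console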

 

Comparing the throughput of backing up the same data via samba shares against the RALUS backup could help isolate the issue.

 

 

clod
Level 3

hello there,

thanks for taking the time to reply; however, I have already wasted a lot of time on this.

I was asking for real-world performance figures from people. Is that something you can provide?

 

many thanks

 

Ashutosh_Tamhan
Level 6

Figures from my "test" environment are as follows:

 

Backed up 15558 files in 9312 directories.
Processed 4,044,217,107 bytes in 7 minutes and 24 seconds.
Throughput rate: 521 MB/min
Compression Type: None

 

The amount of data is small. Maybe I could back up the entire SLES box with all of the ISO images/packages/mp3s etc. and see how it runs.

 

You might want to wait for people to post figures from their real world environments.

 

One such real-world issue is known when backing up small files from AIX systems:

http://support.veritas.com/docs/303811

 

 


clod
Level 3

thank you Ashutosh,

your results are a little better than mine (521 MB/min is only about 8.7 MB/s), but hardly "stellar" if that's 1000base ethernet.

it doesn't appear that anybody else wants to post their results; I'm guessing it's probably out of sheer embarrassment.

I'm seriously regretting going with be12 for this; it's extremely slow and has poor support for *anything* outside of win32/64.

 

if you have a netware/oes2 cluster, forget it. it's all but useless.

 

but you'd expect it to perform reasonably well with linux. why wouldn't it? why doesn't it?

 

I can't make a full backup daily, because a full backup takes more than a day to run, and there's no obvious reason why it should.

 

am I expecting too much from symantec?

 

many thanks for taking the time to respond.

 

Bjoern_Ott
Level 2

Hi Clod,

 

we have the same problems backing up our linux servers. I think the Agent has a bug when there is a large number of small files on the server. We had Backup Exec 10 with the old RALUS and the much older Legacy Agent in use, and the speed was much better than after the upgrade to Backup Exec 12. Now we have had to cut the selection down to a minimum, and we still need about 2 hours for 1GB of data. That is really bad. I hope that Symantec will release a new version of the agent.

 

Regards

Björn

Gladstone
Level 3

Directories : 2,678
Files       : 29,341

Job rate    : 3133.0 MB/min

This is around 52 MB/s

 

 

mike_brooker
Level 5

Verified 7226075 files in 222067 directories. 0 files were different. Processed 859391409406 bytes in 43 minutes and 26 seconds. Throughput rate: 18870 MB/min

 

I back up around 1TB of data in 10 hours from a SLES10 server. some data areas have hundreds of thousands of very small files (due to the nature of the work) and I can see it slow down when it gets to those areas, but having said that, it still gets through within 10 hours.

clod
Level 3

thanks for the reply mike,

that verify throughput rate is impressive, but it's not the backup rate. 18870MB/min (314MB/s) would be nice though!

if you're backing up around 1TB (1024^4 bytes) in 10 hours, that works out at roughly 30MB/s, which is an order of magnitude higher than the throughput a lot of people seem to be hitting on a typical mixed set of data.

it's certainly considerably more than I'm seeing with 2 massively overmuscled machines which, besides the running backup job, are completely idle.


keeping in mind that there are no obvious bottlenecks at the endpoints or in transit, I can't see any local issue that would explain this disparity. it appears I'm not alone, either.

any thoughts ?

 

cheers

 

 

 

mike_brooker
Level 5

That will teach me for posting just after waking up! I'll post better figures when I'm back at work.

 

It sounds like you back up a mixed bag of file types, just like we do: small files, image files, wav, all sorts.

 

My ralus.cfg file is pretty much standard, except that I added "Software\VERITAS\Backup Exec\Engine\RALUS\Encoder=LATIN-1" because of file naming issues.

 

Are you backing up via Samba shares?

 

Have you tweaked NFS at all?

 

I remember doing quite a few tweaks: I added "no_subtree_check" to the exports, and set a block size of 32768 for rsize and wsize on the NFS client.
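Something along these lines (the export path, client name and mount point below are only placeholders for illustration):

/etc/exports on the file server:
    /data    beserver(rw,no_subtree_check)

and on the NFS client:
    mount -o rsize=32768,wsize=32768 fileserver:/data /mnt/data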

 

It took me an age to get RALUS/BEWS working as I would like, and IMHO Symantec/BEWS etc. works fine on Windows, but with Linux it just never seems to sit comfortably (look at the new questions for the latest products!) and I wouldn't go down that route again.

 

With one server I ended up creating a cronjob to copy the files instead; it works great, though it's only a small amount of data.
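Nothing clever, just a nightly copy job along these lines (the schedule and paths here are made up purely as an example):

# crontab entry: rsync the data across at 01:00 every night
0 1 * * * rsync -a /srv/data/ backupserver:/backups/data/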

Mrwolf
Not applicable

I've the same problem with 12.5 and RALUS on an HP BL-460c.

My job rate is 680MB/min, and the windows rate is 5800MB/min.

Now I'm applying 12.5 sp1...

 

The TSA (Target Service Agent) is used to back up Novell's iFolder, GroupWise, eDirectory, and NSS volumes.

* To back up the trustee assignments (also known as the extended attributes of the file system), the backup selection is made through the "ROOT" selection of the volume in Linux.
* Backing up and restoring the file system via SMS (Novell's Storage Management Services) is not supported in the 12.5 version of the Backup Exec for Windows Servers product.
* The latest RALUS agent supports the backup and restore of the extended file attributes (trustee assignments), so selecting the NetWare file system via SMS is not necessary.
To back up the trustee assignments, also known as the extended attributes of the file system, the backup selection is through the "ROOT" selection of the volume, in Linux. * Backing up the file system and restoring the file system via the SMS (Novell's - Storage Management Services), is not supported in the 12.5 *version of Backup Exec for Windows Servers product. * The latest RALUS agent supports the backup and restore of the extended file attributes (trustee assignments), so selecting the NetWare file system via SMS is not necessary