
BUE15 Slow.....but host isn't 'busy'

Codders
Level 4

Hi there,

 

Sorry to post another 'my system is running slow' query...but I really need your help and I'm keen to know what others are doing.

 

Brief history:

We used to use Veeam Backup & Replication to protect our VMs. At the time Veeam could not write to tape, so we used to fire a Backup Exec job to push the resultant Veeam full backup file to tape. Our few physical servers were backed up via Backup Exec (2010 R2). We needed to upgrade our ageing hardware and also wanted a single-supplier solution.

...after much research, along came Symantec BUE 2014, and then BUE 15 (which we had to upgrade to due to a bug in BUE 2014 that was fixed in BUE 15).

History lesson over, now on to my problem.

I've had to spend every night nursing my BUE 2014/15 backups for the last 5 months. I have had countless calls logged with support for a variety of issues and can only assume I'm one of VERY few customers who use BUE 15 in this way, that is: create backup jobs on a CASO server, run the jobs on a primary MMS server and then duplicate the jobs to a secondary MMS server. We're using deduplicated disk storage that is local to the MMS servers. The primary MMS server also has a Dell TL2000 dual-drive LTO4 tape library. Our backup definitions run full backups at 18:00 on a Friday and incremental jobs at 23:00 every day, and each definition has three additional stages:

1. Duplicate the full backup to the secondary MMS server;
2. Duplicate the incremental backup to the secondary MMS server;
3. Duplicate the full backup to tape.

...the idea being that we'll have deduplicated backups duplicated to two MMS servers (that are physically separate from one another), but also a copy on tape that is fully 'hydrated', i.e., not in a deduplicated state.

Each of the above stages has the verify option enabled, and most of my backup jobs are VMDK-based (VMware 5.5). I have not installed the agent, purely because few of my servers will ever need files/folders restored directly to them; I'm happy to restore to the CASO server and manually move a restored file to the correct location.
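(For anyone trying to picture the layout, here is a rough Python sketch of the definition as described above; the server names and labels are just shorthand for this post, not anything taken from Backup Exec itself.)

```python
# Rough shorthand for the backup definition described above.
# Server names and labels are illustrative only.
backup_definition = {
    "created_on": "CASO",
    "runs_on": "MMS-primary",  # local dedup disk + Dell TL2000 LTO4 library
    "backups": {
        "full": {"schedule": "Friday 18:00"},
        "incremental": {"schedule": "daily 23:00"},
    },
    "stages": [
        {"what": "full", "duplicate_to": "MMS-secondary (dedup disk)"},
        {"what": "incremental", "duplicate_to": "MMS-secondary (dedup disk)"},
        {"what": "full", "duplicate_to": "tape (rehydrated)"},
    ],
    "verify": True,  # enabled on every stage
}

if __name__ == "__main__":
    for stage in backup_definition["stages"]:
        print(f"duplicate {stage['what']} -> {stage['duplicate_to']}")
```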

My backups are slow... I'm running 4 concurrent jobs and at the moment I'm nearly 3 days into a full backup. The MMS servers are new; both have 12TB of disk space, plenty of CPU and 32GB of RAM. Using Performance Monitor I can see that CPU rarely gets above 30%, network usage sits around 25% and memory usage rarely hits 20% of installed RAM. My disks are also idle most of the time. These kinds of performance stats would lead me to think I could increase the concurrent operations, but doing so slows down the 4 existing jobs. As I type I've actually made the decision to disable the tape drive and will run those jobs after the 'disk' based elements have completed. I'm using 5TB out of 12TB and have noticed that Windows is reporting the disk is heavily fragmented (83%). Surely if fragmentation were an issue I'd see it in the performance monitoring stats, e.g., disk queue lengths and the like? Also, Windows Server 2012 R2 is configured for weekly disk defrags. I have no AV software installed.
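(For anyone who wants to log the same counters outside Performance Monitor during a backup window, something like this Python sketch would do it; it assumes the third-party psutil package, and the 10-second interval is an arbitrary choice.)

```python
import time
import psutil  # third-party: pip install psutil

INTERVAL = 10  # seconds between samples; arbitrary choice

def sample_forever():
    prev_disk = psutil.disk_io_counters()
    prev_net = psutil.net_io_counters()
    while True:
        time.sleep(INTERVAL)
        cpu = psutil.cpu_percent()             # CPU % since the last call
        mem = psutil.virtual_memory().percent  # % of RAM in use
        disk = psutil.disk_io_counters()
        net = psutil.net_io_counters()
        disk_mb_s = (disk.write_bytes - prev_disk.write_bytes) / INTERVAL / 1e6
        net_mb_s = (net.bytes_recv - prev_net.bytes_recv) / INTERVAL / 1e6
        print(f"cpu {cpu:5.1f}%  mem {mem:5.1f}%  "
              f"disk write {disk_mb_s:7.1f} MB/s  net in {net_mb_s:7.1f} MB/s")
        prev_disk, prev_net = disk, net

if __name__ == "__main__":
    sample_forever()
```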

Veeam/Backup Exec 2010 managed fine... it just ran, on old hardware, and was two solutions rather than one. However, I am beginning to think I've wasted my money on BUE 15. I'm tired of watching my backups and tweaking things to make it work.

What sort of schedules do others use? Am I doing too much here? Should I opt for more incrementals and fewer full backups? (Veeam had forever incrementals... I only ever completed one full backup.)

 

Thanks in advance and sorry for the long post... I'm tired and stressed and hoping someone will pity me and, moreover, give me a snippet of information that's going to change my life!

 

 

Danny

7 REPLIES

pkh
Moderator
VIP Certified

I would suggest that you try these:

1) Do a backup to a disk storage device, not to your dedup folder.

2) Back up a particular VM, then install the remote agent in the VM and back it up again. Compare the two results.

Codders
Level 4

Thanks pkh.

 

Just so that I am clear.

 

1. Can the backup-to-disk folder 'live' on the same drive as the DeDupe folder?

2. Are you suggesting I back up a VM via the agent method, or still back up via VMDK but with the agent installed on the VM?

 

Thanks,

 

 

Danny

pkh
Moderator
VIP Certified

1. If possible, use another drive.

2. Back up the VMDK, but with the agent installed.

Codders
Level 4

Thanks pkh. I'll try this once my current backup jobs have completed. For info, Windows is reporting my DeDupe drive is 84% fragmented (5TB of 10.9TB remaining). Windows last ran a defrag yesterday, so no idea what's going on there.

 

 

esinc
Level 2

Hi Codders,

What speed is your LAN environment? Or, put another way: how do you get the data from the VMware host(s) to the Backup Exec server?

Dedup is for saving space, not for speeding up backups :)

So, as pkh already mentioned, do a VMDK backup to disk:

VMware -> Backup2Disk -> Copy to Dedup.

In other words, back up to a standard disk storage first, then duplicate that Backup2Disk backup to the dedup storage and, if needed, duplicate it again (still with the Backup2Disk backup as the source) to tape.

If you need a copy to tape, always use the Backup2Disk backup as the source, because a deduped backup has to be "rehydrated" before it can be copied to tape - and this takes time.

Keep in mind that attempting to read from a dedup storage (e.g. to copy data to tape) at the same time that backup jobs are writing to it will dramatically decrease the performance of the storage.

Additional information can be found here: http://www.symantec.com/docs/HOWTO74443 (Best practices for Backup Exec 2014/15 Agent for VMware) and here: http://www.symantec.com/docs/DOC5481 (Backup Exec Tuning and Performance Guide)

HTH :)

 

 

 

 

Codders
Level 4

Thanks esinc and pkh.

 

esinc, the network is 1Gbps. The data is coming from a Dell EqualLogic 1Gbps SAN. Network usage on the server is never more than 50%. I/O on the SAN is low, with latency below 5ms across all datastores.

I have run a backup-to-disk job and the throughput was much higher... the job completed in 40 minutes as opposed to 4.5 hours.
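(To put rough numbers on that difference, the sketch below works out the effective throughput for a hypothetical 250GB job over both durations; only the 40-minute and 4.5-hour figures come from the job logs, the job size is made up for illustration.)

```python
# Effective throughput for the same (hypothetical) 250 GB job over both durations.
# The job size is an assumption; only the durations come from the post above.
JOB_GB = 250

def throughput_mb_s(job_gb: float, minutes: float) -> float:
    """Average MB/s for a job of job_gb gigabytes finishing in the given minutes."""
    return job_gb * 1000 / (minutes * 60)

for label, minutes in [("backup to disk (40 min)", 40), ("backup to dedup (4.5 h)", 270)]:
    print(f"{label}: ~{throughput_mb_s(JOB_GB, minutes):.0f} MB/s")
```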

As mentioned, we have two servers, one for our main office and the second located at our DR office.

I'm thinking we run our backups at the primary office using the backup-to-disk folder and then duplicate to the second office. Thanks for the suggestion, esinc.

Where will the deduplication 'effort' take place, and will it still be optimised?

Secondly, the servers are not under any strain at any point during the backup process. We were told that deduplication is memory and CPU intensive, and yet our servers barely break a sweat. Is there a setting in BUE that could be limiting the amount of memory being used? I have confirmed we do have the 64-bit installation. I guess what I'm trying to say is that the physical server never appears to be running at capacity, barely a quarter in fact, so why would the backup/deduplication process not run quicker? The only bottleneck I can see is the BUE software itself.

Thanks,

 

 

Danny

esinc
Level 2

Hi Danny,

1 Gbps LAN means that your LAN can (theoretically) transport 125 MB/s. The speed of LTO4 is 190 MB/s, so your LAN delivers only 65.79% of the speed of your tape drive.
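(As a quick sanity check of that ratio, here is a small sketch with the drive speed left as a parameter, since rated LTO speeds differ between native and compressed figures.)

```python
# Compare theoretical LAN throughput against a tape drive's rated speed.
# The 190 MB/s figure is the one quoted above; treat the drive speed as a
# parameter, since native vs. compressed LTO4 ratings differ.
def lan_vs_tape(lan_gbps: float, tape_mb_s: float) -> float:
    """Return the fraction of the tape drive's speed the LAN can feed."""
    lan_mb_s = lan_gbps * 1000 / 8   # 1 Gbps ~= 125 MB/s
    return lan_mb_s / tape_mb_s

print(f"{lan_vs_tape(1.0, 190) * 100:.2f}%")   # ~65.79% of a 190 MB/s drive
```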

A tape drive isn't able to wait for data - it has to write the data in a continuous stream. If there is no steady data stream, the drive will write some bits, then stop, rewind a little, pause, write a few more bits, rewind a little... and so on. That is why a direct copy to tape takes so much more time (over your 1 Gbps LAN).

Dedup can run in two different ways: server-side dedup or client-side dedup.

Server-side dedup means that the Backup Exec server does the deduplication. The BE server must meet the hardware prerequisites for dedup:

1.5 GB of (free!) RAM per TB of deduplication storage, plus another 8 GB for the operating system, and a CPU core that is able to do the dedup work (by the way: CPUs with fewer but faster cores are better for dedup than ones with more but slower cores).
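(Applied to the figures earlier in the thread - 12TB of dedup disk and 32GB of RAM - that rule works out as follows; just a back-of-the-envelope check, not an official sizing tool.)

```python
# Back-of-the-envelope check of the dedup RAM rule quoted above
# (1.5 GB free RAM per TB of dedup storage, plus 8 GB for the OS),
# using the 12 TB / 32 GB figures from earlier in the thread.
def dedup_ram_needed_gb(dedup_storage_tb: float, os_overhead_gb: float = 8.0) -> float:
    return 1.5 * dedup_storage_tb + os_overhead_gb

needed = dedup_ram_needed_gb(12)   # 1.5 * 12 + 8 = 26 GB
print(f"needed ~{needed:.0f} GB, installed 32 GB")
```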

Client-side dedup means that the backup source server (the "client") does the deduplication, so that server has to fulfil the dedup prerequisites. If you use client-side dedup you will save LAN traffic, because only unknown and new blocks have to be transferred to the BE server.

Then there is an entry in the dedup database on the BE server for every item. If access to this database is slow (path: <drive letter>:\BackupExecDeduplicationStorageFolder\databases), the whole process is slow. Some customers have mapped an array of SSDs into the database folder of the deduplication storage by using a mount point (do not underestimate the size of this database!!) - and then they saw how dependent software is on hardware. :)
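(If you want a rough idea of how big that databases folder has grown, something along these lines would do; the path below is the default quoted above with an assumed drive letter, so adjust it for your install.)

```python
# Rough size check of the dedup database folder mentioned above.
# The path is the default quoted in the post; the drive letter is an assumption.
import os

DB_PATH = r"D:\BackupExecDeduplicationStorageFolder\databases"

def folder_size_gb(path: str) -> float:
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip files that are locked or vanish mid-walk
    return total / 1e9

if __name__ == "__main__":
    print(f"{DB_PATH}: ~{folder_size_gb(DB_PATH):.1f} GB")
```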

HTH!