Impressions of BackupExec 2012

Bulbous
Level 5
Partner

Is it just me, or does anyone else absolutely HATE the redesign of Backup Exec? I have worked with BE since version 8, and I have become acutely familiar with the menus, where everything is, and how it works.

This redesign of the UI reminds me of the differences between Microsoft Office 2003 and Office 2007, only much worse. Menus are now hidden behind other menus, and everything has a completely counter-intuitive feel.

At first, I thought that the feeling would pass as I grew more familiar with the product, but in fact my dislike has grown as I have found more issues.

Does anyone else feel the same way?

417 REPLIES

TTT
Level 4

That brings us to another important goal of the release… bring the great technology of Backup Exec and its ability to work with tape, disk and cloud to new users too. I can't count how many times I heard stories where people would evaluate Backup Exec for the first time and walk away saying "It's much more complex than what I need". When you talked to them and learned their environment, you'd find out it's exactly what they need in terms of the underlying capabilities. But from their perspective, it was far too difficult to use. This would happen time and again.

Greg, I understand the new interface and workflow design is helpful for first-time users.  (That, and Symantec's and third-party consultants' services.)  However, there should still be an "expert mode" available as an option, with things such as a centralized job monitor (to view all backup jobs and storage operations, including MMS and CASO jobs), bringing back the multiple-window tree view for media, doing a right-click/restore when viewing a backup set, etc.

As for new-user adoption, I suggested back in 2007 to the presenter at a Backup Exec tech day in NYC that Symantec do a series of videos (or a whitepaper, or both) that would portray a fictitious company (such as Microsoft's Contoso) and how Backup Exec fit the company's growing needs.  It'd run from beginner to expert on using all the features, and cover the entire size range of SMB.

The fictitious company would start with 2 simple servers (web server and file server), and as it slowly grew, it would require more resources/servers (Domain Controllers and Exchange), then become virtual (VMware agent) with SAN/NAS, and eventually wind up with *all* BE options (Oracle, Sharepoint, DeDupe, CASO/MMS, SSO, etc.).  Each stage would have a short video (e.g. "Company, Inc. is now expanding sales of their product and needs to support an ERP system with a clustered database") and point to an accompanying whitepaper.  And the videos would *not* be like the Hal videos: they'd portray real solutions with confident and educated IT staff, not Marketing's perception of IT professionals.  (In my opinion, Hal should never have been in charge of his company's backups.)

I'm one of those testing before I deploy to production (and I have a case open with tech support now).  I'm not converting anything over; I'm starting with clean BE servers.  What would be helpful for my transition would be detailed information on how to do the advanced things (e.g. daily management of 50 servers) that I used to do in previous versions, sort of like the blogs are attempting to do now.  Ultimately that could come in the form of the fictitious-company whitepaper I mentioned above (giving an in-detail description of a comprehensive BE package deployed at larger scale).  Has Symantec thought about documenting their test system for us?

BE_KirkFreiheit
Level 4
Employee

hazmat09, I am a Backup Exec developer at Symantec and want to better understand a couple of items you pointed out above.

Regarding the "Immediately after" linkage between your Full/Diff and Duplicate stage, I'm wondering if you had it set up as a staged operation using a policy in BE2010.  If so, it would have required two Duplicate templates in the policy, one for the full and one for the Diff --- it's been a long-standing limitation of "Immediate after" scheduled Duplicate templates that they can be linked to exactly one previous template.  I agree that it's confusing, and hope to make it work as you expect (i.e. just make one Duplicate stage to copy all inputs immediately) in a future release.

I'm just wondering if that was an upgrade issue or if it was a post-upgrade job editing (add stage) issue.

Second, I'm very curious to learn how you (and anyone on the board, of course!) used the Edit Next Run feature.  Was it something you leveraged quite often?  Were there patterns of issues to work around with that feature, or were the issues it helped with all over the map?

Rem700
Level 3

It feels like a version you'd get pre-packaged with an external HDD - lacking many features, with an over-abundance of "pretty" GUI.

 

One of my biggest annoyances is that I can't, or haven't found a way yet to, run a backup job within a time window, repeating every X hours (start an incremental backup at 6 AM, then run it again every 2 hours until 4 PM, M-F).  Acronis does this.  I'm still looking for a way to do this without having to create multiple jobs.
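
Until something like this is supported natively, one workaround is an external scheduler that fires the job inside the window. Below is a minimal Python sketch of that idea; the command it invokes to start the job is a placeholder (how you actually kick off a BE job depends on your setup), so treat it as a sketch, not the product's own mechanism.

    # Sketch: fire an existing incremental job every 2 hours, 6 AM - 4 PM, Mon-Fri.
    # JOB_START_CMD is hypothetical -- substitute whatever actually starts your job.
    import subprocess
    import time
    from datetime import datetime

    JOB_START_CMD = ["start_incremental_job.cmd"]     # placeholder launcher
    START_HOUR, END_HOUR, EVERY_HOURS = 6, 16, 2

    while True:
        now = datetime.now()
        due = (now.weekday() < 5                          # Monday-Friday only
               and START_HOUR <= now.hour <= END_HOUR
               and (now.hour - START_HOUR) % EVERY_HOURS == 0
               and now.minute == 0)
        if due:
            subprocess.run(JOB_START_CMD, check=False)    # kick off the job
            time.sleep(3600)                              # don't re-fire within this hour
        time.sleep(30)                                    # poll twice a minute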

hazmat09
Level 4

Hi Kirk,

Question One:

Take a DFS job as an example. Prior to the upgrade, I had a Selection List for the DFS server data. I had a Full Backup Policy which would duplicate to tape immediately. Lastly, I created the job from those two items. I then did the same thing for a DFS Differential job. The upgrade separated my job into numerous jobs, so I basically redid them from scratch. I believe they had the duplicate stage on them, but I can't fully remember. What I found confusing at first was that I didn't realize you just add an additional stage for the Diff duplicate and point it to the Diff job in the settings. What made it even more confusing was that tech support told me I did not need that, which made no sense to me whatsoever, so I basically said okay and figured it out on my own.

I totally understand how the stages work now, and it is very logical. That being said, coming from the old methodology made it confusing, and it was only by playing around with it a bit that I got the gist of it.

Question Two:

I used it in a few different scenarios.

  • Jobs were put on hold for whatever reason, and I then wanted to re-run them outside of production at a particular time in the evening.
  • If a job failed, particularly a full, I wanted to re-run it later in the evening of the next day.
  • If there was a holiday on the day you were swapping your tapes out and your full was due to run that night, I would do Edit Next Run and schedule the fulls for the following evening.

 


GregOfBE
Level 4
Employee

If you want to overwrite media with your first backup, one approach is to designate one of the servers' backups as the "first" job to kick off. Edit that one backup and set its media overwrite option to "Overwrite media", and bump up its job priority from medium to high. You do that by selecting that server, going into its details view (double click), and highlighting the backup; then hit "Increase priority" from the toolbar. (Little bug: you have to hit F5 to see the priority column update, until we fix that. :) ) This will cause that backup to run before the other ones that are scheduled with the same start time. (Alternatively, you can change that backup's start time to be a little before the others.)

Similarly, designate the "last" backup and do the same except in reverse - a lower priority or a later start time - and turn on its eject-after checkbox.

For all the backups in the "middle" of the sequence, you can multi-select them and hit "Edit Backups" to edit them - then make sure they are set to append to media and NOT to eject. So you'll have to do 3 edit operations to create this "sequence" for a manual tape drive.

It's also good to have your media set options configured with a long enough append period to take all these jobs (say, a little longer than the time they should all take to complete).
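
To make that three-part recipe concrete, here is a small Python sketch that expresses it as plain data: the first job overwrites at high priority, the middle jobs append, and the last job appends, runs at lower priority, and ejects. The field names here are illustrative only, not the product's actual setting names.

    def sequence_settings(job_names):
        """Per-job media settings for the first/middle/last tape sequence."""
        settings = {}
        for i, name in enumerate(job_names):
            first = i == 0
            last = i == len(job_names) - 1
            settings[name] = {
                "media_overwrite": "overwrite" if first else "append",
                "priority": "high" if first else ("low" if last else "medium"),
                "eject_after_job": last,
            }
        return settings

    for job, cfg in sequence_settings(["SRV1", "SRV2", "SRV3", "SRV4"]).items():
        print(job, cfg)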

When you're done, let us know how it's working.

GregOfBE
Level 4
Employee

Thanks so much for taking the time to post this and share your experiences. It not only helps other users, but also helps us improve the product going forward. You didn't just ask for features; you explained the problems you're trying to address, which helps a lot. I agree with many of your statements - especially about finding ways to make the transition easier.

GregOfBE
Level 4
Employee

Thanks for those thoughts. I think you have some great ideas.

Concerning the CASO/Job view: you have multiple "jobs views" for your CASO environment. Here they are:

1. Go to the Storage Tab and drill into the CASO machine at the top. There you will see "Jobs" and "Job History" tabs. Those tabs show you all device/utility jobs and backup jobs across your environment.

2. Go to the Backup and Restore tab, enable groups from the toolbar, and double click on the "All Servers" group. Again, the "Jobs" and "Job History" tabs will show all server-related jobs across your environment (backups and restores).

3. Backup and Restore tab: Multi-select any servers arbitrarily and hit the yellow "Details" button on the upper right (the yellow button that says [n] Servers). Now the Jobs tabs show jobs for those selected servers.

4. Backup and Restore tab: Double click any group that you create, and the Jobs tabs show all jobs across the servers in that group.

5. Storage Tab: Double click on any deduplication storage, disk storage device, or tape device, and you have Jobs tabs that show backups that are targeted specifically at that device.

 

Concerning the right-click / restore on media - yes, we've heard this one and I hope to get that back in.

Keep the ideas flowing...

pkh
Moderator
VIP Certified

The Edit Next Run is very important in my case.  I use a script as a pre-command to import tapes which come back from off-site.  If there is a foul-up in the delivery and there is no tape, I have to remove the import.  Otherwise, the import job will just hang waiting for a tape and all my backup jobs will fail.  With the Edit Next Run, I just remove the script from the pre-command of the next run.  I don't have to remember to put back the script as a pre-command on the following day when there is a tape.
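
For what it's worth, a pre-command can also be written to guard itself, so the job needs no manual Edit Next Run when a delivery fails. A rough Python sketch under one assumed convention - a flag file dropped when no tapes arrive - with the import routine itself left as a placeholder:

    import os
    import sys

    SKIP_FLAG = r"C:\BackupScripts\no_tapes_today.flag"
    # Assumed convention: ops create the flag file when the tape delivery fails.

    def import_offsite_tapes():
        print("importing returned tapes...")   # placeholder for the real import logic

    if os.path.exists(SKIP_FLAG):
        os.remove(SKIP_FLAG)    # one-shot: tomorrow's run imports normally again
        sys.exit(0)             # skip the import; exit 0 so the backup still runs
    import_offsite_tapes()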

GregOfBE
Level 4
Employee

Yes, those are great points - especially for long-time users. I'd also point out that once in the jobs view, you can switch between the tree and the list. Suppose you want to sort based on fulls vs. incrementals: switch to list mode to do that and sort by the type. I'd also suggest exploring the sort/filter button on the toolbar. It now lets you customize columns, sort and filter criteria, and save the view with a name so you can apply it easily. You can use sort/filter named views in conjunction with groups to slice and dice more easily.

 

pkh
Moderator
VIP Certified

@BE_KirkFreiheit - Re-posting my answer because I am afraid that it will get lost in the middle

The Edit Next Run is very important in my case.  I use a script as a pre-command to import tapes which come back from off-site.  If there is a foul-up in the delivery and there is no tape, I have to remove the import.  Otherwise, the import job will just hang waiting for a tape and all my backup jobs will fail.  With the Edit Next Run, I just remove the script from the pre-command of the next run.  I don't have to remember to put back the script as a pre-command on the following day when there is a tape.

Emiliano_Caruso
Level 3
Partner Accredited

I have worked for several years with Backup Exec, and in this version, despite the change to the new interface, upgrading from older versions was easy; I've found no problems setting up backups via LAN, VMware, or the database agents. It is easy to use and manage. The only thing is that if you are accustomed to the old version's interface, you just have to get used to changing your thinking from backing up jobs to backing up the server.

robnicholson
Level 6

>To me, in "my world", disaster recovery involves complete loss-of-site, with high risk of "my business goes bankrupt".  Imagine that on a resume!

But for disaster recovery, off-site tape is an awful choice, and many of those companies that went bankrupt had reasonably good tape backups.

If you are serious about business continuity, then backup-to-disk, with replication to another site, should be where it's at.

Unfortunately, our experience of BE dedupe is that it's *slower* than tape... B2D is pretty fast, though. For us, dedupe is more about day-to-day disasters, such as a user deleting a folder or discovering they corrupted a file several months ago. So the ability to keep a six-month backup window is useful.

Dedupe should be the norm in the backup world, but Symantec need to look seriously at their architecture to make it faster, more reliable and more resilient (e.g. the ability to rebuild the dedupe database in case of corruption).

Cheers, Rob.

patters
Level 4

Since some developers are reading this, and since there doesn't seem to be a place for bug reports (only ideas), I've posted all the bugs/improvement suggestions I've found so far here, excluding the more serious ones that I have raised separately as support requests:

https://www-secure.symantec.com/connect/ideas/list-ui-bugs-and-others-i-have-found-be-2012

GregOfBE
Level 4
Employee

Excellent, thanks for doing that. I'll check it out.

ianSinclair
Level 3

I upgraded our 2010 backup to this new product. I understood there would be differences, but never did I think things could get this bad. I logged a technical support call and explained what I wanted to do, and I have since emailed Symantec my thoughts on, and problems with, the new version. They did seem genuinely concerned - I even spoke to the support manager on the phone - but at the end of the day I have had to roll back to 2010, as I cannot easily get 2012 to do what I want to do. I would ask everyone to log calls and email in, because the sooner they realise this is not going to work, the sooner they can fix it. Here is my email:

Hi, thanks for the email. I will attempt to explain what I need from the product; as of now, I have to roll back to 2010.

 

 

We have quite a complex environment, with 14 physical servers and 56 virtual machines. We have a SAN with 10 terabytes of data.

 

We back up to tape so the tapes can go offsite, so that in the event of loss of the server room or building we can restore all our servers and data.

We are not too concerned with individual file restores, as we have SAN snapshots for this.

 

We have spent a lot of time organising our backups so that the data is ordered on tapes, with certain data being on certain tapes, so that, for instance, a domain controller and email are the first things to be restored (business priority).

 

All our servers have a C:\ drive which is physical; our Exchange and SQL servers have physical C:\ drives too, but the log and database drives are on the SAN.

 

We have a robotic library with 4 tape drives and 96 slots. We have this as one big library, so as long as there is a tape in it, the backup jobs will use it. It makes it very simple: we have a selection list of all the C:\ drives, which we point at the library and say "back this up at 9 pm".

 

The library will pick one of the available drives and one of the tapes and run all these jobs to one tape. I run a report that tells me what is on what tape, and this goes offsite with the tape. I can only send a certain number of tapes offsite, as our box only holds so many and we only have so many tapes.

 

Another job backs up the E:\ and F:\ drives, which are on the SAN; these are mounted on the C:\ of the backup server as folders. I back these up with another job to another tape, and it goes offsite.

 

 

When we do a disaster recovery, one person gets the C:\ drive tape and does the restore, and another gets the E:\ and F:\ drive tape and does a restore.

 

What I need from the software is quite simple. I want to back up the C:\ drives of 4 servers, all to the same tape, preferably as one job. – Job 1, Tape 1

I want to back up the E:\ drives of the servers to another tape. – Job 2, Tape 2

 

I want another job covering 2 servers' C:\ drives, and I want these to go on the same tape. – Job 3, Tape 3

 

I then want another job which backs up a further selection of parts of jobs 1 and 3 to another tape for the fire safe. – Job 4, Tape 4

 

3 tapes offsite and one to the safe. Simple.

 

I do not want to have to partition my library into lots of slots, then do an overwrite job and guess the end time, then an append job and guess the end time……

 

On a Sunday I back up all 14 servers. I don't want 14 tapes; I need control of which server is on which tape, and I need to keep it to 5 tapes.

 

I think the easiest thing for you to do would be to let us create resource groups and then back them up as one job. Let's call them selection lists!

 

I don't care for the new interface - there is too much hidden away behind a double click or a right click - but I can get used to that. Not being able to back up 4 servers' C:\ drives onto the same tape without a complex library configuration means your product no longer suits the purpose we have it for, so my personal 12 years with Backup Exec may reluctantly have to come to an end.

 

I think somebody has already said it in the forum: it's like this is the simple version. Where is the button I click to get the "I am an experienced network administrator" version?

patters
Level 4

Thanks. I've just added a new suggestion to it about Duplicate tasks (related to expiry).

patters
Level 4

I wouldn't agree. Tape is good for offsite recovery if you're budget-conscious and can accept a day's lag in getting up and running again in a disaster, and with VMs the restore is beautifully simple.

But the one thing that is never made clear until you try a recovery rehearsal is that you absolutely *must* have some way of shipping your zipped-up catalogs to a server off site. I have scripted this after every backup run, and I have the catalogs set to 2 months' retention only, to keep the size reasonable. Do this and you can start restoring immediately at the recovery site; from LTO4 in a SAS loader you're looking at 3-4GB/min. Fail to do it and you may have to waste days just cataloguing the tapes.
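
As a rough illustration of that catalog-shipping step, here is a minimal Python sketch. Both paths are assumptions - the catalog folder is the usual default install location, and the offsite share is hypothetical - so adjust them for your environment.

    import shutil
    from datetime import date

    CATALOG_DIR = r"C:\Program Files\Symantec\Backup Exec\Catalogs"   # assumed default path
    OFFSITE_SHARE = r"\\dr-site\be-catalogs"                          # hypothetical DR target

    # Zip the whole catalog folder and write the dated archive straight to the share.
    archive = shutil.make_archive(
        rf"{OFFSITE_SHARE}\catalogs-{date.today():%Y%m%d}", "zip", CATALOG_DIR)
    print("shipped", archive)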

TTT
Level 4

Greg, thanks for checking in on this message!  It's reassuring to see that Symantec's willing to jump in and help us out.  If the whitepaper I described (a full-scale deployment of all BE options) is created, please let me know!

I knew about most of those job views (I didn't know I could do #3 - that's helpful), but it still has me searching across the GUI to find jobs.  I'd rather have an "expert mode" option to enable a single job monitor as well.

Is there any display for the CASO/MMS "Copy server settings" jobs?  If I copy settings, there's no proof that it happened.  My logon account didn't copy down and I had no way to figure out why it didn't (so I just recreated it manually on my MMS).

My CASO isn't updating the "last 7 days of backup jobs" green/yellow/red icons for any backups that run on my distributed MMS, but that's not a big deal-breaker; I'll probably open a case about it later.

ianSinclair
Level 3

Greg, your append-to-media idea will not work if you have multiple tape drives and/or a robotic library, because all the server jobs will be set to start at the same time. If you choose one overwrite job, it will start and overwrite the first tape in the library; when the next job starts at the same time, there will be no appendable media available, as the media is being used by the first job. So, with your selection being "overwrite media if no appendable media is available", the job will simply overwrite another tape, and you are back to one tape per server. This is fine in the "I have 3 servers" world; when you have 30, you run out of tapes. If you choose the terminate-job option, the job will terminate, as there is no media available.
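
A toy Python sketch of that race - all jobs start together, none finds appendable media because the first tape is already claimed, so "overwrite if no appendable media" sends each job to a fresh tape:

    def assign_media(jobs, appendable_tapes, blank_tapes):
        """Which tape does each simultaneously-starting job claim?"""
        claims = {}
        for job in jobs:
            if appendable_tapes:
                claims[job] = appendable_tapes.pop(0)   # append to an existing tape
            else:
                claims[job] = blank_tapes.pop(0)        # forced overwrite: fresh tape
        return claims

    # 21:00: every job starts at once, so no tape has become appendable yet.
    print(assign_media(["SRV1", "SRV2", "SRV3"], [], ["T1", "T2", "T3"]))
    # -> {'SRV1': 'T1', 'SRV2': 'T2', 'SRV3': 'T3'}: one tape per server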

 

What you have to do is guess how long it will take server 1 to back up, then change the time on server 2's job to start after that; that way it will see appendable media and write to it.

Or, rather than having one nice library that you point all your jobs at - "there is a library, it's full of tapes, take your pick" - you have to partition the thing down, which adds complexity; then you have to make sure you have some tapes in each partition, rather than just some tapes in the library.

Symantec have got to be big enough to stand up and say that server-centric is not for everyone. It works well for first-time users in a 2-server environment, but move up to the enterprise and you really need resource-centric backups. I may want all my email on one tape and all my SQL on another; that's fine when you have one of each, but once you have more than one of each it gets very complex.

I wonder if this is a marketing ploy. At the moment, Backup Exec is the backup solution of choice for SMBs; this latest incarnation is clearly aimed at the small end, and medium-sized shops with any kind of complexity have lost out. So I am concerned that Symantec's answer will be to bring out another product - Backup Exec Enterprise or the like - and we will have to buy it all again, not just pay the support costs.

 

Well, that's my opinion.

 

Ian