
NetBackup 7.6.0.1 and Appliance 2.6.0.1 Install/Upgrade Field Experience

Khajan
Level 4
Employee Certified

We released NetBackup 7.6.0.1 and Appliance 2.6.0.1 on 18th December 2013 (yesterday), so let us start logging our experiences in this thread. I have been testing NetBackup 7.6 FA for some time, and soon I'll be helping a few customers with 2.6.0.1 and NetBackup 7.6.0.1.

For more details, please check the following links.

 

sort.symantec.com

entsupport.symantec.com - Enterprise NetBackup Server/NetBackup Appliances

Good luck with your upgrades.

 

Khajan Gowda

69 REPLIES

mandar_khanolka
Level 6
Employee

Hmm... this time would vary based on the server resource configuration and load.

By the way, Jim, did it work for you? Did BMR-enabled backups also work during the purge execution?

Thanks.

Mandar

Stanleyj
Level 6

Since upgrading I'm starting to see status 13s on first runs a little more frequently than what I'm used to. Most if not all will retry a little later and complete, but they all used to run just fine on the first go. I know I have an older appliance (5200), but has anyone else possibly seen an increase in these?

4/7/2014 7:10:08 PM - Critical bpbrm(pid=11778) from client p170.home.com: FTL - socket write failed     
4/7/2014 7:10:08 PM - Info bpbkar(pid=3048) accelerator sent 1265495040 bytes out of 384308321280 bytes to server, optimization 99.7%
4/7/2014 7:11:26 PM - Error bptm(pid=11810) media manager terminated by parent process       
4/7/2014 7:11:33 PM - Info nbuappl01(pid=11810) StorageServer=PureDisk:nbuappl01; Report=PDDO Stats for (nbuappl01): scanned: 374584750 KB, CR sent: 756164 KB, CR sent over FC: 0 KB, dedup: 99.8%, cache disabled
4/7/2014 7:11:33 PM - Error bpbrm(pid=11778) could not send server status message       
4/7/2014 7:11:34 PM - Info bpbkar(pid=3048) done. status: 13: file read failed       
4/7/2014 7:11:34 PM - end writing; write time: 0:08:55
file read failed(13)

 

I'm guessing I need to rearrange my policies to see if that makes a difference, but that is sometimes hard when the issue is on different machines every night.
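If anyone else wants to gauge how often this is happening, the command below run on the master server should list job completion statuses from the last day (the path assumes a default UNIX install; adjust as needed):

# Summarize backup job completion statuses from the last 24 hours;
# scan the status column for 13s that later completed on retry
/usr/openv/netbackup/bin/admincmd/bperror -backstat -U -hoursago 24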

jim_dalton
Level 6

To reply to Mandar: the server was doing nothing in terms of backup. 8 GB memory, all CPUs available. Not the fastest CPU, but hey. It worked, so it was worthwhile doing, and now the verify takes a minute. I'm sure I've posted this reply elsewhere.

Jim

Mux
Level 4

I have 2 5220 appliances and have successfully upgraded one from 2.5.2 to 2.6.0.3. So far so good.

 

For the second appliance, the MSDP pool is 90% used and has 5.2 TB free.

 

Will I be able to upgrade the appliance in the current situation?

 

Mux

Mux
Level 4

I have 2 5220 appliances, both of which were running 2.5.3.

 

I upgraded one and it is working fine. For the second one, the dedup pool has only 10% (7 TB) free. Will I face any issues if I go ahead with the upgrade?

 

 

fortec
Level 2
Partner Accredited

Are you using MSDP pools? If so, I saw in a slide deck recently that it is recommended to have 12% free space available for the MSDP conversion when upgrading from 2.5.x to 2.6.x.

phil_scarfo
Level 3
Partner Employee Accredited Certified

MSDP free space required is listed at 12%.

I'd probably try to get down to 15% free, though.

<insert shameless plug for additional storage shelf here>
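If you want to see exactly where the pool stands before committing to the upgrade, a quick check from the appliance shell is below (the crcontrol path is the usual MSDP location on these boxes; verify on your version):

# Show dedupe pool capacity, used/free space, and use percentage
/usr/openv/pdde/pdcr/bin/crcontrol --dsstat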

maurijo
Level 6
Partner Accredited

Try to process your queue all the way down and let compaction run. You will free up a lot of space, which will make you safe for the upgrade. But you will need extra storage if your dedupe pool is that full; you will get into a LOT of trouble if you let it go on like that...
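Roughly like this from the appliance maintenance shell (same crcontrol location as noted above; treat it as a sketch and check the options against your version's docs):

# Kick off transaction queue processing to reclaim expired segments
/usr/openv/pdde/pdcr/bin/crcontrol --processqueue

# Check whether queue processing is still busy; repeat --processqueue
# until the queue is fully drained
/usr/openv/pdde/pdcr/bin/crcontrol --processqueueinfo

# Once the queue is empty, start compaction to hand the space back
/usr/openv/pdde/pdcr/bin/crcontrol --compactstart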

Mux
Level 4

Yes. I am using MSDP.

 

I have already processed the queue and run compaction multiple times, but still no luck... :(

And yeah, I understand the risk of running the MSDP pool with only 10% free.

We have ordered the hardware for two new media servers, and once it is ready, I am planning to split clients between the media servers and do the cleanup activities.

 

So I believe my only option is to wait for the new media servers and upgrade this appliance once the MSDP usage is below 80-85%.

 

Thanks guys...

Nick_M_
Level 1
Partner Accredited

Hi,

 

We've noticed that our appliance is giving the same error (also under Appliance -> Status). We only noticed this after the upgrade to 2.6.0.3, as we were experiencing some connectivity issues from certain subnets.

Checking back, the issue was also present on the appliance when it was still on 2.6.0.1. It never caused any issues, though.

Were you able to resolve this error?

Resolved -- make sure nothing is wrong with the network config files, so that the clish can parse them correctly.
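For anyone hitting the same thing: the appliances sit on a SLES base, so the interface config files are the place to look (the paths below are standard SLES locations and are an assumption on my part; adjust if your layout differs):

# Look for stray edits, empty values, or non-printable characters
# that can stop the clish from parsing the network config
cat /etc/sysconfig/network/ifcfg-eth*
grep -n '[^[:print:][:space:]]' /etc/sysconfig/network/ifcfg-*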