
bperror -media -U results - what next?

RichardXClark
Level 4

Hi,

I have issues with drives repeatedly going down.

Quantum PX506
6 drives SDLT320
NBU 6.0 MP4 Windows 2003 single Master / Media

Initially drives #2 & #4 were going down. Drive 4 was replaced yesterday by Quantum.
I removed all drives & robot & readded using wizard.
All serial numbers in NBU match real physical serials.
Full server (wintel) & library reboots have been done.
Over last few days at least 5 different media IDs have been noted as CRC failures. I have had 1 or 2 bad tapes before, never this many.

Last night, drive # 3 went down.

What is my course of action?

So far in my to do list / action plan is...

Clean all drives
Check / update library firmware versions
Check / update windows device drivers
Run rob test
Run library diagnostis (has a web interface)

What else?

Thanks in advance, Rich

***********************************
bperror -media -U results
***********************************
TIME SERVER/CLIENT TEXT
06/18/2009 13:25:25 uk21pbkp001 - error unloading media, TpErrno = Robot
operation failed
06/18/2009 15:20:59 uk21pbkp001 uk21pfsr003 error requesting media, TpErrno =
Robot operation failed
06/18/2009 15:20:59 uk21pbkp001 uk21pfsr003 media id 212618 load operation
reported an error
06/18/2009 15:26:53 uk21pbkp001 uk21pfsr002 ioctl (MTWEOF) failed on media id
212619, drive index 3, Data error (cyclic redundancy
check). (23) (bptm.c.8229)
06/18/2009 15:27:02 uk21pbkp001 - TapeAlert Code: 0x03, Type: Warning, Flag:
HARD ERROR, from drive QUANTUM.SDLT320.003 (index 3),
Media Id 212619
06/18/2009 15:27:02 uk21pbkp001 - TapeAlert Code: 0x06, Type: Critical, Flag:
WRITE FAILURE, from drive QUANTUM.SDLT320.003 (index 3),
Media Id 212619
06/18/2009 16:38:49 uk21pbkp001 uk21pbkp001 ioctl (MTWEOF) failed on media id
212251, drive index 4, Data error (cyclic redundancy
check). (23) (bptm.c.8229)
06/18/2009 16:38:55 uk21pbkp001 - TapeAlert Code: 0x03, Type: Warning, Flag:
HARD ERROR, from drive QUANTUM.SDLT320.004 (index 4),
Media Id 212251
06/18/2009 16:38:55 uk21pbkp001 - TapeAlert Code: 0x06, Type: Critical, Flag:
WRITE FAILURE, from drive QUANTUM.SDLT320.004 (index 4),
Media Id 212251
06/18/2009 18:36:40 uk21pbkp001 uk21pexu001 error requesting media, TpErrno =
Robot operation failed
06/18/2009 18:36:40 uk21pbkp001 uk21pexu001 media id 212617 load operation
reported an error
06/18/2009 18:55:43 uk21pbkp001 uk21pexu002 cannot write image to media id
212233, drive index 4, Data error (cyclic redundancy
check).
06/18/2009 18:56:46 uk21pbkp001 uk21pexu001 cannot write image to media id
212280, drive index 0, Data error (cyclic redundancy
check).
06/18/2009 20:25:42 uk21pbkp001 - TapeAlert Code: 0x03, Type: Warning, Flag:
HARD ERROR, from drive QUANTUM.SDLT320.000 (index 0),
Media Id 212280
06/18/2009 20:25:42 uk21pbkp001 - TapeAlert Code: 0x06, Type: Critical, Flag:
WRITE FAILURE, from drive QUANTUM.SDLT320.000 (index 0),
Media Id 212280
06/18/2009 21:27:54 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/18/2009 21:27:54 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/18/2009 21:27:55 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/18/2009 21:27:55 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/18/2009 21:27:55 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Daily failed: 13
06/18/2009 21:27:55 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Daily
06/18/2009 21:27:56 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/18/2009 21:27:56 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/18/2009 21:27:57 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/18/2009 21:27:57 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/18/2009 21:27:57 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Daily failed: 13
06/18/2009 21:27:57 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Daily
06/18/2009 21:27:58 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Daily failed: 13
06/18/2009 21:27:58 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Daily
06/18/2009 21:56:55 uk21pbkp001 - TapeAlert Code: 0x03, Type: Warning, Flag:
HARD ERROR, from drive QUANTUM.SDLT320.004 (index 4),
Media Id 212233
06/18/2009 21:56:55 uk21pbkp001 - TapeAlert Code: 0x06, Type: Critical, Flag:
WRITE FAILURE, from drive QUANTUM.SDLT320.004 (index 4),
Media Id 212233
06/18/2009 22:01:22 uk21pbkp001 uk21pfsr005 sts_get_lsu_prop_byname on LSU
E:\Test\Weekly failed: 13
06/18/2009 22:01:22 uk21pbkp001 uk21pfsr005 Invalid STS storage device:
E:\Test\Weekly
06/18/2009 22:01:23 uk21pbkp001 uk21pfsr005 sts_get_lsu_prop_byname on LSU
F:\Test1g\Weekly failed: 13
06/18/2009 22:01:23 uk21pbkp001 uk21pfsr005 Invalid STS storage device:
F:\Test1g\Weekly
06/18/2009 23:36:42 uk21pbkp001 uk21pexu001 ioctl (MTWEOF) failed on media id
212227, drive index 2, Data error (cyclic redundancy
check). (23) (bptm.c.8995)
06/19/2009 00:10:10 uk21pbkp001 - TapeAlert Code: 0x33, Type: Warning, Flag:
DIRECTORY INVALID ON UNLOAD, from drive (index -1), Media
Id 212227
06/19/2009 01:01:49 uk21pbkp001 uk21pexu001 error requesting media, TpErrno =
Robot operation failed
06/19/2009 01:01:49 uk21pbkp001 uk21pexu001 media id 212611 load operation
reported an error
06/19/2009 01:11:26 uk21pbkp001 uk21pexu001 cannot write image to media id
212235, drive index 5, Data error (cyclic redundancy
check).
06/19/2009 01:11:30 uk21pbkp001 - TapeAlert Code: 0x03, Type: Warning, Flag:
HARD ERROR, from drive QUANTUM.SDLT320.005 (index 5),
Media Id 212235
06/19/2009 01:11:30 uk21pbkp001 - TapeAlert Code: 0x06, Type: Critical, Flag:
WRITE FAILURE, from drive QUANTUM.SDLT320.005 (index 5),
Media Id 212235
06/19/2009 01:12:26 uk21pbkp001 uk21pexu001 error requesting media, TpErrno =
Robot operation failed
06/19/2009 01:12:26 uk21pbkp001 uk21pexu001 media id 212611 load operation
reported an error
06/19/2009 01:18:34 uk21pbkp001 uk21pexu001 error requesting media, TpErrno =
Robot operation failed
06/19/2009 01:18:34 uk21pbkp001 uk21pexu001 media id 212618 load operation
reported an error
06/19/2009 01:37:21 uk21pbkp001 uk21pexu001 cannot write image to media id
212618, drive index 4, Data error (cyclic redundancy
check).
06/19/2009 01:37:25 uk21pbkp001 - TapeAlert Code: 0x03, Type: Warning, Flag:
HARD ERROR, from drive QUANTUM.SDLT320.004 (index 4),
Media Id 212618
06/19/2009 01:37:25 uk21pbkp001 - TapeAlert Code: 0x06, Type: Critical, Flag:
WRITE FAILURE, from drive QUANTUM.SDLT320.004 (index 4),
Media Id 212618
06/19/2009 05:00:20 uk21pbkp001 uk21pexu002 error requesting media, TpErrno =
Robot operation failed
06/19/2009 05:00:20 uk21pbkp001 uk21pexu002 media id 212632 load operation
reported an error
06/19/2009 06:05:08 uk21pbkp001 uk21pbkp001 ioctl (MTWEOF) failed on media id
212252, drive index 1, Data error (cyclic redundancy
check). (23) (bptm.c.18281)
06/19/2009 06:05:17 uk21pbkp001 - TapeAlert Code: 0x03, Type: Warning, Flag:
HARD ERROR, from drive QUANTUM.SDLT320.001 (index 1),
Media Id 212252
06/19/2009 06:05:17 uk21pbkp001 - TapeAlert Code: 0x06, Type: Critical, Flag:
WRITE FAILURE, from drive QUANTUM.SDLT320.001 (index 1),
Media Id 212252
06/19/2009 09:25:54 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/19/2009 09:25:54 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/19/2009 09:25:54 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/19/2009 09:25:54 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/19/2009 09:25:55 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Daily failed: 13
06/19/2009 09:25:55 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Daily
06/19/2009 09:25:55 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/19/2009 09:25:55 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/19/2009 09:25:56 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Weekly failed: 13
06/19/2009 09:25:56 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Weekly
06/19/2009 09:25:56 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Daily failed: 13
06/19/2009 09:25:56 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Daily
06/19/2009 09:25:57 uk21pbkp001 uk21pbkp001 sts_get_lsu_prop_byname on LSU
F:\Test\Daily failed: 13
06/19/2009 09:25:57 uk21pbkp001 uk21pbkp001 Invalid STS storage device:
F:\Test\Daily
06/19/2009 09:26:30 uk21pbkp001 uk21pfsr005 sts_get_lsu_prop_byname on LSU
E:\Test\Weekly failed: 13
06/19/2009 09:26:30 uk21pbkp001 uk21pfsr005 Invalid STS storage device:
E:\Test\Weekly
06/19/2009 09:26:30 uk21pbkp001 uk21pfsr005 sts_get_lsu_prop_byname on LSU
F:\Test1g\Weekly failed: 13
06/19/2009 09:26:30 uk21pbkp001 uk21pfsr005 Invalid STS storage device:
F:\Test1g\Weekly
***********************************
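A quick way to read a dump like this is to tally failures per drive index and per media ID, to see whether the errors cluster on one drive or one tape. This is a throwaway sketch, not a NetBackup tool; the regex assumes the wrapped `bperror -media -U` line format shown above:

```python
import re
from collections import Counter

def tally_bperror(lines):
    """Tally errors per drive index and per media id from bperror -media -U
    output. Wrapped lines are joined first so 'media id ... drive index ...'
    pairs that span a line break still match."""
    text = " ".join(line.strip() for line in lines)
    drive_errs, media_errs = Counter(), Counter()
    for m in re.finditer(r"media id (\d+), drive index (\d+)", text):
        media_errs[m.group(1)] += 1
        drive_errs[int(m.group(2))] += 1
    return drive_errs, media_errs

# A few lines copied from the dump above:
sample = [
    "06/18/2009 15:26:53 uk21pbkp001 uk21pfsr002 ioctl (MTWEOF) failed on media id",
    "212619, drive index 3, Data error (cyclic redundancy",
    "check). (23) (bptm.c.8229)",
    "06/19/2009 01:37:21 uk21pbkp001 uk21pexu001 cannot write image to media id",
    "212618, drive index 4, Data error (cyclic redundancy",
    "check).",
]
drives, media = tally_bperror(sample)
print(drives)
```

Run against the full dump, this counts CRC errors on drive indexes 0 through 5 and on many different media IDs, which is part of what makes a shared component (robot, cabling, HBA) more suspect than any single drive.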
9 REPLIES

mph999
Level 6
Employee Accredited
Hi Rich

I've seen six drives go down at once, all failed. That was probably caused by damaged media being loaded, I suspect, which is why in general you don't unfreeze frozen tapes unless you can prove they aren't faulty.

06/18/2009 15:26:53 uk21pbkp001 uk21pfsr002 ioctl (MTWEOF) failed on media id
212619, drive index 3, Data error (cyclic redundancy
check). (23) (bptm.c.8229)

OK, a cyclic redundancy check error is almost certainly a hardware failure. The MTWEOF also fails, so again I would suspect hardware first.
NBU basically sends the same SCSI commands to the drives for things like positioning, WEOF, etc. as an O/S would. Therefore failures on these operations are not likely to be NBU related. Drivers and firmware can certainly cause positioning errors, but not, as far as I know, cyclic redundancy check errors.

I'm not saying NBU can't be the cause, just that it's unlikely.

Tape alert 3 is a read/positioning or write issue; tape alert 6 is a write issue, which is what I would expect on a worn or damaged drive. It is much harder for a drive to write a tape than to read one (higher signal strength is required), so I would expect to see write errors on a drive that can still read.

Next, tape alert codes, as you will be aware, are sent from the drives themselves. They are telling us (NBU) that there is an issue; NBU is just being polite and passing the message on - that error is nothing to do with NBU.
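For reference, the three TapeAlert flags appearing in Rich's dump can be decoded with a small lookup. The severities and descriptions below are copied from the bperror output itself; the authoritative flag table is in the drive vendor's TapeAlert documentation:

```python
# Minimal TapeAlert lookup covering only the flags seen in this thread;
# text is taken from the bperror output, not from the full TapeAlert spec.
TAPEALERT = {
    0x03: ("Warning",  "HARD ERROR"),
    0x06: ("Critical", "WRITE FAILURE"),
    0x33: ("Warning",  "DIRECTORY INVALID ON UNLOAD"),
}

def decode_tapealert(code):
    """Return a human-readable line for a TapeAlert code from the dump."""
    severity, flag = TAPEALERT.get(code, ("Unknown", "not in this mini-table"))
    return "TapeAlert 0x%02x [%s]: %s" % (code, severity, flag)

print(decode_tapealert(0x06))  # TapeAlert 0x06 [Critical]: WRITE FAILURE
```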

A common misconception is that NBU writes to tapes; it doesn't. bptm passes a tar stream to the O/S and asks the O/S to write that data to tape using whatever block size NetBackup is configured with. I think it is therefore reasonable to suggest that the best starting place to look for tape drive issues is generally not NetBackup.
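That write path can be sketched in a few lines. This is a hypothetical illustration only, with an in-memory buffer standing in for the tape device file descriptor; 65536 matches the "received first buffer (65536 bytes)" line visible in the bptm log later in this thread:

```python
import io

BLOCK = 65536  # buffer size seen in the bptm log in this thread

def write_stream(src, dst, block=BLOCK):
    """Hand fixed-size buffers to the OS, bptm-style: the application only
    fills buffers; the OS driver performs the actual device I/O."""
    written = 0
    while True:
        buf = src.read(block)
        if not buf:
            break
        dst.write(buf)  # on a media server, dst would be the opened tape device
        written += len(buf)
    return written

dst = io.BytesIO()
n = write_stream(io.BytesIO(b"x" * 200000), dst)
print(n)  # 200000
```

The point of the sketch: a CRC failure raised by the drive happens below the `dst.write()` call, in hardware the application never touches.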

I mentioned that NBU usually isn't the cause. To eliminate it, I would remove all drives and drivers and reconfigure from scratch, which I think you have done. If nothing else really odd is going on, then I would be pretty happy with this.

For a failure, something must have changed. If nothing has changed from an O/S or NBU point of view, this again points the finger at the hardware. The drives, I suspect, are quite old and probably nearing end of life.

Your plan looked good. Go easy on the cleaning: it's essential when required, but only when required (i.e. the drive is flashing), as the tapes are abrasive and wear out the heads in the drive. The odd extra clean should be OK.

So, based on the robot failures and the tape alerts, I would say it is very unlikely to be NBU.

Finally, if you replace your drives, I would replace the tapes also. It is possible that 3-4 drives wore out at the same time (if they are used evenly), but I think bad media is the more likely cause.

Hope this helps.

Martin

Anonymous
Not applicable
It would cost you nothing to upgrade to the latest Maintenance Pack, just in case you decide to log a support case.

Of course, make sure no other Windows services on any master or media servers are polling the devices.

I'm thinking of the Removable Storage service.

RichardXClark
Level 4
Update for the benefit of anyone researching similar issues.

Martin, you were correct in thinking it's not NBU related, as I suspected, and your explanations helped me understand my symptoms.

Stuart, I am going to 6.5.4. I would not normally be first in the queue for new versions, but I have a specific DFSR issue that it solves.

As it turned out, I ran the Quantum diags & sent them for analysis.
They identified a robot arm camera error - the focusing was off, resulting in media picking / loading errors.
It is being replaced right now.

Thanks for your input.
Rich

Will_Restore
Level 6
resulting in media picking / loading errors.

How does that relate to CRC errors?  Something else going on?

mph999
Level 6
Employee Accredited
"How does that relate to CRC errors? Something else going on?"

Good point, I was wondering that also ...

Oh well, sometimes funny things happen. Let's wait and see if the current fix totally resolves the issue.

Martin

RichardXClark
Level 4
The problem with error 84 has recurred.

(all together now - "told you so")

I have Quantum on a call out again to investigate, arriving later today.

Our support staff at that site have been collecting frozen tapes and setting them aside on a shelf.
I had them counted - there are 161, which represents a third of the tapes purchased in the last 2 years.
This is alarming.
I had all new tapes put in, fresh from the supplier. Of 56 tapes loaded, 8 had CRC errors.
Recently this has occurred on different drives, 2 of which have been replaced this year.

Does this BPTM.log give any clues?

I am working through the In-depth Troubleshooting Guide for Exit Status Code 84 in NetBackup Server (tm) / NetBackup Enterprise Server (tm) 6.0

http://seer.entsupport.symantec.com/docs/278565.htm

But I am not sure which parts are most likely to be the cause apart from bad tapes & bad drives, which you guys seem to state as the most likely suspects.

When the current batch of Maxell tapes runs out, is it worth buying a different brand?


**********************************************


02:48:46.424 [5432.5624] <2> db_error_add_to_file: dberrorq.c:midnite = 1258502400
02:48:46.440 [5432.5624] <16> io_ioctl: ioctl (MTWEOF) failed on media id 212881, drive index 4, Data error (cyclic redundancy check). (23) (bptm.c.9022)
02:48:46.440 [5432.5624] <2> send_MDS_msg: DEVICE_STATUS 1 715947 uk21pbkp001 212881 4003668 QUANTUM.SDLT320.004 2000369 WRITE_ERROR 0 0
02:48:46.487 [5432.5624] <2> log_media_error: successfully wrote to error file - 11/18/09 02:48:46 212881 4 WRITE_ERROR QUANTUM.SDLT320.004
02:48:46.487 [5432.5624] <2> check_error_history: just tpunmount: called from bptm line 20414, EXIT_Status = 84
02:48:46.487 [5432.5624] <2> io_close: closing D:\Program Files\VERITAS\NetBackup\db\media\tpreq\drive_QUANTUM.SDLT320.004, from bptm.c.15787
02:48:46.487 [5432.5624] <2> drivename_write: Called with mode 1
02:48:46.487 [5432.5624] <2> drivename_unlock: unlocked
02:48:46.487 [5432.5624] <2> drivename_checklock: Called
02:48:46.487 [5432.5624] <2> drivename_lock: lock established
02:48:46.487 [5432.5624] <2> drivename_unlock: unlocked
02:48:46.487 [5432.5624] <2> drivename_close: Called
02:48:46.487 [5432.5624] <2> tpunmount: NOP: MEDIA_DONE 0 46087 1 212881 4003668 0
02:48:46.502 [5432.5624] <2> db_error_add_to_file: dberrorq.c:midnite = 1258502400
02:48:46.502 [5432.5624] <16> terminate_twin: INF - media write error (84), cannot continue with copy 2
02:48:46.533 [5432.5624] <2> vnet_vnetd_service_socket: vnet_vnetd.c.2034: VN_REQUEST_SERVICE_SOCKET: 6 0x00000006
02:48:46.533 [5432.5624] <2> vnet_vnetd_service_socket: vnet_vnetd.c.2048: service: bpdbm
02:48:46.533 [5432.5624] <2> logconnections: BPDBM CONNECT FROM 172.21.49.74.3493 TO 172.21.49.74.13724
02:48:46.658 [5432.5624] <2> db_IMAGEsend: reset protocol_version from 0 to 2
02:48:46.690 [5432.5624] <2> check_error_history: just tpunmount: called from bptm line 3679, EXIT_Status = 84
02:48:46.690 [5432.5624] <2> tpunmount: NOP: MEDIA_DONE 0 46087 1 212881 4003668 0
02:48:46.690 [5432.5624] <2> db_error_add_to_file: dberrorq.c:midnite = 1258502400
02:48:46.690 [5432.5624] <4> write_backup: begin writing backup id uk21pfsr003_1258512374, copy 1, fragment 1, destination path E:\UK21_PFSR003
02:48:46.690 [5432.5624] <2> signal_parent: set bpbrm media ready event (pid = 8164)
02:48:46.690 [5432.5624] <2> write_data: ndmp_dup_max_frag is set to 2097152000
02:48:46.690 [5432.5624] <2> write_data: twin_index: 0 active: 1 dont_process: 0 wrote_backup_hdr: 0 finished_buff: 0 saved_cindex: 0 twin_is_disk 1 delay_brm: 0
02:48:46.690 [5432.5624] <2> write_data: twin_index: 1 active: 0 dont_process: 0 wrote_backup_hdr: 0 finished_buff: 0 saved_cindex: 0 twin_is_disk 0 delay_brm: 0
02:48:46.690 [5432.5624] <2> write_data: first write, twin_index: 0 cindex: 0 dont_process: 1 wrote_backup_hdr: 0 finished_buff: 0
02:48:46.690 [5432.5624] <2> write_data: received first buffer (65536 bytes), begin writing data
02:51:20.998 [7816.6508] <2> write_backup: write_data() returned, exit_status = 0, CINDEX = 0, TWIN_INDEX = 0, backup_status = -6
02:51:20.998 [7816.6508] <2> write_backup: tp = 395095859, stp = 394878687, et = 3063311, mpx_total_kbytes[TWIN_INDEX = 0] = 12288000
02:51:20.998 [7816.6508] <2> io_ioctl: command (0)MTWEOF 1 from (bptm.c.18610) on drive index 1
02:51:22.373 [7816.6508] <2> write_backup: maximum fragment size written --- Fragmenting
02:51:22.373 [7816.6508] <2> db_error_add_to_file: dberrorq.c:midnite = 1258502400
02:51:22.373 [7816.6508] <4> report_throughput: VBRT 1 7816 5 2 E:\UK21_PFSR001 QUANTUM.SDLT320.001 *NULL* 212812 0 0 1 0 2048000  2048000 (bptm.c.18972)
02:51:22.576 [7816.6508] <2> vnet_vnetd_service_socket: vnet_vnetd.c.2034: VN_REQUEST_SERVICE_SOCKET: 6 0x00000006


**************************************************


There is masses more; this is an extract which contains an 84 error.
There is info there about the inline copy to disk, which is mostly reliable and can then be duplicated from. Duplication also suffers error 84, which may eliminate shoe-shining as a cause.

As always, your knowledge and help is greatly appreciated.

Rich

mph999
Level 6
Employee Accredited
Regarding tape brands, some brands are more abrasive than others. The only tapes I would allow to be used at my previous position were Imation - for us, they worked the best.

Additionally, when manufactured, Imation run the tapes past a "synthetic" drive head. This cleans off any debris from the manufacturing process and saves it ending up in your drive (the reason brand-new tapes give more errors than tapes used a couple of times). To my knowledge, no one else does this.

A while back I visited a company to look at some software, Storsentry - they were acting as a reference site. It monitors tapes and drives and flags errors and failures before they happen. Out of a library with about 10,000 media, they had pulled 115 tapes on the say-so of the software. On inspection, all were one particular make, which I happen to know is more abrasive than the others. At that point, they had reduced tape/drive failures to zero.

Obviously I can't say the brand they took out, but I can say that the tapes they were left with were all Imation.

Martin

dami
Level 5
I'm sure you have seen the list below relating to cyclic redundancy checks and that you are working through it. These are nightmarish issues and very hard to troubleshoot. All I would add for the moment is that I have seen similar situations before, and it was never the actual media (i.e. the whole batch being faulty) to blame. I have also seen cabling and fibre switches cause media errors, as indicated below. I would guess that you are using fibre and not SCSI, but I would certainly be looking at cabling, cards, switches - anything between the media server and the actual tape drives. I am assuming that this setup all worked before, then something happened and the errors started appearing, drives going down, tapes frozen, etc.

Can we have a bit more info on the setup (master/media servers, OS, connection types to the tape drives, etc.)? It might be worth shutting down NetBackup and trying to manually write to a tape drive with something else (tar, dd, Windows backup...) just as a sanity check. My gut feeling would be some hardware component - a cable or card - somewhere inside the library causing problems but not being identifiable from logs or alerts.
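A sanity check along those lines could look like the sketch below. This is purely an illustration: a plain file stands in for the device; on a real test you would point `target` at the tape device path appropriate to your OS (e.g. /dev/nst0 on many UNIX systems - check your platform's naming) with NetBackup shut down, and a real tape would need a rewind between the write and the read:

```python
import os
import tempfile

def verify_roundtrip(target, nbytes=1 << 20, block=65536):
    """Write a known pattern in fixed-size blocks, read it back, compare.
    Corruption between the writer and the media shows up as a mismatch,
    entirely independent of NetBackup."""
    pattern = bytes(range(256)) * (block // 256)  # one full block of data
    blocks = nbytes // block
    with open(target, "wb") as f:
        for _ in range(blocks):
            f.write(pattern)
    with open(target, "rb") as f:  # on a real tape drive: rewind first
        return all(f.read(block) == pattern for _ in range(blocks))

path = os.path.join(tempfile.gettempdir(), "nbu_sanity_test.bin")
print(verify_roundtrip(path))  # True
os.remove(path)
```

If this kind of raw roundtrip fails outside NetBackup, the problem is firmly in the hardware path.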

Good luck.

(http://seer.entsupport.symantec.com/docs/272802.htm)
Common reasons for a CRC error with potential ways to resolve the problem:

1. Contaminated read/write heads of the tape device: Check with the hardware manufacturer for proper cleaning techniques.

2. Bad media:
  • Replace the media
  • Try a new tape that is certified by the hardware manufacturer

3. Corrupt tape drivers have been known to cause CRC errors: Ensure the latest tape drivers available for your tape drives are loaded. VERITAS has tape drivers available in the VERITAS tape installer, which can be downloaded from: http://support.veritas.com/tabs/download_ddProduct_NBUESVR.htm

4. SCSI controller is incorrectly configured to use wide negotiation:
  • Use the manufacturer's SCSI setup program to disable wide negotiation on the SCSI controller card
  • If the device is a wide (68-pin) SCSI device, then wide negotiation should be used. If the device is a narrow (50-pin) SCSI device, disable wide negotiation.

6. SCSI controller transfer rate is too fast: Use the manufacturer's SCSI setup program to lower the SCSI transfer rate. (NOTE: Check with the controller and backup device manufacturer for the proper SCSI transfer rate configuration.)

7. SCSI controller synchronous negotiation enabled: Use the manufacturer's SCSI setup program to disable synchronous negotiation on the SCSI controller card. Check with the controller and backup device manufacturer for the proper SCSI synchronous negotiation configuration.

8. Incorrect termination or bad cables: Verify that the SCSI cable is good and is configured to provide proper SCSI termination. Do not mix passive and active termination.

9. Confirm that the tape drive is functioning properly: Check with the tape drive manufacturer for diagnostic software to test the condition of the tape drive hardware.

10. General SCSI problems: Isolate the tape drive(s) to its own controller card.