"unusable" LTO1 fiber drives

Good day,

Let me start by describing our environment.

We have one Master/Media Server and six Media Servers, and we back up 200+ servers/NT boxes nightly. We use an L700 library with 20 fibre LTO1 drives. This is a shared storage environment.

Here is my problem:

From time to time we experience problems with the drives, and STK comes out and may replace one. After a replacement, the drive doesn't always reconfigure back into the environment, and sometimes it ends up "unusable".

Here is an excerpt from cfgadm -al:

c2 fc-fabric connected configured unknown
c2::500104f0005e62c5 tape connected configured unknown
c2::500104f0005e62c8 tape connected configured unknown
c2::500104f0005e62cb tape connected configured unknown
c2::500104f0005e62ce tape connected configured unusable
c2::500104f0005e62d1 tape connected configured unknown
c2::500104f0005e62d4 tape connected configured unknown
c2::500104f0005e62d7 tape connected configured unknown
c2::500104f0005e62da tape connected configured unusable

I have searched for a way to bring these drives back into the environment. Sun has not been very helpful; they have basically told me the only way to reintroduce these drives is to reboot, which is obviously not the best answer in a production environment. So I come here to see if anyone else has experienced this problem, and whether anyone has a workaround or fix of their own that doesn't involve rebooting.
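For what it's worth, here is a sketch of the non-reboot sequence I would try first. This is untested against an L700 and hedged accordingly: the -o unusable_SCSI_LUN option is documented for fabric-attached (fp) devices in some Solaris releases, so check your cfgadm_fp man page before relying on it. The WWN is one of the unusable entries from the cfgadm output above, and DRY_RUN=1 just prints each command so the sequence can be reviewed before running it as root.

```shell
#!/bin/sh
# Hypothetical non-reboot recovery for an "unusable" fabric tape drive.
# Assumptions: Solaris with fp-attached fabric devices; WWN from cfgadm -al.
DRY_RUN=${DRY_RUN:-1}
WWN=500104f0005e62ce

# In dry-run mode, print the command instead of executing it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi }

# Force-unconfigure the stuck attachment point (verify this option
# exists in your release's cfgadm_fp man page).
run cfgadm -o unusable_SCSI_LUN -c unconfigure "c2::$WWN"

# Clean up stale /dev links left behind by the old drive.
run devfsadm -Cv

# Reconfigure the replacement drive and check its new state.
run cfgadm -c configure "c2::$WWN"
run cfgadm -al "c2::$WWN"
```

Run with DRY_RUN=0 as root during a window when no backup jobs hold the drive open; if the entry still shows "unusable" afterward, the reboot advice below probably stands.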

Thank you.

Xerox

Re: "unusable" LTO1 fiber drives

To my knowledge Sun is correct: removing a configured device and adding it back with the same values makes Solaris very unhappy, and the best and quickest way to fix it is to reboot.

I'm not familiar with the L700 library, but I know that when we have StorageTek come out to do drive replacements on our SL500 or L180, I find a maintenance window to stop all backup jobs, let them do their thing, and then bring the backup server back online so it detects any drive configuration changes at bootup.

I'm sure you could echo values into the SCSI portion of the running kernel, but I don't see that being beneficial, since the drives should be using the same device name and SCSI configuration.
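Rather than poking values into the running kernel, a gentler thing to try (hypothetical, and untested on an L700 setup) is asking devfsadm to re-probe the st tape driver and then re-checking the fabric attachment points. DRY_RUN=1 only prints the commands so they can be reviewed first; -o show_FCP_dev is the fp-specific listing option, so confirm it exists on your Solaris release.

```shell
#!/bin/sh
# Sketch: make the st (SCSI tape) driver re-scan its devices instead of
# echoing into the kernel. Assumes Solaris with fp-attached drives on c2.
DRY_RUN=${DRY_RUN:-1}

# In dry-run mode, print the command instead of executing it.
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi }

# Re-probe devices for the st driver and rebuild their /dev links.
run devfsadm -i st

# Remove dangling links for drives that no longer answer.
run devfsadm -C

# Re-list the fabric attachment points to see if "unusable" cleared.
run cfgadm -al -o show_FCP_dev c2
```

If the state still reads "unusable" after this, that supports the reboot-during-a-maintenance-window approach above.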