
Drive replacement

Netchi
Level 3

hi,

We have NetBackup 6.5.4, and one of the servers is configured as a SAN media server hosting a production SAP database. We replaced a tape drive that was assigned to it; the server runs Solaris 10.

We have rezoned with the new WWPN to the server and deleted the old zone.

The OS team says the drive is detected. Do we now need to run sg.build, since the drive is not detected by NetBackup?

Kindly let me know the steps to follow on Solaris after a drive replacement on a SAN media server.

 

Thanks,

 

1 ACCEPTED SOLUTION

Accepted Solutions

Dipendra_Singh
Level 4

1. To determine if the sg driver is loaded:

/usr/sbin/modinfo | grep sg

2. To remove the sg driver
/usr/sbin/rem_drv sg

3. To reinstall the sg driver

/usr/bin/rm -f /kernel/drv/sg.conf

/usr/openv/volmgr/bin/driver/sg.install

4. Invoke the following two commands to run the sg.build script to create target IDs and LUNs:
cd /usr/openv/volmgr/bin/driver

/usr/openv/volmgr/bin/sg.build all -mt 15 -ml 1

The -mt target option and argument specify the maximum target ID in use on the SCSI bus
(or bound to an FCP HBA). The maximum value is 126. By default, the SCSI initiator target ID of
the adapter is 7, so the script does not create entries for target ID 7.

The -ml lun option and argument specify the maximum number of LUNs in use on the SCSI bus
(or by an FCP HBA). The maximum value is 255.

5. cp /usr/openv/volmgr/bin/driver/st.conf /kernel/drv/st.conf

6. cp /usr/openv/volmgr/bin/driver/sg.conf /kernel/drv/sg.conf

7. Verify that the system created the device nodes for all the tape devices by running:
ls -l /dev/rmt/*cbn
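The final verification in step 7 can be scripted. The sketch below is a demonstration only: it builds a fake /dev/rmt directory with made-up node names (all paths and device numbers here are hypothetical) and shows the *cbn check and a simple drive count; on a real system you would just run `ls -l /dev/rmt/*cbn`.

```shell
# Demonstration only: simulate /dev/rmt to show the *cbn check.
tmpdir=$(mktemp -d)
mkdir -p "$tmpdir/rmt"

# Hypothetical device nodes for two tape drives (cbn = compressed,
# Berkeley-style, no-rewind nodes, which NetBackup uses)
touch "$tmpdir/rmt/0cbn" "$tmpdir/rmt/1cbn" "$tmpdir/rmt/0n" "$tmpdir/rmt/1n"

# One *cbn node is expected per configured drive
ls "$tmpdir"/rmt/*cbn
count=$(ls "$tmpdir"/rmt/*cbn | wc -l)
echo "drives with cbn nodes: $count"
rm -rf "$tmpdir"
```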


3 REPLIES


mph999
Level 6
Employee Accredited

If all that has changed is one drive, all that is needed is what I have posted.  There is no need to start rebuilding the sg.conf / links files - this leads to broken systems if it goes wrong.

Further, the procedure is completely wrong.

Step 2. is not needed

This part of step 3, '/usr/openv/volmgr/bin/driver/sg.install', should come after step 4; otherwise you are only putting back exactly the same config as was there before.

Step 4. is not needed if the drives are configured via WWNs (seen in cfgadm output)

-mt 15 -ml 1 will create unused device files that will slow the system down when running commands such as scan or the device wizard.  All that is required is sg.build all

If the drives are configured via target/LUN, then a generic command such as sg.build all -ml X -mt Y should not be used.

There should be a step explaining that the maximum target/LUN values should be determined, to know what values to use.  Unnecessary lines in the two files should then be removed.
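Determining those maximum values can be done by scanning the existing sg.conf entries. A sketch, assuming target/LUN-style entries; the sample lines below are hypothetical sg.conf content, not taken from a real system (on a real system you would read /kernel/drv/sg.conf):

```shell
# Hypothetical sg.conf-style entries for illustration
conf='name="sg" class="scsi" target=4 lun=0;
name="sg" class="scsi" target=5 lun=0;
name="sg" class="scsi" target=5 lun=1;'

# Extract the highest target and lun values actually in use
max_target=$(printf '%s\n' "$conf" | sed -n 's/.*target=\([0-9]*\).*/\1/p' | sort -n | tail -1)
max_lun=$(printf '%s\n' "$conf" | sed -n 's/.*lun=\([0-9]*\).*/\1/p' | sort -n | tail -1)
echo "use: sg.build all -mt $max_target -ml $max_lun"
```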

Steps 5 and 6 are not needed if you first cd to /usr/openv/volmgr/bin/driver and then run ../sg.build ...

 

Regards,

 

Martin

mph999
Level 6
Employee Accredited

If scan does not show the drive, and as the WWN is different, I suspect that yes, you do.

However, first you must edit the sg.conf / sg.links files in /usr/openv/volmgr/bin/driver

Check that the drives are configured via WWN; for example:

We see that the WWNs are listed for each drive

 

rdgv240sol22 # cfgadm -al -o show_FCP_dev
Ap_Id                          Type         Receptacle   Occupant     Condition
c3                             fc-fabric    connected    configured   unknown
c3::210000e08b109ee5           unknown      connected    unconfigured unknown
c3::2101000d772420c8,0         med-changer  connected    configured   unknown
c3::2101000d772420c8,1         tape         connected    configured   unknown
c3::2101000d772420c8,2         tape         connected    configured   unknown
 
IF the drives are configured via target/LUN, post back here and I'll give the steps.  Not likely though; most configs these days are via WWN.
 

Edit both these files, and in the lines that contain the old WWN, carefully change this to the new WWN.
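That manual edit can also be scripted with sed. A minimal sketch, operating on a temporary stand-in file with hypothetical WWNs (the file content below only approximates the sg.conf format; substitute your real old/new WWNs, and always keep a backup of the originals):

```shell
# Hypothetical old and new drive WWNs for illustration
old_wwn=2101000d772420c8
new_wwn=2101000d77242999

workdir=$(mktemp -d)
# Stand-in for /usr/openv/volmgr/bin/driver/sg.conf (format approximated)
printf 'name="sg" parent="fp" target=0 lun=0 fc-port-wwn="%s";\n' "$old_wwn" > "$workdir/sg.conf"

for f in "$workdir"/sg.conf; do
    cp "$f" "$f.bak"                           # keep a backup before editing
    sed "s/$old_wwn/$new_wwn/g" "$f.bak" > "$f"  # swap old WWN for new
done
grep -c "$new_wwn" "$workdir/sg.conf"
```

On a real system the same loop would run over both sg.conf and sg.links in /usr/openv/volmgr/bin/driver.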

Then run ...

modunload -i $(echo $(modinfo |grep "sg (SCSA" |awk '{print $1}'))

mv /kernel/drv/sg.conf /kernel/drv/sg.conf.old
/usr/openv/volmgr/bin/driver/sg.install
 
Job done.
 
IF ... the two files do NOT exist, then do this:
 
cd /usr/openv/volmgr/bin/driver
../sg.build all
 
Then check in the two files created that all the WWNs that should be there are present (for the different devices).
 
To get a list, use cfgadm -al -o show_FCP_dev
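The tape-drive WWNs can be pulled out of that cfgadm output programmatically. A sketch over the sample output shown earlier in the thread (hedged: the exact column layout may vary by Solaris release, so verify the fields on your system):

```shell
# Sample cfgadm -al -o show_FCP_dev output (from the example above)
out='c3                             fc-fabric    connected    configured   unknown
c3::210000e08b109ee5           unknown      connected    unconfigured unknown
c3::2101000d772420c8,0         med-changer  connected    configured   unknown
c3::2101000d772420c8,1         tape         connected    configured   unknown
c3::2101000d772420c8,2         tape         connected    configured   unknown'

# Keep only tape rows, then split "c3::WWN,lun" to isolate the unique WWN
tape_wwns=$(printf '%s\n' "$out" | awk '$2 == "tape" {split($1, a, /::|,/); print a[2]}' | sort -u)
echo "$tape_wwns"
```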
 
May need to run the tpautoconf -replace_drive command (see the man page for an example), or just delete and rescan the drive that was replaced.
 
 
Martin