Forum Discussion

mikevienna's avatar
mikevienna
Level 3
4 years ago

Can't use 256k data buffer size NBU 9.0

Hi,

I'm currently evaluating NBU 9.0 at home in my test environment:

X) a single master/media server running Solaris 11.4 x64 on a Sun/Oracle dual-Opteron server

X) two HP LTO2 standalone tape drives daisy-chained on a single (external) U320/U160 SCSI HBA

Backup speed maxes out around 30 MB/s even with compression on (using rmt/Xcbn device files), but since my test data are large ISO and gz files, which are already compressed, that's OK; I still need to try with "normal" files.

As seen in the bptm logs, a 64k buffer size and 30 buffers are used.

Setting NUMBER_DATA_BUFFERS to 128 is no problem.

BUT, as soon as I set the buffer size greater than 64k, the backup fails with a message like "256k are defined but only xx bytes were written".
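For reference, both values are controlled by plain-text touch files on the media server. A minimal sketch, assuming a default UNIX install path (SIZE_DATA_BUFFERS is in bytes):

```shell
# NetBackup tape buffer tuning touch files (default install path assumed).
# SIZE_DATA_BUFFERS is in bytes: 262144 = 256 KB.
# NUMBER_DATA_BUFFERS is a plain count of shared-memory buffers.
NBU_CFG=/usr/openv/netbackup/db/config
echo 262144 > "$NBU_CFG/SIZE_DATA_BUFFERS"
echo 128    > "$NBU_CFG/NUMBER_DATA_BUFFERS"
```

bptm reads these files at the start of each job, so no daemon restart is needed; the values in effect are echoed into the bptm log.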

I will copy the exact error ASAP.

I thought, having done it so often, that 256k is the de facto standard on HP LTO2 tape drives...

Any Ideas would be great.

And yes, LTO2 is old, but this is just for testing at home and I don't want to spend the money on an LTO7 or similar drive. And I have done this at customer sites without a problem...

Maybe it's the Solaris st driver, but the NBU passthrough driver (/kernel/drv/sg.conf) is used of course, and I can't find a relevant setting there...
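One way to take NetBackup out of the picture entirely is to write a large fixed block straight to the drive with dd; if the OS/HBA path can't handle 256 KB either, the limitation is below NBU. A sketch, with a device name from my setup that may differ on yours:

```shell
# Write ten 256 KB blocks of zeros directly to the no-rewind,
# compressing, BSD-behavior device node (adjust to your drive).
# A short write or I/O error here points at the st driver / HBA
# path rather than at NetBackup.
dd if=/dev/zero of=/dev/rmt/0cbn bs=256k count=10
```

Run it against a scratch tape only, since it overwrites from the current position.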

Thank you
Christopher


  • mikevienna Do you have a specific objective for this testing? Please share it with us so that we can help you better.

    You may already know this, but support for LTO2 drives was dropped around NBU 8.0 or 7.7.3, so I'm not really sure whether 9.0 can detect it.

    Have you tried updating the device mapping file? If not, try that first, and then try buffer settings different from the default.
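If memory serves, the update is done with the tpext utility after downloading the current device mapping package from the Veritas support site; treat the sketch below as an outline and follow the README that ships with the package for the exact file placement:

```shell
# Sketch: load updated device mappings into the EMM database.
# The downloaded mapping files must be unpacked where the package
# README says (assumption: a default UNIX install under /usr/openv).
/usr/openv/volmgr/bin/tpext
```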

     

    • mikevienna
      Level 3
      Hi, since I'm currently recovering from surgery (a liver transplant), I'm using the time to refresh my NBU skills, as it's been a long time (VCDP UNIX cert...). I can't even get the web GUI to work :)

      BTW, standalone drives are such a pain; this is my first time working without a library with barcodes...

      As stated, I can run jobs without a problem; my drives are configured as hcart2 drives...

      I will try updating the device mapping file and have a look at the SCSI HBA (an HP-branded LSI 20320 Ultra320 PCI-X single-channel card).

      Thank you
  • Hi
    There might be a problem with the SCSI HBA, which may not support a 256 KB block size if it isn't equipped with cache memory, or may simply not support larger block sizes at all.
    As simple as that. Google your HBA and check whether its firmware is up to date... maybe that will help.
  • Hi mikevienna  

    Add VERBOSE = 5 to bp.conf. Create the directory /usr/openv/netbackup/logs/bptm.

    Re-run the backup. There should now be a debug log in the directory created above.

    Look for lines tagged <4>, <8>, or <16>. Hopefully this gives a better indication of why the 256k block size is rejected.
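Those tags are bptm's message severities (warning, error, and critical, respectively), so you can filter them out of the day's log in one pass. Demonstrated here on an inline sample (the log lines are illustrative, not verbatim NBU output; the real file follows the log.MMDDYY naming convention under /usr/openv/netbackup/logs/bptm):

```shell
# Create a two-line sample in the bptm log format: one informational
# <2> line and one critical <16> line about a short tape write.
cat > /tmp/bptm_sample.log <<'EOF'
08:00:01.123 [1234] <2> write_data: waited for full buffer 0 times
08:00:02.456 [1234] <16> write_data: write error on drive index 0, block size 262144
EOF

# Keep only warning/error/critical lines; drop the <2> chatter.
grep -E '<(4|8|16)>' /tmp/bptm_sample.log
```

On a real log, replace /tmp/bptm_sample.log with the current log.MMDDYY file.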

    Update: after a bit of searching, due to the age of this SCSI controller, it may lack support for a 256 KB block size.

    Best Regards
    Nicolai

    PS: I wish you a speedy recovery

    • mikevienna
      Level 3

      Hi,

      I will add the VERBOSE to my bp.conf file...

      As I stated, the bptm log file is already writing "block size of 256k but xxK were written", so I have the bptm log directory already... Or do you mean that after adding the verbose setting I have to re-create the bptm directory?

      Right, the HBA is old, but it's the newest one with a PCI-X 133 MHz interface that I could find...

      I will maybe try getting an FC drive or a PCI Express SCSI HBA, but my test server only has two PCI-X slots (Sun Fire X4100 M2).

      I will report back; maybe I can tune the driver a little.
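One Solaris-side knob that can cause exactly this symptom is maxphys, which caps the largest single physical I/O the kernel will issue; if it sits below 256 KB, a 256 KB tape write gets cut short. A diagnostic sketch, assuming Solaris's mdb is available (the exact default varies by release, so verify before changing anything):

```shell
# Print the current maxphys value in decimal bytes from the live kernel.
echo "maxphys/D" | mdb -k

# If the value is below 262144, raise it by adding this line to
# /etc/system and rebooting (0x100000 = 1 MB):
#   set maxphys=0x100000
```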

       

      Thank you very much!

      • Nicolai
        Moderator

        Just add VERBOSE = 5 to bp.conf and re-run the backup. There is no need to recreate the bptm directory.

        Adding VERBOSE = 5 to bp.conf will increase the logging from bptm in the bptm debug log.