Forum Discussion

Seppo74
Level 3
3 years ago

Storage maximum file size ignored when BE backup storage is an NFS share

Hello everyone.

In my attempts to understand why my backup jobs are loading media every 3 seconds, I had a look at the jobs' activities and at the QNAP storage where the backups are saved. To my great surprise, I noticed that whatever number I put into the "Maximum file size" field of the Storage Properties is completely ignored, and every backup creates thousands of .bkf files of 2,085,888 KB each.

Now, I'm well aware that the smaller the .bkf files, the lower the chance of data corruption. But supposing I don't care about that and want the .bkf files to be 100 GB each instead of 2 GB, is there a way to achieve this?
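
For reference, a quick way to confirm what size the jobs are actually capping at is to enumerate the .bkf files on the share; a minimal Python sketch (the UNC path is only a placeholder for the real backup folder):

    # Minimal sketch: count how many .bkf files exist at each size on the
    # backup share. The UNC path below is a placeholder, not the real one.
    from collections import Counter
    from pathlib import Path

    BACKUP_FOLDER = Path(r"\\qnap\BEBackups")   # hypothetical share path

    sizes = Counter(f.stat().st_size for f in BACKUP_FOLDER.rglob("*.bkf"))

    for size_bytes, count in sizes.most_common(5):
        print(f"{count:6d} x {size_bytes / 1024:,.0f} KB")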

Thanks!

  • Test it on local storage with an increased file size and see if it works.
    If the same setting is not working on the QNAP, then it could be a setting on the QNAP. Are you able to create a share like you did for BE and then store larger files in that network share?
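
    A rough way to test the share itself, outside of BE, is to try writing a byte past the 2 GB offset; a minimal Python sketch, with a placeholder path:

        # Minimal sketch: try to create a file whose size crosses the 2 GiB
        # boundary on the share. The path below is a placeholder.
        import os

        TEST_FILE = r"\\qnap\testshare\bigfile.bin"   # hypothetical share/path
        TARGET_SIZE = 4 * 1024**3                     # 4 GiB

        with open(TEST_FILE, "wb") as f:
            f.seek(TARGET_SIZE - 1)   # offset well past the 2 GiB boundary
            f.write(b"\0")            # fails here if large offsets are rejected

        print(os.path.getsize(TEST_FILE) // 1024**2, "MiB test file created")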

    • Gurvinder
      Moderator

      Or mount the volume via iSCSI on the BE server, assign a drive letter, and create local storage with an increased file size --> see if that works.

    • Seppo74
      Level 3

      I can give it a try tomorrow and let you know, but in the IMG1234567890 backup folders on the QNAP I see 50+ GB files. If there were something on the QNAP preventing files larger than 2 GB from being created, shouldn't it apply to everything?
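
      For comparison, something like the following Python sketch (the share path is a placeholder) prints the largest file under each top-level folder of the share, so the IMG* image folders can be compared with the backup-to-disk folder:

          # Minimal sketch: largest file under each top-level folder on the
          # share. The share path below is a placeholder.
          from pathlib import Path

          SHARE = Path(r"\\qnap\BEBackups")   # hypothetical share path

          for folder in sorted(p for p in SHARE.iterdir() if p.is_dir()):
              biggest = max((f.stat().st_size for f in folder.rglob("*") if f.is_file()),
                            default=0)
              print(f"{folder.name:30s} largest file: {biggest / 1024**3:6.2f} GiB")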

      • Gurvinder
        Moderator

        It does not look like something on the BE side, which is why I suggest testing on a local drive.
        If you have a backup folder on the QNAP where larger files (i.e. .bkf) show up, then maybe try creating a new share, test on that, and see how it goes.

  • An additional note - I have just tested this using BE 22 by setting the Maximum file size in the properties of the storage to 400 GB (this appears to be the upper limit), clicking Apply to save the change, and then running a backup, which successfully created a 122 GB .bkf file. I would expect the same behaviour in any version of BE, to be honest. Consequently I suspect the QNAP is having an effect, so I would advise following Gurvinder's steps above.

  • I am starting to think it's related to how BE behaves when network shares are backed up to non-Windows disk storage that is not directly attached to the BE server (the QNAP is simply attached to our network). Given the same shared folder on the QNAP:

    • Copy/paste of large files works.
    • Backing up virtual machines with BE works.
    • Backing up the same files via the Windows Server Backup app works.

    It's quite obvious to me that "something" is telling BE not to create files larger than 2 GB when the jobs include individual folders and files (whether they are on Windows or on our NetApp makes no difference) on the QNAP share. Since every other type of backup works and properly creates large files, it definitely cannot be a QNAP-only issue; it has to come, at least in part, from within BE itself.

    Again, any suggestions?

    Edit: in the meantime I opened a ticket with QNAP and they pointed me to an interesting link: Linux NFS faq (sourceforge.net).

    Taken from the first chapter:

    A1. What are the primary differences between NFS Versions 2 and 3?

    A. From the system point of view, the primary differences are these:

    • Version 2 clients can access only the lowest 2GB of a file (signed 32 bit offset). Version 3 clients support larger files (up to 64 bit offsets). Maximum file size depends on the NFS server's local file systems.

    • NFS Version 2 limits the maximum size of an on-the-wire NFS read or write operation to 8KB (8192 bytes). NFS Version 3 over UDP theoretically supports up to 56KB (the maximum size of a UDP datagram is 64KB, so with room for the NFS, RPC, and UDP headers, the largest on-the-wire NFS read or write size for NFS over UDP is around 60KB). For NFS Version 3 over TCP, the limit depends on the implementation. Most implementations don't support more than 32KB.

    • NFS Version 3 introduces the concept of Weak Cache Consistency. Weak Cache Consistency helps NFS Version 3 clients more quickly detect changes to files that are modified by other clients. This is done by returning extra attribute information in a server's reply to a read or write operation. A client can use this information to decide whether its data and attribute caches are stale.

    • Version 2 clients interpret a file's mode bits themselves to determine whether a user has access to a file. Version 3 clients can use a new operation (called ACCESS) to ask the server to decide access rights. This allows a client that doesn't support Access Control Lists to interact correctly with a server that does.

    • NFS Version 2 requires that a server must save all the data in a write operation to disk before it replies to a client that the write operation has completed. This can be expensive because it breaks write requests into small chunks (8KB or less) that must each be written to disk before the next chunk can be written. Disks work best when they can write large amounts of data all at once.

    • NFS Version 3 introduces the concept of "safe asynchronous writes." A Version 3 client can specify that the server is allowed to reply before it has saved the requested data to disk, permitting the server to gather small NFS write operations into a single efficient disk write operation. A Version 3 client can also specify that the data must be written to disk before the server replies, just like a Version 2 write. The client specifies the type of write by setting the stable_how field in the arguments of each write operation to UNSTABLE to request a safe asynchronous write, and FILE_SYNC for an NFS Version 2 style write.

      Servers indicate whether the requested data is permanently stored by setting a corresponding field in the response to each NFS write operation. A server can respond to an UNSTABLE write request with an UNSTABLE reply or a FILE_SYNC reply, depending on whether or not the requested data resides on permanent storage yet. An NFS protocol-compliant server must respond to a FILE_SYNC request only with a FILE_SYNC reply.

      Clients ensure that data that was written using a safe asynchronous write has been written onto permanent storage using a new operation available in Version 3 called a COMMIT. Servers do not send a response to a COMMIT operation until all data specified in the request has been written to permanent storage. NFS Version 3 clients must protect buffered data that has been written using a safe asynchronous write but not yet committed. If a server reboots before a client has sent an appropriate COMMIT, the server can reply to the eventual COMMIT request in a way that forces the client to resend the original write operation. Version 3 clients use COMMIT operations when flushing safe asynchronous writes to the server during a close(2) or fsync(2) system call, or when encountering memory pressure.

    For more information on the NFS Version 3 protocol, read RFC 1813.

    Now, I don't remember if I already said it, but the storage destination folder is an NFS share. Do you know by chance which NFS version Backup Exec uses to talk to NFS shares (WITHOUT using the BE client for Linux, of course)? Because if it's Version 2, maybe it's possible to force Version 3 and see if something changes.
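
    As a sanity check on that theory, the .bkf size reported above sits just under the 2 GiB signed 32-bit offset limit described for Version 2; a quick back-of-the-envelope check:

        # Quick arithmetic: observed .bkf size vs. the NFSv2 2 GiB offset limit.
        observed_kb = 2_085_888                  # size of each .bkf reported above
        observed_bytes = observed_kb * 1024      # 2,135,949,312 bytes
        nfs_v2_limit = 2**31                     # 2,147,483,648 bytes (signed 32-bit offset)

        print(f"observed : {observed_bytes:,} bytes")
        print(f"v2 limit : {nfs_v2_limit:,} bytes")
        print(f"headroom : {nfs_v2_limit - observed_bytes:,} bytes below the limit")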

     

    • Gurvinder
      Moderator

      There is a similar case reported to BE support recently, and we created a binary for it. Can you message me the Veritas case number if you logged one, and we can reach out to you?