Thank you for the swift reply RLeon!
The technology is somewhat clearer to me now, although my understanding was that the track log effectively records a point-in-time view of the filesystem as of the last backup, and is updated whenever a new backup runs. It doesn't get bigger if you make more changes (it only grows if there are more files or bigger files), and there are no processes running in the background between backups to record filesystem changes. Surely the log therefore records metadata about the filesystem rather than changes?
I'm still not sure how NetBackup Accelerator works out exactly what has changed so quickly, though. It can't be performing a full block-level scan of the filesystem; if it were, I couldn't complete a 700 GB backup in 5 minutes on SATA disks. Maybe it checks inode tables or something similar?
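To make my guess concrete: I'm imagining something like a metadata-only scan, where the track log stores each file's stat() metadata and the next backup just compares against it, never reading file contents. This is purely a sketch of that idea in Python, not NetBackup's actual implementation, and `snapshot`/`changed_since` are names I've made up:

```python
import os

def snapshot(root):
    """Record (mtime_ns, size) for every file under root.

    A toy 'track log': one stat() per file, no file contents read,
    so it scales with the number of files, not the data volume.
    """
    meta = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            meta[path] = (st.st_mtime_ns, st.st_size)
    return meta

def changed_since(root, old_meta):
    """Return paths that are new, or whose metadata differs from old_meta."""
    current = snapshot(root)
    return [p for p, m in current.items() if old_meta.get(p) != m]
```

Walking a few million inodes this way is plausible in minutes, which would explain the speed, whereas reading 700 GB off SATA disks in 5 minutes is not.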