Hi jjimenez2
So I think your backup flow is approximately:
client -> MSDP -> CloudCatalyst (AWS).
Your full backup is reporting 9GB. This should be deduplicating (I hope - but it depends on the source data). You can see how much by reviewing the job details towards the end - you should see a line something like this:
17/03/2021 11:21:24 PM - Info nbu2 (pid=1913118) StorageServer=PureDisk:nbu2; Report=PDDO Stats for (nbu2): scanned: 1454436 KB, CR sent: 303821 KB, CR sent over FC: 0 KB, dedup: 79.1%, cache disabled, where dedup space saving:47.5%, compression space saving:31.7%
In the above case the backup was about 1.45 GB, but due to deduplication only about 300 MB needed to be sent to the disk pool. So that's the first stage.
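If you ever want to pull those numbers out of the job details yourself, here is a minimal sketch in Python (the regular expression and the sample line are just my assumptions based on the PDDO Stats format above - this isn't an official NetBackup interface):

```python
import re

# Sample "PDDO Stats" line as it appears in the NetBackup job details.
line = ("StorageServer=PureDisk:nbu2; Report=PDDO Stats for (nbu2): "
        "scanned: 1454436 KB, CR sent: 303821 KB, CR sent over FC: 0 KB, "
        "dedup: 79.1%")

match = re.search(r"scanned:\s*(\d+)\s*KB.*?CR sent:\s*(\d+)\s*KB", line)
if match:
    scanned_kb, sent_kb = (int(g) for g in match.groups())
    # "dedup" is (roughly) the fraction of scanned data that did NOT
    # need to be sent to the disk pool.
    dedup_pct = 100.0 * (1 - sent_kb / scanned_kb)
    print(f"scanned: {scanned_kb / 1e6:.2f} GB, "
          f"sent: {sent_kb / 1e3:.0f} MB, dedup: {dedup_pct:.1f}%")
    # -> scanned: 1.45 GB, sent: 304 MB, dedup: 79.1%
```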
The duplication (by SLP) would then send the backup image to AWS, and you will see similar stats for that particular image (although the actual deduplication rate may vary depending on what data already exists in the cloud - at worst you would expect the amount of data sent to the cloud to be the same as for the backup).
Once the backup has run, it is difficult (if not impossible) to determine how much the backup data was deduplicated, other than from the job details above. NetBackup as a rule will report the original size of the backup.
One way to see how much storage has been consumed by the backup in the cloud is simply to calculate the size of the bucket (hopefully it will be less than the 9GB original size).
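If you have boto3 handy, totalling the bucket is easy enough; here is a minimal sketch (the bucket name is hypothetical - substitute the bucket your cloud storage server actually writes to):

```python
import boto3

# Hypothetical bucket name - use the one configured for your
# CloudCatalyst cloud storage server.
BUCKET = "my-netbackup-bucket"

s3 = boto3.client("s3")
total_bytes = 0
# Paginate, since buckets with many backup objects span multiple pages.
for page in s3.get_paginator("list_objects_v2").paginate(Bucket=BUCKET):
    for obj in page.get("Contents", []):
        total_bytes += obj["Size"]

print(f"Bucket {BUCKET} consumes {total_bytes / 1e9:.2f} GB")
```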
Now if the source data doesn't deduplicate well (or at all), then the space consumed might be the same as the original source (the job detail information will tell you). So in this case the first backup may consume the full 9GB, but subsequent backups should (assuming relatively static data) deduplicate well against it - so the second full backup, although 9GB, might only increase your storage consumption by, say, 50MB.
Hope this helps.
David