Why can't I change the size of cloud data blocks?

Dale_Mahalko_LH
Level 4

I have discovered something annoying after doing some test uploads to the cloud and reviewing my Amazon S3 billing...

$0.01 per 1,000 PUT, COPY, POST or LIST requests to Standard-Infrequent Access
3,659,107 Requests ..................   $36.59 
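The charge on the bill checks out arithmetically (a quick sanity check, using only the rate and request count quoted above):

```python
# Sanity-check the S3 bill: requests are billed per 1,000 at the quoted rate.
requests = 3_659_107
rate_per_1000 = 0.01  # USD per 1,000 PUT/COPY/POST/LIST requests (Standard-IA, from the bill)

charge = requests / 1000 * rate_per_1000
print(f"${charge:.2f}")  # $36.59
```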

Huh? What is going on? Why so many of these requests? So I go poking around in the bucket to see what is in there....

Folder BEOST_00000008 

  1. File "100" - May 23, 2018 6:25:06 PM - Size: 1.0 MB - Standard_IA
  2. File "101" - May 23, 2018 6:25:06 PM - Size: 1.0 MB - Standard_IA
  3. File "102" - May 23, 2018 6:25:06 PM - Size: 1.0 MB - Standard_IA
  4. File "103" - May 23, 2018 6:25:06 PM - Size: 1.0 MB - Standard_IA

etc

So, it appears Backup Exec is breaking my data up into millions of tiny files that are only 1 megabyte in size, and thereby jacking up my cloud request costs through the sheer number of PUT/POST requests.

Yet weirdly, although I can set a "Maximum file size" for data stored to a physical local drive, I cannot find an equivalent file size setting for cloud storage, and the size is apparently fixed at 1 megabyte.

Why am I not permitted to have control over this? If I could just increase the cloud file size to 10 megabytes, my Amazon billing for these requests would drop to 1/10th of the charges so far.
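The back-of-envelope math behind that 1/10th claim (a sketch, assuming the total data volume implied by 3,659,107 one-megabyte objects; the object sizes here are illustrative, not Backup Exec settings):

```python
# Request cost scales inversely with object size for a fixed amount of data.
PUT_COST_PER_1000 = 0.01   # USD per 1,000 requests (Standard-IA rate from the bill)
TOTAL_DATA_MB = 3_659_107  # ~3.5 TB, implied by the request count at 1 MB/object

def put_cost(object_size_mb):
    """Total PUT charge if the same data were uploaded in objects of this size."""
    num_requests = TOTAL_DATA_MB / object_size_mb
    return num_requests / 1000 * PUT_COST_PER_1000

print(f"{put_cost(1):.2f}")   # 1 MB objects:  36.59
print(f"{put_cost(10):.2f}")  # 10 MB objects:  3.66 (one-tenth the cost)
```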


I am aware that choosing a larger cloud storage file size will increase costs for restore downloads of small data requests, but restores will happen extremely rarely, so it is a cost that can be largely ignored as part of day-to-day operations.

 

2 REPLIES

Colin_Weaver
Moderator
Employee Accredited Certified

Do you still have verify enabled? Verify counts as a read operation, so it may incur costs.

Dale_Mahalko_LH
Level 4

Verify is disabled on all my cloud backup jobs.