
CloudCatalyst cache size

Level 6
Partner Accredited



Let's say we have a CloudCatalyst virtual appliance, and the virtual CC appliance uses 1 TB of cache disk.

Let's say I have a backup image that is 1.5 TB after dedupe, so my MSDP media server will send 1.5 TB over the LAN to my CC virtual appliance.

So the dehydrated backup image exceeds the CC cache size.

How will CC behave? Will CC buffer the backup image (I mean, once a block reaches the S3 storage it is freed from the CC cache), or will my backup image duplication job fail because of a lack of space in the CC cache?


Level 4

CloudCatalyst processes and sends data to the cloud based on high and low watermark levels; the defaults are 80% and 60%. When you send data, CC processes it and commits a copy to the cloud. CC also keeps a local copy until the cache reaches 80% (the high watermark), and then it starts cleaning up the images that have already been sent to the cloud.
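To make the watermark behaviour concrete, here is an illustrative sketch (not Veritas code, just my own simplification): once cache usage hits the high watermark, images already committed to the cloud are evicted until usage drops back toward the low watermark.

```python
# Illustrative high/low watermark cleanup, using the default
# 80%/60% levels mentioned above.
HIGH_WATERMARK = 0.80
LOW_WATERMARK = 0.60

def clean_cache(cache, capacity_tb):
    """cache: list of (size_tb, committed_to_cloud) tuples."""
    used = sum(size for size, _ in cache)
    if used / capacity_tb < HIGH_WATERMARK:
        return cache  # below the high watermark: nothing to clean
    kept = []
    for size, committed in cache:
        # Only images already committed to the cloud are eligible
        # for eviction, and only while usage is above the low watermark.
        if committed and used / capacity_tb > LOW_WATERMARK:
            used -= size
        else:
            kept.append((size, committed))
    return kept

# 1 TB cache holding 0.9 TB, of which 0.5 TB is already in the cloud:
print(clean_cache([(0.5, True), (0.4, False)], 1.0))  # -> [(0.4, False)]
```

The 0.5 TB committed image is dropped from the cache; the 0.4 TB not-yet-committed image stays.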

Your duplications should not fail; they might be queued. As of today, CloudCatalyst can process and push only about 0.7 TB/hour of optimized data to any cloud. The recommendation is also to keep 40 upload threads, which means your maximum concurrent jobs should be 40.
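At that rate, your 1.5 TB dehydrated image gives a rough lower bound on duplication time:

```python
# Rough duplication-time estimate, assuming the ~0.7 TB/hour
# CloudCatalyst upload rate mentioned above.
image_tb = 1.5         # dehydrated image size (TB)
rate_tb_per_hr = 0.7   # approximate CC upload throughput
hours = image_tb / rate_tb_per_hr
print(f"~{hours:.1f} hours")  # ~2.1 hours
```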

The upload thread count can be changed in the esfs.json file.
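For illustration only, a setting in esfs.json would look something like the fragment below; I don't have the exact key name to hand, so "UploadThreads" here is a placeholder, and you should check the actual parameter name in your appliance's esfs.json before changing anything.

```json
{
    "UploadThreads": 40
}
```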

To add to the previous reply: the duplication job through CloudCatalyst will not complete until all data is committed to the cloud storage. The high and low water marks have no bearing on this behaviour; they only control when to start (and stop) clearing data from the local cache.

The local cache simply helps with the data transfer from the originating source and provides faster restores when the required data segment is still in the local cache.

The fact that your cache size is smaller than the backup size is not a problem in itself, but I would consider increasing it.