Using Dedup to back up to an S3 Compatible target in BE

jlowe
Level 3

Hey guys,

 

I'm having trouble setting up Deduplication to an S3 compatible target. We have the cloud location set up in BE storage and can back up to it. However, when we follow the procedure using OpenDedup to add a dedup volume for the S3 target, we encounter errors. Is anyone able to provide me some help on this, or even confirm that this is possible?

8 REPLIES

Gurvinder
Moderator
Moderator
Employee Accredited Certified

Can you put in some details? How are you configuring it, and what errors are you seeing? That would help in providing any suggestions.

Hey ya,

We are trying to use the command line tool in OpenDedup which is outlined in the document provided by Veritas. The command runs, but when trying to run the mountsdfs command the volume doesn't mount, giving 501 as the error.

 

I'll add some screenshots tomorrow. Because the information is scattered around everywhere, with no real set document for S3 (not AWS) type storage and dedup, it is hit and miss. A set of steps to achieve this would be helpful, so I can compare them to what I am doing now.

 

Josh

Gurvinder
Moderator
Moderator
Employee Accredited Certified

Are you referring to this:
https://www.veritas.com/support/en_US/article.100042352

For ODD, just have a container or a bucket, and then use the OpenDedupeConfigUI tool to set up the configuration instead of manual commands. Later, use the BE console and configure the storage as OST (make sure the libstspiopendedupe.dll plugin is copied into the BE install path).
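
For reference, the plugin copy step might look something like the line below. This is only a sketch: both paths are assumptions based on default install locations (including the assumption that the DLL sits under the sdfs install folder), so adjust them to your environment:

copy "C:\Program Files\sdfs\bin\libstspiopendedupe.dll" "C:\Program Files\Veritas\Backup Exec\"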

Colin_Weaver
Moderator
Moderator
Employee Accredited Certified

You might want to watch this video (OK, it is for BE 16, but the steps will be very similar in BE 20).

 

https://www.youtube.com/watch?v=VAa-K8WIXVo

Hey Gurv,

 

Thanks for the comments. Yep, that is the doc I looked at. However, when looking at the OpenDedupeConfigUI tool, it only gives options to add AWS, Azure and a couple of others. None for a custom S3 compatible target. Maybe I'm missing something?

We have jobs that currently back up to local disk and then duplicate to tape. Moving forward, we want that duplication to go to the S3 based storage and be deduped during the process. I'll take a look at the YouTube video, run through the process again and make sure I haven't missed anything - however, all the videos I've looked at so far go through the process with AWS or other specific services. None actually show the process when you have an S3 compatible service and need to point the dedup at a specific service host name that isn't AWS, etc.

Gurvinder
Moderator
Moderator
Employee Accredited Certified

Oh OK, I didn't check that it was S3 compatible storage. Which S3 cloud storage are you using? What was the command you ran with mountsdfs? Can you share the screenshot?

 

Hey ya

The service is a New Zealand based service called Revera Vault. We have set this up directly as storage in BE, and we can back up directly to it as an S3 compatible service.

So we run mksdfs, which succeeds - command below:


C:\Program Files\sdfs>mksdfs --volume-name=dedup --volume-capacity=100GB --aws-enabled true --cloud-access-key xxxxxxxxxxxxxxx --cloud-secret-key xxxxxxxxxxxxxxxxxxxx --cloud-bucket-name dedup --cloud-url vault.revera.co.nz

Then we try to mount the volume, which is the part that fails:

C:\Program Files\sdfs>mountsdfs -v dedup -m S
Running Program SDFS Version 3.6.0.13 build date 2018-03-30 05:30
reading config file = C:\Program Files\sdfs\etc\dedup-volume-cfg.xml
target=vault.revera.co.nz
disableDNSBucket=true
Unable to initiate ChunkStore
java.io.IOException: com.amazonaws.services.s3.model.AmazonS3Exception: Not Implemented (Service: Amazon S3; Status Code: 501; Error Code: 501 Not Implemented; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
        at org.opendedup.sdfs.filestore.cloud.BatchAwsS3ChunkStore.init(BatchAwsS3ChunkStore.java:797)
        at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:66)
        at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:154)
        at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:86)
        at org.opendedup.sdfs.windows.fs.MountSDFS.main(MountSDFS.java:161)
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Not Implemented (Service: Amazon S3; Status Code: 501; Error Code: 501 Not Implemented; Request ID: null; S3 Extended Request ID: null), S3 Extended Request ID: null
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1630)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1302)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667)
        at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649)
        at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4330)
        at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4277)
        at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1338)
        at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:1278)
        at org.opendedup.sdfs.filestore.cloud.BatchAwsS3ChunkStore.init(BatchAwsS3ChunkStore.java:634)
        ... 4 more
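
Reading the trace, the 501 is raised by the doesBucketExist/headBucket call, so the target is answering the HEAD bucket request with "Not Implemented" before SDFS gets any further. As a quick sanity check outside SDFS (this isn't part of the Veritas procedure, just a generic test using the AWS CLI if you have it installed), you can reproduce that same call directly - export your Revera keys as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY first, and note that the https:// scheme below is an assumption, since the mksdfs command gave no scheme:

aws s3api head-bucket --bucket dedup --endpoint-url https://vault.revera.co.nz

If this also comes back with a 501 / Not Implemented style error, the problem sits with the endpoint's S3 API coverage rather than with the SDFS or BE configuration.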