I have a question about the SLP behavior with Replication Director on cDOT NetApps. On 7-mode NetApps it was possible to have multiple SLPs (daily, weekly, monthly, ...) with the same destination volume on a backup NetApp. For example, I have a daily SLP which replicates the VMs to a secondary site, and I have a monthly SLP which does the same but also does a tape-out once a month for long-term retention. In 7-mode I had one destination volume for that. In cDOT, RD now creates destination volumes based on SLP names, so if I have 3 SLPs I need 3 times the space on the destination NetApp.
I wonder if no one else has this problem?
I don't understand what the problem is. Clearly the NetApp was able to repeatedly and continually ingest backup data, so why wouldn't it be able to continue doing so just because the SLP names are different? Exactly what capacity crunch behaviour are you seeing?
And we know that NetApp has internal dedupe, but is this at the whole-cluster layer, the aggregate layer, or the volume layer? If dedupe is at the aggregate layer and your three RD volumes are in the same aggregate, then there isn't a problem - is that right?
On 7-mode NetApps I have one DFM dataset which is triggered from NetBackup to start a NetApp snapshot and replicate the snap. The destination volume was the same for different SLPs, so I was able to do weekly, monthly, and yearly tape-outs from the same volume on the destination NetApp.
With cDOT, RD creates 3 destination volumes. The volumes have a naming schema from NetBackup: SLPName_NetappSourceNode_VolumeName_P_V_DestinationAggregateName
So the difference is:

7-mode:
source_vmware_volume -> dest_vmware_volume (one destination volume for all SLPs)

cDOT:
source_vmware_volume -> daily_dest_vmware_volume
source_vmware_volume -> monthly_dest_vmware_volume
source_vmware_volume -> yearly_dest_vmware_volume
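To make the naming behavior concrete, here is a minimal sketch (not Veritas code; all node, volume, and aggregate names are made-up examples) of how one destination volume per SLP falls out of the naming schema described above:

```python
# Hypothetical illustration of the cDOT RD destination-volume naming schema:
# SLPName_NetappSourceNode_VolumeName_P_V_DestinationAggregateName
def rd_dest_volume(slp, source_node, volume, dest_aggr):
    """Build the destination volume name RD would derive for one SLP."""
    return f"{slp}_{source_node}_{volume}_P_V_{dest_aggr}"

slps = ["daily", "monthly", "yearly"]
dest_volumes = [
    rd_dest_volume(slp, "node1", "source_vmware_volume", "aggr1")
    for slp in slps
]
# Because the SLP name is part of the volume name, three SLPs yield
# three distinct destination volumes, each holding its own full copy.
print(dest_volumes)
```

The key point is simply that the SLP name is baked into the volume name, so no two SLPs can ever share a destination volume under this schema.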
If I have a 50 TB VMware volume on the source, with RD and 7-mode I also had a 50 TB volume plus snapshots on the destination.
Now with RD and cDOT I need at least 150 TB on the destination...
On non-all-flash systems the deduplication is volume-based. If I had an AFF (all-flash) system as the backup host, everything would be fine, but I don't think people are using all-flash systems as backup destinations at the moment.
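The capacity impact follows directly from volume-level dedup: identical data sitting in three separate destination volumes cannot deduplicate against itself across volumes. A back-of-envelope calculation using the 50 TB example from this thread:

```python
# Capacity math for the 50 TB example, assuming volume-level dedup
# (non-AFF systems), so copies in different volumes never dedupe
# against each other.
source_tb = 50
num_slps = 3  # daily, monthly, yearly

# 7-mode: one shared destination volume for all SLPs
needed_7mode_tb = source_tb

# cDOT RD: one destination volume per SLP, each a full copy
needed_cdot_tb = source_tb * num_slps

print(needed_7mode_tb, needed_cdot_tb)  # 50 vs 150
```

Snapshot overhead comes on top of both figures, but the 3x baseline is what makes the cDOT behavior painful.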
I hope you understand the problem now.
Unfortunately this is a known problem. I do not know if there is a fix available or whether one is being looked at.
My best suggestion is to log a support call - if there is a fix, they should be able to provide the answer; if not, then talk to your account manager to see if they can assist in getting something done by referencing the case.
Ok, I have opened a case. In my opinion it should be easy to fix by changing the reference to one destination datastore in the NetBackup DB, but I don't know the internals of the DB. :)
A second possible way would be to migrate to CloudPoint once it supports on-prem snapshots and replication for VMware. I hope the behavior is not the same there in the future.