
Moving Dedup Store with little downtime

I've run into a problem with the Windows block (cluster) size.

With the current block size, my dedup store cannot grow beyond 16 TB, so we need to reformat the volume with a larger block size.

 

To solve this, we're considering the following:

1) Add another storage array on a second RAID controller

2) Robocopy the dedup store to the new array while Backup Exec keeps running backups (this first pass could take at least a day)

3) Stop all Backup Exec services, then run a differential robocopy pass (a few minutes, maybe an hour or two)

4) Point the dedup store in Backup Exec at the volume on the new array

5) Reformat the main array with a larger block size and robocopy the data back

6) Stop all Backup Exec services again and run a final differential robocopy
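Steps 2 and 3 could look something like this with robocopy; the folder names and drive letters below are placeholders, not the actual paths:

```
rem Pass 1 -- bulk copy while Backup Exec is still running (paths are examples):
robocopy D:\BackupExecDeduplicationStorageFolder E:\BackupExecDeduplicationStorageFolder /MIR /COPYALL /R:1 /W:1 /LOG:C:\dedup-pass1.log

rem Pass 2 -- after stopping all Backup Exec services, /MIR only recopies
rem files that changed or appeared since pass 1, so the outage stays short:
robocopy D:\BackupExecDeduplicationStorageFolder E:\BackupExecDeduplicationStorageFolder /MIR /COPYALL /R:1 /W:1 /LOG:C:\dedup-pass2.log
```

Note that /MIR also deletes destination files that no longer exist at the source, which is what makes the second pass a true differential.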

 

Would this work, and would I be able to bring my deduplication store back up?

 

 


7 Replies

First off, I recommend upgrading to Backup Exec 2012 revision 1798 Service Pack 2: http://www.symantec.com/docs/TECH203155

For moving the dedup folder, check this link for more info:

How to re-create a Backup Exec deduplication storage folder or deduplication disk storage device with a new drive letter and/or folder path (note: see the section for Backup Exec 2012)

http://www.symantec.com/docs/TECH160832

 



To change the location of a deduplication disk storage device:

"Right-click the deduplication disk storage device again and then click Delete."

 

Are you sure I would not lose all the data there?

 

 

The "create deduplication disk storage" part doesn't even mention targeting the folder where the deduplication store has been moved or copied.

 

Seems a bit risky :(

 

 

Accepted Solution!

The best way I've found is to stop all services, copy the directory, make sure the new location has the same path/drive letter, and start the services. It works.

Using rsync/robocopy and re-syncing later adds unsupported complexity that isn't needed when a proper outage window is planned.

Moving several TB of data over the local PCI bus to fast disk is not terribly slow. It's better on a SAN with snapshots and clones, but workable with a quality JBOD and SAS too.
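A sketch of that outage procedure, assuming a Windows media server; the service display names and paths below are illustrative and vary by Backup Exec version, so enumerate yours first:

```
rem List the actual Backup Exec services first (names vary by version):
rem   sc query type= service state= all | findstr /i "Backup Exec"

rem Stop everything that touches the dedup store (example display names):
net stop "Backup Exec Job Engine"
net stop "Backup Exec Device & Media Service"

rem Copy the whole folder; the destination must end up reachable under the
rem same drive letter and path the store used before:
robocopy D:\BackupExecDeduplicationStorageFolder E:\BackupExecDeduplicationStorageFolder /MIR /COPYALL /R:1 /W:1

rem Once the drive letters have been swapped, start the services again:
net start "Backup Exec Device & Media Service"
net start "Backup Exec Job Engine"
```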



10 TB of data? That would take at least a day.

That's why I considered using robocopy for a first sync, then a differential pass to reduce the downtime.

I've heard that the deduplication device uses a hardware ID from the RAID controller (I had a lot of problems with the deduplication device after changing the RAID card).

Are you sure that just copying the data and keeping the drive letter would do the job? Or is it better to follow the Symantec documentation and delete and re-create the dedup device?


Hello Unarcher,

You may want to refer to the following technote for the right procedure to move/re-create the deduplication folder:

http://www.symantec.com/docs/TECH160832 : How to re-create a Backup Exec deduplication storage folder or deduplication disk storage device with a new drive letter and/or folder path

 

Thanks,

-Sush...


What if the drive letter and folder path stay the same?

Is the delete/re-create procedure still needed?

 

 


10 TB should take less than 6 hours in most cases, assuming the filesystem can sustain reads and writes of a few hundred MB/s.

Over 1 GbE (125 MB/s), though, the theoretical minimum works out to about 22 hours, so for that much data a local disk-to-disk copy is much preferable to pushing it over the network.
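A quick back-of-envelope check of those figures (decimal units assumed, so 10 TB = 10,000,000 MB):

```python
def copy_hours(size_mb: float, rate_mb_s: float) -> float:
    """Theoretical minimum transfer time in hours at a sustained throughput."""
    return size_mb / rate_mb_s / 3600

# 10 TB at a few hundred MB/s of local disk bandwidth:
print(round(copy_hours(10_000_000, 500), 1))  # 5.6
# The same 10 TB capped by a single 1 GbE link (125 MB/s):
print(round(copy_hours(10_000_000, 125), 1))  # 22.2
```

Real transfers run slower than these minimums once metadata overhead and small files are factored in, so treat them as lower bounds when planning the outage window.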