
Import of a one-off full backup into NBU 7.5

sean_h
Level 4

Hi there,

We have a central NBU 7.5 server running at a remote location

We have a local Windows Server 2008 Standard machine with 6TB of data that we are trying to back up for the first time, but I am told the network links are only 10MB and are due to be upgraded to 100MB in Q2 2016.

Unfortunately, the backup keeps failing

Q. Is there a way to perform a one-off local backup of the Windows Server 2008 Standard machine to local media in a particular format, and then ship that to the administrators of the NBU 7.5 server to be imported?

Once imported, we can then continue with incrementals

Is this approach possible/feasible, please?

Many thanks in advance for any help and guidance.


8 REPLIES

Douglas_A
Level 6
Partner Accredited Certified

There are a few options for this. The easiest would be to back it up to tape or some other format and import the tape into the NetBackup 7.5 environment; this would require you to have a NetBackup environment and tape system at the location.

You could also just back it up to some flat file/copy, ship the data to the DC on USB, then copy it to a local system and back it up that way to "seed" the data.

Also, if you have a similar data set, you could use the seeding method described in the technote below:

https://support.symantec.com/en_US/article.HOWTO89158.html

Depending on what's easiest, either of these would work well.
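
Just for context on why shipping the data makes sense here, a rough transfer-time estimate in Python. The assumption that the quoted "10MB"/"100MB" links mean megabits per second is mine, so adjust if they are actually megabytes:

# Rough estimate of how long the initial 6TB full would take over the WAN.
# Assumption: the quoted "10MB"/"100MB" links mean 10 and 100 megabits per second.
DATA_TB = 6
data_bits = DATA_TB * 1e12 * 8  # 6 TB expressed in bits (decimal terabytes)

for name, mbps in [("10 Mb/s link", 10), ("100 Mb/s link", 100)]:
    seconds = data_bits / (mbps * 1e6)
    print(f"{name}: ~{seconds / 86400:.0f} days at full line rate")

At full line rate that works out to roughly 56 days on the current link and about 6 days even after the upgrade, which is why seeding locally and shipping media is the practical route.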

 

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

Are you using deduplication storage?

sean_h
Level 4

Riaan - many thanks for the reply

To be honest, I have no idea, as the backup solution is supported offshore. But having had a quick read-up, how would that help us? Presumably you still have to get THAT initial full backup across?

Once we have that, we should be fine with incrementals... unless I am misunderstanding.

sean_h
Level 4

Hi Doug - thanks for the reply.

Locally, on the Windows server, my guess is that we would be looking to do the minimum needed to get that initial backup imported into the remote NBU; then the incrementals should suffice and keep us up to date.

I'll take a read of that doc and see if it makes sense and fits the bill for our requirements

Marianne
Moderator
Partner    VIP    Accredited Certified
What NBU software is installed on the remote server? Client or server software? You will need a master or media server in the remote location for the initial backup.

RiaanBadenhorst
Moderator
Partner    VIP    Accredited Certified

Hi Sean,

 

With that amount of data you'd really want to have a dedupe solution in place. Deduplication is basically an incremental-type backup, but at a block/segment level rather than a file level.

 

Example

Backup with traditional Disk/Tape

Data Path = Client >> WAN >> Media Server >> Disk/Tape

You have 6TB, and for the example we'll say it is made up of 6,000,000 files of 1MB each. The first backup using the traditional method would require a backup of the full 6TB; pretty straightforward.

Now it's time for the incremental. Since the first backup, 300,000 files were updated (a 5% change rate), so your incremental backup would require 300GB to be copied to tape. Not bad, but quite a bit of data to send over the WAN each night.
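
To make the arithmetic explicit, a quick sketch using the example's assumed numbers (decimal units, to keep the round figures from the example):

# Worked version of the traditional-backup arithmetic above, using the
# example's assumed numbers: 6,000,000 files of 1 MB each, 5% change rate.
FILE_COUNT = 6_000_000
FILE_SIZE_MB = 1
CHANGE_RATE = 0.05

full_backup_gb = FILE_COUNT * FILE_SIZE_MB / 1000                 # 6,000 GB, i.e. 6 TB
incremental_gb = FILE_COUNT * CHANGE_RATE * FILE_SIZE_MB / 1000   # 300 GB

print(f"Full backup : ~{full_backup_gb:,.0f} GB")
print(f"Incremental : ~{incremental_gb:,.0f} GB per night over the WAN")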

 

Backup with Deduplication

Data Path = Client >> WAN >> Media Server >> Storage Server with deduplication

Same scenario as above, but during the first backup the storage server hashes all the data it receives so that it knows what each segment looks like, and it keeps a record of these hashes for future backups. It now has the 6TB plus all the hashes.

You should also understand that when those 300,000 files were modified, the entire 1MB probably didn't change. Updates are usually small: someone changes a cell in Excel and saves the file again, so out of the 1MB maybe only a few bytes really changed.

If you're using deduplication, the process of segmenting the file and comparing the hashes picks this up. The client and the server communicate, figure out that only one or two segments are new, and transfer only those segments to the storage server. Now your 300,000 changed files might only require 300MB to be sent, not 300GB as with a traditional incremental.

I suppose you could think of it as an incremental of an incremental. It doesn't just look at whether the file has changed; it looks at exactly what has changed: the blocks.
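
To make the mechanism concrete, here is a minimal, purely illustrative Python sketch of segment-level deduplication. The segment size, hash choice and in-memory "storage server" are assumptions for the example only and say nothing about how NetBackup/PureDisk implements this internally:

import hashlib
import os

SEGMENT_SIZE = 128 * 1024  # assumed segment size, purely for the illustration

def segment_hashes(data: bytes):
    """Split the data into fixed-size segments and fingerprint each one."""
    return [hashlib.sha256(data[i:i + SEGMENT_SIZE]).hexdigest()
            for i in range(0, len(data), SEGMENT_SIZE)]

store = set()  # the "storage server": fingerprints it already holds

def backup(data: bytes) -> int:
    """Return how many bytes actually have to cross the WAN."""
    sent = 0
    for i, fp in enumerate(segment_hashes(data)):
        if fp not in store:                      # unknown segment -> transfer it
            store.add(fp)
            sent += min(SEGMENT_SIZE, len(data) - i * SEGMENT_SIZE)
    return sent

# First backup of a 1 MB file: every segment is new, so the whole file is sent.
original = os.urandom(1024 * 1024)
print("First backup sends :", backup(original), "bytes")

# Someone edits a few bytes and saves the file again: only the touched
# segment has a new fingerprint, so only that segment is sent.
modified = bytearray(original)
modified[500_000:500_004] = b"edit"
print("Next backup sends  :", backup(bytes(modified)), "bytes")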

 

Now this is all great, but how does it help you? In your scenario you could do the following (see the sketch after this list):

  1. Take that 6TB, put it on some type of transportable media, send it to the central site, and have it backed up there via another client it is attached to. NetBackup can then be told that your client's next backup should reference the central client when doing the hashing. It would then figure out that all the data is already in the storage pool and only transfer new, unique data.
  2. Get central to send over a temporary/travelling media server to back up your client; it is then sent back to central, after which the process continues as normal by sending only unique data.
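
Continuing the toy model from the earlier sketch, option 1 ("seeding") boils down to the storage pool already knowing the client's fingerprints before the first WAN backup runs. Nothing below is a NetBackup command; it is just the concept:

import hashlib
import os

SEGMENT_SIZE = 128 * 1024  # same assumed segment size as the earlier sketch

def fingerprints(data: bytes):
    """Set of fingerprints for all fixed-size segments of the data."""
    return {hashlib.sha256(data[i:i + SEGMENT_SIZE]).hexdigest()
            for i in range(0, len(data), SEGMENT_SIZE)}

client_data = os.urandom(4 * 1024 * 1024)  # stand-in for the 6TB on the remote client

# Option 1: the data travels on removable media and is backed up at the central
# site via another client, so the storage pool already holds its fingerprints.
storage_pool = fingerprints(client_data)

# When the remote client later runs its "first" backup over the WAN, only
# segments whose fingerprints are missing from the pool need to be transferred.
segments_to_send = fingerprints(client_data) - storage_pool
print("Segments to send over the WAN after seeding:", len(segments_to_send))  # 0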

The final recommendation is for you to get your own deduplication media server at your site, which could duplicate the backups to central for offsite storage. That is what will help you if your server is completely destroyed and you need to restore everything; the approach discussed in this thread doesn't protect you against a complete local failure, because it would take too long to restore the 6TB back to your site. Without a local backup server you'd need to perform some type of process like we've described above, just in reverse, and without the luxury of time on your side.

 

 

sean_h
Level 4

Riaan,

Apologies for the delay and many thanks for the comprehensive write-up - really appreciated!

Is there a way to check whether de-dup is actually in place, by checking for a particular file, or the contents of a file, on some of the servers currently being backed up (clients)?

Does the mere presence of pd.conf mean de-dup is in place?

 

sdo
Moderator
Partner    VIP    Certified

To truly confirm whether de-dupe was used, look in the Activity Monitor detail for any given backup job for something like:

info myserver(pid=3100) StorageServer=PureDisk:myserver; Report=PDDO Stats for (myserver): scanned: 274909110 KB, CR sent: 205649638 KB, CR sent over FC: 0 KB, dedup: 25.2%
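
If you want to check this across many jobs, that PDDO stats line is easy to parse. A small sketch using the exact line quoted above; the regular expression is only my assumption about the line's general shape:

import re

# The job-detail line quoted above.
line = ("info myserver(pid=3100) StorageServer=PureDisk:myserver; "
        "Report=PDDO Stats for (myserver): scanned: 274909110 KB, "
        "CR sent: 205649638 KB, CR sent over FC: 0 KB, dedup: 25.2%")

m = re.search(r"scanned:\s*(\d+)\s*KB.*?CR sent:\s*(\d+)\s*KB.*?dedup:\s*([\d.]+)%", line)
if m:
    scanned_kb, sent_kb, reported = int(m.group(1)), int(m.group(2)), float(m.group(3))
    computed = (1 - sent_kb / scanned_kb) * 100
    print(f"scanned : {scanned_kb / 1024**2:,.1f} GB")
    print(f"sent    : {sent_kb / 1024**2:,.1f} GB")
    print(f"dedup   : reported {reported}%, computed {computed:.1f}%")
else:
    print("No PDDO stats found in this job detail")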