
Multiple Small Files

H_Sharma
Level 6

Hello Experts,

We have one Solaris 11 server. It has many small files, and its backup to tape has been running for more than 12 hours and is still not finished. The drives are free and there is no issue with them. The speed we can see for the job on that policy is about 1 MB/s, which is rather slow.

This server has many small log files, and I believe that is why the backup is taking so long. Somebody has suggested tarring all of these files so that they become a single file and the backup runs faster. Please let me know how to troubleshoot this slow client, and whether we can consider this tar option as a solution.

 

Thanks,


Michael_G_Ander
Level 6
Certified

FlashBackup or Accelerator is usually the answer for backing up many small files.

Fragmentation causes slower backups, so defragmentation can help with the speed

I suggest you do a bpbkar -nocont run to measure how fast you can read the file system.
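As a rough sketch of that read test (the path /data and the log location are only examples; the exact options for your version are documented in the NetBackup Backup Planning and Performance Tuning Guide):

# Read the client filesystem with bpbkar and throw the data away,
# so you only measure how fast the files can be read (no tape or network involved)
time /usr/openv/netbackup/bin/bpbkar -nocont /data > /dev/null 2> /tmp/bpbkar_test.log

If this alone is slow, the bottleneck is the client disk/filesystem rather than NetBackup, the network or the tape drives.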

The standard questions: have you checked 1) what has changed, 2) the manual, and 3) whether there are any tech notes or VOX posts regarding the issue?

elanmbx
Level 6

Accelerator definitely takes care of a lot of the "huge number of small files" issue. Just make sure you've got space on the system for the track journal, which ends up in the <NETBACKUP_INSTALL>/track/ directory.

I had issues with Accelerator on a Windows client with space constraints due to the size of the track journal.
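If you want a quick idea of how much space the track journal is using on a Unix client, something like this works (the default install path /usr/openv/netbackup is an assumption; adjust it to your environment):

# Size of the Accelerator track journal and free space on that filesystem
du -sh /usr/openv/netbackup/track
df -h /usr/openv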

For example, my Unix host with millions of small files would generally take 30-40 hours to complete a full backup; an Accelerator full generally takes 5-6 hours.

H_Sharma
Level 6

Hello Experts,

Any more suggestions please, as I did not get an answer about the tar/zip option. Will these help?

RonCaplinger
Level 6

Yes, using tar or gzip will help.  But it is not a long-term solution. 

For the number of files to be an issue, it would have to be millions and millions of files.  Check the most recent full backup and see how many files it says it backed up.

elanmbx
Level 6

Don't go tar/zipping files just to address the backup issue. Someone is going to want to find/restore some of those files some day, and if you've tar/gzipped them it is going to be a pain.

Accelerator will definitely help this scenario.

jim_dalton
Level 6

Do as Michael suggests with bpbkar initially, OR create yourself a multi-GB file (an ISO, say) and just back that single file up. If that runs rapidly, you know that the infrastructure can deliver the performance.

If that is slow, then you have a fundamental issue, and you need to work out where the bottleneck is.
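A minimal way to create such a test file on Solaris (path and size are purely illustrative):

# Create a ~2 GB throwaway file to back up as a single-file speed test
mkfile 2g /var/tmp/bigtestfile
# or, if mkfile is not available on your system:
dd if=/dev/zero of=/var/tmp/bigtestfile bs=1024k count=2048

Back up just that file with a test policy and compare the throughput with what you see for the small-file backup.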

The other comments about multiple small files are on the money.

1 MB/s is slow even for a mass of small files.

Post back your findings and we can get through to the solution.

Jim

H_Sharma
Level 6

Hi,

We have a Windows master server and snapshot is available.

In the policy I can find the following:

Snapshot Client and Replication Director -> Perform snapshot backups -> Snapshot method (there are many, e.g. Auto, EMC, Hitachi, IBM, NBU_SNAP, ShadowImage, VxFS_Checkpoint, VxVM, etc.)

So can anybody tell me which snapshot method I should choose for a Solaris file system?

 

 

elanmbx
Level 6

It depends - check the Snapshot Compatibility List document to see what filesystems are supported for snapshots on Solaris...

Marianne
Level 6
Partner    VIP    Accredited Certified

You need an Enterprise Client license or a Capacity License in order to use FlashBackup or Snapshot Backup.

See NetBackup Snapshot Client Administrator's Guide   

Read up about different snapshot methods. Your choice will be based on underlying array hardware or the Volume Manager / File System on the Solaris Client. 
For FlashBackup, you would normally choose NBU_SNAP
(That is if memory serves me right - please read relevant sections in above manual before trying FlashBackup. You will see that a cache partition is needed.)

It is also important that you check for compatibility in the NetBackup 7 Snapshot Client Compatibility List (CL).

About creating a 'tar' file:
Think about it this way - NBU also uses 'tar' as underlying format to read and write backups.
If NBU is slow reading files from the filesystem, then I cannot see what you would gain by using tar or gzip to first create a one-file archive. Unless these are files that will never change or be added to - then yes, by all means create a once-off backup file, use NBU to back it up with long-term retention, and then exclude the original files from further NBU backups.
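If you do go the once-off archive route for files that never change, a sketch of the kind of commands involved (all paths are hypothetical):

# One-off archive of a static log directory into a single compressed file
cd /data/oldlogs
tar cf - . | gzip > /data/archive/oldlogs.tar.gz

You would then back up the single archive file with a long retention and add the original directory to the client's exclude list so it is skipped in future backups.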

H_Sharma
Level 6

Hi Marianne,

I read this guide and configured one policy for testing to check whether snapshot is working on it.

I took a server and configured the policy as below.

Client: Solaris.

The policy type was Standard with the snapshot method nbu_snap. I had also created a 1 GB cache on the raw partition /dev/rdsk/c1t0d0s6 on the client.

Host Properties > Clients,
select the client, then Actions > Properties, UNIX Client > Client Settings.

and entered the path /dev/rdsk/c1t0d0s6 there as the cache partition. I started the backup and am getting the error below. Please help.

12/27/2014 11:13:29 PM - Critical bpbrm(pid=12880) from client  FTL - vfm_freeze: method: nbu_snap, type: FIM, function: nbu_snap_freeze 
12/27/2014 11:13:29 PM - Critical bpbrm(pid=12880) from client  FTL -        
12/27/2014 11:13:29 PM - Critical bpbrm(pid=12880) from client FTL - vfm_freeze: method: nbu_snap, type: FIM, function: nbu_snap_freeze 
12/27/2014 11:13:29 PM - Critical bpbrm(pid=12880) from client  FTL -        
12/27/2014 11:13:29 PM - Critical bpbrm(pid=12880) from client  FTL - snapshot processing failed, status 4240   
12/27/2014 11:13:29 PM - Critical bpbrm(pid=12880) from client  FTL - snapshot creation failed, status 4240   
12/27/2014 11:13:29 PM - Warning bpbrm(pid=12880) from client : WRN - / is not frozen    
12/27/2014 11:13:29 PM - Info bpfis(pid=10367) done. status: 4240 


2/27/2014 11:13:39 PM - Critical bpbrm(pid=15132) from client  cannot open /usr/openv/netbackup/online_util/fi_cntl/bpfis.fim.T_1419698597.1.0       
12/27/2014 11:13:40 PM - Info bpfis(pid=11498) done. status: 4207   

Marianne
Level 6
Partner    VIP    Accredited Certified

I am surprised to see that you have free space on an internal system disk on a Solaris system (/dev/rdsk/c1t0d0s6). Are you 100% sure there is not a mounted filesystem on that partition?

You also seem to have / in the backup selection? 

WRN - / is not frozen  

I don't think system partitions such as / can be used for FlashBackup. It is not possible to freeze/quiesce OS partitions.
Hopefully the 'lots of small files' are not on the root partition - that would most definitely NOT be good system design...

You also need to have the raw partition specified in the Backup Selection - not the mount point.

Remember to create the bpfis log folder on the client.
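Creating that log folder is a one-liner on the client (assuming the default Unix install path; adjust permissions to your own standards):

# Legacy log directory for bpfis on the client
mkdir -p /usr/openv/netbackup/logs/bpfis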

H_Sharma
Level 6

Hi Marianne,

The Solaris team had given me 1 GB from a raw partition, which was /dev/rdsk/c1t0d0s6. I don't know whether anything else needs to be done with it.

I just pasted that path into the client host properties as the cache partition.

I did not understand this part:

"You also need to have the raw partition specified in Backup Selection"

In my backup selection there is only the path of the directory that needs to be backed up.

Please help.

Marianne
Level 6
Partner    VIP    Accredited Certified

It seems you have missed some steps in the Snapshot Client Admin Guide (link in a previous post).

FlashBackup is covered in Chapter 4.

You need to add the raw partition in Backup Selection.
See step 11 on page 83 and have a look at the Solaris examples.

The raw partition needs to be where the multiple files reside - NOT the root (/) filesystem or a folder name.

Use 'df -h' to find the device/partition name for your filesystem.

You can also confirm with 'df -h' and 'prtvtoc /dev/rdsk/c1t0d0s2' that slice 6 on c1t0d0 is not in use by anything else.
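Along those lines, a quick sanity check on the client might look like this (device names follow the earlier posts and are only illustrative):

# Show mounted filesystems and the devices behind them - the small files should
# sit on their own filesystem, e.g. /dev/dsk/c1t1d0s0 mounted on /data
df -h
# Print the partition table of the whole disk (slice 2) and confirm that
# slice 6 is not allocated to anything that is mounted or otherwise in use
prtvtoc /dev/rdsk/c1t0d0s2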

H_Sharma
Level 6

You mean to say that if I need to take the backup of the path below...

i.e. /usr/openv/tar1

...then in the backup selection I will define these 2 paths:

1: /usr/openv/tar1

2: /dev/rdsk/c1t0d0s6 (cache partition)

Marianne
Level 6
Partner    VIP    Accredited Certified

No. You need to go through Policy config in Snapshot Client manual again.

The backup selection needs to be the raw device on which /usr/openv/tar1 exists - actually the raw device holding the 'multiple small files', NOT the tar file.
This needs to be a separate filesystem - NOT the NBU /usr/openv folder or filesystem.

Please provide the output of 'df -h' on the client.
If the 'multiple small files' reside on a system disk (such as root (/) or /usr), then FlashBackup is not for you.

/dev/rdsk/c1t0d0s6 seems to be your cache device, which needs to be specified in Host Properties AFTER you have double-checked that this partition is not used by anything else on the client.

Also check “Cache device requirements” on page 121 of the manual.
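To make the distinction concrete, here is a sketch of what the corrected setup might look like, assuming purely for illustration that the small files live on a /data filesystem backed by /dev/rdsk/c1t1d0s0 and that the FlashBackup policy is called FB_SOLARIS:

# On the master server: make the raw device (not a directory) the backup selection
# (bpplinclude lives under /usr/openv/netbackup/bin/admincmd on Unix masters)
bpplinclude FB_SOLARIS -add /dev/rdsk/c1t1d0s0

# Cache partition: set /dev/rdsk/c1t0d0s6 in Host Properties > Clients >
# UNIX Client > Client Settings, only after confirming the slice is unused

The device names and policy name here are assumptions; substitute whatever 'df -h' shows for the filesystem that actually holds the small files.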

Marianne
Level 6
Partner    VIP    Accredited Certified

I believe Michael has answered your initial query about ways to back up millions of small files.

I have marked his post as Solution.

For issues with configuration of FlashBackup policy, please start a new discussion.