08-22-2013 11:41 AM
Hi,
Can anyone help me out with configuring shared memory on Solaris 10? Currently, NetBackup backups are running at 7 MB/sec, so I've created the SIZE_DATA_BUFFERS and NUMBER_DATA_BUFFERS files to speed up the backups. However, when I increase the value in the SIZE file, backups fail with "problems encountered during setup of shared memory" (status 89). Currently no shared memory is configured on the machine.
2) Will there be any performance degradation with these buffer values when restoring?
Environment:
NetBackup master server: Solaris 10
NetBackup version: 7.5.0.4
RAM: 32 GB
Swap: 25 GB
Network: 3 NICs at 1 Gbit/s each, aggregated to 3 Gbit/s
SIZE_DATA_BUFFERS: 3670016
NUMBER_DATA_BUFFERS: 512
Tape library: 2 drives; MPX = 1; tapes = LTO5
Number of tape drives * MPX * SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS = total shared memory
2 * 1 * 3670016 * 512 = 3758096384 (almost 4 GB)
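The arithmetic above can be checked in the shell; the drive count, MPX and buffer values below are taken straight from this post:

```shell
# Shared memory needed = drives * MPX * SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS
DRIVES=2
MPX=1
SIZE_DATA_BUFFERS=3670016
NUMBER_DATA_BUFFERS=512
TOTAL=$((DRIVES * MPX * SIZE_DATA_BUFFERS * NUMBER_DATA_BUFFERS))
echo "$TOTAL"   # 3758096384 bytes, roughly 3.5 GiB
```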
I followed the steps to create 8 GB of shared memory as per KB TECH62633. As I am using Solaris 10, I am a bit confused by steps 3 & 4 in KB TECH138168:
# /usr/sbin/projadd -U root -c "NetBackup resource project" -p 1000 NetBackup
# /usr/sbin/projmod -a -K 'project.max-msg-ids=(privileged,256,deny)' NetBackup
# /usr/sbin/projmod -a -K 'project.max-sem-ids=(privileged,1024,deny)' NetBackup
# /usr/sbin/projmod -a -K 'project.max-shm-ids=(privileged,1024,deny)' NetBackup
# /usr/sbin/projmod -a -K 'project.max-shm-memory=(privileged,8589934592,deny)' NetBackup
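After creating the project, it is worth verifying the settings and, just as importantly, making sure NetBackup actually runs inside the project so the limits apply. A sketch using standard Solaris 10 commands; the start-script path is the usual Solaris location, so verify it on your system:

```shell
# List the project and its resource controls
projects -l NetBackup

# Show the live shared-memory limit for processes in the project
prctl -n project.max-shm-memory -i project NetBackup

# Launch NetBackup inside the project so the raised limits take effect
newtask -p NetBackup /usr/openv/netbackup/bin/bp.start_all
```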
08-22-2013 12:15 PM
08-22-2013 01:28 PM
Martin,
I've tried different values and currently backups are running fine. However, I want to increase the NUMBER value above 512, as we have sufficient resources on the server, but it complains that shared memory must be configured. I am not sure which parameters need to be changed on a Solaris 10 machine to configure the shared memory.
Regards,
Sam
08-22-2013 01:34 PM
08-22-2013 01:39 PM
Revaroo,
I mean the backups are completing fine, but they are slow; the throughput is not as expected.
Regards,
08-22-2013 03:14 PM
08-23-2013 06:38 AM
Martin,
From the completed job I see "waited for full buffer 954 times, delayed 12037 times" when the NUMBER value is set to 512. However, I still see the speed at 14 MB/s, and it takes more than 8 hours to back up 300 GB of data. As I said earlier, I've created an aggregate of three 1 Gbit/s NICs, and the HBAs are 8 Gbit/s QLogic. Much appreciated if you can help me out.
Regards,
Sam
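As a side note, the per-job buffer-wait counters quoted above come from the bptm process. With bptm logging enabled, they can be pulled out of the logs directly (a sketch; the log directory is the default Solaris location):

```shell
# "waited for full buffer"  = the tape writer starved for data,
#   so the bottleneck is upstream (disk read or network)
# "waited for empty buffer" = the reader waited on the tape drive,
#   so the tape path itself is the bottleneck
grep "waited for" /usr/openv/netbackup/logs/bptm/log.* | tail -20
```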
08-23-2013 11:17 AM
08-23-2013 10:48 PM
waited for full buffer.....
It seems to me that your buffer size is too big, causing long delays in filling up the buffers.
I agree with Martin - a buffer size of 262144 has been found to give best performance.
What is network buffer set to?
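For reference, the buffer tuning discussed in this thread is done by writing a single value into touch files under the NetBackup config directory. A sketch using the recommended size and the NUMBER value already in use; adjust after testing:

```shell
# Standard location on a Solaris media server
CFG=/usr/openv/netbackup/db/config

# 262144 (256 KB) is the size suggested in this thread
echo 262144 > $CFG/SIZE_DATA_BUFFERS
echo 512    > $CFG/NUMBER_DATA_BUFFERS
```

The new values take effect on the next backup job; no restart of NetBackup is needed for these files.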
08-26-2013 07:26 AM
Marianne,
Currently, there is no network buffer set. If one should be, what should the value be, and does it go on both the server and the client side?
Regards,
Sam
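Checking whether a network buffer is configured is a matter of looking for the optional touch file; if it is absent, the OS default is in use (a sketch; the path is the usual server-side location):

```shell
# NET_BUFFER_SZ overrides the TCP send/receive buffer NetBackup requests
F=/usr/openv/netbackup/NET_BUFFER_SZ
if [ -f "$F" ]; then
    cat "$F"
else
    echo "not set (OS default in use)"
fi
```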
08-26-2013 01:53 PM
Forget the network buffer for the moment ...
Set the buffers as I explained earlier, run a backup of the media server, and let's see what the performance is.
If we start changing all the different buffers at once we're going to get nowhere ...
M
08-27-2013 06:37 AM
Martin,
It took about 7 hours 15 minutes to back up 205 GB of data after changing the NUMBER buffer to 512 and SIZE to 3145728, and per the status log: "waited for full buffer 66748 times, delayed 1701111 times". I will try again with the SIZE value set to 262144.
Regards,
SAM
08-27-2013 06:40 AM
Moreover, the speed is around 7 MB/sec.
12-11-2013 04:29 PM
One item has been missed in the discussion: the number of files on the server. You can have the fastest SAN on the fastest network, but it really boils down to the number of files. Even though my environment is Windows, I had a 25 GB drive that took 13 hours to back up; looking at the drive, it held over 3 million files.
Once I changed from a "Standard" policy to a "FlashBackup" policy, run time dropped to 30 minutes. Just a thought to consider...
12-12-2013 09:36 AM
Two things:
1. You have set up the project settings for NetBackup, but have you made sure the NetBackup start script actually runs as part of the project? There's a NetBackup document for it.
2. What Ron says is correct. Do yourself a favour: rule out the data for now. Use a synthetic policy to write random data from the server to itself if you can, i.e. no network, and a small number of multi-gigabyte files. If that is slow, the bottleneck is already there without involving networks, data profiles, etc. A local backup reading synthetic data (i.e. no disk) will be as fast as you can get data. With LTO5 and large blocks, if you don't get the manufacturer's speed, we will learn something. I would say 150 MB/sec, maybe more.
Jim
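Jim's local test can be sketched as follows. Back the directory up with a policy whose client is the media server itself, so no network is involved; the directory name and file sizes here are illustrative assumptions:

```shell
# Create a handful of multi-gigabyte files for a local test backup
# (mkfile is Solaris-native)
mkdir -p /testdata
for i in 1 2 3 4; do
    mkfile 2g /testdata/big$i
done
# Caveat: mkfile writes zero-filled files, which the drive's hardware
# compression will shrink dramatically, so the observed MB/sec will be
# optimistic; dd from /dev/urandom produces less compressible data.
```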