KB per second

Satish_Kumar_3
Level 4
The data transfer rate is 288 KB/sec, which is very low and causes the backup to take a long time to complete over our 10/100 switched Ethernet network. Please suggest how I can improve the performance.

Regards
Satish
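For scale: 288 KB/sec is roughly 30x below what switched 100 Mbit Ethernet should sustain. A quick back-of-envelope check (plain Python; the 20 GB figure and the ~8 MB/sec "healthy" rate are illustrative assumptions, not numbers from this thread):

```python
# Rough sanity check: how long does a 20 GB backup take at the observed
# 288 KB/sec versus a conservative ~8 MB/sec over 100 Mbit Ethernet?
def backup_hours(data_gb, rate_kb_per_sec):
    """Return wall-clock hours to move data_gb at rate_kb_per_sec."""
    total_kb = data_gb * 1024 * 1024          # GB -> KB
    return total_kb / rate_kb_per_sec / 3600  # seconds -> hours

observed = backup_hours(20, 288)    # roughly 20 hours at 288 KB/sec
healthy  = backup_hours(20, 8192)   # well under an hour at ~8 MB/sec
print(f"observed: {observed:.1f} h, healthy: {healthy:.1f} h")
```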
20 REPLIES

Yasuhisa_Ishika
Level 6
Partner Accredited Certified
Do your cards work at 100Base full duplex? How large is the number of target files?
For detailed inspection, try setting debug logging for the bptm and bpbkar processes.
The logging level must be greater than 2.

Satish_Kumar_3
Level 4
Hi, I've enabled level 5 logging on my server and the bptm debug log is below. What does "db_lock_media: unable to lock media at offset 1 (USER01)" mean, and does it have anything to do with my issue?

Regards
SK

***********************************************************************


14:01:10.912 <2> bptm: INITIATING (VERBOSE = 5): -count -cmd -rt 0 -rn 0 -stunit MasterServer-hcart2 -den 14 -mt 2 -masterversion 510000
14:01:10.912 <2> bptm: NUM UP 1 0 0 0 1 0 b-IBMULTRIUM-TD20
14:01:10.912 <2> bptm: EXITING with status 0 <----------
14:01:10.932 <2> bptm: INITIATING (VERBOSE = 5): -delete_expired
14:01:10.933 <2> bptm: EXITING with status 0 <----------
14:01:21.077 <2> bptm: INITIATING (VERBOSE = 5): -U
14:01:21.077 <2> db_byid: search for media id USER01
14:01:21.077 <2> db_byid: USER01 found at offset 1
14:01:21.086 <2> db_lock_media: unable to lock media at offset 1 (USER01)
14:01:21.086 <2> bptm: EXITING with status 0 <----------
14:01:51.107 <2> bptm: INITIATING (VERBOSE = 5): -U
14:01:51.107 <2> db_byid: search for media id USER01
14:01:51.107 <2> db_byid: USER01 found at offset 1
14:01:51.116 <2> db_lock_media: unable to lock media at offset 1 (USER01)
14:01:51.116 <2> bptm: EXITING with status 0 <----------
14:02:21.166 <2> bptm: INITIATING (VERBOSE = 5): -U
14:02:21.166 <2> db_byid: search for media id USER01
14:02:21.166 <2> db_byid: USER01 found at offset 1
14:02:21.176 <2> db_lock_media: unable to lock media at offset 1 (USER01)
14:02:21.176 <2> bptm: EXITING with status 0 <----------
14:02:51.209 <2> bptm: INITIATING (VERBOSE = 5): -U
14:02:51.209 <2> db_byid: search for media id USER01
14:02:51.209 <2> db_byid: USER01 found at offset 1
14:02:51.216 <2> db_lock_media: unable to lock media at offset 1 (USER01)
14:02:51.216 <2> bptm: EXITING with status 0 <----------
14:03:21.237 <2> bptm: INITIATING (VERBOSE = 5): -U

Yasuhisa_Ishika
Level 6
Partner Accredited Certified
Well, if I remember right, this means the media is already assigned or mounted.
This is not the problem.

Show us "grep delayed /usr/openv/netbackup/logs/*/*".

Reference:
http://seer.support.veritas.com/docs/244652.htm
Example 1

MayurS
Level 6
Hi,

Have you tried restarting the NetBackup services?

Yasuhisa_Ishika
Level 6
Partner Accredited Certified
I searched support.veritas.com with the keyword "performance", and found that on many platforms an auto-negotiated NIC sometimes causes performance issues.
Have you disabled auto-negotiation?

Tim_Dile
Level 5
If that's the case, then ours don't look too healthy.

Tim

22:42:43.824 <2> fill_buffer: socket is closed, waited for empty buffer 7303 times, delayed 23634 times, read 2516928 Kbytes
22:46:39.473 <2> fill_buffer: socket is closed, waited for empty buffer 4198 times, delayed 15678 times, read 1110638592 bytes
22:47:26.153 <2> fill_buffer: socket is closed, waited for empty buffer 2791 times, delayed 9498 times, read 983236608 bytes
22:47:40.314 <2> fill_buffer: socket is closed, waited for empty buffer 6356 times, delayed 21746 times, read 1683259392 bytes
22:49:23.713 <2> fill_buffer: socket is closed, waited for empty buffer 10671 times, delayed 35616 times, read 2938784 Kbytes
22:50:32.894 <2> fill_buffer: socket is closed, waited for empty buffer 3598 times, delayed 12598 times, read 1044545536 bytes
22:52:17.482 <2> fill_buffer: socket is closed, waited for empty buffer 4714 times, delayed 13910 times, read 1591214080 bytes
23:03:08.192 <2> fill_buffer: socket is closed, waited for empty buffer 24442 times, delayed 76263 times, read 6774880 Kbytes
23:32:10.851 <2> fill_buffer: socket is closed, waited for empty buffer 751 times, delayed 4154 times, read 5067616 Kbytes
23:32:58.382 <2> fill_buffer: socket is closed, waited for empty buffer 41847 times, delayed 100095 times, read 14512992 Kbytes
23:45:35.129 <2> fill_buffer: socket is closed, waited for empty buffer 4078 times, delayed 13146 times, read 7237024 Kbytes
23:46:17.551 <2> fill_buffer: socket is closed, waited for empty buffer 13569 times, delayed 35109 times, read 8041312 Kbytes
23:47:49.660 <2> fill_buffer: socket is closed, waited for empty buffer 29563 times, delayed 76074 times, read 12797664 Kbytes
23:55:51.167 <2> fill_buffer: socket is closed, waited for empty buffer 50542 times, delayed 169636 times, read 13460928 Kbytes

Yasuhisa_Ishika
Level 6
Partner Accredited Certified
Are there any "waited for full buffer" strings in the bptm logs?
Is your backup multiplexed?

If the counts on the "waited for full buffer" lines are much smaller than the "waited for empty buffer" counts, the cause is on the drive side. Otherwise it is on the network or client side.

It is hard for me to inspect things this way if backups are multiplexed.
The delayed counts in your log seem high enough to be suspicious, so if there is no other lead, just try cleaning the drive or replacing the tapes.
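The comparison described above can be automated by pulling the two counters out of a bptm log. A minimal sketch (Python; the regexes and the 10x threshold are my own assumptions for illustration, not NetBackup tooling):

```python
import re

# Extract the "waited ... times" counters from bptm log text and guess
# which side of the pipeline is the bottleneck, per the rule above:
# full-buffer waits << empty-buffer waits  => drive side, and vice versa.
FULL = re.compile(r"waited for full buffer (\d+) times")
EMPTY = re.compile(r"waited for empty buffer (\d+) times")

def bottleneck(log_text):
    """Return 'drive', 'network/client', or 'inconclusive'."""
    full = sum(int(m) for m in FULL.findall(log_text))
    empty = sum(int(m) for m in EMPTY.findall(log_text))
    if empty and full < empty / 10:
        return "drive"
    if full and empty < full / 10:
        return "network/client"
    return "inconclusive"
```

Running it over the grep output Satish posted later (full-buffer waits in the tens of thousands, empty-buffer waits in the tens) points at the network or client side.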

MayurS
Level 6
When I used to face the same problem as yours, these were the reasons:
1) My normal backup tapes went into the NDMP drive, so my normal backups ran at a snail's pace.
To resolve it I added a line to /usr/openv/volmgr/vm.conf as follows:
DISALLOW_NONNDMP_ON_NDMP_DRIVE
and restarted NetBackup for the vm.conf change to take effect.
2) My client or server services hung.
3) Network issues: a firewall in between, too much load on the network, or the network card set to "Auto" speed.
4) Tracker.exe, which shows the status of the job, slows the job down.
I hit this fourth reason a number of times and eventually decided to remove the Tracker functionality:
http://seer.support.veritas.com/docs/267253.htm
5) Anti-virus scanning.
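Reason 1 boils down to a one-line vm.conf edit plus a restart; a sketch, assuming the standard /usr/openv paths (the restart script location can vary by NetBackup release):

```shell
# Keep non-NDMP backups off the NDMP drive (directive from the post above).
echo "DISALLOW_NONNDMP_ON_NDMP_DRIVE" >> /usr/openv/volmgr/vm.conf

# Restart NetBackup so vm.conf is re-read (script path varies by release).
/usr/openv/netbackup/bin/goodies/netbackup stop
/usr/openv/netbackup/bin/goodies/netbackup start
```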

Satish_Kumar_3
Level 4
Hi,

First, can you tell me what `grep delayed */*` does? :) It searches all the logs for the keyword "delayed" and displays the matching lines, right?

I am posting what you asked for, but the contents are puzzling to me. Let me do the reading you sent. Please help me understand if you are able to figure out a way forward.

Regards
Satish

# grep delayed */*
bpbkar/log.092605:09:11:49.446 <2> bpbkar write_eot: INF - bpbkar waited 13465 times for empty buffer, delayed 18565 times
bptm/log.091305:16:36:42.160 <2> write_data: waited for full buffer 14 times, delayed 42 times
bptm/log.091405:09:59:59.426 <2> write_data: waited for full buffer 6 times, delayed 10 times
bptm/log.091605:15:33:23.495 <2> write_data: waited for full buffer 14930 times, delayed 14952 times
bptm/log.091805:10:20:19.888 <2> fill_buffer: socket is closed, waited for empty buffer 38 times, delayed 683 times, read 4307826 Kbytes
bptm/log.091805:10:20:19.892 <2> write_data: waited for full buffer 34936 times, delayed 35011 times
bptm/log.091805:11:16:22.390 <2> write_data: waited for full buffer 14978 times, delayed 14988 times
bptm/log.091905:18:34:32.154 <2> fill_buffer: socket is closed, waited for empty buffer 53 times, delayed 1082 times, read 6145378 Kbytes
bptm/log.091905:18:34:32.156 <2> write_data: waited for full buffer 53439 times, delayed 64165 times
bptm/log.092005:09:48:58.237 <2> fill_buffer: socket is closed, waited for empty buffer 39 times, delayed 730 times, read 4287346 Kbytes
bptm/log.092005:09:48:58.242 <2> write_data: waited for full buffer 34737 times, delayed 34789 times
bptm/log.092005:13:25:13.906 <2> fill_buffer: socket is closed, waited for empty buffer 37 times, delayed 903 times, read 4937170 Kbytes
bptm/log.092005:13:25:13.908 <2> write_data: waited for full buffer 23435 times, delayed 34115 times
bptm/log.092005:14:12:37.925 <2> fill_buffer: socket is closed, waited for empty buffer 128 times, delayed 3032 times, read 17317627 Kbytes
bptm/log.092005:14:12:37.932 <2> write_data: waited for full buffer 62658 times, delayed 92135 times
bptm/log.092105:13:28:25.740 <2> write_data: waited for full buffer 113715 times, delayed 113771 times
bptm/log.092305:15:52:17.032 <2> write_data: waited for full buffer 25838 times, delayed 26481 times
bptm/log.092405:14:50:33.006 <2> write_data: waited for full buffer 15349 times, delayed 15359 times
bptm/log.092405:16:01:36.681 <2> fill_buffer: socket is closed, waited for empty buffer 129 times, delayed 2984 times, read 17483862 Kbytes
bptm/log.092405:16:01:36.683 <2> write_data: waited for full buffer 63321 times, delayed 76626 times
bptm/log.092405:16:40:02.601 <2> fill_buffer: socket is closed, waited for empty buffer 52 times, delayed 975 times, read 6156638 Kbytes
bptm/log.092405:16:40:02.604 <2> write_data: waited for full buffer 53524 times, delayed 75440 times
bptm/log.092405:17:31:15.901 <2> fill_buffer: socket is closed, waited for empty buffer 35 times, delayed 711 times, read 4302706 Kbytes
bptm/log.092405:17:31:15.905 <2> write_data: waited for full buffer 34877 times, delayed 34922 times
bptm/log.092505:16:04:25.002 <2> fill_buffer: socket is closed, waited for empty buffer 67 times, delayed 1644 times, read 17889288 Kbytes
bptm/log.092505:16:04:25.010 <2> write_data: waited for full buffer 70772 times, delayed 103941 times
bptm/log.092505:16:42:05.437 <2> fill_buffer: socket is closed, waited for empty buffer 9 times, delayed 217 times, read 6165985 Kbytes
bptm/log.092505:16:42:05.440 <2> write_data: waited for full buffer 54071 times, delayed 74583 times
bptm/log.092505:19:14:24.716 <2> write_data: waited for full buffer 139952 times, delayed 140015 times
bptm/log.092505:19:32:16.713 <2> fill_buffer: socket is closed, waited for empty buffer 35 times, delayed 692 times, read 4287346 Kbytes
bptm/log.092505:19:32:16.719 <2> write_data: waited for full buffer 34748 times, delayed 34767 times
bptm/log.092605:11:11:23.726 <2> fill_buffer: socket is closed, waited for empty buffer 39 times, delayed 683 times, read 4307826 Kbytes
bptm/log.092605:11:11:23.728 <2> write_data: waited for full buffer 35007 times, delayed 35136 times
bptm/log.092605:14:33:16.019 <2> fill_buffer: socket is closed, waited for empty buffer 12 times, delayed 238 times, read 6513683 Kbytes
bptm/log.092605:14:33:16.021 <2> write_data: waited for full buffer 58805 times, delayed 402997 times
bptm/log.092605:09:11:49.466 <2> write_data: waited for full buffer 15415 times, delayed 15460 times

Poon_William
Level 4
Hi,

If the backup goes over the network and the tape drive is waiting for a full buffer to write, my guess is that the network is not pumping data fast enough to the media server that holds the tape drive. I used to encounter a lot of such problems, and I usually run an FTP transfer, copying files from the client to the media server with hash marks on. If the hash pattern pauses, that really shows the problem is in the network. In a healthy network environment the hash pattern should be very smooth, with no pausing in between. Most of the time the cause was auto-negotiation or a cable connection.

WP

Satish_Kumar_3
Level 4
Can anyone tell me how to find the NIC configuration on Linux, i.e. whether it is set to auto / 10 / 100 Mbps? I discovered that the server NIC was set to Auto, which I have changed to 100 Mbps full duplex as suggested. How do I do this on Linux? Any quick help please.

Regards
SK

MayurS
Level 6
MII-TOOL(8)

NAME
       mii-tool - view, manipulate media-independent interface status

SYNOPSIS
       mii-tool

DESCRIPTION
       This utility checks or sets the status of a network interface's
       Media Independent Interface (MII) unit. Most fast Ethernet
       adapters use an MII to autonegotiate link speed and duplex
       setting.

       Most intelligent network devices use an autonegotiation protocol
       to communicate what media technologies they support, and then
       select the fastest mutually supported media technology. The -A
       or --advertise options can be used to tell the MII to only
       advertise a subset of its capabilities. Some passive devices,
       such as single-speed hubs, are unable to autonegotiate. To
       handle such devices, the MII protocol also allows for
       establishing a link by simply detecting either a 10baseT or
       100baseT link beat. The -F or --force options can be used to
       force the MII to operate in one mode, instead of
       autonegotiating. The -A and -F options are mutually exclusive.

       The default short output reports the negotiated link speed and
       link status for each interface. If an interface or interfaces
       are not specified on the command line, then mii-tool will check
       any available interfaces from eth0 through eth7.

OPTIONS
       -v, --verbose
              Display more detailed MII status information. If used
              twice, also display raw MII register contents.

       -V, --version
              Display program version information.

       -R, --reset
              Reset the MII to its default configuration.

       -r, --restart
              Restart autonegotiation.

       -w, --watch
              Watch interface(s) and report changes in link status.
              The MII interfaces are polled at one second intervals.

       -l, --log
              Used with -w, records link status changes in the system
              log instead of printing on standard output.

       -F media, --force=media
              Disable autonegotiation, and force the MII to either
              100baseTx-FD, 100baseTx-HD, 10baseT-FD, or 10baseT-HD
              operation.

       -A media,..., --advertise=media,...
              Enable and restart autonegotiation, and advertise only
              the specified media technologies. Multiple technologies
              should be separated by commas. Valid media are 100baseT4,
              100baseTx-FD, 100baseTx-HD, 10baseT-FD, and 10baseT-HD.

Find it Here ->
http://node1.yo-linux.com/cgi-bin/man2html?cgi_command=mii-tool

Anonymous
Not applicable
Poon, can you give an example session of the commands you would type to test for possible delays, namely which FTP commands you are using? I'd be interested, as I have a similarly slow network backup happening.

Thanks

Stumpr2
Level 6
I suggest you read the article written by George Winter which is available from this VERITAS link. You will see the article at the bottom of the page titled "VERITAS NetBackup Software Performance Tuning"
http://eval.veritas.com/downloads/van/van_news_nov_i.htm
Click on the "Read More" button and it will bring up a pdf document. This tuning document also covers what you are looking for.

Please note:
Most probable cause of slow backups = NIC card setting
Second most probable cause = name resolution

Mark_Herlich_2
Not applicable
Set the NIC on both the master and the client to fixed full duplex, and add local host name entries on the master/media server and the same on the client.

Poon_William
Level 4
Stuart,

I am using a normal FTP connection for file transfer test.

See example:

From client, cd to a directory with enough files and size for testing.

client > ftp master_server
..connected..
ftp > hash
ftp > prompt
ftp > mput *
ftp > ########## (You will see # marks scrolling on the screen while files are being transferred. For a good network transfer the scrolling should be very smooth, with no pauses in between. If the scrolling stops and pauses, it shows that something is wrong.)

Hope it helps.


WP


Jeffrey_Redingt
Level 5
Tim,

You might want to increase the number of data buffers to improve your backup performance. Since it is waiting for empty buffers, increasing the number of buffers should improve that situation and the backup speed. Be careful, though: don't increase the number too high or you can run the server out of memory. The default is 16; I would try 24 or 32 and test it.
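For reference, the data-buffer count on the media server is normally set through a touch file; a minimal sketch, assuming the standard install path (check against your NetBackup release before relying on it):

```shell
# Raise bptm's shared-memory buffer count from the default 16 to 32.
# The file holds a single integer and is read when the backup starts.
echo 32 > /usr/openv/netbackup/db/config/NUMBER_DATA_BUFFERS
```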

Jeffrey_Redingt
Level 5
Looking at the "waited for full buffer" messages you have, I would increase the communication buffer size to 64 or 128 from the default of 32. That may help if the problem is not NIC- or switch-based.
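The buffer sizes are also set through touch files; a sketch, assuming this refers to the tape buffer (SIZE_DATA_BUFFERS) and the network communications buffer (NET_BUFFER_SZ), both in bytes — confirm the defaults and valid values for your version:

```shell
# Tape I/O buffer size in bytes (65536 = 64 KB; must suit the drive):
echo 65536 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS
# Network communications buffer in bytes:
echo 65536 > /usr/openv/netbackup/NET_BUFFER_SZ
```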