The working mechanism of tldd/tldcd

liuyl
Level 6

1) Is the number of child tldcd processes determined by the number of job sessions, and can it exceed the current number of robots?
2) Why do tldd/tldcd not work as multi-threaded processes, but instead run as corresponding child processes?

For example:

root@jcbak:/#
root@jcbak:/#
root@jcbak:/# ps -ef|grep tldcd
root 21862 24560 0 22:44 ? 00:00:00 tldcd -v
root 21905 24560 0 22:45 ? 00:00:00 tldcd -v
root 21923 24560 0 22:45 ? 00:00:00 tldcd -v
root 21924 24560 0 22:45 ? 00:00:00 tldcd -v
root 21925 24560 0 22:45 ? 00:00:00 tldcd -v
root 21927 23207 0 22:45 pts/26 00:00:00 grep tldcd
root 24560 1 0 Aug29 ? 00:38:20 tldcd -v
root@jcbak:/#
root@jcbak:/#
root@jcbak:/#
root@jcbak:/# tpconfig -l|grep -i robot
Device Robot Drive Robot Drive Device Second
robot 0 - TLD - - - - /dev/sg230
robot 1 - TLD - - - - /dev/sg231
robot 4 - TLD - - - - /dev/sg234
root@jcbak:/#
root@jcbak:/#


mph999
Level 6
Employee Accredited

 

Both processes are single threaded.  They are not written as multi-threaded processes and therefore do not behave as such.

tldcd runs only on the robot control host - it communicates directly with the robot.  There should be one tldcd process per robot if I recall correctly.  Something has gone a bit amiss with your setup it seems:

stop ltid (/usr/openv/volmgr/bin/stopltid), if the processes don't disappear after a minute, kill them.

Under /usr/openv/volmgr (or <install>\veritas\volmgr if on the lesser operating system ;0) ) - there is a misc directory, delete it.

Restart ltid (ltid -v) and hopefully all should be fine (you will see an error due to the missing misc dir, but it should be recreated).
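Roughly, something like this - a sketch assuming the default Unix install path:

/usr/openv/volmgr/bin/stopltid                 # stop ltid and the robotic daemons
ps -ef | grep -E 'ltid|tldd|tldcd'             # confirm they are gone, kill the PIDs if not
rm -rf /usr/openv/volmgr/misc                  # remove the misc directory
/usr/openv/volmgr/bin/ltid -v                  # restart verbose - misc gets recreated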

If the robot has multiple paths to the host - this might cause multiple tldcd processes, I think - not 100% sure, but easy to check.  You should only have one path, maybe two if you want redundancy, but the second path should be set as such in robtest.
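One easy check on Linux, if lsscsi happens to be installed, is to count how many changer device nodes the OS presents for each library - each line of output is one SCSI path to a medium changer:

lsscsi -g | grep -i mediumx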

tldd runs on each machine that has tape drives, this would include the robot control host, so you have:

Robot control host - tldcd and tldd (if it has drives, which it usually does)

Robot control host - tldcd only, if no tape drives

Hosts with tape drives only - tldd

How it works: when a tape is requested to be loaded or unloaded for a job, tldd passes the request to tldcd (over the network if necessary, i.e. if the RCH is a separate machine).

tldcd sends the actual SCSI CDB to the library, for example 0xA5 for MOVE MEDIUM.
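If you ever want to see the same command class outside of NBU, the standard mtx utility drives the changer through the sg passthrough device in much the same way (stop ltid first so the two don't fight over the robot) - for example:

mtx -f /dev/sg231 status        # read element status of the library
mtx -f /dev/sg231 load 18 6     # MOVE MEDIUM - slot 18 into a drive (note mtx drive numbers start at 0, so they may not match NBU's drive index)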

Marianne
Level 6
Partner    VIP    Accredited Certified

Good information on NBU processes in NetBackup Logging Reference Guide

Start with this topic on page 77:
Media and device management process

tldd and tldcd are in table 3-2.

1) This phenomenon is very widespread in our NBU environment, and these processes always disappear automatically after a short time.
2) I mean: why were tldd/tldcd not designed with a multi-threaded mechanism, since that could run much more efficiently than a single thread?

Marianne
Level 6
Partner    VIP    Accredited Certified

@liuyl

You can troubleshoot issues with NBU processes by creating debug logs. 

I have posted a link to the Logging guide above.
You can also read Appendix B of the 7.6 Troubleshooting Guide:
https://sort.veritas.com/DocPortal/pdf/ka6j00000004F4HAAU

About your 2nd question - you will need to ask Veritas Engineering.
We cannot answer that question in a user forum.

PS:
Are you aware of the fact that all support for NBU 7.6.x ended almost 2 years ago?
https://www.veritas.com/support/en_US/article.000116439

 

mph999
Level 6
Employee Accredited

I don't know why you have multiple tldcd processes; the only thing I can think of, as I mentioned, is that multiple paths 'might' be a reason ...  If you want to investigate it, you would need to start looking at logs, or probably, as a better starting point, strace/truss output on the PIDs of the tldcd processes to get some idea of what they might be doing.
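Something along these lines, for example (Linux / Solaris respectively):

strace -f -tt -p <tldcd pid> -o /tmp/tldcd.strace    # follow forks, with timestamps
truss -f -d -p <tldcd pid>                           # Solaris equivalent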

tldcd was written a long time ago, back in the day when NBU mainly wrote to tape and was far less complex than it is today.  Most, if not all, processes back then were single threaded.

Multithreaded processes appeared with NBU 6 - for example nbemm, nbrb, nbjm, nbpem - but for tldcd there is certainly no real need: it doesn't actually do much, just sends a few commands to the robot every now and then; it's not 'busy' like nbemm or nbjm.  I suspect it's a bit of a case of: if it ain't broke ....

mph999
Level 6
Employee Accredited

A quick guide to logs, which the wonderful Marianne helped me put together ...

https://vox.veritas.com/t5/Articles/Quick-Guide-to-Setting-up-logs-in-NetBackup/ta-p/811951

 

It seems that there really can be more than one process holding the same robot open, judging from the debug log!
And they each work fine without any problem! So why?

Here is just one example (processes 21923/21924):
22:45:07.679 [24560] <2> robotd_check_magic: Not using VxSS authentication.
22:45:07.679 [24560] <6> tldcd: process_request: ../tldcd.c.3034, process_request(), received command=1, from peername=shzycdb5, version 50
22:45:07.679 [24560] <5> tldcd:mount_unmount_drive: Processing MOUNT, TLD(1) drive 6, slot 18, barcode 000018L6 , vsn 0018L6
22:45:07.697 [21923] <5> tldcd:command_init: TLD(1) opening robotic path /dev/sg231
22:45:09.373 [24560] <6> tldcd:check_unit_attention: command_init to check for UA on robot 0
22:45:09.393 [24560] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 12 at address 256
22:45:09.393 [24560] <3> tldcd:mode_sense: --> NumSlots = 708 at address 4096
22:45:09.393 [24560] <3> tldcd:mode_sense: --> NumTransports = 1 at address 1
22:45:09.393 [24560] <3> tldcd:mode_sense: --> NumIE = 24 at address 16

 


22:45:09.398 [24560] <6> tldcd:listen_loop: accept: newfd = 20, error = 0, timersig = 0
22:45:09.399 [24560] <4> peer_hostname_ipi: Connection from host jcyxfdb1, 10.131.24.154, port 63860
22:45:09.399 [24560] <4> peer_hostname_ipi: Connection to host jcbak, 10.131.33.60, port 1556
22:45:09.399 [24560] <2> robotd_check_magic: Not using VxSS authentication.
22:45:09.399 [24560] <6> tldcd: process_request: ../tldcd.c.3034, process_request(), received command=3, from peername=jcyxfdb1, version 50
22:45:09.399 [24560] <5> tldcd:mount_unmount_drive: Processing UNMOUNT, TLD(1) drive 7, slot 90, barcode 000090L6 , vsn 0090L6
22:45:09.417 [21924] <5> tldcd:command_init: TLD(1) opening robotic path /dev/sg231
22:45:10.533 [24560] <6> tldcd:check_unit_attention: command_init to check for UA on robot 0
22:45:10.554 [24560] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 12 at address 256
22:45:10.554 [24560] <3> tldcd:mode_sense: --> NumSlots = 708 at address 4096
22:45:10.554 [24560] <3> tldcd:mode_sense: --> NumTransports = 1 at address 1
22:45:10.554 [24560] <3> tldcd:mode_sense: --> NumIE = 24 at address 16
22:45:10.556 [24560] <6> tldcd:inquiry: <../tldcd.c:6929> Read device table for QUANTUM Scalar i6000 745Q, type

 


22:45:23.394 [24560] <3> tldcd:mode_sense: --> NumTransports = 2 at address 1
22:45:23.394 [24560] <3> tldcd:mode_sense: --> NumIE = 16 at address 769
22:45:23.395 [24560] <6> tldcd:inquiry: <../tldcd.c:6929> Read device table for IBM 03584L32 A470, type 8, slots 600 and ie 16
22:45:23.395 [24560] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:976> search robot list (length=456) for IBM 03584L32, type 8
22:45:23.395 [24560] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:1229> found match: "IBM 3584" IBM 03584L
22:45:23.395 [24560] <5> tldcd:inquiry: inquiry() function processing library IBM 03584L32 A470:
22:45:23.395 [24560] <6> tldcd:check_unit_attention: initalizing robot 6
22:45:23.395 [24560] <6> tldcd:initialize_robot_hardware: command_init on robot 6
22:45:23.395 [24560] <5> tldcd:command_init: TLD(6) opening robotic path MISSING_PATH:2U10616001
22:45:23.395 [24560] <6> tldcd:listen_loop: accept: newfd = -1, error = 0, timersig = 1
22:45:23.482 [21924] <6> tldcd:tape_in_drive: valid = 1, sel = 4185, barcode = (000090L6 )
22:45:23.482 [21924] <6> tldcd:read_element_status_drive: RES drive 7
22:45:23.613 [21924] <6> tldcd:read_element_status_slot: RES storage element 90
22:45:23.744 [21924] <5> tldcd:move_medium: TLD(1) initiating MOVE_MEDIUM from addr 262 to addr 4185
22:45:31.853 [21924] <5> tldcd:tld_main: TLD(1) closing/unlocking robotic path

mph999
Level 6
Employee Accredited

I tested this - there is in fact only one tldcd process, even for two robots.

I can only imagine something is a little messed up in the config; did you try deleting the misc dir as I suggested previously?

If that doesn't help, I'd delete and re-add the devices.

I would use nbemmcmd -deletealldevices -allrecords

That will delete all libraries and tape drives; then just re-add them and re-inventory.
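Roughly, on the robot control host (paths assume a default Unix install):

/usr/openv/netbackup/bin/admincmd/nbemmcmd -deletealldevices -allrecords
/usr/openv/volmgr/bin/scan        # confirm the OS still sees the robots and drives
# then re-add the devices (Device Configuration Wizard or tpconfig) and re-inventory the robots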

But I noticed that the two tldcd processes did not, in fact, take actions at the same point in time, even though they were concurrently holding the same robot open!

mph999
Level 6
Employee Accredited

Yes, I understand there are two tldcd processes, and this is not normal, as far as I know.

I would suggest, as before: stop the media manager services, delete the /usr/openv/volmgr/misc dir and restart; if that doesn't resolve it, delete all devices using:

nbemmcmd -deletealldevices -allrecords  (this will delete all robots and tape drives)

and then reconfigure.

A restart of services with full debug on the logs might show some clues as to what is going on, but I would imagine you would still end up trying my suggestions (we don't really have many options as to what we can do).

Could you tell me what the evidence for such a problem might be?
Because that would mean re-running the DW/tpautoconf on one hundred MM hosts, according to your suggestion.

Note: the attachment is the complete robots debug log file from that time!

mph999
Level 6
Employee Accredited

I don't actually know, I don't recall ever seeing this in almost 11 years.

Generally, tapes/robots work fine, and when you get something a bit odd happening, a re-config / deleting the misc dir can work wonders. If the system is not too large this is not really inconvenient; it's a quick fix that is worth trying if possible, and it could save hours of looking around for an official answer, where the solution would probably involve deleting the misc dir and a reconfig anyway.

However, when you have many media servers I appreciate this is not so good (hence why I mentioned the command would delete everything).

So - firstly, is this multi-process behavior happening on all robot control hosts, or just some of them?

When did the issue start?

Could you try, just on the robot control hosts having the issue, stopping the media manager services, deleting the dir /usr/openv/volmgr/misc and then restarting media manager (stopltid / ltid -v to restart)?

Do you have multiple paths to the robots ?

If deleting misc doesn't help, you could look in the robots / ltid / reqlib logs covering a time when you start the services to see if there are any clues 

Create ...volmgr/debug/ltid, reqlib and robots logs (there is a sketch of this below)

reqlib log covers debug info for tld and tldcd processes

(add VERBOSE to vm.conf and restart media manager to pick up the change)

If the logs already exist, once you have stopped the processes, delete (or move) the existing logs so the startup is at the beginning (it just makes things a bit easier, and smaller)
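Something like this on Unix:

mkdir -p /usr/openv/volmgr/debug/ltid
mkdir -p /usr/openv/volmgr/debug/reqlib
mkdir -p /usr/openv/volmgr/debug/robots
echo "VERBOSE" >> /usr/openv/volmgr/vm.conf
/usr/openv/volmgr/bin/stopltid
/usr/openv/volmgr/bin/ltid -v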

Marianne
Level 6
Partner    VIP    Accredited Certified

" ..... one hundred MM hosts ..." 

WOW! Really? 
How many robots? 
How many robot control hosts? 

Although there is no physical limitation in NBU, the NBU environment can grow so big that OS, device and network resources and/or EMM can no longer 'keep up'. 

mph999
Level 6
Employee Accredited

Good point Marianne ...

I never really understood why you would have, say, 1 master and 100 media servers, and not 2 masters with 50 media servers each.  Putting all jobs on one master means that in the event of an issue you could lose 100% of your environment.  Split into two, you only lose 50%.

Further, large environments are much, much harder to troubleshoot - more can go wrong, and the logs can be massive - I always suggest keeping things simple and a manageable size.  An issue with a single media server (for example, a comms issue) can cause havoc; the more media servers, the more likely this is to happen.

There used to be a TN where we gave example tuning for small/ medium / large environments - the largest of these examples had 80 media servers.  As you say, we have no hard limit, but I always took that as a hint ...

I won't tell you the largest environment I've seen, you'd fall off your chair ...

The biggest problem with EMM coping that I have seen is when you have many media servers sharing a large number of drives.  I recall a case with around 45 media servers sharing the same 70 tape drives - yep, 70 drives per media server.  This gave 3150 instances of drives in EMM (45 x 70) - it didn't work ...

Sorry for not being clear: we do have 2 MS/RCHs for the one hundred MM hosts.
One has 4 robots with 66 MMs (single-path), the other has 3 robots with 42 MMs (single-path).
And they both show this phenomenon at random, unpredictable times.

mph999
Level 6
Employee Accredited

OK, so a fairly complex setup.

I really am at a bit of a loss - I tested this (two robots on one RCH) and didn't see any issues, and as mentioned, I've never noticed what you describe in the past.

I suspect a reconfig would put things back in order, but it's one of those things I can't promise.  It is odd that it's happening across multiple servers.

I'm back in the office next week; let me ask around and see if anyone else has some ideas.

Could you explain when this issue first started, and what changes were made around that time?

I have reviewed the robots debug log.110618 and find that this phenomenon is mainly concentrated in short periods of time (< 1 min); it is not always the same robot and does not happen all the time!
So it seems there is no pattern to it, based on the present situation!

1) In fact, there were 3 processes (21905/21923/21924) opening the same robot, TLD(1):
22:45:02.449 [24560] <6> tldcd:process_request: ../tldcd.c.3034, process_request(), received command=1, from peername=jcyxfdb1, version 50
22:45:02.450 [24560] <5> tldcd:mount_unmount_drive: Processing MOUNT, TLD(1) drive 2, slot 86, barcode 000086L6 , vsn 0086L6
22:45:02.468 [21905] <5> tldcd:command_init: TLD(1) opening robotic path /dev/sg231
22:45:02.879 [21835] <5> tldcd:tld_main: TLD(1) closing/unlocking robotic path
22:45:02.880 [21862] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 8 at address 256


22:45:07.679 [24560] <6> tldcd: process_request: ../tldcd.c.3034, process_request(), received command=1, from peername=shzycdb5, version 50
22:45:07.679 [24560] <5> tldcd:mount_unmount_drive: Processing MOUNT, TLD(1) drive 6, slot 18, barcode 000018L6 , vsn 0018L6
22:45:07.697 [21923] <5> tldcd:command_init: TLD(1) opening robotic path /dev/sg231
22:45:09.373 [24560] <6> tldcd:check_unit_attention: command_init to check for UA on robot 0
22:45:09.393 [24560] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 12 at address 256


22:45:09.399 [24560] <6> tldcd: process_request: ../tldcd.c.3034, process_request(), received command=3, from peername=jcyxfdb1, version 50
22:45:09.399 [24560] <5> tldcd:mount_unmount_drive: Processing UNMOUNT, TLD(1) drive 7, slot 90, barcode 000090L6 , vsn 0090L6
22:45:09.417 [21924] <5> tldcd:command_init: TLD(1) opening robotic path /dev/sg231
22:45:10.533 [24560] <6> tldcd:check_unit_attention: command_init to check for UA on robot 0
22:45:10.554 [24560] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 12 at address 256


22:45:12.453 [21905] <3> tldcd:mode_sense: --> NumTransports = 1 at address 1
22:45:12.453 [21905] <3> tldcd:mode_sense: --> NumIE = 24 at address 16
22:45:12.454 [21905] <6> tldcd:inquiry: <../tldcd.c:6929> Read device table for QUANTUM Scalar i6000 745Q, type 8, slots 402 and ie 24
22:45:12.454 [21905] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:976> search robot list (length=456) for QUANTUM Scalar i6000, type 8
22:45:12.454 [21905] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:1229> found match: "Quantum Scalar i6000" QUANTUM Scalar i6000
22:45:12.454 [21905] <5> tldcd:inquiry: inquiry() function processing library QUANTUM Scalar i6000 745Q:
22:45:12.454 [21905] <6> tldcd:read_element_status_drive: RES drive 2
22:45:12.472 [24560] <6> decode_response: sending response "EXIT_STATUS 0 0" for request on drive 1 (child_pid 21862)
22:45:12.472 [24560] <6> tldcd:check_unit_attention: skipping robot 0
22:45:12.472 [24560] <6> tldcd:check_unit_attention: skipping robot 1


22:45:20.840 [24544] <5> GetResponseStatus: DecodeQuery() Actual status: Unable to open robotic path
22:45:20.840 [24544] <3> DecodeQuery: TLD(6) unavailable: initialization failed: Unable to open robotic path
22:45:23.344 [21905] <5> tldcd:tld_main: TLD(1) closing/unlocking robotic path
22:45:23.345 [21924] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 8 at address 256
22:45:23.345 [21924] <3> tldcd:mode_sense: --> NumSlots = 402 at address 4096
22:45:23.345 [21924] <3> tldcd:mode_sense: --> NumTransports = 1 at address 1
22:45:23.345 [21924] <3> tldcd:mode_sense: --> NumIE = 24 at address 16
22:45:23.346 [21924] <6> tldcd:inquiry: <../tldcd.c:6929> Read device table for QUANTUM Scalar i6000 745Q, type 8, slots 402 and ie 24
22:45:23.346 [21924] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:976> search robot list (length=456) for QUANTUM Scalar i6000, type 8
22:45:23.346 [21924] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:1229> found match: "Quantum Scalar i6000" QUANTUM Scalar i6000
22:45:23.346 [21924] <5> tldcd:inquiry: inquiry() function processing library QUANTUM Scalar i6000 745Q:
22:45:23.346 [21924] <6> tldcd:read_element_status_drive: RES drive 7
22:45:23.364 [24560] <6> decode_response: sending response "EXIT_STATUS 0 0" for request on drive 2 (child_pid 21905)


22:45:23.482 [21924] <6> tldcd:read_element_status_drive: RES drive 7
22:45:23.613 [21924] <6> tldcd:read_element_status_slot: RES storage element 90
22:45:23.744 [21924] <5> tldcd:move_medium: TLD(1) initiating MOVE_MEDIUM from addr 262 to addr 4185
22:45:31.853 [21924] <5> tldcd:tld_main: TLD(1) closing/unlocking robotic path
22:45:31.854 [21923] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 8 at address 256
22:45:31.854 [21923] <3> tldcd:mode_sense: --> NumSlots = 402 at address 4096
22:45:31.854 [21923] <3> tldcd:mode_sense: --> NumTransports = 1 at address 1
22:45:31.854 [21923] <3> tldcd:mode_sense: --> NumIE = 24 at address 16
22:45:31.855 [21923] <6> tldcd:inquiry: <../tldcd.c:6929> Read device table for QUANTUM Scalar i6000 745Q, type 8, slots 402 and ie 24
22:45:31.855 [21923] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:976> search robot list (length=456) for QUANTUM Scalar i6000, type 8
22:45:31.855 [21923] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:1229> found match: "Quantum Scalar i6000" QUANTUM Scalar i6000
22:45:31.855 [21923] <5> tldcd:inquiry: inquiry() function processing library QUANTUM Scalar i6000 745Q:
22:45:31.855 [21923] <6> tldcd:read_element_status_drive: RES drive 6
22:45:31.868 [24560] <6> decode_response: sending response "EXIT_STATUS 0 0" for request on drive 7 (child_pid 21924)

 

2) There were 2 processes (15857/15858) opening the same robot, TLD(0)!
22:13:20.507 [24560] <2> robotd_check_magic: Not using VxSS authentication.
22:13:20.507 [24560] <6> tldcd:process_request: ../tldcd.c.3034, process_request(), received command=1, from peername=jcsjdb3, version 50
22:13:20.508 [24560] <5> tldcd:mount_unmount_drive: Processing MOUNT, TLD(0) drive 4, slot 313, barcode 000063L5 , vsn 0063L5
22:13:20.528 [15857] <5> tldcd:command_init: TLD(0) opening robotic path /dev/sg230
22:13:21.764 [24560] <6> tldcd:check_unit_attention: skipping robot 0
22:13:21.764 [24560] <6> tldcd:check_unit_attention: command_init to check for UA on robot 1


22:13:21.769 [24560] <6> tldcd:process_request: ../tldcd.c.3034, process_request(), received command=3, from peername=jcsjdb3, version 50
22:13:21.770 [24560] <5> tldcd:mount_unmount_drive: Processing UNMOUNT, TLD(0) drive 2, slot 111, barcode 000153L5 , vsn 0153L5
22:13:21.786 [15858] <5> tldcd:command_init: TLD(0) opening robotic path /dev/sg230
22:13:25.365 [15831] <5> tldcd:tld_main: TLD(0) closing/unlocking robotic path
22:13:25.379 [24560] <6> decode_response: sending response "EXIT_STATUS 0 0" for request on drive 5 (child_pid 15831)


22:13:25.384 [24560] <6> tldcd:listen_loop: accept: newfd = -1, error = 0, timersig = 1
22:13:25.427 [15857] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 12 at address 256
22:13:25.427 [15857] <3> tldcd:mode_sense: --> NumSlots = 708 at address 4096
22:13:25.427 [15857] <3> tldcd:mode_sense: --> NumTransports = 1 at address 1


22:13:25.513 [15857] <6> tldcd:read_element_status_slot: RES storage element 313
22:13:25.594 [15857] <5> tldcd:move_medium: TLD(0) initiating MOVE_MEDIUM from addr 4408 to addr 259
22:13:35.072 [15857] <5> tldcd:tld_main: TLD(0) closing/unlocking robotic path
22:13:35.086 [24560] <6> decode_response: sending response "EXIT_STATUS 0 0" for request on drive 4 (child_pid 15857)
22:13:35.086 [24560] <6> tldcd:check_unit_attention: skipping robot 0


22:13:35.091 [24560] <6> tldcd:listen_loop: accept: newfd = -1, error = 0, timersig = 1
22:13:35.131 [15858] <3> tldcd:mode_sense: <../tldcd.c:7079> Device geometry: NumDrives = 12 at address 256
22:13:35.131 [15858] <3> tldcd:mode_sense: --> NumSlots = 708 at address 4096
22:13:35.131 [15858] <3> tldcd:mode_sense: --> NumTransports = 1 at address 1
22:13:35.131 [15858] <3> tldcd:mode_sense: --> NumIE = 24 at address 16
22:13:35.133 [15858] <6> tldcd:inquiry: <../tldcd.c:6929> Read device table for QUANTUM Scalar i6000 745Q, type 8, slots 708 and ie 24
22:13:35.133 [15858] <4> MmDeviceMappings::GetRobotAttributes
: <../../lib/MmDeviceMappings.cpp:976> search robot list (length=456) for QUANTUM Scalar i6000, type 8

 

Note: but they all did work fine serially, without any conflict or interference!

mph999
Level 6
Employee Accredited

OK, I finally managed to reproduce this - I used different retentions on my jobs so it loaded more tapes than before.

root 2527 1 0 Nov14 ? 00:00:55 vmd
root 27063 1 0 02:32 pts/0 00:00:00 ltid -v
root 27067 27063 0 02:32 pts/0 00:00:00 tldd -v
root 27068 27063 0 02:32 pts/0 00:00:00 avrd -v
root 27071 1 0 02:32 pts/0 00:00:00 tldcd -v
root 27650 27067 0 02:36 pts/0 00:00:00 tldd -v
root 27662 27067 0 02:36 pts/0 00:00:00 tldd -v
root 27665 27067 0 02:36 pts/0 00:00:00 tldd -v

... and of course, the different pids that you see in the logs will be from child processes, each with its own pid ...  Why that slipped my mind before I have no idea ...

So - normal behavior - child process gets kicked off when a request is made.

They will only be around for a short time, as it doesn't take long to make a request and for the task to be completed .
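If you want to watch it happen, something like this shows the child processes come and go as mounts/unmounts are requested:

watch -n 1 'ps -ef | grep "[t]ldcd"'     # the [t] trick stops grep matching itself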

But there were multiple tldd processes in your test above, so I think these should be somewhat different from the multiple tldcd case!