LTTng bugs repository: Issues - https://bugs.lttng.org/ - 2016-03-17T18:52:05Z
LTTng-tools - Bug #1006 (Resolved): Enabling an application context (both JUL and log4j) results ...
https://bugs.lttng.org/issues/1006 - 2016-03-17T18:52:05Z - Jérémie Galarneau <jeremie.galarneau@efficios.com>
Enabling an application context in both the log4j and jul domains results in a confirmation message of the form:
<pre>
UST context $app.myprovider:myshortcontext added to all channels
</pre>
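For reference, a rough sketch of the kind of command sequence that produces this message (the session name is illustrative; the context name is taken from the message above):
<pre>
lttng create mysession
# Request an application-specific context in the JUL and log4j domains;
# the confirmation message currently says "UST context ..." in both cases.
lttng add-context --jul -t '$app.myprovider:myshortcontext'
lttng add-context --log4j -t '$app.myprovider:myshortcontext'
</pre>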
The domain is printed by checking opt_kernel, while the exact domain should be checked.

LTTng-tools - Bug #1005 (Resolved): New trace can't be generated on a persistent memory file system
https://bugs.lttng.org/issues/1005 - 2016-03-16T03:01:59Z - jia fang <fang.jia@windriver.com>
We are using the new LTTng 2.7 feature "Recording trace data on persistent memory file systems".
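For context, a rough sketch of the workflow this feature enables (paths and session name are illustrative, not taken from the report):
<pre>
# Create a session whose buffer files live on a persistent memory mount
lttng create mysession --shm-path=/shm/lttng
lttng enable-event -u -a
lttng start
# ... system crash and reboot ...
# Recover the buffer contents left behind in the shm-path directory
lttng-crash -x /tmp/recovered-trace /shm/lttng/mysession-*
</pre>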
After the device restarts due to a system crash, we can read the crash data in the configured shm-path directory. However, when we try to trace again, the following debug log is reported and the new data can't be written to the shm-path.
<pre>
DEBUG3 - 20:07:54.353680 [286/291]: mkdir() recursive /shm/lttng/mysession-20160302-182255/ust/uid/0/32-bit with mode 504 for uid 0 and gid 0 (in run_as_mkdir_recursive() at runas.c:468)
DEBUG1 - 20:07:54.353726 [286/291]: Using run_as worker (in run_as() at runas.c:449)
DEBUG3 - 20:07:54.354141 [286/291]: open() /shm/lttng/mysession-20160302-182255/ust/uid/0/32-bit/metadata with flags C1 mode 384 for uid 0 and gid 0 (in run_as_open() at runas.c:498)
DEBUG1 - 20:07:54.354202 [286/291]: Using run_as worker (in run_as() at runas.c:449)
PERROR - 20:07:54.354724 [286/291]: Opening metadata file: File exists (in ust_registry_session_init() at ust-registry.c:606)
DEBUG3 - 20:07:54.354801 [286/291]: rmdir_recursive() /shm/lttng/mysession-20160302-182255 with for uid 0 and gid 0 (in run_as_rmdir_recursive() at runas.c:524)
DEBUG1 - 20:07:54.354847 [286/291]: Using run_as worker (in run_as() at runas.c:449)
DEBUG3 - 20:07:54.355554 [287/287]: Attempting rmdir /shm/lttng/mysession-20160302-182255 (in utils_recursive_rmdir() at utils.c:1247)
DEBUG3 - 20:07:54.356905 [286/291]: Buffer registry per UID destroy with id: 0, ABI: 32, uid: 0 (in buffer_reg_uid_destroy() at buffer-registry.c:641)
</pre>
For details, please see my attachment.

LTTng-tools - Bug #1002 (Resolved): lttng snapshot on an empty tracing session results in an unkn...
https://bugs.lttng.org/issues/1002 - 2016-03-14T19:22:20Z - Jérémie Galarneau <jeremie.galarneau@efficios.com>
Recording a snapshot on a session which has recorded no events results in the following output in MI output mode:
<pre>
$ lttng --mi=xml snapshot record | xmllint --format -
Error: Unknown error code
Error: Command error
<?xml version="1.0" encoding="UTF-8"?>
<command xmlns="http://lttng.org/xml/ns/lttng-mi" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://lttng.org/xml/ns/lttng-mi http://lttng.org/xml/schemas/lttng-mi/3/lttng-mi-3.0.xsd" schemaVersion="3.0">
<name>snapshot</name>
<output>
<snapshot_action>
<name>record</name>
<output/>
</snapshot_action>
</output>
<success>false</success>
</command>
</pre>
(Unknown error code + Command error)
while the human-readable output mode results in the following:
<pre>
$ lttng snapshot record
Warning: No data available in snapshot
</pre>
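For what it's worth, a script consuming the MI output would typically key on the &lt;success&gt; element, which is why the bogus error code matters; a small sketch (the xmllint XPath is illustrative):
<pre>
lttng --mi=xml snapshot record \
    | xmllint --xpath 'string(//*[local-name()="success"])' -
# One would expect "true" (with a warning) for an empty snapshot,
# not an unknown error code and <success>false</success>.
</pre>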
Also, the command should not result in an error (a warning is fine) since the user has no control over whether applications (or the kernel) have produced any event between the start of the tracing session and the recording of the snapshot.

LTTng-tools - Bug #988 (Resolved): lttng -q option not functional
https://bugs.lttng.org/issues/988 - 2016-01-10T19:45:44Z - john smith <whalajam@yahoo.com>
The lttng -q (quiet) option doesn't work for the stop, destroy and view commands (I didn't test the rest of the commands):
<pre>
$ lttng -V
lttng (LTTng Trace Control) 2.7.0 - Herbe à Détourne

$ lttng stop x
Error: Session name not found
$ lttng -q stop x
Error: Session name not found

$ lttng destroy x
Error: Session name x not found
$ lttng -q destroy x
Error: Session name x not found

$ lttng create
Session auto-20160110-113725 created.
Traces will be written in /home/john/lttng-traces/auto-20160110-113725

$ lttng enable-event -a -u
All UST events are enabled in channel channel0
$ lttng start
Tracing started for session auto-20160110-113725
$ lttng stop
Waiting for data availability
Tracing stopped for session auto-20160110-113725
$ lttng view
Trace directory: /home/john/lttng-traces/auto-20160110-113725
[error] Cannot open any trace for reading.
[error] opening trace "/home/john/lttng-traces/auto-20160110-113725" for reading.
[error] none of the specified trace paths could be opened.

$ lttng -q view
[error] Cannot open any trace for reading.
[error] opening trace "/home/john/lttng-traces/auto-20160110-113725" for reading.
[error] none of the specified trace paths could be opened.
</pre>

LTTng-tools - Bug #970 (Resolved): snapshot written in 2 folders if it takes more than 1 second t...
https://bugs.lttng.org/issues/970 - 2015-11-03T22:59:57Z - Julien Desfossez <jdesfossez@efficios.com>
With "lttng snapshot record" on a session with UST and kernel buffers, if writing one domain takes longer than 1 second, another folder is created when the data of the second domain is written.
Reproduced with master.
<pre>
lttng create --snapshot
lttng enable-channel -k bla --subbuf-size 8M --num-subbuf 8
lttng enable-event -k -a -c bla
lttng enable-event -u -a
lttng start
... wait for the kernel ring-buffer to be full ...
lttng snapshot record
</pre>
If the disk takes longer than 1 second to write 64 MB, the resulting snapshot looks like this:
<pre>
ls /home/julien/lttng-traces/auto-20151103-174410
snapshot-1-20151103-174443-0/kernel/.....
snapshot-1-20151103-174446-0/ust.....
</pre>
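For contrast, a sketch of the layout one would expect (same illustrative timestamps), with both domains under a single snapshot folder:
<pre>
ls /home/julien/lttng-traces/auto-20151103-174410
snapshot-1-20151103-174443-0/kernel/.....
snapshot-1-20151103-174443-0/ust/.....
</pre>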
This is confusing because we expect all the traces of one snapshot to be in the same folder; especially when we record lots of them, it becomes a mess to link the ones that belong together.

Userspace RCU - Bug #953 (Resolved): Run make check and regtest for master on CI
https://bugs.lttng.org/issues/953 - 2015-10-16T20:58:23Z - Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>

Babeltrace - Bug #952 (Invalid): Tests are skipped in oot tree on jenkins
https://bugs.lttng.org/issues/952 - 2015-10-16T20:10:45Z - Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>
See https://ci.lttng.org/view/Babeltrace/job/babeltrace_master_build/arch=x86-32,build=oot,conf=std/18/tapResults/

LTTng-tools - Bug #936 (Resolved): Disable all UST events
https://bugs.lttng.org/issues/936 - 2015-09-09T21:21:52Z - Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>
Disabling all events in the UST domain does not work.
<pre>
lttng create mysession
lttng enable-event test -u
lttng enable-event test2 -u
lttng disable-event -u -a
lttng list mysession
killall lttng-sessiond
</pre>
Result:
<pre>
Spawning a session daemon
Session mysession created.
Traces will be written in /home/jonathan/lttng-traces/mysession-20150909-171434
UST event test created in channel channel0
UST event test2 created in channel channel0
Error: UST event not found
Tracing session mysession: [inactive]
Trace path: /home/jonathan/lttng-traces/mysession-20150909-171434
=== Domain: UST global ===
Buffer type: per UID
Channels:
-------------
- channel0: [enabled]
Attributes:
overwrite mode: 0
subbufers size: 131072
number of subbufers: 4
switch timer interval: 0
read timer interval: 0
trace file count: 0
trace file size (bytes): 0
output: mmap()
Events:
test2 (type: tracepoint) [enabled]
test (type: tracepoint) [enabled]
</pre>
Looks like it works when using enable-event -u -a:
<pre>
Traces will be written in /home/jonathan/lttng-traces/mysession-20150909-171544
All UST events are enabled in channel channel0
All UST events are disabled in channel channel0
Tracing session mysession: [inactive]
Trace path: /home/jonathan/lttng-traces/mysession-20150909-171544
=== Domain: UST global ===
Buffer type: per UID
Channels:
-------------
- channel0: [enabled]
Attributes:
overwrite mode: 0
subbufers size: 131072
number of subbufers: 4
switch timer interval: 0
read timer interval: 0
trace file count: 0
trace file size (bytes): 0
output: mmap()
Events:
* (type: tracepoint) [disabled]
</pre>
The behaviour on kernel/python/jul/log4j is to disable all events:
<pre>
Session mysession created.
Traces will be written in /home/jonathan/lttng-traces/mysession-20150909-171705
Kernel event test created in channel channel0
All Kernel events are disabled in channel channel0
Tracing session mysession: [inactive]
Trace path: /home/jonathan/lttng-traces/mysession-20150909-171705
=== Domain: Kernel ===
Channels:
-------------
- channel0: [enabled]
Attributes:
overwrite mode: 0
subbufers size: 262144
number of subbufers: 4
switch timer interval: 0
read timer interval: 200000
trace file count: 0
trace file size (bytes): 0
output: splice()
Events:
test (loglevel: TRACE_EMERG (0)) (type: tracepoint) [disabled]
</pre>

LTTng-tools - Bug #925 (Resolved): Disabling a kernel event disables all kernel events
https://bugs.lttng.org/issues/925 - 2015-09-01T19:41:49Z - Jérémie Galarneau <jeremie.galarneau@efficios.com>

LTTng-tools - Bug #915 (Resolved): Support symlinks for lttng-crash
https://bugs.lttng.org/issues/915 - 2015-09-01T17:58:59Z - Jérémie Galarneau <jeremie.galarneau@efficios.com>

LTTng - Bug #914 (Resolved): lttng: Trace is going on even though event has been disabled
https://bugs.lttng.org/issues/914 - 2015-09-01T01:44:26Z - jia fang <fang.jia@windriver.com>
Hi guys,

Issue: events are still being traced after I disable them.
My detailed log is attached.
Please help me, thanks.

The lttng version is:
lttng version 2.7.0-pre - Gaia - v2.6.0-rc1-242-g60f7035

The PC runs Ubuntu 14.04.

I use my app "hello" as described in http://lttng.org/docs/v2.6/#doc-tracing-your-own-user-application, but I modified hello.c as shown in the attachment.
To reproduce my issue:
1/ lttng create mysession
2/ lttng enable-event hello_world:my_first_tracepoint --filter '$ctx.procname == "./hello*"' --session mysession -u -c channel
3/ lttng enable-event hello_world:my_first_tracepoint --session mysession -u -c channel
4/ ./hello and press Enter to start the tracepoint
5/ lttng disable-event --session mysession --channel channel --userspace hello_world:my_first_tracepoint
6/ lttng start, and about 4 seconds later, lttng stop
7/ lttng view
Then you can see that events are still being recorded in the trace even though you have disabled the event with the filter.

LTTng-tools - Bug #912 (Resolved): Standalone versioning for MI API (XML)
https://bugs.lttng.org/issues/912 - 2015-08-27T19:44:51Z - Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>
The MI should have its own versioning, as it is an API.

The representation of concepts might force a break in backward compatibility, but the MI should always try to minimize the need for a version bump.

LTTng-UST - Bug #903 (Resolved): Make distcheck fails
https://bugs.lttng.org/issues/903 - 2015-08-10T17:50:39Z - Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>
The make distcheck command from automake fails.
This is due to running make in a read-only extracted dist tarball.
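For reference, a minimal reproduction sketch (assuming an lttng-ust checkout with the usual autotools bootstrap):
<pre>
./bootstrap
./configure
# distcheck packs a tarball, extracts it with read-only sources and
# builds out of tree; the doc examples then try to write object files
# into the read-only source directory and fail as shown below.
make distcheck
</pre>
The failure looks like this: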
<pre>
make[5]: Entering directory '/home/jonathan/lttng/lttng-ust/lttng-ust-2.7.0-rc1/_build/doc/examples/easy-ust'
gcc -I. -I../../../../include/ -I../../../include/ \
-Wall -g -O2 -c -o sample.o sample.c
gcc -I. -I../../../../include/ -I../../../include/ \
-Wall -g -O2 -c -o tp.o tp.c
Assembler messages:
Fatal error: can't create sample.o: Permission denied
Makefile:34: recipe for target 'sample.o' failed
make[5]: *** [sample.o] Error 1
make[5]: *** Waiting for unfinished jobs....
Assembler messages:
Fatal error: can't create tp.o: Permission denied
Makefile:38: recipe for target 'tp.o' failed
make[5]: *** [tp.o] Error 1
make[5]: Leaving directory '/home/jonathan/lttng/lttng-ust/lttng-ust-2.7.0-rc1/_build/doc/examples/easy-ust'
Makefile:885: recipe for target 'all-local' failed
make[4]: *** [all-local] Error 1
make[4]: Leaving directory '/home/jonathan/lttng/lttng-ust/lttng-ust-2.7.0-rc1/_build/doc/examples'
Makefile:536: recipe for target 'all-recursive' failed
make[3]: *** [all-recursive] Error 1
make[3]: Leaving directory '/home/jonathan/lttng/lttng-ust/lttng-ust-2.7.0-rc1/_build/doc'
Makefile:558: recipe for target 'all-recursive' failed
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory '/home/jonathan/lttng/lttng-ust/lttng-ust-2.7.0-rc1/_build'
Makefile:433: recipe for target 'all' failed
make[1]: *** [all] Error 2
make[1]: Leaving directory '/home/jonathan/lttng/lttng-ust/lttng-ust-2.7.0-rc1/_build'
Makefile:763: recipe for target 'distcheck' failed
make: *** [distcheck] Error 1
</pre>

LTTng-tools - Bug #882 (Resolved): destroying one session stops tracing on all other sessions
https://bugs.lttng.org/issues/882 - 2015-03-02T19:01:10Z - Anand Neeli <anand.neeli@gmail.com>
With lttng 2.6.0 and liburcu 0.8.6, we see the following issue:
On a multi-session setup with relayd, destroying one session shows errors on the console and tracing of all other sessions is stopped.

Steps to recreate are as follows:
1) Create multiple sessions with relayd (in the logs below I have created 2 sessions).
2) Destroy one session; tracing on all the sessions then stops.
(I have not checked this with a single session; it could be happening with a single session also.)
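For clarity, a hedged sketch of the kind of setup used here (the relay URL, channel options and session names are illustrative, inferred from the listings below):
<pre>
# On the relay host
lttng-relayd -o /var/tmp/lttng-traces -d

# On the traced node, create two live sessions streaming to the relay
lttng create mysession --set-url net://128.0.0.4 --live 2000000
lttng enable-channel -u mychannel --tracefile-count 2 --tracefile-size 2000000
lttng enable-event -u -a -c mychannel
lttng start

lttng create mys5 --set-url net://128.0.0.4 --live 2000000
lttng enable-channel -u myc5 --tracefile-count 2 --tracefile-size 2000000
lttng enable-event -u -a -c myc5
lttng start

# Destroying one session then disrupts the other
lttng destroy mys5
</pre>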
Logs
---------
<pre>
node-a # lttng list
Available tracing sessions:
  1) mys5 (tcp4://128.0.0.4:5342/ [data: 5343]) [active]
     Trace path: tcp4://128.0.0.4:5342/ [data: 5343]
     Live timer interval (usec): 2000000

  2) mysession (tcp4://128.0.0.4:5342/ [data: 5343]) [active]
     Trace path: tcp4://128.0.0.4:5342/ [data: 5343]
     Live timer interval (usec): 2000000
</pre>
Use lttng list <session_name> for more details
<pre>
node-a # lttng list mys5
Tracing session mys5: [active]
    Trace path: tcp4://128.0.0.4:5342/ [data: 5343]

=== Domain: UST global ===

Buffer type: per PID

Channels:
-------------
- myc5: [enabled]

    Attributes:
      overwrite mode: 0
      subbufers size: 4096
      number of subbufers: 4
      switch timer interval: 0
      read timer interval: 0
      trace file count: 2
      trace file size (bytes): 2000000
      output: mmap()

    Events:
      * (type: tracepoint) [enabled] [has exclusions]

node-a # lttng list mysession
Tracing session mysession: [active]
    Trace path: tcp4://128.0.0.4:5342/ [data: 5343]

=== Domain: UST global ===

Buffer type: per PID

Channels:
-------------
- mychannel: [enabled]

    Attributes:
      overwrite mode: 0
      subbufers size: 4096
      number of subbufers: 4
      switch timer interval: 0
      read timer interval: 0
      trace file count: 2
      trace file size (bytes): 2000000
      output: mmap()

    Events:
      * (type: tracepoint) [enabled] [has exclusions]
</pre>
<pre>
node-a # lttng destroy mys5    <<<<<<<<<<<<<<<<<<<< destroying session here
Error: Pushing metadata
Error: Handling metadata request
Error: Health error occurred in thread_manage_consumer
Error: Pushing metadata
Error: Handling metadata request
Error: Health error occurred in thread_manage_consumer
PERROR - 04:36:15.258880 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
PERROR - 04:36:15.259520 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.260076 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
Session mys5 destroyed
node-a # PERROR - 04:36:15.106369 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106410 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106437 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106474 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106490 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106500 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106509 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106519 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106528 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106537 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106546 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106555 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106564 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106573 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106581 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106590 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106599 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106608 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106616 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106625 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106636 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106645 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106654 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106663 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106672 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106681 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106690 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106699 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106708 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.106717 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
</pre>
<pre>
node-a # ps aux | grep lttng
root 1622 0.1 0.0 495020 1160 ? Ssl 04:34 0:00 lttng-relayd -o /var/tmp/lttng-traces -d
root 1631 0.0 0.1 854020 6360 ? Ssl 04:34 0:00 lttng-sessiond --consumerd32-path /usr/lib/lttng/libexec/lttng-consumerd --consumerd32-libdir /usr/lib/ --consumerd64-path /usr/lib64/lttng/libexec/lttng-consumerd --consumerd64-libdir /usr/lib64/ -b --no-kernel
root 1643 0.0 0.0 546356 3424 ? Sl 04:34 0:00 lttng-consumerd -u --consumerd-cmd-sock /var/run/lttng/ustconsumerd64/command --consumerd-err-sock /var/run/lttng/ustconsumerd64/error --group tracing
root 1651 0.0 0.0 64068 1852 ? Sl 04:34 0:00 lttng-consumerd -u --consumerd-cmd-sock /var/run/lttng/ustconsumerd32/command --consumerd-err-sock /var/run/lttng/ustconsumerd32/error --group tracing
</pre>

LTTng-tools - Bug #878 (Resolved): lttng-sessiond cannot unload lttng-modules when live session e...
https://bugs.lttng.org/issues/878 - 2015-02-03T02:59:16Z - Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
After the following set of commands:
<pre>
lttng-sessiond    # as root
lttng create --live; lttng enable-channel -k test; lttng start
</pre>
If we CTRL-C lttng-sessiond, this appears:
<pre>
^CError: Unable to remove module lttng-ring-buffer-client-discard
Error: Unable to remove module lttng-lib-ring-buffer
Error: Unable to remove module lttng-tracer
</pre>
It appears there is still a refcount held by the sessiond when we try to remove the modules in this scenario. It could be a file that should have been closed but was not.
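As a diagnostic hint, a minimal sketch of how one might inspect the leftover references after the CTRL-C (module names are taken from the error messages above; the commands are generic, not specific to LTTng):
<pre>
# Show which lttng modules are still loaded and their use counts
lsmod | grep lttng
# Trying to remove the tracer module by hand fails while a reference
# (for example an open file) is still held
modprobe -r lttng-tracer
</pre>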