LTTng bugs repository: Issues — https://bugs.lttng.org/ (feed last updated 2024-03-13T17:07:08Z)
LTTng-tools - Bug #1411 (Feedback): Memory leak when relay daemon exits before application starts — https://bugs.lttng.org/issues/1411 (2024-03-13, Mikael Beckius)
<p>When the relay daemon is shut down after a live session is created, but before applications are started, the shared memory allocated for tracing appears to remain allocated, and new memory is allocated on every application start.</p>
<p>How to reproduce:<br />host:~# lttng create micke --live<br />Spawning a session daemon<br />Spawning a relayd daemon<br />Live session micke created.<br />Traces will be output to tcp4://127.0.0.1:5342/ [data: 5343]<br />Live timer interval set to 1000000 us</p>
<p>host:~# lttng enable-event --userspace --all<br />All ust events are enabled in channel channel0</p>
<p>host:~# lttng start<br />Tracing started for session micke</p>
<p>host:~# killall -9 lttng-relayd</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 207Mi 15Gi 572Ki 320Mi 15Gi<br />Swap: 0B 0B 0B</p>
<p>host:~# ./micke-lttng<br />Mikael LTTNG 2015 - Starting<br />Mikael LTTNG 2015 - Signing out</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 248Mi 15Gi 40Mi 360Mi 15Gi<br />Swap: 0B 0B 0B</p>
<p>host:~# ./micke-lttng<br />Mikael LTTNG 2015 - Starting<br />Mikael LTTNG 2015 - Signing out</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 288Mi 15Gi 80Mi 400Mi 15Gi<br />Swap: 0B 0B 0B</p>
<p>host:~# lttng destroy micke<br />Destroying session micke..<br />Session micke destroyed</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 289Mi 15Gi 80Mi 400Mi 15Gi<br />Swap: 0B 0B 0B<br />host:~#</p>
<p>Version:<br />lttng-tools 2.13.11</p>
<p>Analysis:<br />It seems that when the first application of a session starts after the relay daemon has been shut down, a failure to transfer streams to the relay daemon triggers a cleanup through a call to ust_consumer_destroy_channel. However, the cleanup appears to be incomplete and the channel reference count remains incremented. Decrementing the reference count appears to be blocked in clean_channel_stream_list by stream->monitor = 0, preventing CONSUMER_CHANNEL_DEL from reaching consumer_del_channel(chan).</p>
<p>Word has it that this problem is NOT reproduced on 2.13, but I haven't tested that myself.</p>

LTTng-tools - Bug #1392 (Feedback): lttng view: Stream 0 is not declared in metadata — https://bugs.lttng.org/issues/1392 (2023-09-27, Ricardo Nabinger Sanchez, rnsanchez@gmail.com)
<p>While trying to capture UST events from a simple program, even though kernel events seem to be properly captured, UST events are not. Trace Compass cannot decode the UST data (but it can decode from the kernel channel).<br />The application is single-threaded, performs only disk and stdout I/O, nothing fancy. Compiled with <code>-finstrument-functions</code>, no optimizations, and with debugging symbols.</p>
<p>Session setup:<br /><pre>
# lttng create
# lttng enable-channel -u --subbuf-size=64M --num-subbuf=8 chan_ust
# lttng enable-channel -k --subbuf-size=64M --num-subbuf=2 chan_kernel
# lttng add-context -u -t vpid -t ip -t procname -t vtid -c chan_ust
# lttng enable-event -u -a -c chan_ust
# lttng enable-event -k -c chan_kernel --syscall --all
# lttng enable-event -k -c chan_kernel lttng_statedump_block_device,lttng_statedump_file_descriptor,lttng_statedump_process_state,mm_page_alloc,mm_page_free,net_dev_xmit,netif_receive_skb,sched_pi_setprio,sched_process_exec,sched_process_fork,sched_switch,sched_wakeup,sched_waking,softirq_entry,softirq_exit,softirq_raise
# lttng start
(possibly excessive preloads, but these are from my notes, collecting data on Varnish Cache)
# LTTNG_UST_BLOCKING_RETRY_TIMEOUT=-1 LD_PRELOAD=liblttng-ust-fd.so:liblttng-ust-fork.so:liblttng-ust-dl.so:liblttng-ust-cyg-profile.so:liblttng-ust-libc-wrapper.so:liblttng-ust-pthread-wrapper.so tests/test-playlist tests/playlist/bug_ffmpeg_1000-x20.m3u
# lttng stop
# lttng view > /dev/null
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
</pre></p>
Environment:
<ul>
<li>lttng-modules-2.13.10 (from tarball)</li>
<li>lttng-tools-2.13.11 (from tarball)</li>
<li>lttng-ust-2.13.6 (from tarball)</li>
<li>babeltrace (git; at <code>v1.5.11</code>)</li>
<li>Linux kernel 6.6.0-rc1</li>
<li>Slackware64-current, up-to-date as of 2023-09-26</li>
<li>Trace Compass 9.1.0.202309200833</li>
</ul>
<p>IRC snippet:<br /><pre>
Sep/27 16:13:54 rnsanchez I'm trying to find out what I'm doing wrong.. I keep getting "[error] Stream 0 is not declared in metadata." in my captures. I guess this is making tracecompass not being able to show UST events (simple function enter-exit instrumentation)
Sep/27 16:16:46 rnsanchez using my old notes, while checking https://lttng.org/docs/v2.13/#doc-liblttng-ust-cyg-profile and other sections
Sep/27 16:18:25 jgalar rnsanchez: shot in the dark, but are you making sure the session is stopped or destroyed before reading the trace?
Sep/27 16:18:28 rnsanchez if it helps knowing, events from kernel channel are fine and I can see them in tracecompass (the trace is huge)
Sep/27 16:18:52 rnsanchez jgalar: yes, with lttng stop, then I copy the data from root to my user
Sep/27 16:19:10 rnsanchez but I get that Stream 0 regardless.. it's from lttng view
Sep/27 16:20:11 jgalar it sounds like a metadata flushing issue, any chance you can share the trace? (or at least, the metadata files)
Sep/27 16:20:42 rnsanchez yes, sure. just a moment while I upload it
Sep/27 16:20:48 jgalar okay thanks
Sep/27 16:21:43 ebugden not sure if related, but it's triggering deja vu of this issue: https://github.com/efficios/barectf/commit/d024537859c1d869bfa1cedc8abe8e3f7a648faa
Sep/27 16:22:23 ebugden maybe a similar problem in a similar area in lttng view
Sep/27 16:25:28 rnsanchez jgalar: https://rnsanchez.wait4.org/auto-20230927-161138.tar.xz
Sep/27 16:27:33 rnsanchez jgalar: this last one has incomplete events as I was trying to investigate, but I can bundle another one with full events as I used to collect
Sep/27 16:31:51 rnsanchez ebugden: is there anything on my side I can do?
Sep/27 16:34:38 ebugden rnsanchez: i don't feel i'm familiar enough with this situation to say; i was tossing out the association as possible food for thought for jgalar
Sep/27 16:35:12 rnsanchez oh ok :-)
Sep/27 16:43:03 rnsanchez jgalar: just in case, here is with full events: https://rnsanchez.wait4.org/auto-20230927-164014.tar.xz
Sep/27 16:45:53 jgalar ebugden: yup, the errors are indeed similar
Sep/27 16:50:59 jgalar hmm, the user space trace's metadata doesn't declare any stream class, that's... unexpected...
Sep/27 16:51:14 jgalar which versions of lttng-ust and lttng-tools are you running?
Sep/27 16:55:03 rnsanchez ust 2.13.6 (tarball) and tools 2.13.11 (tarball too)
Sep/27 16:55:07 rnsanchez compiled today
Sep/27 16:55:13 jgalar okay
Sep/27 17:17:57 ebugden rnsanchez (cc: jgalar): would you open a bug report on https://bugs.lttng.org/ ?
Sep/27 17:18:16 rnsanchez ebugden: sure
Sep/27 17:19:21 ebugden thanks! (we have a few hypotheses, but we'd need more information about exactly how the trace is generated)
Sep/27 17:19:58 jgalar rnsanchez: it's weird, the user space trace's stream files just have the packet headers... if you can reproduce the problem while running lttng-sessiond with the `-vvv --verbose-consumer` options to get the logs, it would be helpful
Sep/27 17:20:47 rnsanchez I can try. I suppose I have to finish the sessiond already running and manually launch another?
Sep/27 17:21:06 jgalar yep
</pre></p>

LTTng - Bug #1385 (Feedback): [lttng-live] error: receive a message in the past — https://bugs.lttng.org/issues/1385 (2023-08-16, Bin Yuan)
<p>My program works well most of the time, but in rare cases it reports the highlighted error below.<br /><img src="https://bugs.lttng.org/attachments/download/572/clipboard-202308170250-vfley.jpg" alt="" loading="lazy" /></p>
<p>What can lead to this branch? Any suggestions for debugging?</p>

LTTng - Bug #1370 (Confirmed): Why "lttng create --live" spawns a local relay daemon but not in ... — https://bugs.lttng.org/issues/1370 (2023-04-06, Bin Yuan)
<p>The relayd spawned by the lttng create command does not close file descriptors such as stdout.<br />Why not spawn the relayd with the "--daemonize" option?</p>

LTTng - Bug #1369 (Feedback): Nano clock value overflows the signed 64-bit integer range on the b... — https://bugs.lttng.org/issues/1369 (2023-03-26, Bin Yuan)
<p>I launched a program producing traces through the lttng consumerd and lttng relayd, and launched a babeltrace command as an lttng-live consumer to display the trace in real time.</p>
<p>It works well when the channel is set to per-user mode. But when it is changed to per-process per-user mode, with no other changes, the babeltrace2 client crashes with an overflow error:</p>
<p>The last error message of babeltrace2 exception stack shows :</p>
<p>"Cannot convert cycle to nanoseconds from origin for given clock class: value overflows the signed 64-bit integer range: cc-addr=****, cc-name="monotonic", cc-freq=1000000000, ....."</p>
<p>What difference causes this crash? How can I solve it, since the per-process per-user mode is preferred for me?</p>
<p>Version info:<br />babeltrace2: 2.0.4<br />lttng commands: 2.12.12</p>

LTTng-tools - Bug #1362 (New): Listing a trigger with an event rule matches condition with a upro... — https://bugs.lttng.org/issues/1362 (2022-10-24, Jérémie Galarneau, jeremie.galarneau@efficios.com)
<p>Running against <code>d67dc273f</code>.</p>
<p>When listing triggers, I get a crash:</p>
<pre>
[root@carbonara lttng-tools]# /tmp/lttng/bin/lttng list-triggers
- name: uprobe_trigger
owner uid: 0
condition: event rule matches
rule: uprobe_trigger (type: kernel:uprobe, location type: ELF, location: tests/regression/tools/notification//../../..//utils/testapp/userspace-probe-elf-binary/.libs/userspace-probe-elf-binary:test_function)
errors: none
action:notify
lttng: ../../src/common/dynamic-array.hpp:53: void* lttng_dynamic_array_get_element(const lttng_dynamic_array*, size_t): Assertion `element_index < array->size' failed.
Aborted (core dumped)
</pre>
<p>gdb output:<br /><pre>
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Core was generated by `/tmp/lttng/bin/lttng list-triggers'.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007ffbb3aa164c in ?? () from /usr/lib/libc.so.6
(gdb) bt
#0 0x00007ffbb3aa164c in ?? () from /usr/lib/libc.so.6
#1 0x00007ffbb3a51958 in raise () from /usr/lib/libc.so.6
#2 0x00007ffbb3a3b53d in abort () from /usr/lib/libc.so.6
#3 0x00007ffbb3a3b45c in ?? () from /usr/lib/libc.so.6
#4 0x00007ffbb3a4a486 in __assert_fail () from /usr/lib/libc.so.6
#5 0x0000564a49b9d130 in lttng_dynamic_array_get_element (array=0x564a4ab6c040, element_index=0) at ../../src/common/dynamic-array.hpp:53
#6 0x0000564a49b9d378 in lttng_action_path_copy (src=0x564a4ab6c040, dst=0x564a4ab6c540) at actions/path.cpp:110
#7 0x0000564a49b4d505 in lttng_error_query_action_create (trigger=0x564a4ab6b5d0, action_path=0x564a4ab6c040) at error-query.cpp:232
#8 0x0000564a49b3ee7c in print_action_errors (trigger=0x564a4ab6b5d0, action_path_indexes=0x0, action_path_length=0) at commands/list_triggers.cpp:765
#9 0x0000564a49b3f99a in print_one_action (trigger=0x564a4ab6b5d0, action=0x564a4ab6b4e0, action_path_indexes=0x0, action_path_length=0) at commands/list_triggers.cpp:965
#10 0x0000564a49b400c7 in print_one_trigger (trigger=0x564a4ab6b5d0) at commands/list_triggers.cpp:1123
#11 0x0000564a49b4036e in print_sorted_triggers (triggers=0x564a4ab6b110) at commands/list_triggers.cpp:1201
#12 0x0000564a49b40c92 in cmd_list_triggers (argc=0, argv=0x7fffa9404f18) at commands/list_triggers.cpp:1420
#13 0x0000564a49b438d6 in handle_command (argc=1, argv=0x7fffa9404f10) at lttng.cpp:238
#14 0x0000564a49b4402c in parse_args (argc=2, argv=0x7fffa9404f08) at lttng.cpp:427
#15 0x0000564a49b441a8 in main (argc=2, argv=0x7fffa9404f08) at lttng.cpp:476
</pre></p>
<p>This is a trigger created by the following test:<br /><code>tests/regression/tools/notification/test_notification_kernel_userspace_probe</code></p>
<p>It can be reproduced by stopping the test before invoking the event-generating application.</p>

LTTng-tools - Bug #1342 (New): Rotation thread's handling of session consumed size notifications ... — https://bugs.lttng.org/issues/1342 (2021-12-09, Jérémie Galarneau, jeremie.galarneau@efficios.com)
<p>Related to this change:<br /><a class="external" href="https://review.lttng.org/c/lttng-tools/+/6886/">https://review.lttng.org/c/lttng-tools/+/6886/</a></p>
The race described in that change has another consequence: the notification could refer to a different session instance of the same name:
<ul>
<li>notification emitted for session <code>foo</code>,</li>
<li>session <code>foo</code> is destroyed,</li>
<li>a new session named <code>foo</code> is created,</li>
<li>a rotation is triggered for <code>foo</code>.</li>
</ul>
There are multiple solutions for this:
<ul>
<li>Provide a unique session id as part of the evaluation of <code>SESSION_CONSUMED_SIZE</code>,</li>
<li>Register an internal trigger that has the <code>rotate session</code> action (since ids are sampled during their handling).</li>
</ul>
<p>I can't work on a fix right now, but here is a placeholder patch that will eventually contain the fix:<br /><a class="external" href="https://review.lttng.org/c/lttng-tools/+/6907">https://review.lttng.org/c/lttng-tools/+/6907</a></p>
<p>This is hypothetical and I noticed it during a code review; nobody reported this problem.</p>

LTTng-tools - Bug #1324 (New): lttng_enable_event() and lttng_enable_event_with_filter() cannot a... — https://bugs.lttng.org/issues/1324 (2021-08-24, Philippe Proulx, eeppeliteloop@gmail.com)
<p>Currently:</p>
<ul>
<li><code>lttng_enable_event()</code> calls <code>lttng_enable_event_with_exclusions()</code>, passing no filter expression and no event name exclusion patterns.</li>
<li><code>lttng_enable_event_with_filter()</code> calls <code>lttng_enable_event_with_exclusions()</code>, passing no event name exclusion patterns.</li>
</ul>
<p>This means that if you want to enable a recording event rule described with a filter expression and event name exclusion patterns, you need to:</p>
<ol>
<li>Get the filter expression from its descriptor with <code>lttng_event_get_filter_expression()</code>.</li>
<li>Build an array of event name exclusion patterns, getting them from its descriptor with <code>lttng_event_get_exclusion_name_count()</code> and <code>lttng_event_get_exclusion_name()</code>.</li>
<li>Call <code>lttng_enable_event_with_exclusions()</code>.</li>
</ol>
<p>This is what you would need to do to blindly enable a disabled recording event rule of which the descriptor comes from <code>lttng_list_events()</code>.</p>
<p><code>lttng_enable_event()</code> and <code>lttng_enable_event_with_filter()</code> could do the steps above for you, if this doesn't break any backward compatibility.</p>

LTTng - Bug #1315 (Feedback): Kernel panics after `pkill lttng`; root session daemon has active t... — https://bugs.lttng.org/issues/1315 (2021-05-07, Philippe Proulx, eeppeliteloop@gmail.com)
<p>I don't know how to reproduce this bug.</p>
<p>I played with the <code>lttng add-trigger</code> command while writing usage examples for its manual page.</p>
<p>I started the root session daemon as such:</p>
<pre>
# lttng-sessiond --daemonize --group=eepp
</pre>
<p>I then ran commands such as:</p>
<pre>
$ lttng add-trigger --name user --condition=event-rule-matches --domain=user --action=notify
</pre>
<pre>
$ lttng add-trigger --condition=event-rule-matches \
--domain=user --action=notify \
--rate-policy=every:10
</pre>
<pre>
$ lttng add-trigger --owner-uid=33 \
--condition=event-rule-matches \
--domain=kernel --name='sched*' \
--action=notify
</pre>
<pre>
$ lttng add-trigger --condition=event-rule-matches \
--domain=kernel --type=syscall \
--filter='fd < 3' \
--action=start-session my-session \
--rate-policy=once-after:40
</pre>
<p>Note that I had no tracing session named <code>my-session</code>.</p>
<p>After a few minutes, Xorg froze. I managed to log in again from a virtual console. I ran:</p>
<pre>
# pkill lttng
</pre>
<p>and got an instant kernel panic.</p>
<p>Attached is a photo of what was on the screen after running <code>pkill</code>.</p>
<p>Using:</p>
<ul>
<li>LTTng-tools <code>60860e547ce31ea629e846e00b66342425474b8d</code>.</li>
<li>LTTng-UST <code>a0f2513af262a19822d46f84cd5e34be0badc484</code></li>
<li>LTTng-modules <code>51ef453614a6db2b778595b16d93283d25db974a</code></li>
<li>liburcu <code>5e1b7c840a2b21b8442b322cedbb70a790e49520</code></li>
</ul>

LTTng-tools - Bug #1219 (New): regression/tools/clear intermittent failure on powerpc — https://bugs.lttng.org/issues/1219 (2020-02-06, Michael Jeanson, mjeanson@efficios.com)
<p>This sometimes fails on the 2.12 branch on powerpc. It might not be related to the architecture, just to the fact that our ppc builders are slow.</p>
<pre>
15:14:14 # Waiting for live viewers on url: net://localhost
15:14:14 ok 955 - Waiting for live viewers on url: net://localhost
15:14:14 PASS: tools/clear/test_ust 955 - Waiting for live viewers on url: net://localhost
15:14:14 02-05 15:04:36.302 27915 27915 E PLUGIN/CTF/MSG-ITER create_msg_stream_end@msg-iter.c:2520 [lttng-live] Cannot create stream end message because stream is NULL: msg-it-addr=0x1b3d820
15:14:14 02-05 15:04:36.302 27915 27915 E PLUGIN/SRC.CTF.LTTNG-LIVE lttng_live_iterator_close_stream@lttng-live.c:855 [lttng-live] Error getting the next message from CTF message iterator
15:14:14 02-05 15:04:36.302 27915 27915 W LIB/MSG-ITER bt_self_component_port_input_message_iterator_next@iterator.c:920 Component input port message iterator's "next" method failed: iter-addr=0x1b3b680, iter-upstream-comp-name="lttng-live", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE, iter-upstream-comp-class-name="lttng-live", iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
15:14:14 02-05 15:04:36.302 27915 27915 E PLUGIN/FLT.UTILS.MUXER muxer_upstream_msg_iter_next@muxer.c:444 [muxer] Error or unsupported status code: status-code=-1
15:14:14 02-05 15:04:36.302 27915 27915 E PLUGIN/FLT.UTILS.MUXER validate_muxer_upstream_msg_iters@muxer.c:975 [muxer] Cannot validate muxer's upstream message iterator wrapper: muxer-msg-iter-addr=0x1b36ec0, muxer-upstream-msg-iter-wrap-addr=0x1b3b410
15:14:14 02-05 15:04:36.302 27915 27915 E PLUGIN/FLT.UTILS.MUXER muxer_msg_iter_next@muxer.c:1371 [muxer] Cannot get next message: comp-addr=0x1b3aae0, muxer-comp-addr=0x1b3aa20, muxer-msg-iter-addr=0x1b36ec0, msg-iter-addr=0x1b3b510, status=ERROR
15:14:14 02-05 15:04:36.302 27915 27915 W LIB/MSG-ITER bt_self_component_port_input_message_iterator_next@iterator.c:920 Component input port message iterator's "next" method failed: iter-addr=0x1b3b510, iter-upstream-comp-name="muxer", iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER, iter-upstream-comp-class-name="muxer", iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu", iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
15:14:14 02-05 15:04:36.302 27915 27915 W LIB/GRAPH consume_graph_sink@graph.c:600 Component's "consume" method failed: status=ERROR, comp-addr=0x1b3add0, comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK, comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages (`text` fo", comp-class-is-frozen=1, comp-class-so-handle-addr=0x1b45cc0, comp-class-so-handle-path="/home/jenkins/workspace/lttng-tools_stable-2.12_slesbuild/arch/sles12sp2/babeltrace_version/stable-2.0/build/std/conf/std/liburcu_version/stable-0.11/test_type/base/deps/build/lib/babeltrace2/plugins/babeltrace-plugin-text.so", comp-input-port-count=1, comp-output-port-count=0
15:14:14 02-05 15:04:36.302 27915 27915 E CLI cmd_run@babeltrace2.c:2590 Graph failed to complete successfully
15:14:14
15:14:14 ERROR: [Babeltrace CLI] (babeltrace2.c:2590)
15:14:14 Graph failed to complete successfully
15:14:14 CAUSED BY [libbabeltrace2] (graph.c:600)
15:14:14 Component's "consume" method failed: status=ERROR, comp-addr=0x1b3add0,
15:14:14 comp-name="pretty", comp-log-level=WARNING, comp-class-type=SINK,
15:14:14 comp-class-name="pretty", comp-class-partial-descr="Pretty-print messages
15:14:14 (`text` fo", comp-class-is-frozen=1, comp-class-so-handle-addr=0x1b45cc0,
15:14:14 comp-class-so-handle-path="/home/jenkins/workspace/lttng-tools_stable-2.12_slesbuild/arch/sles12sp2/babeltrace_version/stable-2.0/build/std/conf/std/liburcu_version/stable-0.11/test_type/base/deps/build/lib/babeltrace2/plugins/babeltrace-plugin-text.so",
15:14:14 comp-input-port-count=1, comp-output-port-count=0
15:14:14 CAUSED BY [libbabeltrace2] (iterator.c:920)
15:14:14 Component input port message iterator's "next" method failed:
15:14:14 iter-addr=0x1b3b510, iter-upstream-comp-name="muxer",
15:14:14 iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=FILTER,
15:14:14 iter-upstream-comp-class-name="muxer",
15:14:14 iter-upstream-comp-class-partial-descr="Sort messages from multiple inpu",
15:14:14 iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
15:14:14 CAUSED BY [libbabeltrace2] (iterator.c:920)
15:14:14 Component input port message iterator's "next" method failed:
15:14:14 iter-addr=0x1b3b680, iter-upstream-comp-name="lttng-live",
15:14:14 iter-upstream-comp-log-level=WARNING, iter-upstream-comp-class-type=SOURCE,
15:14:14 iter-upstream-comp-class-name="lttng-live",
15:14:14 iter-upstream-comp-class-partial-descr="Connect to an LTTng relay daemon",
15:14:14 iter-upstream-port-type=OUTPUT, iter-upstream-port-name="out", status=ERROR
15:14:14 CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (lttng-live.c:855)
15:14:14 Error getting the next message from CTF message iterator
15:14:14 CAUSED BY [lttng-live: 'source.ctf.lttng-live'] (msg-iter.c:2520)
15:14:14 Cannot create stream end message because stream is NULL: msg-it-addr=0x1b3d820
15:14:14 ok 956 - Clear session RCZWkxL3yzk8ayBb
15:14:14 PASS: tools/clear/test_ust 956 - Clear session RCZWkxL3yzk8ayBb
15:14:14 ok 957 - Stop lttng tracing for session RCZWkxL3yzk8ayBb
15:14:14 PASS: tools/clear/test_ust 957 - Stop lttng tracing for session RCZWkxL3yzk8ayBb
15:14:14 ok 958 - Destroy session RCZWkxL3yzk8ayBb
15:14:14 PASS: tools/clear/test_ust 958 - Destroy session RCZWkxL3yzk8ayBb
15:14:14 # Wait for viewer to exit
15:14:14 not ok 959 - Babeltrace succeeds
15:14:14 FAIL: tools/clear/test_ust 959 - Babeltrace succeeds
15:14:14 # Failed test 'Babeltrace succeeds'
15:14:14 # in ./tools/clear/test_ust:test_ust_streaming_live_viewer() at line 292.
</pre>

LTTng-tools - Feature #1180 (New): SDT tracing does not work when the probes are compiled with se... — https://bugs.lttng.org/issues/1180 (2019-03-27, Naser Ez)
<p>Tracing SDT probes does not work when the probes are compiled with semaphores.</p>
<p>For example, when we have a probe like:</p>
<pre>
Provider: nginx
Name: http__subrequest__start
Location: 0x0000000000429e9c, Base: 0x0000000000473810, Semaphore: 0x00000000006920ba
Arguments: 8@%rbx__
</pre>
<p>then running the following command:<br /><pre>
lttng enable-event -k nginx:http__subrequest__start --userspace-probe=sdt:/usr/local/nginx/sbin/nginx:nginx:http__subrequest__start
</pre></p>
<p>will generate this error:<br /><pre>
Error: Event nginx:http__subrequest__start: Invalid userspace probe location (channel channel0, session auto-20190327-105118)
</pre></p>

LTTng-tools - Bug #994 (Confirmed): Configured metadata channel not listed for UST domain — https://bugs.lttng.org/issues/994 (2016-01-27, Bernd Hufmann, Bernd.Hufmann@ericsson.com)
<p>I noticed this for LTTng 2.7 and 2.6.</p>
<p>After configuring the metadata channel for UST, the metadata channel is not listed when listing the session. By the way, it works for the kernel domain.</p>
<p>To reproduce:</p>
<blockquote>
<p>lttng create my<br />lttng enable-channel metadata -u -s my --subbuf-size 65536<br />lttng list my</p>
</blockquote>
<p>Tracing session my: [inactive]<br /> Trace path: /home/user/lttng-traces/my-20160127-105230</p>
<p>=== Domain: UST global ===</p>
<p>Buffer type: per UID</p>

LTTng-tools - Bug #987 (Confirmed): Incomplete string comparison pattern — https://bugs.lttng.org/issues/987 (2016-01-06, Mathieu Desnoyers, mathieu.desnoyers@efficios.com)
<p>In a few instances in lttng-tools, strings are compared with strncmp, using strlen() to limit the length of the comparison.</p>
<p>One can quickly find them with:</p>
<p><code>grep -r -A 3 strncmp . | less</code>, then search for <code>strlen</code> or <code>strnlen</code>.</p>
<p>Many of those are OK: they indeed only want to match the beginning of the strings. However, this pattern is incorrectly used in cases where a full match is required, for instance:</p>
<pre>
ust-registry.c:ht_match_event()
./src/bin/lttng/commands/enable_channels.c: if (!strncmp(output_mmap, opt_output, strlen(output_mmap))) {
./src/bin/lttng/commands/enable_channels.c: } else if (!strncmp(output_splice, opt_output, strlen(output_splice))) {
./src/bin/lttng-sessiond/snapshot.c: if (!strncmp(output->name, name, strlen(name))) {
</pre>

LTTng-modules - Feature #950 (New): Exclude specific kernel events when enabling all of them — https://bugs.lttng.org/issues/950 (2015-10-09, Julien Desfossez, jdesfossez@efficios.com)
<p>It would be nice to be able to do that (like in UST):<br />lttng enable-event -k -a --exclude sched_switch</p>
<p>My current workaround:</p>
<pre>
lttng enable-event -k $(lttng list -k | grep -v -E "Kernel|---" | grep -v sched_switch | awk '{print $1}' | tr "\n" ",")
</pre>

LTTng-tools - Bug #901 (In Progress): Some liblttng-ctl functions don't return LTTNG_OK on success — https://bugs.lttng.org/issues/901 (2015-08-05, Jérémie Galarneau, jeremie.galarneau@efficios.com)
<p>It appears that some liblttng-ctl functions, such as lttng_create_session_snapshot(), have conflicting return code conventions.</p>
<p>In this specific case, the header under lttng/session.h asserts that the function will<br /><pre>
/*
[...]
* Return 0 on success else a negative LTTng error code.
*/
</pre></p>
<p>while the header in lttng-ctl.c affirms that it<br /><pre>
/*
[...]
* Returns LTTNG_OK on success or a negative error code.
*/
</pre></p>
<p>and the function actually returns<br /><pre>
ret = lttng_ctl_ask_sessiond_varlen(&lsm, uris, ...
</pre></p>
<p>which itself will<br /><pre>
/*
[...]
Return size of data (only payload, not header) or a negative error code.
*/
</pre></p>
<p>This pattern is used in multiple places, and it breaks code that checks for "ret == LTTNG_OK" instead of "ret < 0".</p>