LTTng bugs repository: Issues (Redmine feed, https://bugs.lttng.org/, updated 2024-03-13)
LTTng-tools - Bug #1411 (Feedback): Memory leak when relay daemon exits before application starts (https://bugs.lttng.org/issues/1411, 2024-03-13, Mikael Beckius)
<p>When the relay daemon is shut down after creating a live session, but before applications are started, the shared memory allocated for tracing appears to remain, and new memory is allocated on every application start.</p>
<p>How to reproduce:<br />host:~# lttng create micke --live<br />Spawning a session daemon<br />Spawning a relayd daemon<br />Live session micke created.<br />Traces will be output to tcp4://127.0.0.1:5342/ [data: 5343]<br />Live timer interval set to 1000000 us</p>
<p>host:~# lttng enable-event --userspace --all<br />All ust events are enabled in channel channel0</p>
<p>host:~# lttng start<br />Tracing started for session micke</p>
<p>host:~# killall -9 lttng-relayd</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 207Mi 15Gi 572Ki 320Mi 15Gi<br />Swap: 0B 0B 0B</p>
<p>host:~# ./micke-lttng<br />Mikael LTTNG 2015 - Starting<br />Mikael LTTNG 2015 - Signing out</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 248Mi 15Gi 40Mi 360Mi 15Gi<br />Swap: 0B 0B 0B</p>
<p>host:~# ./micke-lttng<br />Mikael LTTNG 2015 - Starting<br />Mikael LTTNG 2015 - Signing out</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 288Mi 15Gi 80Mi 400Mi 15Gi<br />Swap: 0B 0B 0B</p>
<p>host:~# lttng destroy micke<br />Destroying session micke..<br />Session micke destroyed</p>
<p>host:~# free -h<br /> total used free shared buff/cache available<br />Mem: 15Gi 289Mi 15Gi 80Mi 400Mi 15Gi<br />Swap: 0B 0B 0B<br />host:~#</p>
<p>Version:<br />lttng-tools 2.13.11</p>
<p>Analysis:<br />It seems that when the first application of a session starts after the relay daemon has been shut down, a failure to transfer streams to the relay daemon triggers a cleanup through a call to ust_consumer_destroy_channel. But the cleanup appears to be incomplete and the channel reference count remains incremented. Decrementing the reference count appears to be blocked in clean_channel_stream_list by streams having monitor = 0, preventing CONSUMER_CHANNEL_DEL from reaching consumer_del_channel(chan).</p>
<p>Reportedly, this problem is NOT reproduced on 2.13, but I haven't tested that myself.</p>
LTTng-tools - Bug #1392 (Feedback): lttng view: Stream 0 is not declared in metadata (https://bugs.lttng.org/issues/1392, 2023-09-27, Ricardo Nabinger Sanchez, rnsanchez@gmail.com)
<p>While trying to capture UST events from a simple program, even though kernel events seem to be properly captured, UST events are not. Trace Compass cannot decode the UST data (but it can decode from the kernel channel).<br />The application is single-threaded, performs only disk and stdout I/O, nothing fancy. Compiled with <code>-finstrument-functions</code>, no optimizations, and with debugging symbols.</p>
<p>Session setup:<br /><pre>
# lttng create
# lttng enable-channel -u --subbuf-size=64M --num-subbuf=8 chan_ust
# lttng enable-channel -k --subbuf-size=64M --num-subbuf=2 chan_kernel
# lttng add-context -u -t vpid -t ip -t procname -t vtid -c chan_ust
# lttng enable-event -u -a -c chan_ust
# lttng enable-event -k -c chan_kernel --syscall --all
# lttng enable-event -k -c chan_kernel lttng_statedump_block_device,lttng_statedump_file_descriptor,lttng_statedump_process_state,mm_page_alloc,mm_page_free,net_dev_xmit,netif_receive_skb,sched_pi_setprio,sched_process_exec,sched_process_fork,sched_switch,sched_wakeup,sched_waking,softirq_entry,softirq_exit,softirq_raise
# lttng start
(possibly excessive preloads, but these are from my notes, collecting data on Varnish Cache)
# LTTNG_UST_BLOCKING_RETRY_TIMEOUT=-1 LD_PRELOAD=liblttng-ust-fd.so:liblttng-ust-fork.so:liblttng-ust-dl.so:liblttng-ust-cyg-profile.so:liblttng-ust-libc-wrapper.so:liblttng-ust-pthread-wrapper.so tests/test-playlist tests/playlist/bug_ffmpeg_1000-x20.m3u
# lttng stop
# lttng view > /dev/null
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
[error] Stream 0 is not declared in metadata.
</pre></p>
Environment:
<ul>
<li>lttng-modules-2.13.10 (from tarball)</li>
<li>lttng-tools-2.13.11 (from tarball)</li>
<li>lttng-ust-2.13.6 (from tarball)</li>
<li>babeltrace (git; at <code>v1.5.11</code>)</li>
<li>Linux kernel 6.6.0-rc1</li>
<li>Slackware64-current, up-to-date as of 2023-09-26</li>
<li>Trace Compass 9.1.0.202309200833</li>
</ul>
<p>IRC snippet:<br /><pre>
Sep/27 16:13:54 rnsanchez I'm trying to find out what I'm doing wrong.. I keep getting "[error] Stream 0 is not declared in metadata." in my captures. I guess this is making tracecompass not being able to show UST events (simple function enter-exit instrumentation)
Sep/27 16:16:46 rnsanchez using my old notes, while checking https://lttng.org/docs/v2.13/#doc-liblttng-ust-cyg-profile and other sections
Sep/27 16:18:25 jgalar rnsanchez: shot in the dark, but are you making sure the session is stopped or destroyed before reading the trace?
Sep/27 16:18:28 rnsanchez if it helps knowing, events from kernel channel are fine and I can see them in tracecompass (the trace is huge)
Sep/27 16:18:52 rnsanchez jgalar: yes, with lttng stop, then I copy the data from root to my user
Sep/27 16:19:10 rnsanchez but I get that Stream 0 regardless.. it's from lttng view
Sep/27 16:20:11 jgalar it sounds like a metadata flushing issue, any chance you can share the trace? (or at least, the metadata files)
Sep/27 16:20:42 rnsanchez yes, sure. just a moment while I upload it
Sep/27 16:20:48 jgalar okay thanks
Sep/27 16:21:43 ebugden not sure if related, but it's triggering deja vu of this issue: https://github.com/efficios/barectf/commit/d024537859c1d869bfa1cedc8abe8e3f7a648faa
Sep/27 16:22:23 ebugden maybe a similar problem in a similar area in lttng view
Sep/27 16:25:28 rnsanchez jgalar: https://rnsanchez.wait4.org/auto-20230927-161138.tar.xz
Sep/27 16:27:33 rnsanchez jgalar: this last one has incomplete events as I was trying to investigate, but I can bundle another one with full events as I used to collect
Sep/27 16:31:51 rnsanchez ebugden: is there anything on my side I can do?
Sep/27 16:34:38 ebugden rnsanchez: i don't feel i'm familiar enough with this situation to say; i was tossing out the association as possible food for thought for jgalar
Sep/27 16:35:12 rnsanchez oh ok :-)
Sep/27 16:43:03 rnsanchez jgalar: just in case, here is with full events: https://rnsanchez.wait4.org/auto-20230927-164014.tar.xz
Sep/27 16:45:53 jgalar ebugden: yup, the errors are indeed similar
Sep/27 16:50:59 jgalar hmm, the user space trace's metadata doesn't declare any stream class, that's... unexpected...
Sep/27 16:51:14 jgalar which versions of lttng-ust and lttng-tools are you running?
Sep/27 16:55:03 rnsanchez ust 2.13.6 (tarball) and tools 2.13.11 (tarball too)
Sep/27 16:55:07 rnsanchez compiled today
Sep/27 16:55:13 jgalar okay
Sep/27 17:17:57 ebugden rnsanchez (cc: jgalar): would you open a bug report on https://bugs.lttng.org/ ?
Sep/27 17:18:16 rnsanchez ebugden: sure
Sep/27 17:19:21 ebugden thanks! (we have a few hypotheses, but we'd need more information about exactly how the trace is generated)
Sep/27 17:19:58 jgalar rnsanchez: it's weird, the user space trace's stream files just have the packet headers... if you can reproduce the problem while running lttng-sessiond with the `-vvv --verbose-consumer` options to get the logs, it would be helpful
Sep/27 17:20:47 rnsanchez I can try. I suppose I have to finish the sessiond already running and manually launch another?
Sep/27 17:21:06 jgalar yep
</pre></p>
Babeltrace - Bug #1382 (Resolved): stable-2.0 branch fails to build (https://bugs.lttng.org/issues/1382, 2023-07-12, Mathieu Desnoyers, mathieu.desnoyers@efficios.com)
<p>stable-2.0 branch at commit 375847ee0df2f fails to build on my debian laptop:</p>
<pre>
make[2]: Entering directory '/home/compudj/git/babeltrace/src/ctf-writer'
CC trace.lo
In file included from object-pool.h:54,
from clock-class.h:29,
from trace.c:43:
In function 'bt_ctf_object_set_parent',
inlined from 'bt_ctf_object_set_parent' at object.h:120:6,
inlined from 'bt_ctf_trace_common_add_stream_class' at trace.c:1243:3:
object.h:141:26: error: null pointer dereference [-Werror=null-dereference]
141 | if (child->parent) {
| ~~~~~^~~~~~~~
object.h:141:26: error: null pointer dereference [-Werror=null-dereference]
</pre>
<p>with gcc version 12.2.0 (Debian 12.2.0-14)</p>
LTTng-tools - Bug #1377 (Resolved): tests/regression/kernel/test_callstack fails with gcc 13 (https://bugs.lttng.org/issues/1377, 2023-05-25, Richard Purdie)
<p>When the Yocto Project tested upgrading to gcc 13, we saw a ptest failure in lttng-tools (2.13.9) in tests/regression/kernel/test_callstack (6.1 kernel). The log of the failing test:</p>
1..22
<ol>
<li>Kernel tracer - Callstack context<br />ok 1 - Start session daemon<br />PASS: kernel/test_callstack 1 - Start session daemon</li>
<li>Userspace callstack test<br />ok 2 - Create session callstack in -o /tmp/tmp.test_user_callstack_trace_path.1DdyIt<br />PASS: kernel/test_callstack 2 - Create session callstack in -o /tmp/tmp.test_user_callstack_trace_path.1DdyIt<br />ok 3 - Enable channel chan0 for session callstack<br />PASS: kernel/test_callstack 3 - Enable channel chan0 for session callstack<br />ok 4 - Enable kernel syscall gettid for session callstack on channel chan0<br />PASS: kernel/test_callstack 4 - Enable kernel syscall gettid for session callstack on channel chan0<br />ok 5 - Add context command for type: callstack-user<br />PASS: kernel/test_callstack 5 - Add context command for type: callstack-user<br />ok 6 - Untrack command with opts: -s callstack --all --pid -k<br />PASS: kernel/test_callstack 6 - Untrack command with opts: -s callstack --all --pid -k<br />ok 7 - Track command with opts: -s callstack -k --pid=2503<br />PASS: kernel/test_callstack 7 - Track command with opts: -s callstack -k --pid=2503<br />ok 8 - Start tracing for session <br />PASS: kernel/test_callstack 8 - Start tracing for session<br />ok 9 - Stop lttng tracing for session <br />PASS: kernel/test_callstack 9 - Stop lttng tracing for session<br />Traceback (most recent call last):<br /> File "/usr/lib/lttng-tools/ptest/tests/regression/././kernel//../../utils/parse-callstack.py", line 160, in <module><br /> main()<br /> File "/usr/lib/lttng-tools/ptest/tests/regression/././kernel//../../utils/parse-callstack.py", line 155, in main<br /> raise Exception('Expected function name not found in recorded callstack')<br />Exception: Expected function name not found in recorded callstack<br />ok 10 - Destroy session callstack<br />PASS: kernel/test_callstack 10 - Destroy session callstack<br />not ok 11 - Validate userspace callstack<br />FAIL: kernel/test_callstack 11 - Validate userspace callstack</li>
<li> Failed test 'Validate userspace callstack'</li>
<li> in ./kernel/test_callstack:test_user_callstack() at line 80.</li>
<li>Kernel callstack test<br />ok 12 - Create session callstack in -o /tmp/tmp.test_kernel_callstack_trace_path.rHPMq3<br />PASS: kernel/test_callstack 12 - Create session callstack in -o /tmp/tmp.test_kernel_callstack_trace_path.rHPMq3<br />ok 13 - Enable channel chan0 for session callstack<br />PASS: kernel/test_callstack 13 - Enable channel chan0 for session callstack<br />ok 14 - Enable kernel syscall read for session callstack on channel chan0<br />PASS: kernel/test_callstack 14 - Enable kernel syscall read for session callstack on channel chan0<br />ok 15 - Add context command for type: callstack-kernel<br />PASS: kernel/test_callstack 15 - Add context command for type: callstack-kernel<br />ok 16 - Untrack command with opts: -s callstack --all --pid -k<br />PASS: kernel/test_callstack 16 - Untrack command with opts: -s callstack --all --pid -k<br />ok 17 - Track command with opts: -s callstack -k --pid=2532<br />PASS: kernel/test_callstack 17 - Track command with opts: -s callstack -k --pid=2532<br />ok 18 - Start tracing for session <br />PASS: kernel/test_callstack 18 - Start tracing for session<br />ok 19 - Stop lttng tracing for session <br />PASS: kernel/test_callstack 19 - Stop lttng tracing for session<br />ok 20 - Destroy session callstack<br />PASS: kernel/test_callstack 20 - Destroy session callstack<br />ok 21 - Validate kernel callstack<br />PASS: kernel/test_callstack 21 - Validate kernel callstack</li>
<li>Killing (signal SIGTERM) lttng-sessiond and lt-lttng-sessiond pids: 2469 2470 <br />ok 22 - Wait after kill session daemon<br />PASS: kernel/test_callstack 22 - Wait after kill session daemon</li>
<li>Looks like you failed 1 test of 22.</li>
</ol>
<p>I've attached the test binary to this bug.</p>
LTTng - Bug #1370 (Confirmed): Why "lttng create --live" spawns a local relay daemon but not in ... (https://bugs.lttng.org/issues/1370, 2023-04-06, Bin Yuan)
<p>The relayd spawned by the lttng create command doesn't close file descriptors such as stdout.<br />Why not spawn the relayd with the "--daemonize" option?</p>
LTTng - Bug #1369 (Feedback): Nano clock value overflows the signed 64-bit integer range on the b... (https://bugs.lttng.org/issues/1369, 2023-03-26, Bin Yuan)
<p>I launched a program producing traces to the lttng consumerd and lttng relayd, and also launched a babeltrace command as an lttng-live consumer to display the trace in real time.</p>
<p>It works well when the channel is set to per-user mode. But when it is changed to per-process-per-user, with no other changes, the babeltrace2 client crashes with an overflow error.</p>
<p>The last error message of the babeltrace2 exception stack shows:</p>
<p>"Cannot convert cycle to nanoseconds from origin for given clock class: value overflows the signed 64-bit integer range: cc-addr=****, cc-name="monotonic", cc-freq=1000000000, ....."</p>
<p>What difference causes this crash? And how can it be solved, since the per-process-per-user mode is preferred for me?</p>
<p>version info:<br />babeltrace2: 2.0.4<br />lttng commands: 2.12.12</p>
LTTng-tools - Bug #1362 (New): Listing a trigger with an event rule matches condition with a upro... (https://bugs.lttng.org/issues/1362, 2022-10-24, Jérémie Galarneau, jeremie.galarneau@efficios.com)
<p>Running against <code>d67dc273f</code>.</p>
<p>When listing triggers, I get a crash:</p>
<pre>
[root@carbonara lttng-tools]# /tmp/lttng/bin/lttng list-triggers
- name: uprobe_trigger
owner uid: 0
condition: event rule matches
rule: uprobe_trigger (type: kernel:uprobe, location type: ELF, location: tests/regression/tools/notification//../../..//utils/testapp/userspace-probe-elf-binary/.libs/userspace-probe-elf-binary:test_function)
errors: none
action:notify
lttng: ../../src/common/dynamic-array.hpp:53: void* lttng_dynamic_array_get_element(const lttng_dynamic_array*, size_t): Assertion `element_index < array->size' failed.
Aborted (core dumped)
</pre>
<p>gdb output:<br /><pre>
Using host libthread_db library "/usr/lib/libthread_db.so.1".
Core was generated by `/tmp/lttng/bin/lttng list-triggers'.
Program terminated with signal SIGABRT, Aborted.
#0 0x00007ffbb3aa164c in ?? () from /usr/lib/libc.so.6
(gdb) bt
#0 0x00007ffbb3aa164c in ?? () from /usr/lib/libc.so.6
#1 0x00007ffbb3a51958 in raise () from /usr/lib/libc.so.6
#2 0x00007ffbb3a3b53d in abort () from /usr/lib/libc.so.6
#3 0x00007ffbb3a3b45c in ?? () from /usr/lib/libc.so.6
#4 0x00007ffbb3a4a486 in __assert_fail () from /usr/lib/libc.so.6
#5 0x0000564a49b9d130 in lttng_dynamic_array_get_element (array=0x564a4ab6c040, element_index=0) at ../../src/common/dynamic-array.hpp:53
#6 0x0000564a49b9d378 in lttng_action_path_copy (src=0x564a4ab6c040, dst=0x564a4ab6c540) at actions/path.cpp:110
#7 0x0000564a49b4d505 in lttng_error_query_action_create (trigger=0x564a4ab6b5d0, action_path=0x564a4ab6c040) at error-query.cpp:232
#8 0x0000564a49b3ee7c in print_action_errors (trigger=0x564a4ab6b5d0, action_path_indexes=0x0, action_path_length=0) at commands/list_triggers.cpp:765
#9 0x0000564a49b3f99a in print_one_action (trigger=0x564a4ab6b5d0, action=0x564a4ab6b4e0, action_path_indexes=0x0, action_path_length=0) at commands/list_triggers.cpp:965
#10 0x0000564a49b400c7 in print_one_trigger (trigger=0x564a4ab6b5d0) at commands/list_triggers.cpp:1123
#11 0x0000564a49b4036e in print_sorted_triggers (triggers=0x564a4ab6b110) at commands/list_triggers.cpp:1201
#12 0x0000564a49b40c92 in cmd_list_triggers (argc=0, argv=0x7fffa9404f18) at commands/list_triggers.cpp:1420
#13 0x0000564a49b438d6 in handle_command (argc=1, argv=0x7fffa9404f10) at lttng.cpp:238
#14 0x0000564a49b4402c in parse_args (argc=2, argv=0x7fffa9404f08) at lttng.cpp:427
#15 0x0000564a49b441a8 in main (argc=2, argv=0x7fffa9404f08) at lttng.cpp:476
</pre></p>
<p>This is a trigger created by the following test:<br /><code>tests/regression/tools/notification/test_notification_kernel_userspace_probe</code></p>
<p>It can be reproduced by stopping the test before invoking the event-generating application.</p>
LTTng-tools - Bug #1361 (Resolved): lttng: Fix reproducibility issues (https://bugs.lttng.org/issues/1361, 2022-10-21, Alexander Kanavin)
<p>Yocto has added a patch to fix reproducibility issues:<br /><a class="external" href="https://git.yoctoproject.org/poky/tree/meta/recipes-kernel/lttng/lttng-tools/determinism.patch">https://git.yoctoproject.org/poky/tree/meta/recipes-kernel/lttng/lttng-tools/determinism.patch</a></p>
<p>The description from RP is:
=======<br />Add a hack to hardcode in specific rpaths which we then remove,<br /> allowing the build to be reproducible.</p>
<p>This is a bit ugly. Specifying abs_builddir as an RPATH is plain wrong when<br />cross compiling. Sadly, removing the rpath makes libtool/automake do<br />weird things and breaks the build as shared libs are no longer generated.</p>
<p>We already try and delete the RPATH at do_install with chrpath however<br />that does leave the path in the string table so it doesn't help us<br />with reproducibility.</p>
<p>Instead, hack in a bogus but harmless path, then delete it later in<br />our do_install. Ultimately we may want to pass a specific path to use<br />to configure if we really do need to set an RPATH at all. It is unclear<br />to me whether the tests need that or not.</p>
<p>Fixes reproducibility issues for lttng-tools.</p>
<p>Upstream-Status: Pending [needs discussion with upstream about the correct solution]
====</p>
<p>And so this bug is created so that the discussion can take place :-)</p>
LTTng-tools - Bug #1360 (Resolved): Test stop/hang when run ptest of lttng-tools (OE/yocto) (https://bugs.lttng.org/issues/1360, 2022-10-14, Heng Guo)
<p>Linux and related packages versions:<br />Linux version: 5.10.79 (OE/yocto)<br />Liburcu 0.13.2<br />Lttng_tools: 2.13.8<br />Lttng_ust: 2.13.5<br />Lttng modules: 2.13.5</p>
<p>Build environment:<br />Build kernel and rootfs on Ubuntu (kernel 4.18.0)<br />Target BSP: intel-x86-64<br />Test is run on target board: Intel-snr (cpu atom) and command is:<br />./run-ptest <br />or <br />make -k -s LOG_DRIVER_FLAGS=--ignore-exit top_srcdir=$PWD top_builddir=$PWD check</p>
<p>Issue:<br />Using ptest from OE/yocto, all lttng-tools unit tests can be run.<br />Ptest stops/hangs at tools/base-path/test_ust; please find the attached logs: test_ust.log and ptest-lttng-tools-2.13.8.log.</p>
<p>Root cause:<br />In configure.ac, the string "no" is assigned to $PGREP if pgrep is not found during the build:</p>
<pre>
AC_PATH_PROG([PGREP], [pgrep], [no])
AM_CONDITIONAL([HAVE_PGREP], [test "x$PGREP" != "xno"])
</pre>
<p>In tests/utils/utils.sh, $PGREP is only checked against the empty string, so the correct pgrep is never substituted for "no":</p>
<pre>
# Check pgrep from env, default to pgrep if none
if [ -z "$PGREP" ]; then
    PGREP=pgrep
fi
</pre>
<p>Solution:<br />Check for the "no" string instead of the empty string in utils.sh.<br />The fix patch is attached: 0001-Fix-tests-PGREP-is-not-checked-correctly-in-utils-sc.patch<br />Test logs are attached too: test_ust-fix.log and ptest-lttng-tools-2.13.8-fix.log</p>
LTTng-modules - Bug #1358 (New): Failed to deploy lttng modules on NVIDIA jetson device (https://bugs.lttng.org/issues/1358, 2022-09-13, liuhonggang liu)
<p>Hello, I installed lttng and lttng-modules on an NVIDIA Orin.<br />When installing via apt, the results are as follows.<br /><pre>
# lttng list --kernel
Error: Unable to list kernel events: Kernel tracer not available
</pre></p>
<pre>
# ps aux | grep lttng-sessiond
root       52100  0.0  0.0 1022064 12736 ?  Ssl  15:05  0:00 /usr/bin/lttng-sessiond
root       52101  0.0  0.0   41968   664 ?  S    15:05  0:00 /usr/bin/lttng-sessiond
orin-d     62549  0.0  0.0   11640   684 pts/0  S+  20:54  0:00 grep --color=auto lttng-sessiond
</pre>
<pre>
# dpkg -l | grep lttng
ii liblttng-ctl0:arm64 2.12.4-1~ubuntu20.04.1 arm64 LTTng control and utility library
ii liblttng-ust-ctl4:arm64 2.12.2-1~ubuntu20.04.1 arm64 LTTng 2.0 Userspace Tracer (trace control library)
ii liblttng-ust-dev:arm64 2.12.2-1~ubuntu20.04.1 arm64 LTTng 2.0 Userspace Tracer (development files)
ii liblttng-ust-python-agent0:arm64 2.12.2-1~ubuntu20.04.1 arm64 LTTng 2.0 Userspace Tracer (Python agent native library)
ii liblttng-ust0:arm64 2.12.2-1~ubuntu20.04.1 arm64 LTTng 2.0 Userspace Tracer (tracing libraries)
ii lttng-modules-dkms 2.12.6-1~ubuntu20.04.1 all Linux Trace Toolkit (LTTng) kernel modules (DKMS)
ii lttng-tools 2.12.4-1~ubuntu20.04.1 arm64 LTTng control and utility programs
ii python3-lttng 2.12.4-1~ubuntu20.04.1 arm64 LTTng control and utility Python bindings
</pre>
<p>The device information is as follows.<br /><pre>
# uname -a
Linux orind-d 5.10.65-tegra #2 SMP PREEMPT Thu Jun 16 18:24:26 CST 2022 aarch64 aarch64 aarch64 GNU/Linux
</pre><br /><pre>
# cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.4 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.4 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
</pre></p>
<p>At the same time, I tried building from source, following the installation method described in <a class="external" href="https://lttng.org/docs/v2.13/">https://lttng.org/docs/v2.13/</a>.<br /><pre>
# dpkg -l | grep -e libuuid -e popt -e userspace -e libxml2
ii can-utils 2018.02.0-1ubuntu1 arm64 SocketCAN userspace utilities and tools
ii dmsetup 2:1.02.167-1ubuntu1 arm64 Linux Kernel Device Mapper userspace library
ii gvfs:arm64 1.44.1-1ubuntu1 arm64 userspace virtual filesystem - GIO module
ii gvfs-backends 1.44.1-1ubuntu1 arm64 userspace virtual filesystem - backends
ii gvfs-bin 1.44.1-1ubuntu1 arm64 userspace virtual filesystem - deprecated command-line tools
ii gvfs-common 1.44.1-1ubuntu1 all userspace virtual filesystem - common data files
ii gvfs-daemons 1.44.1-1ubuntu1 arm64 userspace virtual filesystem - servers
ii gvfs-fuse 1.44.1-1ubuntu1 arm64 userspace virtual filesystem - fuse server
ii gvfs-libs:arm64 1.44.1-1ubuntu1 arm64 userspace virtual filesystem - private libraries
ii libdevmapper1.02.1:arm64 2:1.02.167-1ubuntu1 arm64 Linux Kernel Device Mapper userspace library
ii libi2c0:arm64 4.1-2build2 arm64 userspace I2C programming library
ii libibverbs1:arm64 28.0-1ubuntu1 arm64 Library for direct userspace use of RDMA (InfiniBand/iWARP)
ii libnftnl11:arm64 1.1.5-1 arm64 Netfilter nftables userspace API library
ii libpopt-dev:arm64 1.16-14 arm64 lib for parsing cmdline parameters - development files
ii libpopt0:arm64 1.16-14 arm64 lib for parsing cmdline parameters
ii liburcu-dev:arm64 0.12.2-1~ubuntu20.04.2 arm64 userspace RCU (read-copy-update) library - development files
ii liburcu6:arm64 0.12.2-1~ubuntu20.04.2 arm64 userspace RCU (read-copy-update) library
ii libusb-1.0-0:arm64 2:1.0.23-2build1 arm64 userspace USB programming library
ii libusb-1.0-0-dev:arm64 2:1.0.23-2build1 arm64 userspace USB programming library development files
ii libuuid1:arm64 2.34-0.1ubuntu9.3 arm64 Universally Unique ID library
ii libxml2:arm64 2.9.10+dfsg-5ubuntu0.20.04.1 arm64 GNOME XML library
ii libxml2-dev:arm64 2.9.10+dfsg-5ubuntu0.20.04.1 arm64 Development files for the GNOME XML library
ii libxml2-utils 2.9.10+dfsg-5ubuntu0.20.04.3 arm64 XML utilities
ii network-manager 1.22.10-1ubuntu2.3 arm64 network management framework (daemon and userspace tools)
ii nvidia-l4t-optee 34.1.0-20220406120854 arm64 OP-TEE userspace daemons, test programs and libraries
ii python3-lxml:arm64 4.5.0-1ubuntu0.5 arm64 pythonic binding for the libxml2 and libxslt libraries
</pre></p>
<pre>
sudo ln -snf /usr/src/linux-headers-5.10.65-tegra-ubuntu20.04_aarch64/kernel-5.10 /lib/modules/5.10.65-tegra/build
</pre>
<pre>
orin-d@orind-d:~/tmp/lttng-modules-2.13.5$ make
/home/orin-d/tmp/lttng-modules-2.13.5/src/wrapper/kallsyms.c:20:3: error: #error "LTTng-modules requires CONFIG_KPROBES on kernels >= 5.7.0"
20 | # error "LTTng-modules requires CONFIG_KPROBES on kernels >= 5.7.0"
| ^~~~~
make[2]: *** [scripts/Makefile.build:281: /home/orin-d/tmp/lttng-modules-2.13.5/src/wrapper/kallsyms.o] Error 1
make[1]: *** [Makefile:1852: /home/orin-d/tmp/lttng-modules-2.13.5/src] Error 2
make[1]: Leaving directory '/usr/src/linux-headers-5.10.65-tegra-ubuntu20.04_aarch64/kernel-5.10'
make: *** [Makefile:31: modules] Error 2
</pre>
LTTng-tools - Bug #1354 (Resolved): Cannot Modprobe unload lttng-clock-plugin-test in test case t... (https://bugs.lttng.org/issues/1354, 2022-05-13, Heng Guo)
<p>Issue
=====<br />The following modprobe FATAL error occurs in test_clock_override:</p>
<pre><code>ok 29 - Wait after kill session daemon
modprobe: FATAL: Module lttng_clock_plugin_test is in use.
ok 30 - Unique event timestamps with clock override: 1 expect 1</code></pre>
<p>Cause
=====<br />This issue was introduced by the following commit:</p>
<p>commit d267104b87f12ea8f0bf73bff1af272f98dc069e<br />Author: Francis Deslauriers <<a class="email" href="mailto:francis.deslauriers@efficios.com">francis.deslauriers@efficios.com</a>><br />Date: Tue Sep 15 12:10:18 2020 -0400<br /> Cleanup: use `modprobe --remove` rather than `rmmod`</p>
<p>The fix is to modprobe-unload lttng-clock-plugin-test before lttng-test.</p>
<p>Patch
=====<br />Please find the attached file: 0001-Tests-fix-wrong-sequence-of-unload-modules.patch</p>
<p>Test
====<br />Test log is attached.<br />fail.log is the log without my patch<br />ok.log is the log with my patch</p>
LTTng-tools - Bug #1324 (New): lttng_enable_event() and lttng_enable_event_with_filter() cannot a... (https://bugs.lttng.org/issues/1324, 2021-08-24, Philippe Proulx, eeppeliteloop@gmail.com)
<p>Currently:</p>
<ul>
<li><code>lttng_enable_event()</code> calls <code>lttng_enable_event_with_exclusions()</code>, passing no filter expression and no event name exclusion patterns.</li>
<li><code>lttng_enable_event_with_filter()</code> calls <code>lttng_enable_event_with_exclusions()</code>, passing no event name exclusion patterns.</li>
</ul>
<p>This means that if you want to enable a recording event rule described with a filter expression and event name exclusion patterns, you need to:</p>
<ol>
<li>Get the filter expression from its descriptor with <code>lttng_event_get_filter_expression()</code>.</li>
<li>Build an array of event name exclusion patterns, getting them from its descriptor with <code>lttng_event_get_exclusion_name_count()</code> and <code>lttng_event_get_exclusion_name()</code>.</li>
<li>Call <code>lttng_enable_event_with_exclusions()</code>.</li>
</ol>
<p>This is what you would need to do to blindly enable a disabled recording event rule of which the descriptor comes from <code>lttng_list_events()</code>.</p>
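The three steps above could be sketched as follows. Treat this strictly as an outline against the liblttng-ctl functions named in this report: error handling and cleanup are elided, `handle`, `channel_name`, and `ev` are assumed to be in scope, and the exact signatures should be verified against lttng.h.

```c
/*
 * Outline: blindly re-enable a recording event rule from a
 * descriptor `ev` obtained via lttng_list_events().
 */
const char *filter = NULL;
char **exclusions;
int count, i;

/* Step 1: recover the filter expression from the descriptor. */
lttng_event_get_filter_expression(ev, &filter);

/* Step 2: rebuild the array of event name exclusion patterns. */
count = lttng_event_get_exclusion_name_count(ev);
exclusions = calloc(count, sizeof(*exclusions));
for (i = 0; i < count; i++)
	lttng_event_get_exclusion_name(ev, i,
			(const char **) &exclusions[i]);

/* Step 3: enable the rule with both pieces reattached. */
lttng_enable_event_with_exclusions(handle, ev, channel_name,
		filter, count, exclusions);
```

The point of the report is that this boilerplate belongs inside lttng_enable_event() and lttng_enable_event_with_filter() themselves.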
<p><code>lttng_enable_event()</code> and <code>lttng_enable_event_with_filter()</code> could do the steps above for you, if this doesn't break any backward compatibility.</p>
LTTng-tools - Bug #901 (In Progress): Some liblttng-ctl don't return LTTNG_OK on success (https://bugs.lttng.org/issues/901, 2015-08-05, Jérémie Galarneau, jeremie.galarneau@efficios.com)
<p>It appears that some liblttng-ctl functions, such as lttng_create_session_snapshot(), have conflicting return code conventions.</p>
<p>In this specific case, the header under lttng/session.h asserts that the function will<br /><pre>
/*
[...]
* Return 0 on success else a negative LTTng error code.
*/
</pre></p>
<p>while the header in lttng-ctl.c affirms that it<br /><pre>
/*
[...]
* Returns LTTNG_OK on success or a negative error code.
*/
</pre></p>
<p>and the function actually returns<br /><pre>
ret = lttng_ctl_ask_sessiond_varlen(&lsm, uris, ...
</pre></p>
<p>which, itself will<br /><pre>
/*
[...]
Return size of data (only payload, not header) or a negative error code.
*/
</pre></p>
<p>This pattern is used in multiple places, which breaks code that checks for "ret == LTTNG_OK" instead of "ret < 0".</p>
LTTng-UST - Bug #556 (Resolved): Segmentation fault when printing an invalid command (https://bugs.lttng.org/issues/556, 2013-06-03, Jérémie Galarneau, jeremie.galarneau@efficios.com)
<p>I'm running a stress test launching 3000 applications emitting 100 events per second during 20 seconds each which causes the consumerd to segfault. While that is reproducible somewhat easily (that's the problem I was trying to debug), I just ran into a case that causes both one of the traced applications and the consumer daemon to segfault.</p>
<p>Traced application backtrace follows:<br /><pre>
Core was generated by `./TestApp_100perSecOnly 20 np'.
Program terminated with signal 11, Segmentation fault.
#0 print_cmd (handle=1634037881, cmd=1935763820) at lttng-ust-comm.c:212
212 if (cmd_name_mapping[cmd]) {
(gdb) bt
#0 print_cmd (handle=1634037881, cmd=1935763820) at lttng-ust-comm.c:212
#1 ust_listener_thread (arg=0x7f3314c19b20 <local_apps>) at lttng-ust-comm.c:1066
#2 0x00007f3313d8cdd2 in start_thread () from /usr/lib/libpthread.so.0
#3 0x00007f33132a5ced in clone () from /usr/lib/libc.so.6
(gdb) up
#1 ust_listener_thread (arg=0x7f3314c19b20 <local_apps>) at lttng-ust-comm.c:1066
1066
(gdb) print lum
$13 = {
handle = 1634037881,
cmd = 1935763820,
padding = " integer { size = 27; align = 1;",
u = {
channel = {
len = 2334102031740531488,
type = (LTTNG_UST_CHAN_METADATA | unknown: 1634082876),
padding = "lse; } := uint27_t;\n\ntrace {\n\tmajor = 1;\n\tminor = 8;\n\tuuid = \"f3a29a7d-7c01-4dfd-b463-696c4884ec49\";\n\tbyte_order = le;\n\tpacket.header := struct {\n\t\tuint32_t magic;\n\t\tuint8_t uuid[16];\n\t\tuint32_t stre"...,
data = 0x7f33129bda54 "\n\ttracer_major = 2;\n\ttracer_minor = 2;\n\ttracer_patchlevel = 0;\n\tvpid = 18492;\n\tprocname = \"TestApp_100perS\";\n};\n\nclock {\n\tname = monotonic;\n\tuuid = \"6a86dfb3-e819-4f9b-a6c2-b31292b16173\";\n\tdescription"...
},
stream = {
len = 2334102031740531488,
stream_nr = 1634082877,
padding = "lse; } := uint27_t;\n\ntrace {\n\tmajor = 1;\n\tminor = 8;\n\tuuid = \"f3a29a7d-7c01-4dfd-b463-696c4884ec49\";\n\tbyte_order = le;\n\tpacket.header := struct {\n\t\tuint32_t magic;\n\t\tuint8_t uuid[16];\n\t\tuint32_t stre"...
},
event = {
instrumentation = (unknown: 1734964000),
name = "ned = false; } := uint27_t;\n\ntrace {\n\tmajor = 1;\n\tminor = 8;\n\tuuid = \"f3a29a7d-7c01-4dfd-b463-696c4884ec49\";\n\tbyte_order = le;\n\tpacket.header := struct {\n\t\tuint32_t magic;\n\t\tuint8_t uuid[16];\n\t\tuint3"...,
loglevel_type = (LTTNG_UST_LOGLEVEL_RANGE | unknown: 544106848),
loglevel = 1965170749,
padding = "st\";\n\ttracer_nam",
u = {
padding = "e = \"lttng-ust\";\n\ttracer_major = 2;\n\ttracer_minor = 2;\n\ttracer_patchlevel = 0;\n\tvpid = 18492;\n\tprocname = \"TestApp_100perS\";\n};\n\nclock {\n\tname = monotonic;\n\tuuid = \"6a86dfb3-e819-4f9b-a6c2-b31292b1617"...
}
},
context = {
ctx = 1734964000,
padding = "ned = false; } :",
u = {
padding = "= uint27_t;\n\ntrace {\n\tmajor = 1;\n\tminor = 8;\n\tuuid = \"f3a29a7d-7c01-4dfd-b463-696c4884ec49\";\n\tbyte_order = le;\n\tpacket.header := struct {\n\t\tuint32_t magic;\n\t\tuint8_t uuid[16];\n\t\tuint32_t stream_id;\n\t"...
}
},
version = {
major = 1734964000,
minor = 543450478,
patchlevel = 1634082877
},
tracepoint = {
name = " signed = false; } := uint27_t;\n\ntrace {\n\tmajor = 1;\n\tminor = 8;\n\tuuid = \"f3a29a7d-7c01-4dfd-b463-696c4884ec49\";\n\tbyte_order = le;\n\tpacket.header := struct {\n\t\tuint32_t magic;\n\t\tuint8_t uuid[16];\n\t\tu"...,
loglevel = 1836016649,
padding = "ain = \"ust\";\n\ttr"
},
filter = {
data_size = 1734964000,
reloc_offset = 543450478,
seqnum = 4279953930213269565
},
padding = " signed = false; } := uint27_t;\n"
}
}
</pre></p>
<p>I will submit a patch to check the command against the size of the command string array to make sure an invalid command does not trigger an out-of-bounds error. There is unfortunately no way to know if a command really is invalid or just "unknown"...</p>
<p>Maybe we should log the command's ID in such cases to make corrupted messages easier to spot?</p> LTTng-UST - Bug #537 (Resolved): make CFLAGS=-g breaks examples buildhttps://bugs.lttng.org/issues/5372013-05-17T22:42:17ZMathieu Desnoyersmathieu.desnoyers@efficios.com
<p><pre>
cd doc/examples/easy-ust
compudj@thinkos:~/git/lttng-ust/doc/examples/easy-ust$ (git:master $)> make
gcc -I. -c -o tp.o tp.c
gcc -o sample sample.o tp.o -ldl -llttng-ust
compudj@thinkos:~/git/lttng-ust/doc/examples/easy-ust$ (git:master $)> make clean
rm -f *.html
rm -f *.o sample
compudj@thinkos:~/git/lttng-ust/doc/examples/easy-ust$ (git:master $)> make CFLAGS=-g
gcc -g -c -o sample.o sample.c
gcc -g -c -o tp.o tp.c
In file included from sample_component_provider.h:143:0,
                 from tp.c:33:
/usr/local/include/lttng/tracepoint-event.h:60:28: fatal error: ./sample_component_provider.h: No such file or directory
compilation terminated.
make: *** [tp.o] Error 1
</pre></p>
<p>For some reason, the <code>CFLAGS +=</code> assignments in the Makefile have no effect when a CFLAGS is specified on the command line: the second build above loses the <code>-I.</code> include path, which is why the provider header is no longer found.</p>
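<p>This is standard GNU make behavior: command-line variable assignments take precedence over makefile assignments, so a plain <code>CFLAGS += -I.</code> is silently dropped. One conventional remedy, sketched below (the rule and file names are illustrative, not the example Makefile verbatim), is GNU make's <code>override</code> directive, which makes the append apply even to a command-line CFLAGS:</p>
<pre>
```make
# Hypothetical fragment for doc/examples/easy-ust/Makefile (not verbatim).
# Without "override", running "make CFLAGS=-g" replaces CFLAGS entirely
# and the += append is ignored; with it, the -I. is appended anyway.
override CFLAGS += -I.

tp.o: tp.c
	$(CC) $(CFLAGS) -c -o $@ $<
```
</pre>
<p>With this change, <code>make CFLAGS=-g</code> would compile with <code>-g -I.</code>, so the local provider header is still found.</p>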