LTTng bugs repository: Issues (https://bugs.lttng.org/, updated 2020-12-02T21:27:04Z)
Babeltrace - Bug #1293 (New): Use after free in sink.ctf.fs finalize (https://bugs.lttng.org/issues/1293, 2020-12-02T21:27:04Z, Simon Marchi <simon.marchi@polymtl.ca>)
<p>I run this:</p>
<pre>./src/cli/babeltrace2 ~/lttng-traces/auto-20200318-221703 -c sink.ctf.fs -p 'path="/tmp/yo"'</pre>
<p>and interrupt it with ^C while it's running. I get:</p>
<pre>
➜ babeltrace ./src/cli/babeltrace2 ~/lttng-traces/auto-20200318-221703 -c sink.ctf.fs -p 'path="/tmp/yo"'
^C=================================================================
==1611811==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d000001de8 at pc 0x7faa59a98c13 bp 0x7fff9f10b9b0 sp 0x7fff9f10b9a0
READ of size 8 at 0x60d000001de8 thread T0
#0 0x7faa59a98c12 in bt_trace_get_environment_entry_count /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:345
#1 0x7faa5663faed in translate_trace_ctf_ir_to_tsdl /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/translate-ctf-ir-to-tsdl.c:935
#2 0x7faa566496f4 in fs_sink_trace_destroy /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink-trace.c:499
#3 0x7faa598959d1 (/usr/lib/libglib-2.0.so.0+0x3c9d1)
#4 0x7faa5989663a in g_hash_table_remove_all (/usr/lib/libglib-2.0.so.0+0x3d63a)
#5 0x7faa59899d5e in g_hash_table_destroy (/usr/lib/libglib-2.0.so.0+0x40d5e)
#6 0x7faa5662a894 in destroy_fs_sink_comp /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink.c:132
#7 0x7faa5663161b in ctf_fs_sink_finalize /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink.c:1141
#8 0x7faa59a2f50b in finalize_component /home/simark/src/babeltrace/src/lib/graph/component.c:97
#9 0x7faa59a2f87a in destroy_component /home/simark/src/babeltrace/src/lib/graph/component.c:148
#10 0x7faa59a340e2 in bt_object_try_spec_release /home/simark/src/babeltrace/src/lib/object.h:145
#11 0x7faa5987765f (/usr/lib/libglib-2.0.so.0+0x1e65f)
#12 0x7faa59a34ee6 in destroy_graph /home/simark/src/babeltrace/src/lib/graph/graph.c:103
#13 0x7faa59a346af in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#14 0x7faa59a34800 in bt_object_put_ref /home/simark/src/babeltrace/src/lib/object.h:335
#15 0x7faa59a3adb4 in bt_graph_put_ref /home/simark/src/babeltrace/src/lib/graph/graph.c:1331
#16 0x55e2ffb90c67 in cmd_run_ctx_destroy /home/simark/src/babeltrace/src/cli/babeltrace2.c:1685
#17 0x55e2ffb95d9e in cmd_run /home/simark/src/babeltrace/src/cli/babeltrace2.c:2538
#18 0x55e2ffb96a99 in main /home/simark/src/babeltrace/src/cli/babeltrace2.c:2673
#19 0x7faa59696151 in __libc_start_main (/usr/lib/libc.so.6+0x28151)
#20 0x55e2ffb87fdd in _start (/home/simark/build/babeltrace/src/cli/.libs/lt-babeltrace2+0x1ffdd)
0x60d000001de8 is located 104 bytes inside of 144-byte region [0x60d000001d80,0x60d000001e10)
freed by thread T0 here:
#0 0x7faa59c1c0e9 in __interceptor_free /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:123
#1 0x7faa59a97b4a in destroy_trace /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:143
#2 0x7faa59a90621 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#3 0x7faa59a8ff99 in bt_object_with_parent_release_func /home/simark/src/babeltrace/src/lib/object.h:178
#4 0x7faa59a8b329 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#5 0x7faa59a8c2f1 in bt_packet_recycle /home/simark/src/babeltrace/src/lib/trace-ir/packet.c:131
#6 0x7faa59a8b329 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#7 0x7faa59a8b47a in bt_object_put_ref /home/simark/src/babeltrace/src/lib/object.h:335
#8 0x7faa59a8ccc4 in bt_packet_put_ref /home/simark/src/babeltrace/src/lib/trace-ir/packet.c:236
#9 0x7faa56643f48 in fs_sink_stream_destroy /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink-stream.c:39
#10 0x7faa598959d1 (/usr/lib/libglib-2.0.so.0+0x3c9d1)
previously allocated by thread T0 here:
#0 0x7faa59c1c639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7faa598a9641 in g_malloc0 (/usr/lib/libglib-2.0.so.0+0x50641)
#2 0x7faa566568b8 in ctf_fs_trace_create /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:1080
#3 0x7faa566572b0 in ctf_fs_component_create_ctf_fs_trace_one_path /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:1183
#4 0x7faa5665be1d in ctf_fs_component_create_ctf_fs_trace /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2097
#5 0x7faa5665dff0 in ctf_fs_create /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2397
#6 0x7faa5665e172 in ctf_fs_init /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2431
#7 0x7faa59a39c15 in add_component_with_init_method_data /home/simark/src/babeltrace/src/lib/graph/graph.c:1048
#8 0x7faa59a3a2fb in add_source_component_with_initialize_method_data /home/simark/src/babeltrace/src/lib/graph/graph.c:1127
#9 0x7faa59a3a3a2 in bt_graph_add_source_component /home/simark/src/babeltrace/src/lib/graph/graph.c:1152
#10 0x55e2ffb94343 in cmd_run_ctx_create_components_from_config_components /home/simark/src/babeltrace/src/cli/babeltrace2.c:2252
#11 0x55e2ffb94ff7 in cmd_run_ctx_create_components /home/simark/src/babeltrace/src/cli/babeltrace2.c:2347
#12 0x55e2ffb95825 in cmd_run /home/simark/src/babeltrace/src/cli/babeltrace2.c:2461
#13 0x55e2ffb96a99 in main /home/simark/src/babeltrace/src/cli/babeltrace2.c:2673
#14 0x7faa59696151 in __libc_start_main (/usr/lib/libc.so.6+0x28151)
SUMMARY: AddressSanitizer: heap-use-after-free /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:345 in bt_trace_get_environment_entry_count
</pre>
LTTng-tools - Feature #1287 (New): Use abstract sockets for lttng-consumerd UST shared memory files (https://bugs.lttng.org/issues/1287, 2020-10-13T15:35:32Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>Abstract sockets (unix(7)) are not tied to the filesystem, and are available since Linux 2.2.</p>
<p>They are Linux-specific.</p>
<p>They are the same as regular unix domain sockets, except that the first byte of their path is a NUL byte. They have the benefit of not leaving behind socket files that need to be unlinked.</p>
<p>We could use those abstract sockets in lttng-consumerd on Linux.</p>
LTTng-modules - Feature #1265 (New): Turn lttng-probes.c probe_list into a hash table (https://bugs.lttng.org/issues/1265, 2020-05-06T17:14:59Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>This turns the O(n^2) total cost of registering n tracing systems into O(n), i.e. O(1) per system.</p>
Babeltrace - Bug #1254 (New): Trace with non-monotonic clocks make babeltrace2 abort (https://bugs.lttng.org/issues/1254, 2020-04-07T20:08:37Z, Simon Marchi <simon.marchi@polymtl.ca>)
<p>The trace attached here: <a class="external" href="https://www.eclipse.org/lists/tracecompass-dev/msg01505.html">https://www.eclipse.org/lists/tracecompass-dev/msg01505.html</a><br />... and the trace here: <a class="external" href="https://filebin.net/8bbv15rl60da6s9g/example.tgz?t=o3y9sgrz">https://filebin.net/8bbv15rl60da6s9g/example.tgz?t=o3y9sgrz</a></p>
<p>... both make babeltrace2 abort with:</p>
<pre>
$ ./src/cli/babeltrace2 /home/simark/Downloads/une-trace
04-07 15:43:15.359 2011726 2011726 F LIB/MSG-ITER call_iterator_next_method@iterator.c:815 Babeltrace 2 library postcondition not satisfied; error is:
04-07 15:43:15.359 2011726 2011726 F LIB/MSG-ITER call_iterator_next_method@iterator.c:815 Clock snapshots are not monotonic
04-07 15:43:15.359 2011726 2011726 F LIB/MSG-ITER call_iterator_next_method@iterator.c:815 Aborting...
[1] 2011726 abort (core dumped) ./src/cli/babeltrace2 /home/simark/Downloads/une-trace
</pre>
<p>This should at least be reported as an error.</p>
LTTng - Bug #1209 (New): Tracking a PID after start of session result in error on lttng-sessiond (https://bugs.lttng.org/issues/1209, 2019-11-18T03:34:17Z, Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>)
<p>Against lttng-tools/lttng-ust master:</p>
<pre>
lttng-sessiond
lttng create
lttng enable-event -u -a
lttng start
start app (get PID of said app)
lttng track -u --pid $PID
</pre>
<p>Yields the following error on the sessiond's stdout:</p>
<pre>
Error: Error starting tracing for app pid: 2574 (ret: -1024)
</pre>
<p>Digging a bit more, it seems that the app is told to enable tracing even though tracing is already started.</p>
<p>This seems to be a side effect of calling ust_app_global_update from ust_global_update_all from trace_ust_track_pid, and of taking the true branch in the following function:</p>
<pre>
void ust_app_global_update(struct ltt_ust_session *usess, struct ust_app *app)
{
	assert(usess);
	assert(usess->active);

	DBG2("UST app global update for app sock %d for session id %" PRIu64,
			app->sock, usess->id);

	if (!app->compatible) {
		return;
	}

	if (trace_ust_pid_tracker_lookup(usess, app->pid)) {
		/*
		 * Synchronize the application's internal tracing configuration
		 * and start tracing.
		 */
		ust_app_synchronize(usess, app);
		ust_app_start_trace(usess, app);
	} else {
		ust_app_global_destroy(usess, app);
	}
}
</pre>
<p>We end up "starting" tracing even though it is already started; the app returns -16 on the enable session command.</p>
<pre>
libust[14705/14707]: Message Received "Enable" (128), Handle "session" (1) (in print_cmd() at lttng-ust-comm.c:463)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14707]: Return value: -16 (in handle_message() at lttng-ust-comm.c:1083)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14707]: message successfully sent (in send_reply() at lttng-ust-comm.c:641)
Error: Error starting tracing for app pid: 14705 (ret: -1024)
ok 5 - Track command with opts: 0 -u --vpid 14705
</pre>
<p>I did not validate this, but it might have been introduced by 88e3c2f5610b9ac89b0923d448fee34140fc46fb, if it was not already present before.</p>
LTTng-tools - Feature #1197 (New): Use renameat2() to atomically exchange metadata on metadata re... (https://bugs.lttng.org/issues/1197, 2019-09-23T16:56:22Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>When using shm-path, a use-case is to expect the sessiond or the system to crash at any point during tracing.</p>
<p>If such a crash occurs while regenerating the metadata, lttng-crash may find a truncated file.</p>
<p>It would be nice if we could generate the new metadata in a ".metadata.new" file while keeping the old file around, and then exchange both files using renameat2().</p>
<p>However, renameat2() appeared in kernel 3.15, so we would have to figure out a fallback.</p>
LTTng - Bug #1192 (New): Web documentation improvement: describe use of CREATE_TRACE_POINTS for l... (https://bugs.lttng.org/issues/1192, 2019-08-05T14:36:42Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>Following discussion with David Goulet who is currently instrumenting the tor project with lttng-ust, it appears that the following use-case should be documented more clearly in the web documentation:</p>
<ul>
<li>Instrumentation of applications with LTTng-UST</li>
</ul>
<p>A tracepoint provider header should be included <em>once</em> per application, in a single compile unit, after a #define CREATE_TRACE_POINTS.</p>
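<p>As a sketch of that convention: in LTTng-UST itself the macros are spelled TRACEPOINT_CREATE_PROBES and TRACEPOINT_DEFINE (CREATE_TRACE_POINTS is the analogous Linux-kernel-side macro). Assuming a hypothetical provider header named tp.h, the single instantiating compile unit would look roughly like this (building it requires the lttng-ust development headers):</p>

```c
/* tp.c -- the ONE compile unit that instantiates the probes.
 * "tp.h" is a hypothetical provider header name; requires lttng-ust. */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "tp.h"
```

<p>Every other compile unit simply does #include "tp.h" without defining those two macros; defining them in more than one unit produces the multiple-probe-definition complaints the paragraph below describes.</p>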
<p>It can be included multiple times throughout the other compile units of the application, but make sure the CREATE_TRACE_POINTS macro is not defined before those includes, otherwise LTTng-UST will complain about multiple tracepoint probe definitions.</p>
Babeltrace - Feature #1164 (New): Write a plugin to anonymize traces (https://bugs.lttng.org/issues/1164, 2018-05-17T16:14:29Z, Geneviève Bastien <gbastien+lttng@versatic.net>)
<p>Here's a feature that was discussed during the last hack-a-thon:</p>
<p>Write a babeltrace plugin that would remove all internal information from a trace, so that the trace can be sent out for analysis without exposing that information.</p>
<p>The kinds of information to anonymize (not exhaustive; more thought needs to be put into it):</p>
<ul>
<li>In the metadata: host names</li>
<li>IP addresses: change them to dummy IPs</li>
<li>File names</li>
<li>Process names?</li>
</ul>
<p>The plugin should keep a mapping of the anonymized information so that results can be mapped back to the original data.</p>
LTTng-modules - Feature #1067 (New): Update writeback instrumentation for newer kernels (https://bugs.lttng.org/issues/1067, 2016-10-07T19:38:23Z, Jérémie Galarneau <jeremie.galarneau@efficios.com>)
Babeltrace - Feature #1045 (New): Wire up debug info on lttng_ust_cyg_profile event fields (https://bugs.lttng.org/issues/1045, 2016-07-13T14:53:52Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>#lttng paste</p>
<p>09:56 < rnsanchez> is there a default procedure for "hydrating" instrument-functions traces like this?<br />09:56 < rnsanchez> [13:54:09.866652414] (+0.000001178) priminho lttng_ust_cyg_profile:func_exit: { cpu_id = 1 }, { addr = 0x46BB90, call_site = 0x46BF7B }<br />09:56 < rnsanchez> (kind of replacing the addr with their proper symbols)<br />09:57 < milian> rnsanchez: I'm not an lttng dev, but could imagine that one would be able to write that by analyzing mmap + openat to find the offset into a library, which you can then feed into addr2line, or libdw/libbacktrace<br />09:59 < rnsanchez> I could propably pass it (babeltrace) through some script to do that. but since the trace is huge (and this is not even a "real" trace), I was wondering if there is a better way to do that<br />09:59 < milian> I'd also be interested in that<br />10:00 < rnsanchez> well maybe there is one. building a symbol-table cache for the known things (a binary of special interest) and then feeding babeltrace through awk, replacing the symbols found with their names<br />10:01 < rnsanchez> some would miss, of course, but perhaps a good amount would help<br />10:46 < Compudj> rnsanchez, milian: currently, babeltrace is a bit "hardwired" to the "ip" context for symbol resolution<br />10:46 < Compudj> but all the infrastructure code is there<br />10:49 < Compudj> see babeltrace: formats/ctf-text/types/integer.c<br />10:49 < Compudj> there is a call to ctf_text_integer_write_debug_info<br />10:50 < Compudj> implemented in include/babeltrace/trace-debug-info.h<br />10:50 < Compudj> it checks if integer_definition->debug_info_src is non-null<br />10:51 < Compudj> this is wired up in lib/debug-info.c register_event_debug_infos()<br />10:51 < Compudj> it is where it is tied to the "ip" context<br />10:51 < Compudj> it should be extended to be tied to the lttng_ust_cyg_profile event fields too</p>
LTTng-tools - Feature #986 (New): Warn when unreasonably short timer values are used (https://bugs.lttng.org/issues/986, 2016-01-05T22:35:59Z, Jérémie Galarneau <jeremie.galarneau@efficios.com>)
<p>Most timer configuration options are expressed in microseconds, which makes it easy for users to specify unreasonably short time intervals. These configuration mistakes result in unusually high tracing overhead and can be quite challenging for inexperienced users to diagnose.</p>
<p>The lttng client should warn when timers are set to very low values.</p>
Userspace RCU - Feature #941 (New): URCU flavor which can be used across processes using shared m... (https://bugs.lttng.org/issues/941, 2015-09-26T16:23:20Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>There appears to be interest in a URCU flavor which can be used across a set of processes communicating through shared memory.</p>
Userspace RCU - Feature #940 (New): Wire up sys membarrier on each architecture (https://bugs.lttng.org/issues/940, 2015-09-26T16:00:41Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
LTTng-tools - Feature #821 (New): trace session name in the metadata (https://bugs.lttng.org/issues/821, 2014-07-22T20:36:26Z, Julien Desfossez <jdesfossez@efficios.com>)
<p>Would it be possible to add the session name to the trace metadata so that we can find it without relying on the trace directory (which can be quite complex)?</p>
LTTng-UST - Feature #483 (New): Use "man 3 backtrace" to dump the stack state at record start (at... (https://bugs.lttng.org/issues/483, 2013-03-26T11:32:44Z, Paul Woegerer <paul_woegerer@mentor.com>)
<p>When an already running application gets traced with liblttng-ust-cyg-profile function entry/exit instrumentation, we should provide a way to reconstruct the stack state at connection time. This can be achieved by using the backtrace feature of glibc.</p>
<p>The following conversation on IRC motivated this feature request:</p>
<p>[09:51] <pwoegere> Compudj: Regarding <a class="external" href="http://git.lttng.org/?p=lttng-ust.git;a=blob;f=liblttng-ust-cyg-profile/lttng-ust-cyg-profile.c;h=d772e76b961a148d19bf04d56ae9481b697d99b5;hb=70d654f22a6b52beddfb86ec3daa453073c356d2#l39">http://git.lttng.org/?p=lttng-ust.git;a=blob;f=liblttng-ust-cyg-profile/lttng-ust-cyg-profile.c;h=d772e76b961a148d19bf04d56ae9481b697d99b5;hb=70d654f22a6b52beddfb86ec3daa453073c356d2#l39</a><br />[09:52] <pwoegere> Compudj: There is a disadvantage not to pass the return address on lttng_ust_cyg_profile:func_exit<br />[09:52] <pwoegere> Compudj: Think about the use case where you start recording in the middle of the application ...<br />[09:53] <pwoegere> Compudj: <br />[09:53] <pwoegere> All the lttng_ust_cyg_profile:func_exit events where<br />[09:53] <pwoegere> there is no corresponding func_entry (because it was emitted before the<br />[09:53] <pwoegere> attach happend) are basically worthless.<br />[09:56] <pwoegere> Compudj: If you also pass the call_site to func_exit to you will have useful func_exit events even when you don't have the corresponding func_entry<br />[11:40] <Compudj> pwoegere: yes, it's a question of trade-off<br />[11:41] <Compudj> pwoegere: is it worth it to almost double the size of the traces (and thus double the throughput needed) in order to handle the few func_exit events that would happen to be there at trace start without matching func_entry ?<br />[11:41] <Compudj> pwoegere: in my opinion, the saving in trace bandwidth is far more important<br />[12:20] <pwoegere> Compudj: We could use something like "man 3 backtrace" to dump the stack state at record start (attach) time. This would allow to reconstruct the missed stack state.<br />[12:24] <Compudj> pwoegere: it sounds like an excellent idea!<br />[12:24] <Compudj> pwoegere: could you open a feature request on bugs.lttng.org along with this reference ?</p>