LTTng bugs repository: Issues
https://bugs.lttng.org/
Babeltrace - Bug #1293 (New): Use after free in sink.ctf.fs finalize
https://bugs.lttng.org/issues/1293
2020-12-02T21:27:04Z
Simon Marchi <simon.marchi@polymtl.ca>
<p>I run this:</p>
<pre>./src/cli/babeltrace2 ~/lttng-traces/auto-20200318-221703 -c sink.ctf.fs -p 'path="/tmp/yo"'</pre>
<p>and interrupt it with ^C while it's running. I get:</p>
<pre>
➜ babeltrace ./src/cli/babeltrace2 ~/lttng-traces/auto-20200318-221703 -c sink.ctf.fs -p 'path="/tmp/yo"'
^C=================================================================
==1611811==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d000001de8 at pc 0x7faa59a98c13 bp 0x7fff9f10b9b0 sp 0x7fff9f10b9a0
READ of size 8 at 0x60d000001de8 thread T0
#0 0x7faa59a98c12 in bt_trace_get_environment_entry_count /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:345
#1 0x7faa5663faed in translate_trace_ctf_ir_to_tsdl /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/translate-ctf-ir-to-tsdl.c:935
#2 0x7faa566496f4 in fs_sink_trace_destroy /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink-trace.c:499
#3 0x7faa598959d1 (/usr/lib/libglib-2.0.so.0+0x3c9d1)
#4 0x7faa5989663a in g_hash_table_remove_all (/usr/lib/libglib-2.0.so.0+0x3d63a)
#5 0x7faa59899d5e in g_hash_table_destroy (/usr/lib/libglib-2.0.so.0+0x40d5e)
#6 0x7faa5662a894 in destroy_fs_sink_comp /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink.c:132
#7 0x7faa5663161b in ctf_fs_sink_finalize /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink.c:1141
#8 0x7faa59a2f50b in finalize_component /home/simark/src/babeltrace/src/lib/graph/component.c:97
#9 0x7faa59a2f87a in destroy_component /home/simark/src/babeltrace/src/lib/graph/component.c:148
#10 0x7faa59a340e2 in bt_object_try_spec_release /home/simark/src/babeltrace/src/lib/object.h:145
#11 0x7faa5987765f (/usr/lib/libglib-2.0.so.0+0x1e65f)
#12 0x7faa59a34ee6 in destroy_graph /home/simark/src/babeltrace/src/lib/graph/graph.c:103
#13 0x7faa59a346af in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#14 0x7faa59a34800 in bt_object_put_ref /home/simark/src/babeltrace/src/lib/object.h:335
#15 0x7faa59a3adb4 in bt_graph_put_ref /home/simark/src/babeltrace/src/lib/graph/graph.c:1331
#16 0x55e2ffb90c67 in cmd_run_ctx_destroy /home/simark/src/babeltrace/src/cli/babeltrace2.c:1685
#17 0x55e2ffb95d9e in cmd_run /home/simark/src/babeltrace/src/cli/babeltrace2.c:2538
#18 0x55e2ffb96a99 in main /home/simark/src/babeltrace/src/cli/babeltrace2.c:2673
#19 0x7faa59696151 in __libc_start_main (/usr/lib/libc.so.6+0x28151)
#20 0x55e2ffb87fdd in _start (/home/simark/build/babeltrace/src/cli/.libs/lt-babeltrace2+0x1ffdd)
0x60d000001de8 is located 104 bytes inside of 144-byte region [0x60d000001d80,0x60d000001e10)
freed by thread T0 here:
#0 0x7faa59c1c0e9 in __interceptor_free /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:123
#1 0x7faa59a97b4a in destroy_trace /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:143
#2 0x7faa59a90621 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#3 0x7faa59a8ff99 in bt_object_with_parent_release_func /home/simark/src/babeltrace/src/lib/object.h:178
#4 0x7faa59a8b329 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#5 0x7faa59a8c2f1 in bt_packet_recycle /home/simark/src/babeltrace/src/lib/trace-ir/packet.c:131
#6 0x7faa59a8b329 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#7 0x7faa59a8b47a in bt_object_put_ref /home/simark/src/babeltrace/src/lib/object.h:335
#8 0x7faa59a8ccc4 in bt_packet_put_ref /home/simark/src/babeltrace/src/lib/trace-ir/packet.c:236
#9 0x7faa56643f48 in fs_sink_stream_destroy /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink-stream.c:39
#10 0x7faa598959d1 (/usr/lib/libglib-2.0.so.0+0x3c9d1)
previously allocated by thread T0 here:
#0 0x7faa59c1c639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7faa598a9641 in g_malloc0 (/usr/lib/libglib-2.0.so.0+0x50641)
#2 0x7faa566568b8 in ctf_fs_trace_create /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:1080
#3 0x7faa566572b0 in ctf_fs_component_create_ctf_fs_trace_one_path /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:1183
#4 0x7faa5665be1d in ctf_fs_component_create_ctf_fs_trace /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2097
#5 0x7faa5665dff0 in ctf_fs_create /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2397
#6 0x7faa5665e172 in ctf_fs_init /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2431
#7 0x7faa59a39c15 in add_component_with_init_method_data /home/simark/src/babeltrace/src/lib/graph/graph.c:1048
#8 0x7faa59a3a2fb in add_source_component_with_initialize_method_data /home/simark/src/babeltrace/src/lib/graph/graph.c:1127
#9 0x7faa59a3a3a2 in bt_graph_add_source_component /home/simark/src/babeltrace/src/lib/graph/graph.c:1152
#10 0x55e2ffb94343 in cmd_run_ctx_create_components_from_config_components /home/simark/src/babeltrace/src/cli/babeltrace2.c:2252
#11 0x55e2ffb94ff7 in cmd_run_ctx_create_components /home/simark/src/babeltrace/src/cli/babeltrace2.c:2347
#12 0x55e2ffb95825 in cmd_run /home/simark/src/babeltrace/src/cli/babeltrace2.c:2461
#13 0x55e2ffb96a99 in main /home/simark/src/babeltrace/src/cli/babeltrace2.c:2673
#14 0x7faa59696151 in __libc_start_main (/usr/lib/libc.so.6+0x28151)
SUMMARY: AddressSanitizer: heap-use-after-free /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:345 in bt_trace_get_environment_entry_count
</pre>
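<p>Per the stack traces, the bt_trace read by bt_trace_get_environment_entry_count has already been destroyed (via the packet/stream reference chain) by the time fs_sink_trace_destroy translates it to TSDL. A minimal sketch of one possible direction, assuming the sink today only keeps a borrowed pointer; the struct layout and function names below are illustrative, not the actual fs-sink-trace.c code:</p>
<pre>
/* Sketch only: have the fs-sink own a strong reference on the trace
 * IR object so stream/packet destruction cannot release the last
 * reference behind the sink's back. */
#include <babeltrace2/babeltrace.h>

struct fs_sink_trace {
	const bt_trace *ir_trace;	/* trace IR object being translated */
	/* ... */
};

static void fs_sink_trace_init(struct fs_sink_trace *trace,
		const bt_trace *ir_trace)
{
	bt_trace_get_ref(ir_trace);	/* own a reference for our lifetime */
	trace->ir_trace = ir_trace;
}

static void fs_sink_trace_fini(struct fs_sink_trace *trace)
{
	/* The trace is guaranteed alive here, so translating it to
	 * TSDL is safe; drop our reference last. */
	bt_trace_put_ref(trace->ir_trace);
}
</pre>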
LTTng-modules - Feature #1265 (New): Turn lttng-probes.c probe_list into a hash table
https://bugs.lttng.org/issues/1265
2020-05-06T17:14:59Z
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

<p>Turns trace system registration from O(n^2) for n systems into O(n) overall, i.e. O(1) per system.</p>
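<p>A minimal sketch of what the lookup could look like with the kernel's linux/hashtable.h; the entry structure and function names are illustrative, not the actual lttng-probes.c symbols:</p>
<pre>
#include <linux/hashtable.h>
#include <linux/jhash.h>
#include <linux/string.h>

/* Fixed-size hash table keyed on the provider name, replacing a
 * linear probe_list scan. */
#define PROBE_HASH_BITS 6
static DEFINE_HASHTABLE(probe_hash, PROBE_HASH_BITS);

struct probe_entry {
	const char *provider_name;
	struct hlist_node node;
};

static u32 probe_key(const char *name)
{
	return jhash(name, strlen(name), 0);
}

static void probe_register(struct probe_entry *e)
{
	hash_add(probe_hash, &e->node, probe_key(e->provider_name)); /* O(1) */
}

static struct probe_entry *probe_find(const char *name)
{
	struct probe_entry *e;

	/* Only walks the one bucket matching the key: O(1) on average. */
	hash_for_each_possible(probe_hash, e, node, probe_key(name)) {
		if (!strcmp(e->provider_name, name))
			return e;
	}
	return NULL;
}
</pre>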
LTTng - Bug #1209 (New): Tracking a PID after start of session results in error on lttng-sessiond
https://bugs.lttng.org/issues/1209
2019-11-18T03:34:17Z
Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>

<p>Against lttng-tools/lttng-ust master:</p>
<pre>
lttng-sessiond
lttng create
lttng enable-event -u -a
lttng start
start app (get PID of said app)
lttng track -u --pid $PID
</pre>
<p>This yields the following error on the sessiond's stdout:</p>
<pre>
Error: Error starting tracing for app pid: 2574 (ret: -1024)
</pre>
<p>Digging a bit more, it seems that the app is told to start tracing even though tracing is already started.</p>
<p>This seems to be a side effect of calling ust_app_global_update from ust_global_update_all from trace_ust_track_pid, taking the true branch in the following function:</p>
<pre>
void ust_app_global_update(struct ltt_ust_session *usess, struct ust_app *app)
{
	assert(usess);
	assert(usess->active);

	DBG2("UST app global update for app sock %d for session id %" PRIu64,
			app->sock, usess->id);

	if (!app->compatible) {
		return;
	}

	if (trace_ust_pid_tracker_lookup(usess, app->pid)) {
		/*
		 * Synchronize the application's internal tracing configuration
		 * and start tracing.
		 */
		ust_app_synchronize(usess, app);
		ust_app_start_trace(usess, app);
	} else {
		ust_app_global_destroy(usess, app);
	}
}
</pre>
<p>We end up "starting" tracing even when it is already the case, the app return -16 on the enable session call.</p>
<pre>
libust[14705/14707]: Message Received "Enable" (128), Handle "session" (1) (in print_cmd() at lttng-ust-comm.c:463)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14707]: Return value: -16 (in handle_message() at lttng-ust-comm.c:1083)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14707]: message successfully sent (in send_reply() at lttng-ust-comm.c:641)
Error: Error starting tracing for app pid: 14705 (ret: -1024)
ok 5 - Track command with opts: 0 -u --vpid 14705
</pre>
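<p>A sketch of the kind of guard that would avoid the redundant start, written against the true branch of the function quoted above; the started flag is hypothetical, standing in for whatever per-app session state lttng-tools actually tracks:</p>
<pre>
if (trace_ust_pid_tracker_lookup(usess, app->pid)) {
	/*
	 * Synchronize the application's internal tracing configuration,
	 * but only start tracing if it is not already running, so the
	 * app never receives a redundant "Enable" command.
	 */
	ust_app_synchronize(usess, app);
	if (!app->started) {	/* hypothetical flag */
		ust_app_start_trace(usess, app);
	}
} else {
	ust_app_global_destroy(usess, app);
}
</pre>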
<p>I did not validate this, but it might have been introduced by 88e3c2f5610b9ac89b0923d448fee34140fc46fb, if it was not already present before.</p>
LTTng-tools - Feature #1197 (New): Use renameat2() to atomically exchange metadata on metadata re...
https://bugs.lttng.org/issues/1197
2019-09-23T16:56:22Z
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

<p>When using shm-path, a use-case is to expect the sessiond or the system to crash at any point during tracing.</p>
<p>If such a crash occurs while regenerating the metadata, lttng-crash may find a truncated file.</p>
<p>It would be nice if we could generate the new metadata in a ".metadata.new" file while keeping the old file around, and then exchange both files using renameat2().</p>
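<p>A minimal sketch of the proposed exchange, assuming Linux >= 3.15 and a glibc that exposes renameat2() (glibc >= 2.28); the paths and the function name are illustrative, and error handling is elided:</p>
<pre>
#define _GNU_SOURCE
#include <fcntl.h>	/* AT_FDCWD, RENAME_* constants */
#include <stdio.h>	/* renameat2() */

/* Write the regenerated metadata to ".metadata.new" first, then
 * atomically swap it with the live "metadata" file: at no point does
 * either name refer to a truncated file. */
static int swap_metadata(const char *dir)
{
	char cur_path[4096], new_path[4096];

	snprintf(cur_path, sizeof(cur_path), "%s/metadata", dir);
	snprintf(new_path, sizeof(new_path), "%s/.metadata.new", dir);

	return renameat2(AT_FDCWD, cur_path, AT_FDCWD, new_path,
			RENAME_EXCHANGE);
}
</pre>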
<p>However, renameat2() appeared in kernel 3.15, so we would have to figure out a fallback.</p>
LTTng - Bug #1195 (Feedback): Userspace tracing issue
https://bugs.lttng.org/issues/1195
2019-08-23T04:53:06Z
Parvataraddy shivaraj

<p>I have compiled lttng-tools, lttng-modules, and lttng-ust for Android (aarch64). LTTng kernel-space tracing is working properly, but userspace tracing is not: it generates the metadata but no trace events. Am I missing anything while compiling? Please find the attached session logs for more details. I used the commands below to generate the userspace trace.</p>
LTTng - Bug #1192 (New): Web documentation improvement: describe use of CREATE_TRACE_POINTS for l...
https://bugs.lttng.org/issues/1192
2019-08-05T14:36:42Z
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

<p>Following a discussion with David Goulet, who is currently instrumenting the tor project with lttng-ust, it appears that the following use-case should be documented more clearly in the web documentation:</p>
<ul>
<li>Instrumentation of applications with LTTng-UST</li>
</ul>
<p>A tracepoint provider header should be included <em>once</em> per application within a compile unit after a #define CREATE_TRACE_POINTS.</p>
<p>It can be included multiple times throughout other compile units of the application, but make sure the CREATE_TRACE_POINTS macro is not defined before those includes, otherwise LTTng-UST will complain about multiple tracepoint probe definitions.</p>
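<p>For reference, LTTng-UST's own names for this pattern are TRACEPOINT_CREATE_PROBES and TRACEPOINT_DEFINE (CREATE_TRACE_POINTS is the kernel's spelling of the same idea). A minimal sketch, with a hypothetical provider header tp.h and illustrative provider/event names:</p>
<pre>
/* tp.c: the ONE compile unit that instantiates the probes. */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "tp.h"

/* main.c (and every other compile unit): include the provider
 * header with NO defines, otherwise the probes are defined twice. */
#include "tp.h"

int main(void)
{
	tracepoint(my_provider, my_event, 42);	/* illustrative event */
	return 0;
}
</pre>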
Babeltrace - Feature #1164 (New): Write a plugin to anonymize traces
https://bugs.lttng.org/issues/1164
2018-05-17T16:14:29Z
Geneviève Bastien <gbastien+lttng@versatic.net>

<p>Here's a feature that was discussed during the last hack-a-thon:</p>
<p>Write a babeltrace plugin that removes all internal information from a trace, so that the trace can be sent out for analysis without exposing that information.</p>
<p>The kinds of information to anonymize (not exhaustive; more thought needs to be put into it):</p>
<ul>
<li>In the metadata: host names</li>
<li>IP addresses: change them to dummy IPs</li>
<li>File names</li>
<li>Process names?</li>
</ul>
<p>The plugin should keep a mapping of the anonymized information so that results can be mapped back to the original data.</p>
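<p>A minimal sketch of such a consistent mapping using GLib (which babeltrace already depends on); the anonymize() helper and the "host-N" scheme are illustrative, not a proposed plugin API:</p>
<pre>
#include <glib.h>
#include <stdio.h>

static GHashTable *anon_map;	/* original string -> anonymized copy */
static unsigned int counter;

/* The same input always yields the same placeholder, so anonymized
 * results can later be mapped back through the stored table. */
static const char *anonymize(const char *original)
{
	char *anon = g_hash_table_lookup(anon_map, original);

	if (!anon) {
		anon = g_strdup_printf("host-%u", counter++);
		g_hash_table_insert(anon_map, g_strdup(original), anon);
	}
	return anon;
}

int main(void)
{
	anon_map = g_hash_table_new_full(g_str_hash, g_str_equal,
			g_free, g_free);
	printf("%s\n", anonymize("prod-db-01"));	/* host-0 */
	printf("%s\n", anonymize("prod-db-01"));	/* host-0 again */
	printf("%s\n", anonymize("web-42"));	/* host-1 */
	g_hash_table_destroy(anon_map);
	return 0;
}
</pre>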
LTTng-tools - Feature #1137 (Confirmed): Version handshake for lttng-consumerd and lttng-sessiond
https://bugs.lttng.org/issues/1137
2017-11-15T23:37:32Z
Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>

Scenario:
<ul>
<li>User installed both a 32-bit and a 64-bit version of lttng-tools 2.X.</li>
<li>User uses, in a script, lttng-sessiond --consumerd32-path= --consumerd64-path=</li>
<li>User updates the 64-bit version to lttng-tools 2.(X+1) without upgrading the 32-bit one.</li>
<li>User still uses their script, but with the lttng-sessiond binary from lttng-tools 2.(X+1).</li>
</ul>
<p>Currently the consumerd and sessiond do not exchange version information, hence we end up with undefined behaviour.</p>
<p>The version numbering might be coupled with the version of lttng, but it is most probably wiser to use a separate versioning.</p>
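<p>A minimal sketch of what such a handshake could look like; this is not the actual lttng-tools wire protocol, just an illustration of refusing to run on a major-version mismatch:</p>
<pre>
#include <stdint.h>
#include <unistd.h>

/* Illustrative handshake message, exchanged first on the
 * sessiond/consumerd socket. */
struct handshake_msg {
	uint32_t major;	/* incompatible protocol changes */
	uint32_t minor;	/* backward-compatible additions */
};

/* Returns 0 if the peer is compatible, -1 otherwise. */
static int check_peer_version(int sock, uint32_t our_major)
{
	struct handshake_msg ours = { .major = our_major, .minor = 0 };
	struct handshake_msg theirs;

	if (write(sock, &ours, sizeof(ours)) != sizeof(ours))
		return -1;
	if (read(sock, &theirs, sizeof(theirs)) != sizeof(theirs))
		return -1;

	/* Refuse to run rather than hit undefined behaviour. */
	return theirs.major == ours.major ? 0 : -1;
}
</pre>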
LTTng-tools - Bug #1102 (In Progress): Trigger conditions are not evaluated on subscription
https://bugs.lttng.org/issues/1102
2017-05-11T22:43:22Z
Jérémie Galarneau <jeremie.galarneau@efficios.com>

<p>Trigger conditions are not evaluated when a client subscribes. In the case of buffer usage conditions, this gives clients no way to know the current state of the buffers.</p>

LTTng-modules - Feature #1067 (New): Update writeback instrumentation for newer kernels
https://bugs.lttng.org/issues/1067
2016-10-07T19:38:23Z
Jérémie Galarneau <jeremie.galarneau@efficios.com>
Babeltrace - Feature #1045 (New): Wire up debug info on lttng_ust_cyg_profile event fields
https://bugs.lttng.org/issues/1045
2016-07-13T14:53:52Z
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

<p>#lttng paste</p>
<pre>
09:56 < rnsanchez> is there a default procedure for "hydrating" instrument-functions traces like this?
09:56 < rnsanchez> [13:54:09.866652414] (+0.000001178) priminho lttng_ust_cyg_profile:func_exit: { cpu_id = 1 }, { addr = 0x46BB90, call_site = 0x46BF7B }
09:56 < rnsanchez> (kind of replacing the addr with their proper symbols)
09:57 < milian> rnsanchez: I'm not an lttng dev, but could imagine that one would be able to write that by analyzing mmap + openat to find the offset into a library, which you can then feed into addr2line, or libdw/libbacktrace
09:59 < rnsanchez> I could propably pass it (babeltrace) through some script to do that. but since the trace is huge (and this is not even a "real" trace), I was wondering if there is a better way to do that
09:59 < milian> I'd also be interested in that
10:00 < rnsanchez> well maybe there is one. building a symbol-table cache for the known things (a binary of special interest) and then feeding babeltrace through awk, replacing the symbols found with their names
10:01 < rnsanchez> some would miss, of course, but perhaps a good amount would help
10:46 < Compudj> rnsanchez, milian: currently, babeltrace is a bit "hardwired" to the "ip" context for symbol resolution
10:46 < Compudj> but all the infrastructure code is there
10:49 < Compudj> see babeltrace: formats/ctf-text/types/integer.c
10:49 < Compudj> there is a call to ctf_text_integer_write_debug_info
10:50 < Compudj> implemented in include/babeltrace/trace-debug-info.h
10:50 < Compudj> it checks if integer_definition->debug_info_src is non-null
10:51 < Compudj> this is wired up in lib/debug-info.c register_event_debug_infos()
10:51 < Compudj> it is where it is tied to the "ip" context
10:51 < Compudj> it should be extended to be tied to the lttng_ust_cyg_profile event fields too
</pre>
Userspace RCU - Feature #941 (New): URCU flavor which can be used across processes using shared m...
https://bugs.lttng.org/issues/941
2015-09-26T16:23:20Z
Mathieu Desnoyers <mathieu.desnoyers@efficios.com>

<p>There appears to be interest in a URCU flavor which can be used across a set of processes communicating through shared memory.</p>
LTTng-tools - Feature #566 (Confirmed): User-space data buffering schemes and the lttng user inte...
https://bugs.lttng.org/issues/566
2013-06-20T20:40:18Z
Daniel U. Thibault <daniel.thibault@drdc-rddc.gc.ca>

<p>Here is a typical log with the current almost-2.2.0-rc3 version of lttng:</p>
<pre>
$ sudo -H lttng create uid -U net://131.132.32.77
Spawning a session daemon
Session uid created.
Traces will be written in net://131.132.32.77
$ sudo -H lttng enable-channel --buffers-uid -u canaluid
UST channel canaluid enabled for session uid
$ sudo -H lttng enable-event -u -a
Error: Events: Buffer type mismatch for session (channel channel0, session uid)
$ sudo -H lttng enable-event -u -a -c canaluid
All UST events are enabled in channel canaluid
$ sudo -H lttng start
Tracing started for session uid
$ sudo -H lttng destroy
</pre>
<p>Once <code>enable-channel --buffers-uid</code> is issued, it is understood that the entire user-space domain will be using per-uid buffers. (This may change with later incarnations of lttng, I presume? Are there plans to allow per-channel control of the buffering schemes?)</p>
<p>The following <code>enable-event -u</code> command tries to create the default channel (<code>channel0</code>) because the user did not specify <code>-c</code>... But why does it try to create that channel using <code>--buffers-pid</code>? The session daemon <em>knows</em> that the user-space channels are now per-uid.</p>
<p>Resolution: The session daemon should switch the channel buffering scheme default of each session to per-uid whenever this is established by the user's first <code>enable-channel</code> or <code>enable-event</code> command. In other words, <code>--buffers-pid</code> should be the default only when the first <code>enable-channel</code> is issued (implicitly or explicitly).</p>
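<p>A sketch of the proposed behaviour; the types and field names are illustrative, not lttng-tools code:</p>
<pre>
/* The session records the buffering scheme once the first channel
 * fixes it; implicitly created channels then inherit it instead of
 * falling back to the per-PID default. */
enum buffer_type { BUF_PER_PID, BUF_PER_UID };

struct ust_session {
	int buffer_type_set;		/* fixed by the first channel? */
	enum buffer_type buffer_type;
};

static enum buffer_type default_buffer_type(const struct ust_session *s)
{
	return s->buffer_type_set ? s->buffer_type : BUF_PER_PID;
}

static void on_channel_created(struct ust_session *s, enum buffer_type t)
{
	if (!s->buffer_type_set) {
		s->buffer_type = t;
		s->buffer_type_set = 1;
	}
}
</pre>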
<p>It would also be appreciated if a user message were issued when that decision (per-pid vs. per-uid) is taken. Thus our previous session would become:</p>
<pre>
$ sudo -H lttng create uid -U net://131.132.32.77
Spawning a session daemon
Session uid created.
Traces will be written in net://131.132.32.77
$ sudo -H lttng enable-channel --buffers-uid -u canaluid
UST channel canaluid enabled for session uid
UST buffering is per UID
$ sudo -H lttng enable-event -u -a
All UST events are enabled in channel channel0
$ sudo -H lttng start
Tracing started for session uid
$ sudo -H lttng destroy
</pre>
LTTng-tools - Bug #561 (Confirmed): Under certain conditions, a user-space trace may overwrite it...
https://bugs.lttng.org/issues/561
2013-06-13T18:14:30Z
Daniel U. Thibault <daniel.thibault@drdc-rddc.gc.ca>

<p>Suppose we do this:</p>
<pre>
$ sudo -H lttng create asession
$ sudo -H lttng enable-event -a -u
$ sudo -H lttng start
</pre>
<p>And suppose we have an application that has been instrumented with some user-space tracepoint provider. Suppose the application's main loop is something like this (borrowed from easy-ust):</p>
<pre>
int main(int argc, char **argv)
{
	int i = 0;
	char themessage[20]; //Can hold up to "Hello World 9999999\0"
	void *libtp_handle;

	libtp_handle = dlopen("./libtp.so", RTLD_LAZY);
	fprintf(stderr, "sample starting\n");
	for (i = 0; i < 10000; i++) {
		if ((i == 3333) && (libtp_handle))
			dlclose(libtp_handle);
		if (i == 6666)
			libtp_handle = dlopen("./libtp.so", RTLD_LAZY);
		sprintf(themessage, "Hello World %u", i);
		tracepoint(sample_component, event, themessage);
		usleep(1);
	}
	fprintf(stderr, "sample done\n");
	if (libtp_handle)
		return dlclose(libtp_handle);
	return 0;
}
</pre>
<p>The trace produced will capture two separate processes: the first one for the app's first 3333 loops, the second for the app's last 3333 loops. This is because the app will register itself as a user-space event source, then withdraw its registration only to later re-register.</p>
<p>As it happens, most of the time the two third-runs will be a second apart, resulting in the trace holding two pid subdirectories: say <code>sample-17541-20130613-135148</code> and <code>sample-17541-20130613-135149</code>. But now and again both processes will fall within the same one-second window, and the trace will thus contain only one pid subdirectory, say <code>sample-17541-20130613-135151</code>. The problem is that the app's first 3333 loops were written to disk and then the last 3333 loops were written to the same file.</p>
<p>This only gets worse if the dlopen/dlclose calls are more tightly packed in time.</p>
<p>The bug boils down to this: once a tracing session detects a new process client, lttng should detect path collisions and correct for them. One solution would be to have a trace's path be:</p>
<pre>
tracepath/ust/pid/process_name-VPID-yyyymmdd-hhmmss[-n]/
</pre>
<p>In my example, the first 3333 loops would go to <code>tracepath/ust/pid/sample-17541-20130613-135151</code> and the last 3333 loops to <code>tracepath/ust/pid/sample-17541-20130613-135151-1</code></p>
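<p>A minimal sketch of the collision check; the function name is illustrative and error handling is elided:</p>
<pre>
#include <stdio.h>
#include <sys/stat.h>

/* If the datetime-based trace directory already exists, append
 * "-1", "-2", ... until the path is unique, as proposed above. */
static void make_unique_trace_path(char *out, size_t len,
		const char *base)
{
	struct stat st;
	int n = 0;

	snprintf(out, len, "%s", base);
	while (stat(out, &st) == 0)	/* path already taken? */
		snprintf(out, len, "%s-%d", base, ++n);
}
</pre>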
<p>I suppose a similar problem can happen with per-uid traces.</p>
LTTng-UST - Feature #483 (New): Use "man 3 backtrace" to dump the stack state at record start (at...
https://bugs.lttng.org/issues/483
2013-03-26T11:32:44Z
Paul Woegerer <paul_woegerer@mentor.com>

<p>When an already-running application gets traced with liblttng-ust-cyg-profile function entry/exit instrumentation, we should provide a way to reconstruct the stack state at connection time. This can be achieved by using the backtrace feature of glibc.</p>
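<p>A minimal sketch of the glibc facility in question; how the frames would actually be emitted as LTTng events is left open, and dump_stack_state() is an illustrative name:</p>
<pre>
#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_FRAMES 64

/* Capture the stack active at attach time so that later func_exit
 * events without a matching func_entry can still be interpreted. */
static void dump_stack_state(void)
{
	void *frames[MAX_FRAMES];
	int n = backtrace(frames, MAX_FRAMES);
	char **symbols = backtrace_symbols(frames, n);
	int i;

	if (!symbols)
		return;
	for (i = 0; i < n; i++)
		fprintf(stderr, "frame %d: %s\n", i, symbols[i]);
	free(symbols);
}
</pre>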
<p>The following conversation on IRC motivated this feature request:</p>
<pre>
[09:51] <pwoegere> Compudj: Regarding http://git.lttng.org/?p=lttng-ust.git;a=blob;f=liblttng-ust-cyg-profile/lttng-ust-cyg-profile.c;h=d772e76b961a148d19bf04d56ae9481b697d99b5;hb=70d654f22a6b52beddfb86ec3daa453073c356d2#l39
[09:52] <pwoegere> Compudj: There is a disadvantage not to pass the return address on lttng_ust_cyg_profile:func_exit
[09:52] <pwoegere> Compudj: Think about the use case where you start recording in the middle of the application ...
[09:53] <pwoegere> Compudj:
[09:53] <pwoegere> All the lttng_ust_cyg_profile:func_exit events where
[09:53] <pwoegere> there is no corresponding func_entry (because it was emitted before the
[09:53] <pwoegere> attach happend) are basically worthless.
[09:56] <pwoegere> Compudj: If you also pass the call_site to func_exit to you will have useful func_exit events even when you don't have the corresponding func_entry
[11:40] <Compudj> pwoegere: yes, it's a question of trade-off
[11:41] <Compudj> pwoegere: is it worth it to almost double the size of the traces (and thus double the throughput needed) in order to handle the few func_exit events that would happen to be there at trace start without matching func_entry ?
[11:41] <Compudj> pwoegere: in my opinion, the saving in trace bandwidth is far more important
[12:20] <pwoegere> Compudj: We could use something like "man 3 backtrace" to dump the stack state at record start (attach) time. This would allow to reconstruct the missed stack state.
[12:24] <Compudj> pwoegere: it sounds like an excellent idea!
[12:24] <Compudj> pwoegere: could you open a feature request on bugs.lttng.org along with this reference ?
</pre>