LTTng bugs repository: Issues
https://bugs.lttng.org/ (last updated 2023-07-23T16:22:58Z)
LTTng-tools - Bug #1383 (New): The "cpu_id" context (available for filters) is not discoverable b...
https://bugs.lttng.org/issues/1383 (2023-07-23T16:22:58Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>I recently tried to remember how to filter by cpu_id when enabling an event, and found that there was no clear way to see from the lttng man pages that <code>$ctx.cpu_id</code> actually exists (other than a random example in the lttng-enable-event man page). AFAIU the man pages rely on <code>lttng add-context --list</code> to let the user discover the available contexts. This is likely because the cpu_id context is not available to the <code>add-context</code> command, since it would be redundant with the implicit context already sampled with the buffers.</p>
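<p>For reference, this is the kind of invocation in question, filtering on the first four logical CPUs; the <code>--userspace</code> and <code>--all</code> choices are illustrative, the filter syntax is per the lttng-enable-event man page:</p>
<pre>
lttng enable-event --userspace --all --filter '$ctx.cpu_id < 4'
</pre>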
<p>We may want to have some way to let users discover those contexts which are filter-specific.</p>
<p>My use case is to grab traces for only a few logical CPUs (0-3) on my 384-logical-CPU machine.</p>

LTTng - Bug #1370 (Confirmed): Why "lttng create --live" spawns a local relay daemon but not in ...
https://bugs.lttng.org/issues/1370 (2023-04-06T15:33:05Z, Bin Yuan)
<p>The relayd spawned by the lttng create command doesn't close file descriptors such as stdout.<br />Why not spawn the relayd with the "--daemonize" option?</p>

LTTng-tools - Bug #1331 (Feedback): test_unix_socket fails for 64 bit arches on alpine linux but ...
https://bugs.lttng.org/issues/1331 (2021-11-02T17:54:39Z, Duncan Bellamy)
<p>Build log for x86_64:</p>
<p><a class="external" href="https://gitlab.alpinelinux.org/a16bitsysop/aports/-/jobs/523491#L2351">https://gitlab.alpinelinux.org/a16bitsysop/aports/-/jobs/523491#L2351</a></p>
<p>FAIL: test_unix_socket 3 - Sent test payload file descriptors</p>
<p>Log:</p>
<pre>
PERROR - 17:52:06.330866344 [70399/70399]: sendmsg: Out of memory (in lttcomm_send_fds_unix_sock() at unix.c:453)
not ok 3 - Sent test payload file descriptors
FAIL: test_unix_socket 3 - Sent test payload file descriptors
# Failed test (test_unix_socket.c:test_high_fd_count() at line 111)
PERROR - 17:52:06.331082468 [70399/70399]: Failed to send test payload file descriptors: ret = -1, expected = 1: Out of memory (in test_high_fd_count() at test_unix_socket.c:114)
</pre>
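<p>For context, here is a minimal sketch of what <code>lttcomm_send_fds_unix_sock()</code> appears to be doing, judging from the <code>sendmsg</code> perror above: passing file descriptors over a connected AF_UNIX socket with an SCM_RIGHTS control message. The helper name <code>send_fds</code> and its exact layout are assumptions, not the actual lttng-tools code:</p>
<pre>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

/* Send `count` file descriptors over a connected AF_UNIX socket. */
static ssize_t send_fds(int sock, const int *fds, size_t count)
{
	char dummy = '!';
	struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };
	/* Control buffer sized for `count` descriptors (VLA for brevity). */
	char cmsg_buf[CMSG_SPACE(sizeof(int) * count)];
	struct msghdr msg;
	struct cmsghdr *cmsg;
	ssize_t ret;

	memset(&msg, 0, sizeof(msg));
	memset(cmsg_buf, 0, sizeof(cmsg_buf));
	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = cmsg_buf;
	msg.msg_controllen = sizeof(cmsg_buf);

	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SCM_RIGHTS;
	cmsg->cmsg_len = CMSG_LEN(sizeof(int) * count);
	memcpy(CMSG_DATA(cmsg), fds, sizeof(int) * count);

	/* This is the call failing with "Out of memory" in the log above. */
	ret = sendmsg(sock, &msg, 0);
	if (ret < 0)
		perror("sendmsg");
	return ret;
}
</pre>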
Babeltrace - Bug #1293 (New): Use after free in sink.ctf.fs finalize
https://bugs.lttng.org/issues/1293 (2020-12-02T21:27:04Z, Simon Marchi <simon.marchi@polymtl.ca>)
<p>I run this:</p>
<pre>./src/cli/babeltrace2 ~/lttng-traces/auto-20200318-221703 -c sink.ctf.fs -p 'path="/tmp/yo"'</pre>
<p>and interrupt it with ^C while it's running. I get:</p>
<pre>
➜ babeltrace ./src/cli/babeltrace2 ~/lttng-traces/auto-20200318-221703 -c sink.ctf.fs -p 'path="/tmp/yo"'
^C=================================================================
==1611811==ERROR: AddressSanitizer: heap-use-after-free on address 0x60d000001de8 at pc 0x7faa59a98c13 bp 0x7fff9f10b9b0 sp 0x7fff9f10b9a0
READ of size 8 at 0x60d000001de8 thread T0
#0 0x7faa59a98c12 in bt_trace_get_environment_entry_count /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:345
#1 0x7faa5663faed in translate_trace_ctf_ir_to_tsdl /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/translate-ctf-ir-to-tsdl.c:935
#2 0x7faa566496f4 in fs_sink_trace_destroy /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink-trace.c:499
#3 0x7faa598959d1 (/usr/lib/libglib-2.0.so.0+0x3c9d1)
#4 0x7faa5989663a in g_hash_table_remove_all (/usr/lib/libglib-2.0.so.0+0x3d63a)
#5 0x7faa59899d5e in g_hash_table_destroy (/usr/lib/libglib-2.0.so.0+0x40d5e)
#6 0x7faa5662a894 in destroy_fs_sink_comp /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink.c:132
#7 0x7faa5663161b in ctf_fs_sink_finalize /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink.c:1141
#8 0x7faa59a2f50b in finalize_component /home/simark/src/babeltrace/src/lib/graph/component.c:97
#9 0x7faa59a2f87a in destroy_component /home/simark/src/babeltrace/src/lib/graph/component.c:148
#10 0x7faa59a340e2 in bt_object_try_spec_release /home/simark/src/babeltrace/src/lib/object.h:145
#11 0x7faa5987765f (/usr/lib/libglib-2.0.so.0+0x1e65f)
#12 0x7faa59a34ee6 in destroy_graph /home/simark/src/babeltrace/src/lib/graph/graph.c:103
#13 0x7faa59a346af in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#14 0x7faa59a34800 in bt_object_put_ref /home/simark/src/babeltrace/src/lib/object.h:335
#15 0x7faa59a3adb4 in bt_graph_put_ref /home/simark/src/babeltrace/src/lib/graph/graph.c:1331
#16 0x55e2ffb90c67 in cmd_run_ctx_destroy /home/simark/src/babeltrace/src/cli/babeltrace2.c:1685
#17 0x55e2ffb95d9e in cmd_run /home/simark/src/babeltrace/src/cli/babeltrace2.c:2538
#18 0x55e2ffb96a99 in main /home/simark/src/babeltrace/src/cli/babeltrace2.c:2673
#19 0x7faa59696151 in __libc_start_main (/usr/lib/libc.so.6+0x28151)
#20 0x55e2ffb87fdd in _start (/home/simark/build/babeltrace/src/cli/.libs/lt-babeltrace2+0x1ffdd)
0x60d000001de8 is located 104 bytes inside of 144-byte region [0x60d000001d80,0x60d000001e10)
freed by thread T0 here:
#0 0x7faa59c1c0e9 in __interceptor_free /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:123
#1 0x7faa59a97b4a in destroy_trace /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:143
#2 0x7faa59a90621 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#3 0x7faa59a8ff99 in bt_object_with_parent_release_func /home/simark/src/babeltrace/src/lib/object.h:178
#4 0x7faa59a8b329 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#5 0x7faa59a8c2f1 in bt_packet_recycle /home/simark/src/babeltrace/src/lib/trace-ir/packet.c:131
#6 0x7faa59a8b329 in bt_object_put_ref_no_null_check /home/simark/src/babeltrace/src/lib/object.h:307
#7 0x7faa59a8b47a in bt_object_put_ref /home/simark/src/babeltrace/src/lib/object.h:335
#8 0x7faa59a8ccc4 in bt_packet_put_ref /home/simark/src/babeltrace/src/lib/trace-ir/packet.c:236
#9 0x7faa56643f48 in fs_sink_stream_destroy /home/simark/src/babeltrace/src/plugins/ctf/fs-sink/fs-sink-stream.c:39
#10 0x7faa598959d1 (/usr/lib/libglib-2.0.so.0+0x3c9d1)
previously allocated by thread T0 here:
#0 0x7faa59c1c639 in __interceptor_calloc /build/gcc/src/gcc/libsanitizer/asan/asan_malloc_linux.cpp:154
#1 0x7faa598a9641 in g_malloc0 (/usr/lib/libglib-2.0.so.0+0x50641)
#2 0x7faa566568b8 in ctf_fs_trace_create /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:1080
#3 0x7faa566572b0 in ctf_fs_component_create_ctf_fs_trace_one_path /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:1183
#4 0x7faa5665be1d in ctf_fs_component_create_ctf_fs_trace /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2097
#5 0x7faa5665dff0 in ctf_fs_create /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2397
#6 0x7faa5665e172 in ctf_fs_init /home/simark/src/babeltrace/src/plugins/ctf/fs-src/fs.c:2431
#7 0x7faa59a39c15 in add_component_with_init_method_data /home/simark/src/babeltrace/src/lib/graph/graph.c:1048
#8 0x7faa59a3a2fb in add_source_component_with_initialize_method_data /home/simark/src/babeltrace/src/lib/graph/graph.c:1127
#9 0x7faa59a3a3a2 in bt_graph_add_source_component /home/simark/src/babeltrace/src/lib/graph/graph.c:1152
#10 0x55e2ffb94343 in cmd_run_ctx_create_components_from_config_components /home/simark/src/babeltrace/src/cli/babeltrace2.c:2252
#11 0x55e2ffb94ff7 in cmd_run_ctx_create_components /home/simark/src/babeltrace/src/cli/babeltrace2.c:2347
#12 0x55e2ffb95825 in cmd_run /home/simark/src/babeltrace/src/cli/babeltrace2.c:2461
#13 0x55e2ffb96a99 in main /home/simark/src/babeltrace/src/cli/babeltrace2.c:2673
#14 0x7faa59696151 in __libc_start_main (/usr/lib/libc.so.6+0x28151)
SUMMARY: AddressSanitizer: heap-use-after-free /home/simark/src/babeltrace/src/lib/trace-ir/trace.c:345 in bt_trace_get_environment_entry_count
</pre>

Babeltrace - Bug #1277 (Confirmed): The `ctf` plugin does not support a negative TSDL clock class...
https://bugs.lttng.org/issues/1277 (2020-07-23T14:08:36Z, Seongab Kim)
<p>Hi,</p>
<p>I have a trace which cannot be opened by babeltrace2, as shown below, but I can open it with Trace Compass.</p>
<pre>
skim@d54030999178:/mnt/ssd/work/skim/traces$ babeltrace2 ./kernel/
</pre>
<pre>
07-22 07:49:20.099 5264 5264 E PLUGIN/CTF/META/IR-VISITOR get_unary_unsigned@visitor-generate-ir.c:800 [auto-disc-source-ctf-fs] At line 40 in metadata stream: Invalid constant unsigned integer.
07-22 07:49:20.099 5264 5264 E PLUGIN/CTF/META/IR-VISITOR visit_clock_decl_entry@visitor-generate-ir.c:4357 [auto-disc-source-ctf-fs] At line 40 in metadata stream: Unexpected unary expression for clock class's `offset` attribute.
07-22 07:49:20.099 5264 5264 E PLUGIN/CTF/META/IR-VISITOR visit_clock_decl@visitor-generate-ir.c:4532 [auto-disc-source-ctf-fs] At line 40 in metadata stream: Cannot visit clock class's entry: ret=-22
07-22 07:49:20.099 5264 5264 E PLUGIN/CTF/META/IR-VISITOR ctf_visitor_generate_ir_visit_node@visitor-generate-ir.c:4775 [auto-disc-source-ctf-fs] At line 41 in metadata stream: Cannot visit clock class: ret=-22
07-22 07:49:20.099 5264 5264 E PLUGIN/CTF/META/DECODER ctf_metadata_decoder_append_content@decoder.c:337 [auto-disc-source-ctf-fs] Failed to visit AST node to create CTF IR objects: mdec-addr=0x22a0d90, ret=-22
07-22 07:49:20.099 5264 5264 E PLUGIN/SRC.CTF.FS/META ctf_fs_metadata_set_trace_class@metadata.c:128 [auto-disc-source-ctf-fs] Cannot update metadata decoder's content.
07-22 07:49:20.122 5264 5264 E PLUGIN/SRC.CTF.FS ctf_fs_component_create_ctf_fs_trace_one_path@fs.c:1206 [auto-disc-source-ctf-fs] Cannot create trace for `/mnt/ssd/work/skim/traces/kernel`.
07-22 07:49:20.123 5264 5264 W LIB/GRAPH add_component_with_init_method_data@graph.c:977 Component initialization method failed: status=ERROR, comp-addr=0x22a68d0, comp-name="auto-disc-source-ctf-fs", comp-log-level=WARNING, comp-class-type=SOURCE, comp-class-name="fs", comp-class-partial-descr="Read CTF traces from the file sy", comp-class-is-frozen=0, comp-class-so-handle-addr=0x22b0de0, comp-class-so-handle-path="/usr/lib/x86_64-linux-gnu/babeltrace2/plugins/babeltrace-plugin-ctf.so", comp-input-port-count=0, comp-output-port-count=0
07-22 07:49:20.123 5264 5264 E CLI cmd_run_ctx_create_components_from_config_components@babeltrace2.c:2301 Cannot create component: plugin-name="ctf", comp-cls-name="fs", comp-cls-type=1, comp-name="auto-disc-source-ctf-fs"
07-22 07:49:20.123 5264 5264 E CLI cmd_run@babeltrace2.c:2480 Cannot create components.
ERROR: [Babeltrace CLI] (babeltrace2.c:2480)
Cannot create components.
CAUSED BY [Babeltrace CLI] (babeltrace2.c:2301)
Cannot create component: plugin-name="ctf", comp-cls-name="fs", comp-cls-type=1, comp-name="auto-disc-source-ctf-fs"
CAUSED BY [libbabeltrace2] (graph.c:977)
Component initialization method failed: status=ERROR, comp-addr=0x22a68d0, comp-name="auto-disc-source-ctf-fs", comp-log-level=WARNING,
comp-class-type=SOURCE, comp-class-name="fs", comp-class-partial-descr="Read CTF traces from the file sy", comp-class-is-frozen=0,
comp-class-so-handle-addr=0x22b0de0, comp-class-so-handle-path="/usr/lib/x86_64-linux-gnu/babeltrace2/plugins/babeltrace-plugin-ctf.so",
comp-input-port-count=0, comp-output-port-count=0
CAUSED BY [auto-disc-source-ctf-fs: 'source.ctf.fs'] (fs.c:1206)
Cannot create trace for `/mnt/ssd/work/skim/traces/kernel`.
</pre>
<p>I'm using the version below.</p>
<pre>
skim@d54030999178:~/work/tmp$ babeltrace2 -V
Babeltrace 2.0.4 "Amqui"
Amqui (/ɑmkwiː/) is a town in eastern Québec, Canada, at the base of the Gaspé peninsula in Bas-Saint-Laurent. Located at the confluence of the Humqui and Matapédia Rivers, its proximity to woodlands makes it a great destination for outdoor activities such as camping, hiking, and mountain biking.
</pre>
<p>Here is the test result which Philippe Proulx requested.</p>
<pre>
skim@d54030999178:~/ssd_work/traces$ babeltrace2 -o ctf-metadata ./kernel | grep -A10 '^clock {'
</pre>
<pre>
clock {
	name = "monotonic";
	uuid = "e00bcef2-1ef1-4f02-a241-8561834511fd";
	description = "Monotonic Clock";
	freq = 1000000000; /* Frequency, in Hz */
	/* clock value offset from Epoch is: offset * (1/freq) */
	offset = -48;
};
</pre>
<p>The error at line 40 of the metadata stream corresponds to the negative <code>offset = -48;</code> attribute shown above: the parser expects an unsigned constant there. With <code>freq = 1000000000</code>, that offset simply places the clock origin 48 ns before the Epoch.</p>

Babeltrace - Bug #1254 (New): Trace with non-monotonic clocks make babeltrace2 abort
https://bugs.lttng.org/issues/1254 (2020-04-07T20:08:37Z, Simon Marchi <simon.marchi@polymtl.ca>)
<p>The trace attached here: <a class="external" href="https://www.eclipse.org/lists/tracecompass-dev/msg01505.html">https://www.eclipse.org/lists/tracecompass-dev/msg01505.html</a><br />... and the trace here: <a class="external" href="https://filebin.net/8bbv15rl60da6s9g/example.tgz?t=o3y9sgrz">https://filebin.net/8bbv15rl60da6s9g/example.tgz?t=o3y9sgrz</a></p>
<p>... both make babeltrace2 abort with:</p>
<pre>
$ ./src/cli/babeltrace2 /home/simark/Downloads/une-trace
04-07 15:43:15.359 2011726 2011726 F LIB/MSG-ITER call_iterator_next_method@iterator.c:815 Babeltrace 2 library postcondition not satisfied; error is:
04-07 15:43:15.359 2011726 2011726 F LIB/MSG-ITER call_iterator_next_method@iterator.c:815 Clock snapshots are not monotonic
04-07 15:43:15.359 2011726 2011726 F LIB/MSG-ITER call_iterator_next_method@iterator.c:815 Aborting...
[1] 2011726 abort (core dumped) ./src/cli/babeltrace2 /home/simark/Downloads/une-trace
</pre>
<p>This should at least be reported as an error.</p>

Babeltrace - Bug #1234 (Feedback): src.text.dmesg: some kernel ring buffer lines can be wrongly s...
https://bugs.lttng.org/issues/1234 (2020-02-17T21:54:00Z, Philippe Proulx <eeppeliteloop@gmail.com>)
<p>The lines of the <code>dmesg</code> command start with a timestamp. The lines are supposed to be in order of time, but some of them can appear out of order.</p>
<p><code>flt.utils.muxer</code> does not like this and complains that event messages are not sorted by their default clock snapshot value.</p>
<p>It is, in fact, a <code>src.text.dmesg</code> bug because a message iterator must emit messages in order of time.</p>
<p>If the input is a file, one solution would be to sort the lines first (if not too large), and then emit the messages in this order.</p>
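<p>A minimal sketch of that "sort first" approach, assuming ring buffer lines of the form <code>[   12.345678] message</code> read from a file on stdin; this is an illustration, not the actual <code>src.text.dmesg</code> code:</p>
<pre>
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct dmesg_line {
	double ts;	/* seconds since boot; 0 if unparseable */
	char *text;
};

/* Note: qsort() is not stable, so equal timestamps may swap. */
static int cmp_ts(const void *a, const void *b)
{
	double ta = ((const struct dmesg_line *) a)->ts;
	double tb = ((const struct dmesg_line *) b)->ts;

	return (ta > tb) - (ta < tb);
}

int main(void)
{
	struct dmesg_line *lines = NULL;
	size_t count = 0, cap = 0;
	char *buf = NULL;
	size_t buf_len = 0;

	/* Read everything first, remembering each line's timestamp. */
	while (getline(&buf, &buf_len, stdin) > 0) {
		if (count == cap) {
			cap = cap ? cap * 2 : 64;
			lines = realloc(lines, cap * sizeof(*lines));
		}
		lines[count].ts = 0;
		sscanf(buf, " [%lf]", &lines[count].ts);
		lines[count].text = strdup(buf);
		count++;
	}

	/* Emit the messages in order of time, as an iterator must. */
	qsort(lines, count, sizeof(*lines), cmp_ts);
	for (size_t i = 0; i < count; i++)
		fputs(lines[i].text, stdout);
	return 0;
}
</pre>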
<p>We could also, in all scenarios, skip the lines with a time that is before the last event message's time and warn accordingly.</p>

LTTng - Bug #1209 (New): Tracking a PID after start of session result in error on lttng-sessiond
https://bugs.lttng.org/issues/1209 (2019-11-18T03:34:17Z, Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>)
<p>Against lttng-tools/lttng-ust master:</p>
<pre>
lttng-sessiond
lttng create
lttng enable-event -u -a
lttng start
start app (get PID of said app)
lttng track -u --pid $PID
</pre>
<p>This yields the following on the sessiond's error output:<br /><pre>
Error: Error starting tracing for app pid: 2574 (ret: -1024)
</pre></p>
<p>When digging a bit more, it seems that the app is told to enable tracing even if it is already started.</p>
<p>This seems to be a side effect of calling ust_app_global_update from ust_global_update_all from trace_ust_track_pid, and taking the true branch in the following code:</p>
<pre>
void ust_app_global_update(struct ltt_ust_session *usess, struct ust_app *app)
{
	assert(usess);
	assert(usess->active);

	DBG2("UST app global update for app sock %d for session id %" PRIu64,
			app->sock, usess->id);

	if (!app->compatible) {
		return;
	}
	if (trace_ust_pid_tracker_lookup(usess, app->pid)) {
		/*
		 * Synchronize the application's internal tracing configuration
		 * and start tracing.
		 */
		ust_app_synchronize(usess, app);
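		/*
		 * Runs even when tracing was already started for this
		 * app; the app then replies -16 to the enable command
		 * (see the log below).
		 */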
		ust_app_start_trace(usess, app);
	} else {
		ust_app_global_destroy(usess, app);
	}
}
</pre>
<p>We end up "starting" tracing even when that is already the case, and the app returns -16 on the enable session call.</p>
<pre>
libust[14705/14707]: Message Received "Enable" (128), Handle "session" (1) (in print_cmd() at lttng-ust-comm.c:463)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14707]: Return value: -16 (in handle_message() at lttng-ust-comm.c:1083)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14706]: Info: sessiond not accepting connections to global apps socket (in ust_listener_thread() at lttng-ust-comm.c:1546)
libust[14705/14706]: Waiting for global apps sessiond (in wait_for_sessiond() at lttng-ust-comm.c:1427)
libust[14705/14707]: message successfully sent (in send_reply() at lttng-ust-comm.c:641)
Error: Error starting tracing for app pid: 14705 (ret: -1024)
ok 5 - Track command with opts: 0 -u --vpid 14705
</pre>
<p>I did not validate this, but it might have been introduced by 88e3c2f5610b9ac89b0923d448fee34140fc46fb, if it was not already present before.</p>

LTTng - Bug #1207 (Confirmed): Tools 2.11 fails on destroy for lttng-modules 2.9
https://bugs.lttng.org/issues/1207 (2019-11-05T22:19:08Z, Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>)
<p>The following test from lttng-ivc is failing: test_modules_base_tracing[lttng-modules-2.9-lttng-tools-2.11]</p>
<p>This can be reproduced with the head of lttng-ivc by running:</p>
<pre>
sudo tox -- -k test_modules_base_tracing[lttng-modules-2.9-lttng-tools-2.11]
</pre>
<p>Relevant part so far from lttng-sessiond verbose mode:<br /><pre>
DEBUG1 - 16:20:40.713886603 [10006/10153]: Begin destroy session trace (id 0) (in cmd_destroy_session() at cmd.c:3176)
DEBUG1 - 16:20:40.713892241 [10006/10153]: Rotate kernel session trace started (session 0) (in kernel_rotate_session() at kernel.c:1445)
DEBUG1 - 16:20:40.713896285 [10006/10153]: Rotate kernel channel 1, session trace (in kernel_rotate_session() at kernel.c:1460)
DEBUG1 - 16:20:40.713899898 [10006/10153]: Consumer rotate channel key 1 (in consumer_rotate_channel() at consumer.c:1694)
DEBUG1 - 16:20:40.713913220 [10164/10173]: Incoming command on sock (in consumer_thread_sessiond_poll() at consumer.c:3434)
DEBUG1 - 16:20:40.713925589 [10164/10173]: Consumer rotate channel 1 (in lttng_kconsumer_recv_cmd() at kernel-consumer.c:1124)
DEBUG1 - 16:20:40.713932035 [10164/10173]: Consumer sample rotate position for channel 1 (in lttng_consumer_rotate_channel() at consumer.c:4064)
Error: Failed to sample snapshot position during channel rotation
Error: Rotate channel failed
DEBUG1 - 16:20:40.713947543 [10164/10173]: Consumer rotate ready streams in channel 1 (in lttng_consumer_rotate_ready_streams() at consumer.c:4400)
DEBUG1 - 16:20:40.713951227 [10006/10153]: Consumer ret code -121 (in consumer_recv_status_reply() at consumer.c:208)
DEBUG1 - 16:20:40.713953876 [10164/10173]: received command on sock (in consumer_thread_sessiond_poll() at consumer.c:3450)
DEBUG1 - 16:20:40.713964554 [10006/10153]: Sending consumer close trace chunk command: relayd_id = -1, session_id = 0, chunk_id = 0, close command = "none" (in consumer_close_trace_chunk() at consumer.c:1970)
DEBUG1 - 16:20:40.713976173 [10164/10173]: Incoming command on sock (in consumer_thread_sessiond_poll() at consumer.c:3434)
DEBUG1 - 16:20:40.713985029 [10164/10173]: Consumer close trace chunk command: relayd_id = (none), session_id = 0, chunk_id = 0, close command = none (in lttng_consumer_close_trace_chunk() at consumer.c:4677)
DEBUG1 - 16:20:40.713999876 [10164/10173]: received command on sock (in consumer_thread_sessiond_poll() at consumer.c:3450)
Error: Failed to perform a quiet rotation as part of the destruction of session "trace": Unknown error code
DEBUG1 - 16:20:40.714017305 [10006/10153]: Tearing down kernel session (in kernel_destroy_session() at kernel.c:1199)
DEBUG1 - 16:20:40.714021825 [10006/10153]: [trace] Closing session fd 66 (in trace_kernel_destroy_session() at trace-kernel.c:689)
DEBUG1 - 16:20:40.714026143 [10006/10153]: [trace] Closing metadata stream fd 82 (in trace_kernel_destroy_session() at trace-kernel.c:699)
DEBUG1 - 16:20:40.714030026 [10006/10153]: [trace] Closing metadata fd 81 (in trace_kernel_destroy_metadata() at trace-kernel.c:662)
DEBUG1 - 16:20:40.714038288 [10006/10153]: [trace] Closing channel fd 73 (in trace_kernel_destroy_channel() at trace-kernel.c:616)
DEBUG1 - 16:20:40.714042757 [10006/10153]: [trace] Closing stream fd 77 (in trace_kernel_destroy_stream() at trace-kernel.c:544)
DEBUG1 - 16:20:40.714046626 [10006/10153]: [trace] Closing stream fd 76 (in trace_kernel_destroy_stream() at trace-kernel.c:544)
DEBUG1 - 16:20:40.714050245 [10006/10153]: [trace] Closing stream fd 75 (in trace_kernel_destroy_stream() at trace-kernel.c:544)
DEBUG1 - 16:20:40.714053833 [10006/10153]: [trace] Closing stream fd 65 (in trace_kernel_destroy_stream() at trace-kernel.c:544)
DEBUG1 - 16:20:40.714057474 [10006/10153]: [trace] Closing event fd 74 (in trace_kernel_destroy_event() at trace-kernel.c:570)
</pre></p>
<p>See attached tar.gz for more context.</p>

LTTng - Bug #1195 (Feedback): Userspace tracing issue
https://bugs.lttng.org/issues/1195 (2019-08-23T04:53:06Z, Parvataraddy shivaraj)
<p>I have compiled lttng-tools, lttng-modules, and lttng-ust for Android (aarch64). LTTng kernel-space tracing works properly, but user-space tracing does not: it generates the metadata but no trace events. Am I missing anything while compiling? Please find the attached session logs for more details. I used the commands below to generate the user-space trace.</p>

LTTng - Bug #1192 (New): Web documentation improvement: describe use of CREATE_TRACE_POINTS for l...
https://bugs.lttng.org/issues/1192 (2019-08-05T14:36:42Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>Following discussion with David Goulet who is currently instrumenting the tor project with lttng-ust, it appears that the following use-case should be documented more clearly in the web documentation:</p>
<ul>
<li>Instrumentation of applications with LTTng-UST</li>
</ul>
<p>A tracepoint provider header should be included <em>once</em> per application, within a single compile unit, after a <code>#define CREATE_TRACE_POINTS</code>.</p>
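<p>A minimal sketch of the pattern (note that in lttng-ust itself the macros are spelled <code>TRACEPOINT_DEFINE</code> and <code>TRACEPOINT_CREATE_PROBES</code>; the provider name <code>my_provider</code> and the header name <code>tp.h</code> are placeholders):</p>
<pre>
/* tp.c: the single compile unit that emits the probe definitions. */
#define TRACEPOINT_CREATE_PROBES
#define TRACEPOINT_DEFINE
#include "tp.h"
</pre>
<p>Every other compile unit includes the same header without those defines, as described below, and calls <code>tracepoint(my_provider, my_event, ...)</code> as usual.</p>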
<p>It can be included multiple times throughout the other compile units of the application, but make sure the CREATE_TRACE_POINTS macro is not defined before those includes, otherwise LTTng-UST will complain about multiple tracepoint probe definitions.</p>

LTTng-tools - Bug #1105 (New): The trigger API should warn the client when an unsupported conditi...
https://bugs.lttng.org/issues/1105 (2017-05-12T19:11:03Z, Jérémie Galarneau <jeremie.galarneau@efficios.com>)
<p>Clients should be warned whenever they register a trigger that won't be evaluated. For the moment, the main use case is warning a client if a buffer usage condition is used and the lttng-modules version being used is older than 2.10.</p>

LTTng-tools - Bug #1102 (In Progress): Trigger conditions are not evaluated on subscription
https://bugs.lttng.org/issues/1102 (2017-05-11T22:43:22Z, Jérémie Galarneau <jeremie.galarneau@efficios.com>)
<p>Trigger conditions are not evaluated when a client subscribes. In the case of buffer usage conditions, this gives clients no way to know the current state of the buffers.</p>

LTTng-tools - Bug #1059 (Confirmed): The save and load commands do not use the same default home ...
https://bugs.lttng.org/issues/1059 (2016-08-25T13:28:24Z, Philippe Proulx <eeppeliteloop@gmail.com>)
<p>The <code>save</code> command does not consider the <code>LTTNG_HOME</code> environment variable (nor <code>HOME</code>) because it uses <code>utils_get_user_home_dir()</code>, whereas the <code>load</code> command uses <code>utils_get_home_dir()</code>.</p>
<p>Therefore if <code>LTTNG_HOME</code> is set (or if <code>$HOME</code> has a different value than the entry in <code>/etc/passwd</code>), a <code>save</code> and <code>load</code> sequence does not find the session configuration file.</p>
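<p>For illustration, the lookup order described above for the <code>load</code> side boils down to something like this; <code>get_home_dir</code> is a hypothetical stand-in, not the actual <code>utils_get_home_dir()</code> code:</p>
<pre>
#include <stdlib.h>

/* Resolve the home directory: LTTNG_HOME takes precedence over HOME. */
static const char *get_home_dir(void)
{
	const char *val = getenv("LTTNG_HOME");

	return val ? val : getenv("HOME");
}
</pre>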
<p>To remain consistent, I think the value of <code>utils_get_home_dir()</code> should be sent from the client to the session daemon at save time, so that, from the user's perspective, both commands are synchronized on the same environment variables.</p>
<p>Use case: this bug makes the save/load operations impossible to do in a virtual environment.</p>

LTTng-tools - Bug #561 (Confirmed): Under certain conditions, a user-space trace may overwrite it...
https://bugs.lttng.org/issues/561 (2013-06-13T18:14:30Z, Daniel U. Thibault <daniel.thibault@drdc-rddc.gc.ca>)
<p>Suppose we do this:</p>
<pre>
$ sudo -H lttng create asession
$ sudo -H lttng enable-event -a -u
$ sudo -H lttng start
</pre>
<p>And suppose we have an application that has been instrumented with some user-space tracepoint provider. Suppose the application's main loop is something like this (borrowed from easy-ust):</p>
<pre>
/* Includes and probe-linkage defines added for completeness; they are
 * implied by the original report, not part of it. */
#include <stdio.h>
#include <dlfcn.h>
#include <unistd.h>

#define TRACEPOINT_DEFINE
#define TRACEPOINT_PROBE_DYNAMIC_LINKAGE
#include "tp.h"	/* the sample_component provider header from easy-ust */

int main(int argc, char **argv)
{
	int i = 0;
	char themessage[20];	/* Can hold up to "Hello World 9999999\0" */
	void *libtp_handle;

	libtp_handle = dlopen("./libtp.so", RTLD_LAZY);
	fprintf(stderr, "sample starting\n");
	for (i = 0; i < 10000; i++) {
		/* Unload, then reload, the tracepoint provider mid-run. */
		if ((i == 3333) && (libtp_handle))
			dlclose(libtp_handle);
		if (i == 6666)
			libtp_handle = dlopen("./libtp.so", RTLD_LAZY);
		sprintf(themessage, "Hello World %u", i);
		tracepoint(sample_component, event, themessage);
		usleep(1);
	}
	fprintf(stderr, "sample done\n");
	if (libtp_handle)
		return dlclose(libtp_handle);
	return 0;
}
</pre>
<p>The trace produced will capture two separate processes: the first one for the app's first 3333 loops, the second for the app's last 3333 loops. This is because the app will register itself as a user-space event source, then withdraw its registration only to later re-register.</p>
<p>As it happens, most of the time the two one-third runs will be a second apart, resulting in the trace holding two pid subdirectories: say <code>sample-17541-20130613-135148</code> and <code>sample-17541-20130613-135149</code>. But now and again both processes will fall within the same one-second window, and the trace will thus contain only one pid subdirectory, say <code>sample-17541-20130613-135151</code>; the problem is that the app's first 3333 loops were written to disk and then the last 3333 loops were written to the same file.</p>
<p>This only gets worse if the dlopen/dlclose calls are more tightly packed in time.</p>
<p>The bug boils down to this: once a tracing session detects a new process client, lttng should detect path collisions and correct for them. One solution would be to have a trace's path be:</p>
<pre>
tracepath/ust/pid/process_name-VPID-yyyymmdd-hhmmss[-n]/
</pre>
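<p>A sketch of how that <code>[-n]</code> suffix could be picked atomically, using <code>mkdir</code>'s <code>EEXIST</code> as the collision detector; the names here are illustrative, not lttng-tools code:</p>
<pre>
#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

/* Create a fresh trace directory, appending "-n" on collision. */
static int create_trace_dir(char *path, size_t len, const char *base)
{
	for (int n = 0; ; n++) {
		if (n == 0)
			snprintf(path, len, "%s", base);
		else
			snprintf(path, len, "%s-%d", base, n);
		if (mkdir(path, 0770) == 0)
			return 0;	/* got an unused directory */
		if (errno != EEXIST)
			return -1;	/* real error */
	}
}
</pre>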
<p>In my example, the first 3333 loops would go to <code>tracepath/ust/pid/sample-17541-20130613-135151</code> and the last 3333 loops to <code>tracepath/ust/pid/sample-17541-20130613-135151-1</code></p>
<p>I suppose a similar problem can happen with per-uid traces.</p>