LTTng bugs repository: Issues
https://bugs.lttng.org/

LTTng-tools - Bug #1266 (Resolved): 2.12 fails to compile with C++ code when using session clear ...
https://bugs.lttng.org/issues/1266 · 2020-05-11 · Shuo Yang

The following newly introduced header files:

- https://github.com/lttng/lttng-tools/blob/stable-2.12/include/lttng/clear.h
- https://github.com/lttng/lttng-tools/blob/stable-2.12/include/lttng/clear-handle.h

miss the closing

```c
#ifdef __cplusplus
}
#endif
```

section, so 2.12 fails to compile in C++ code that uses the session clear feature.
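
For context, a C header meant to be included from C++ normally brackets its declarations with a guard like the following (a generic sketch; the header and function names are illustrative, not the actual contents of clear.h):

```c
/* example.h: illustrative C/C++ compatibility guard, not the real header. */
#ifndef EXAMPLE_H
#define EXAMPLE_H

#ifdef __cplusplus
extern "C" {
#endif

/* Declarations here get C linkage when compiled as C++. */
int example_function(void);

#ifdef __cplusplus
}	/* This closing section is what the headers above are missing. */
#endif

#endif /* EXAMPLE_H */
```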

There might be other headers missing the closing section that I didn't spot; please fix them all together. Thanks!

LTTng-tools - Bug #1241 (Resolved): lttng_destroy_session_no_wait always return LTTNG_ERR_INVALID...
https://bugs.lttng.org/issues/1241 · 2020-03-03 · Shuo Yang

The implementation of lttng_destroy_session_no_wait (https://github.com/lttng/lttng-tools/blob/stable-2.11/src/lib/lttng-ctl/lttng-ctl.c#L2051) always returns LTTNG_ERR_INVALID in 2.11.

```c
int lttng_destroy_session_no_wait(const char *session_name)
{
	enum lttng_error_code ret_code;

	ret_code = lttng_destroy_session_ext(session_name, NULL);
	return ret_code == LTTNG_OK ? ret_code : -ret_code;
}
```

It calls lttng_destroy_session_ext with its second argument, _handle, set to NULL, which trips the validation below and returns LTTNG_ERR_INVALID (https://github.com/lttng/lttng-tools/blob/stable-2.11/src/lib/lttng-ctl/destruction-handle.c#L407):
<pre><code class="c syntaxhl" data-language="c"> <span class="k">if</span> <span class="p">(</span><span class="o">!</span><span class="n">session_name</span> <span class="o">||</span> <span class="o">!</span><span class="n">_handle</span><span class="p">)</span> <span class="p">{</span>
<span class="n">ret_code</span> <span class="o">=</span> <span class="n">LTTNG_ERR_INVALID</span><span class="p">;</span>
<span class="k">goto</span> <span class="n">error</span><span class="p">;</span>
<span class="p">}</span>
</code></pre>
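
A possible workaround until this is fixed is to call lttng_destroy_session_ext directly with a real handle and release it without waiting (a hedged sketch based on the public destruction-handle API; treat the exact flow as an assumption):

```c
#include <lttng/lttng.h>

/* Hedged sketch: destroy a session without waiting for completion by
 * passing a real handle (satisfying the NULL check) and freeing it at once. */
static int destroy_session_no_wait_workaround(const char *session_name)
{
	struct lttng_destruction_handle *handle = NULL;
	enum lttng_error_code ret_code;

	ret_code = lttng_destroy_session_ext(session_name, &handle);
	/* Do not wait on the handle; just release it. */
	lttng_destruction_handle_destroy(handle);
	return ret_code == LTTNG_OK ? 0 : -ret_code;
}
```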

Thus, calling lttng_destroy_session_no_wait never actually destroys the session; until this is fixed, the workaround sketched above can be used.

LTTng-CI - Bug #1140 (Invalid): lttng-ust-java-tests_master_build
https://bugs.lttng.org/issues/1140 · 2017-11-29 · Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>

The lttng-sessiond log should be kept as an artifact for debugging purposes.

Make sure to use a process cleaner either before or after the job; if the job is aborted, the sessiond is left alive.

LTTng-tools - Bug #1071 (Resolved): lttng-mi XSD does not take new 2.9 override options into account
https://bugs.lttng.org/issues/1071 · 2016-10-26 · Jérémie Galarneau <jeremie.galarneau@efficios.com>

The machine interface schema of the lttng client has not been updated to reflect the new options introduced to the save/load commands.

LTTng-tools - Bug #1044 (Resolved): Listing the snapshot output after deleting an output
https://bugs.lttng.org/issues/1044 · 2016-07-11 · Bruno Roy <bruno.roy@ericsson.com>

LTTng version: lttng (LTTng Trace Control) 2.9.0-pre - Codename TBD (I don't have the commit, but this error is also present in 2.8.1 and 2.7.2).
urcu version: 0.10~pre+bzr1197+pack28+201606291832~ubuntu16.04.1

Here are the commands I ran (starting with no tracing session):

```
$ lttng create foo --snapshot
Default snapshot output set to: /home/bruno/lttng-traces/foo-20160711-152523
Snapshot mode set. Every channel enabled for that session will be set to mmap output, and default to overwrite mode.
$ lttng snapshot list-output
Snapshot output list for session foo
  [1] snapshot-1: /home/bruno/lttng-traces/foo-20160711-152523 (max-size: 0)
$ lttng snapshot del-output 1
Snapshot output id 1 successfully deleted for session foo
$ lttng snapshot list-output
Snapshot output list for session foo
  None
$ lttng list
Error: No session daemon is available
Error: Command error
```

On 2.8.1 I also got this error message:

```
lttng-sessiond: main.c:4180: process_client_msg: Assertion `!rcu_read_ongoing()' failed.
```

End result: no session daemon, and the session(s) are destroyed.

LTTng-tools - Bug #1006 (Resolved): Enabling an application context (both JUL and log4j) results ...
https://bugs.lttng.org/issues/1006 · 2016-03-17 · Jérémie Galarneau <jeremie.galarneau@efficios.com>

Enabling an application context in both the log4j and JUL domains results in a confirmation message of the form:

```
UST context $app.myprovider:myshortcontext added to all channels
```

The printed domain is chosen by checking opt_kernel, whereas the exact domain that was selected should be checked.
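
A sketch of the fix the report asks for: derive the printed name from the domain that was actually selected rather than from a kernel-only flag (the function name is illustrative, not the actual client code; the enum values are from the public lttng-ctl headers):

```c
#include <lttng/lttng.h>

/* Illustrative sketch: pick the printed domain name from the selected
 * domain type, instead of only testing an opt_kernel flag. */
static const char *domain_name(enum lttng_domain_type type)
{
	switch (type) {
	case LTTNG_DOMAIN_KERNEL:
		return "Kernel";
	case LTTNG_DOMAIN_UST:
		return "UST";
	case LTTNG_DOMAIN_JUL:
		return "JUL";
	case LTTNG_DOMAIN_LOG4J:
		return "LOG4J";
	default:
		return "Unknown";
	}
}
```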

LTTng-tools - Bug #1005 (Resolved): New trace can't generate in persistent memory file system
https://bugs.lttng.org/issues/1005 · 2016-03-16 · jia fang <fang.jia@windriver.com>

We are using the new LTTng 2.7 feature for recording trace data on persistent memory file systems.

After the device restarts due to a system crash, we can read the crash log in the configured shm-path directory. However, when we then want to record a new trace, the following debug log is reported and the new data cannot be written to the configured shm-path.

```
DEBUG3 - 20:07:54.353680 [286/291]: mkdir() recursive /shm/lttng/mysession-20160302-182255/ust/uid/0/32-bit with mode 504 for uid 0 and gid 0 (in run_as_mkdir_recursive() at runas.c:468)
DEBUG1 - 20:07:54.353726 [286/291]: Using run_as worker (in run_as() at runas.c:449)
DEBUG3 - 20:07:54.354141 [286/291]: open() /shm/lttng/mysession-20160302-182255/ust/uid/0/32-bit/metadata with flags C1 mode 384 for uid 0 and gid 0 (in run_as_open() at runas.c:498)
DEBUG1 - 20:07:54.354202 [286/291]: Using run_as worker (in run_as() at runas.c:449)
PERROR - 20:07:54.354724 [286/291]: Opening metadata file: File exists (in ust_registry_session_init() at ust-registry.c:606)
DEBUG3 - 20:07:54.354801 [286/291]: rmdir_recursive() /shm/lttng/mysession-20160302-182255 with for uid 0 and gid 0 (in run_as_rmdir_recursive() at runas.c:524)
DEBUG1 - 20:07:54.354847 [286/291]: Using run_as worker (in run_as() at runas.c:449)
DEBUG3 - 20:07:54.355554 [287/287]: Attempting rmdir /shm/lttng/mysession-20160302-182255 (in utils_recursive_rmdir() at utils.c:1247)
DEBUG3 - 20:07:54.356905 [286/291]: Buffer registry per UID destroy with id: 0, ABI: 32, uid: 0 (in buffer_reg_uid_destroy() at buffer-registry.c:641)
```
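
The failing open() uses flags C1, which on Linux decodes to O_WRONLY | O_CREAT | O_EXCL, so reopening a metadata file that survived the crash fails with EEXIST, matching the PERROR above. A minimal illustration of that failure mode (the path is a stand-in, not the actual shm-path layout):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	const char *path = "/tmp/metadata"; /* stand-in for the surviving file */

	/* 0xC1 == O_WRONLY | O_CREAT | O_EXCL: exclusive creation fails
	 * with EEXIST when the file already exists. */
	int fd = open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
	if (fd < 0) {
		/* Prints "open: File exists" on the second run. */
		fprintf(stderr, "open: %s\n", strerror(errno));
		return 1;
	}
	close(fd);
	return 0;
}
```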

For detailed information, please see my attachment.

LTTng-tools - Bug #1002 (Resolved): lttng snapshot on an empty tracing session results in an unkn...
https://bugs.lttng.org/issues/1002 · 2016-03-14 · Jérémie Galarneau <jeremie.galarneau@efficios.com>

Recording a snapshot on a session which has recorded no events results in the following output in MI mode:

```
$ lttng --mi=xml snapshot record | xmllint --format -
Error: Unknown error code
Error: Command error
<?xml version="1.0" encoding="UTF-8"?>
<command xmlns="http://lttng.org/xml/ns/lttng-mi" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://lttng.org/xml/ns/lttng-mi http://lttng.org/xml/schemas/lttng-mi/3/lttng-mi-3.0.xsd" schemaVersion="3.0">
  <name>snapshot</name>
  <output>
    <snapshot_action>
      <name>record</name>
      <output/>
    </snapshot_action>
  </output>
  <success>false</success>
</command>
```

(Unknown error code + Command error)

while the human-readable output is:

```
$ lttng snapshot record
Warning: No data available in snapshot
```

Also, the command should not result in an error (a warning is fine), since the user has no control over whether applications (or the kernel) have produced any event between the start of the tracing session and the recording of the snapshot. One way to make both modes consistent is sketched below.
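
A hypothetical sketch of that idea: special-case the no-data code before the generic error path (LTTNG_ERR_SNAPSHOT_NODATA is assumed to be the code behind the warning above; treat its use here as an assumption, not the actual client logic):

```c
#include <lttng/lttng.h>
#include <stdio.h>

/* Hypothetical sketch: report an empty snapshot as a warning and succeed,
 * instead of falling through to the generic "Unknown error code" path. */
static int handle_snapshot_record_ret(int ret)
{
	if (ret == -LTTNG_ERR_SNAPSHOT_NODATA) {
		fprintf(stderr, "Warning: No data available in snapshot\n");
		return 0;
	}
	if (ret < 0) {
		fprintf(stderr, "Error: %s\n", lttng_strerror(ret));
		return ret;
	}
	return 0;
}
```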
<p>With "lttng snapshot record" on a session with UST et kernel buffers, if the writing of one domain takes longer than 1 second, another folder is created when writing the data of the second domain.<br />Reproduced with master.</p>
<p>lttng create --snapshot<br />lttng enable-channel -k bla --subbuf-size 8M --num-subbuf 8<br />lttng enable-event -k -a -c bla<br />lttng enable-event -u -a<br />lttng start<br />... wait for the kernel ring-buffer to be full...<br />lttng snapshot record</p>

If the disk takes longer than 1 second to write the 64 MB, the resulting snapshot looks like this:

```
ls /home/julien/lttng-traces/auto-20151103-174410
snapshot-1-20151103-174443-0/kernel/.....
snapshot-1-20151103-174446-0/ust.....
```

This is confusing because we expect all the traces of one snapshot to be in the same folder; when recording many snapshots, it becomes a mess to link the ones that belong together. A sketch of the expected naming logic follows.
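
Presumably the timestamp in the output directory name is sampled once per domain; sampling it once per snapshot and reusing it for every domain would keep the traces together (a hedged sketch with hypothetical names, not the actual lttng-tools code):

```c
#include <stdio.h>
#include <time.h>

/* Hypothetical sketch: build one datetime suffix per snapshot and reuse it
 * for every domain, instead of reading the clock once per domain. */
static void snapshot_output_path(char *buf, size_t len, const char *base,
		int snapshot_id, time_t snapshot_time, const char *domain)
{
	char datetime[32];

	strftime(datetime, sizeof(datetime), "%Y%m%d-%H%M%S",
			localtime(&snapshot_time));
	snprintf(buf, len, "%s/snapshot-%d-%s-0/%s", base, snapshot_id,
			datetime, domain);
}
```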

LTTng-tools - Bug #936 (Resolved): Disable all event ust
https://bugs.lttng.org/issues/936 · 2015-09-09 · Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>

Disabling all events in the UST domain does not work:

```
lttng create mysession
lttng enable-event test -u
lttng enable-event test2 -u
lttng disable-event -u -a
lttng list mysession
killall lttng-sessiond
```

Result:

```
Spawning a session daemon
Session mysession created.
Traces will be written in /home/jonathan/lttng-traces/mysession-20150909-171434
UST event test created in channel channel0
UST event test2 created in channel channel0
Error: UST event not found
Tracing session mysession: [inactive]
Trace path: /home/jonathan/lttng-traces/mysession-20150909-171434
=== Domain: UST global ===
Buffer type: per UID
Channels:
-------------
- channel0: [enabled]
Attributes:
overwrite mode: 0
subbufers size: 131072
number of subbufers: 4
switch timer interval: 0
read timer interval: 0
trace file count: 0
trace file size (bytes): 0
output: mmap()
Events:
test2 (type: tracepoint) [enabled]
test (type: tracepoint) [enabled]
```

Looks like it works when using "enable-event -u -a":

```
Traces will be written in /home/jonathan/lttng-traces/mysession-20150909-171544
All UST events are enabled in channel channel0
All UST events are disabled in channel channel0
Tracing session mysession: [inactive]
Trace path: /home/jonathan/lttng-traces/mysession-20150909-171544
=== Domain: UST global ===
Buffer type: per UID
Channels:
-------------
- channel0: [enabled]
Attributes:
overwrite mode: 0
subbufers size: 131072
number of subbufers: 4
switch timer interval: 0
read timer interval: 0
trace file count: 0
trace file size (bytes): 0
output: mmap()
Events:
* (type: tracepoint) [disabled]
```

The behaviour in the kernel/python/JUL/log4j domains is to disable all events:

```
Session mysession created.
Traces will be written in /home/jonathan/lttng-traces/mysession-20150909-171705
Kernel event test created in channel channel0
All Kernel events are disabled in channel channel0
Tracing session mysession: [inactive]
Trace path: /home/jonathan/lttng-traces/mysession-20150909-171705
=== Domain: Kernel ===
Channels:
-------------
- channel0: [enabled]
Attributes:
overwrite mode: 0
subbufers size: 262144
number of subbufers: 4
switch timer interval: 0
read timer interval: 200000
trace file count: 0
trace file size (bytes): 0
output: splice()
Events:
test (loglevel: TRACE_EMERG (0)) (type: tracepoint) [disabled]
```

LTTng-tools - Bug #925 (Resolved): Disabling a kernel event disables all kernel events
https://bugs.lttng.org/issues/925 · 2015-09-01 · Jérémie Galarneau <jeremie.galarneau@efficios.com>

LTTng - Bug #914 (Resolved): lttng: Trace is going on even though event has been disabled
https://bugs.lttng.org/issues/914 · 2015-09-01 · jia fang <fang.jia@windriver.com>

Hi guys,

Issue: my trace log keeps going after I disable the events. My detailed log is attached. Please help me, thanks.

lttng version: 2.7.0-pre - Gaia - v2.6.0-rc1-242-g60f7035.

PC is Ubuntu 14.04.

I wrote my app "hello" as http://lttng.org/docs/v2.6/#doc-tracing-your-own-user-application describes, but I modified hello.c as in the attachment.
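
The modified hello.c is in an attachment that is not reproduced here; a minimal instrumented application in the spirit of the linked documentation might look like this (an illustrative sketch, not the reporter's actual file; the provider and tracepoint names match step 2 below, and the field list is assumed):

```c
#include <stdio.h>

/* Tracepoint provider header generated from the provider definition, as in
 * the LTTng-UST documentation; its exact contents are assumed here. */
#include "hello-tp.h"

int main(void)
{
	/* Press Enter to start emitting events (step 4 below). */
	getchar();

	for (int i = 0; i < 100000; i++) {
		tracepoint(hello_world, my_first_tracepoint, i, "hello");
	}
	return 0;
}
```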

To reproduce my issue:

1. lttng create mysession
2. lttng enable-event hello_world:my_first_tracepoint --filter '$ctx.procname == "./hello*"' --session mysession -u -c channel
3. lttng enable-event hello_world:my_first_tracepoint --session mysession -u -c channel
4. ./hello, and press Enter to start the tracepoint loop
5. lttng disable-event --session mysession --channel channel --userspace hello_world:my_first_tracepoint
6. lttng start, and about 4 seconds later, lttng stop
7. lttng view

You can then see that the trace log keeps going even though you disabled the event that had been enabled with a filter.

LTTng-tools - Bug #912 (Resolved): Standalone versioning for mi api (xml)
https://bugs.lttng.org/issues/912 · 2015-08-27 · Jonathan Rajotte Julien <jonathan.rajotte-julien@efficios.com>

The MI should have its own versioning, as it is an API.

Changes in how concepts are represented might force a break in backward compatibility, but the MI should always try to minimize the need for a version bump.

LTTng-tools - Bug #882 (Resolved): destroying one session stops tracing on all other sessions
https://bugs.lttng.org/issues/882 · 2015-03-02 · Anand Neeli <anand.neeli@gmail.com>

With lttng 2.6.0 and liburcu 0.8.6, we see the following issue: on a multi-session setup with relayd, destroying one session prints errors on the console and tracing of all other sessions stops.

Steps to recreate:
1) Create multiple sessions with relayd (in the logs below I created 2 sessions).
2) Destroy one session; tracing on all the other sessions then stops.

(I have not checked this with a single session; it could happen there as well.)

Logs:

```
node-a # lttng list
Available tracing sessions:
  1) mys5 (tcp4://128.0.0.4:5342/ [data: 5343]) [active]
     Trace path: tcp4://128.0.0.4:5342/ [data: 5343]
     Live timer interval (usec): 2000000

  2) mysession (tcp4://128.0.0.4:5342/ [data: 5343]) [active]
     Trace path: tcp4://128.0.0.4:5342/ [data: 5343]
     Live timer interval (usec): 2000000

Use lttng list <session_name> for more details

node-a # lttng list mys5
Tracing session mys5: [active]
  Trace path: tcp4://128.0.0.4:5342/ [data: 5343]

=== Domain: UST global ===

Buffer type: per PID

Channels:
-------------
- myc5: [enabled]

  Attributes:
    overwrite mode: 0
    subbufers size: 4096
    number of subbufers: 4
    switch timer interval: 0
    read timer interval: 0
    trace file count: 2
    trace file size (bytes): 2000000
    output: mmap()

  Events:
    * (type: tracepoint) [enabled] [has exclusions]

node-a # lttng list mysession
Tracing session mysession: [active]
  Trace path: tcp4://128.0.0.4:5342/ [data: 5343]

=== Domain: UST global ===

Buffer type: per PID

Channels:
-------------
- mychannel: [enabled]

  Attributes:
    overwrite mode: 0
    subbufers size: 4096
    number of subbufers: 4
    switch timer interval: 0
    read timer interval: 0
    trace file count: 2
    trace file size (bytes): 2000000
    output: mmap()

  Events:
    * (type: tracepoint) [enabled] [has exclusions]

node-a # lttng destroy mys5    <<<<<<<<<<<<<<<<<<<< destroying session here
Error: Pushing metadata
Error: Handling metadata request
Error: Health error occurred in thread_manage_consumer
Error: Pushing metadata
Error: Handling metadata request
Error: Health error occurred in thread_manage_consumer
PERROR - 04:36:15.258880 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
PERROR - 04:36:15.259520 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
PERROR - 04:36:15.260076 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
Session mys5 destroyed
node-a # PERROR - 04:36:15.106369 [1651/1656]: sendmsg: Broken pipe (in lttcomm_send_unix_sock() at unix.c:218)
Error: Asking metadata to sessiond
[... the same "sendmsg: Broken pipe" / "Error: Asking metadata to sessiond" pair repeats dozens more times ...]

node-a # ps aux | grep lttng
root  1622  0.1  0.0 495020 1160 ?  Ssl  04:34  0:00 lttng-relayd -o /var/tmp/lttng-traces -d
root  1631  0.0  0.1 854020 6360 ?  Ssl  04:34  0:00 lttng-sessiond --consumerd32-path /usr/lib/lttng/libexec/lttng-consumerd --consumerd32-libdir /usr/lib/ --consumerd64-path /usr/lib64/lttng/libexec/lttng-consumerd --consumerd64-libdir /usr/lib64/ -b --no-kernel
root  1643  0.0  0.0 546356 3424 ?  Sl   04:34  0:00 lttng-consumerd -u --consumerd-cmd-sock /var/run/lttng/ustconsumerd64/command --consumerd-err-sock /var/run/lttng/ustconsumerd64/error --group tracing
root  1651  0.0  0.0  64068 1852 ?  Sl   04:34  0:00 lttng-consumerd -u --consumerd-cmd-sock /var/run/lttng/ustconsumerd32/command --consumerd-err-sock /var/run/lttng/ustconsumerd32/error --group tracing
```

LTTng-tools - Feature #808 (Won't fix): Add --all to start command.
https://bugs.lttng.org/issues/808 · 2014-06-19 · Jonathan Rajotte <joraj@efficios.com>

It would be nice to be able to start all created sessions at once.