LTTng bugs repository: Issues
https://bugs.lttng.org/ (feed generated 2023-07-12T14:04:04Z)
Babeltrace - Bug #1382 (Resolved): stable-2.0 branch fails to build
https://bugs.lttng.org/issues/1382 (2023-07-12T14:04:04Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>The stable-2.0 branch at commit 375847ee0df2f fails to build on my Debian laptop:</p>
<pre>
make[2]: Entering directory '/home/compudj/git/babeltrace/src/ctf-writer'
CC trace.lo
In file included from object-pool.h:54,
from clock-class.h:29,
from trace.c:43:
In function 'bt_ctf_object_set_parent',
inlined from 'bt_ctf_object_set_parent' at object.h:120:6,
inlined from 'bt_ctf_trace_common_add_stream_class' at trace.c:1243:3:
object.h:141:26: error: null pointer dereference [-Werror=null-dereference]
141 | if (child->parent) {
| ~~~~~^~~~~~~~
object.h:141:26: error: null pointer dereference [-Werror=null-dereference]
</pre>
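For context, the diagnostic stems from GCC 12's stronger interprocedural -Wnull-dereference analysis: once the setter is inlined into a caller where the compiler can see a path on which the object pointer is NULL, the child->parent load gets flagged. A minimal sketch of the shape of code involved (illustrative names only, not the actual babeltrace source; whether GCC actually warns depends on its inlining decisions):

```c
#include <stddef.h>

/* Illustrative stand-ins for the babeltrace object types; the real
 * bt_ctf_object_set_parent() in object.h is more involved. */
struct obj {
	struct obj *parent;
};

static inline void set_parent(struct obj *child, struct obj *parent)
{
	/* The load GCC complains about: on an inlined path where the
	 * compiler believes child can be NULL, this reads NULL->parent. */
	if (child->parent)
		child->parent = NULL;
	child->parent = parent;
}

int add_stream_class(struct obj *trace, struct obj *sc)
{
	if (!sc)	/* tells GCC that callers may pass NULL */
		return -1;
	/* After inlining set_parent() here, GCC 12 reasons about the
	 * NULL comparison above and may report -Wnull-dereference on
	 * the child->parent read. */
	set_parent(sc, trace);
	return 0;
}
```

With -Werror such a warning becomes a hard build failure, which matches the report above.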
<p>with gcc version 12.2.0 (Debian 12.2.0-14)</p>

Babeltrace - Bug #1376 (Resolved): Babeltrace master make install rebuilds objects
https://bugs.lttng.org/issues/1376 (2023-05-25T14:38:53Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>When doing the following:</p>
<p>as user:</p>
<p>cloning a pristine babeltrace master branch,<br />./bootstrap<br />./configure<br />make -j16</p>
<p>then as root (going to root with su):<br />make install</p>
<p>I notice that the "make install" rebuilds some objects, which is unexpected, e.g.:</p>
<p>src/lib/babeltrace2.o is now owned by "root:root".</p>

LTTng-tools - Bug #1313 (Resolved): lttng-modules warnings and test hang in tools/clear/test_kernel
https://bugs.lttng.org/issues/1313 (2021-04-29T15:11:47Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>I observe the following behavior once in a while when running the kernel "clear" tests repeatedly:</p>
<p>1) warning in get_subbuf ioctl, which seems to point to lack of stream user-space locking in the consumer daemon:</p>
<p>That was with lttng-tools commit:</p>
<p>commit 60860e547ce31ea629e846e00b66342425474b8d<br />Author: Jérémie Galarneau <<a class="email" href="mailto:jeremie.galarneau@efficios.com">jeremie.galarneau@efficios.com</a>><br />Date: Fri Apr 23 21:28:58 2021 -0400</p>
<pre><code>Update version to v2.13.0-rc1</code></pre>
<p>and lttng-modules commit:</p>
<p>commit 2dc781e02eb156a76554ada092a181ab2916db57<br />Author: Mathieu Desnoyers <<a class="email" href="mailto:mathieu.desnoyers@efficios.com">mathieu.desnoyers@efficios.com</a>><br />Date: Wed Apr 28 16:26:20 2021 -0400</p>
<pre><code>Refactoring: context callbacks</code></pre>
<pre>
517716.168856] ------------[ cut here ]------------
[517716.171559] WARNING: CPU: 1 PID: 19313 at /home/efficios/git/lttng-modules/src/lib/ringbuffer/ring_buffer_frontend.c:1263 lib_ring_buffer_get_subbuf+0x24f/0x260 [lttng_lib_ring_buffer]
[517716.180096] Modules linked in: lttng_test(O) lttng_probe_x86_exceptions(O) lttng_probe_x86_irq_vectors(O) lttng_probe_writeback(O) lttng_probe_workqueue(O) lttng_probe_vmscan(O) lttng_probe_udp(O) lttng_probe_timer(O) lttng_probe_sunrpc(O) lttng_probe_statedump(O) lttng_probe_sock(O) lttng_probe_skb(O) lttng_probe_signal(O) lttng_probe_scsi(O) lttng_probe_sched(O) lttng_probe_regulator(O) lttng_probe_regmap(O) lttng_probe_rcu(O) lttng_probe_random(O) lttng_probe_printk(O) lttng_probe_power(O) lttng_probe_net(O) lttng_probe_napi(O) lttng_probe_module(O) lttng_probe_kmem(O) lttng_probe_jbd2(O) lttng_probe_irq(O) lttng_probe_i2c(O) lttng_probe_gpio(O) lttng_probe_ext4(O) lttng_probe_compaction(O) lttng_probe_btrfs(O) lttng_probe_block(O) lttng_counter_client_percpu_32_modular(O) lttng_counter_client_percpu_64_modular(O) lttng_counter(O) lttng_ring_buffer_event_notifier_client(O) lttng_ring_buffer_metadata_mmap_client(O) lttng_ring_buffer_client_mmap_overwrite(O)
[517716.180815] lttng_ring_buffer_client_mmap_discard(O) lttng_ring_buffer_metadata_client(O) lttng_ring_buffer_client_overwrite(O) lttng_ring_buffer_client_discard(O) lttng_tracer(O) lttng_statedump(O) lttng_wrapper(O) lttng_uprobes(O) lttng_clock(O) lttng_kprobes(O) lttng_lib_ring_buffer(O) lttng_kretprobes(O) [last unloaded: lttng_wrapper]
[517716.213228] CPU: 1 PID: 19313 Comm: lttng-consumerd Tainted: G O 5.11.2 #80
[517716.215573] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
[517716.220341] RIP: 0010:lib_ring_buffer_get_subbuf+0x24f/0x260 [lttng_lib_ring_buffer]
[517716.222579] Code: 50 f0 ff 00 0f 0b 49 03 5f 28 44 8b 85 80 00 00 00 49 8b 77 30 45 85 c0 48 89 d9 0f 85 4f ff ff ff e9 25 ff ff ff f0 ff 45 00 <0f> 0b b8 f0 ff ff ff e9 a6 fe ff ff 0f 1f 44 00 00 0f 1f 44 00 00
[517716.227610] RSP: 0018:ffffbafd09023e88 EFLAGS: 00010202
[517716.229147] RAX: 0000000000000000 RBX: ffff9906ed069a00 RCX: ffff9905c4e4a400
[517716.231186] RDX: ffffdafcffc4ec90 RSI: 0000000000200000 RDI: ffffdafcffc4e9f0
[517716.233265] RBP: ffff9905c4e4a400 R08: 0000000000300000 R09: 0000000000200000
[517716.235284] R10: 0000000000200000 R11: 0000000000000000 R12: 0000000000000000
[517716.237333] R13: 0000000000000000 R14: 000000000000005e R15: 0000000000000000
[517716.239360] FS: 00007ff327fff700(0000) GS:ffff9905a7a40000(0000) knlGS:0000000000000000
[517716.241634] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[517716.243262] CR2: 000055b91cf07730 CR3: 000000076f45e003 CR4: 00000000001706e0
[517716.245316] Call Trace:
[517716.246281] lib_ring_buffer_ioctl+0x181/0x300 [lttng_lib_ring_buffer]
[517716.248301] lttng_stream_ring_buffer_ioctl+0x1a3/0x200 [lttng_tracer]
[517716.252621] __x64_sys_ioctl+0x8e/0xd0
[517716.253931] do_syscall_64+0x33/0x80
[517716.255016] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[517716.256448] RIP: 0033:0x7ff3372f46d7
[517716.257586] Code: b3 66 90 48 8b 05 b1 47 2d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 81 47 2d 00 f7 d8 64 89 01 48
[517716.262472] RSP: 002b:00007ff327ffe2c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[517716.264634] RAX: ffffffffffffffda RBX: 00007ff310002740 RCX: 00007ff3372f46d7
[517716.266674] RDX: 0000000000000000 RSI: 000000000000f605 RDI: 000000000000005e
[517716.268713] RBP: 00007ff327ffe310 R08: 00007ff310002870 R09: a002000000000000
[517716.270732] R10: 000055992b0a6530 R11: 0000000000000246 R12: 000055992c47da70
[517716.272768] R13: 00007ff318005e80 R14: 00007ff310002740 R15: 000055992b0a6528
[517716.274800] irq event stamp: 4526705
[517716.275897] hardirqs last enabled at (4526713): [<ffffffff9016a474>] console_unlock+0x4b4/0x5b0
[517716.278320] hardirqs last disabled at (4526722): [<ffffffff9016a3d0>] console_unlock+0x410/0x5b0
[517716.280759] softirqs last enabled at (4526658): [<ffffffff9120030f>] __do_softirq+0x30f/0x432
[517716.285125] softirqs last disabled at (4526653): [<ffffffff91001052>] asm_call_irq_on_stack+0x12/0x20
[517716.287648] ---[ end trace 506e55b312b731bf ]---
</pre>
<p>Along with that warning, there were 2 tests that failed with a make check run as root:</p>
<p>FAIL: tools/clear/test_kernel 727 - Read a total of 1 events, expected 6<br />FAIL: tools/clear/test_kernel 728 - Destroy session bVV9ffZJ1CoPEgd1</p>
<p>When re-running tools/clear/test_kernel in a loop, I notice that the test hangs once in a while at:</p>
<pre>
# Test kernel streaming live clear
# Parameters: tracing_active=0, clear_twice=0
ok 350 - Create session uJke1Ag0zql0MAtC with uri:net://localhost and opts: --live
ok 351 - Enable kernel event lttng_test_filter_event for session uJke1Ag0zql0MAtC
ok 352 - Start tracing for session uJke1Ag0zql0MAtC
ok 353 - Stop lttng tracing for session uJke1Ag0zql0MAtC
ok 354 - Clear session uJke1Ag0zql0MAtC
</pre>
<p>Once it happened on the first run; the next time, it took 36 runs to hang.</p>

Userspace RCU - Bug #1311 (Invalid): test_urcu hang in CI armhf
https://bugs.lttng.org/issues/1311 (2021-04-22T15:03:45Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>On armhf, we got a hang of test_urcu:</p>
<p>urcu_1_seconds.tap 58 - ./test_urcu 2 2 1 -d 2 -b 32768</p>
<p>We should try running this test in a loop on this worker to figure out if we can reproduce. We should grab info about the environment (kernel version).</p>

LTTng-UST - Bug #1306 (Resolved): Detect probe providers built against old lttng-ust (.so.0) in l...
https://bugs.lttng.org/issues/1306 (2021-04-13T20:00:35Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>We should ensure the new lttng-ust (.so.1) refuses old probe providers. Likewise for old tracepoint instrumentation.</p>
<p>We should also check whether it might be possible for an application and its shared libraries to end up linking against both .so.0 and .so.1 within the same process, for instance if only some of them are rebuilt. If this can happen, we should provide some mechanism to detect it.</p>

LTTng-tools - Bug #1292 (Resolved): new lttng_pgrep utils.sh test helper introduces errors in nor...
https://bugs.lttng.org/issues/1292 (2020-11-30T18:35:38Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>Since introduction of commit:</p>
<pre>
commit 7cb78e2f73ef7bc0cfedef707f47f1c229bb4c43
Author: Jonathan Rajotte <jonathan.rajotte-julien@efficios.com>
Date: Fri May 22 10:36:46 2020 -0400
Fix: tests: `pgrep -f` flags unrelated process as lttng-sessiond
</pre>
<p>I notice the following errors in the console output when running tests individually by hand:</p>
<pre>
# Killing (signal SIGTERM) lttng-sessiond and lt-lttng-sessiond pids: 20962 20963
./tests/regression/tools/trigger/start-stop//../../../../utils/utils.sh: line 103: /proc/20963/cmdline: No such file or directory
</pre>

LTTng-UST - Bug #1286 (Resolved): session daemon should validate credentials received from applic...
https://bugs.lttng.org/issues/1286 (2020-10-12T19:16:13Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>Looking at ustctl_recv_reg_msg(), I notice that the session daemon fails to validate the pid and uid credentials it receives from the application, trusting them blindly. This means a non-root application could theoretically impersonate a root application from a tracing perspective, and thus access root tracing buffers in a per-uid configuration, which is unwanted. Initially we had no validation of the pid provided by the application because the original LTTng 2.0 only supported per-pid (per-application) tracing buffers, so the worst consequence was a mislabeled trace directory. However, now that buffers can be shared between processes belonging to the same uid, the session daemon needs to validate these credentials, and it does not.</p>
<p>So the quick fix here would be to validate on the session daemon side that the credentials provided by the application match those from a sessiond perspective through unix socket credentials (getsockopt(2) SO_PEERCRED on Linux and LOCAL_PEERCRED on BSD). That would however mean that sessiond would refuse applications that come from separate namespaces if the credentials don't match.</p>
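A sketch of that proposed sessiond-side check (the helper name and shape are hypothetical, not the actual lttng-tools code; Linux-specific, since BSD would use LOCAL_PEERCRED instead):

```c
#define _GNU_SOURCE	/* for struct ucred with glibc */
#include <sys/socket.h>
#include <sys/types.h>

/* Compare the pid/uid the application claims in its registration
 * message against the kernel-provided SO_PEERCRED credentials of the
 * unix socket. Returns 0 on match, -1 on mismatch or query failure. */
int validate_app_creds(int app_sock, pid_t claimed_pid, uid_t claimed_uid)
{
	struct ucred peer;
	socklen_t len = sizeof(peer);

	if (getsockopt(app_sock, SOL_SOCKET, SO_PEERCRED, &peer, &len))
		return -1;
	if (peer.pid != claimed_pid || peer.uid != claimed_uid)
		return -1;	/* application lied, or lives in another namespace */
	return 0;
}
```

As noted above, the namespace caveat is real: for a traced application in a different pid namespace, the kernel-translated peer pid will not match the pid the application sees for itself, so this check would reject it.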
<p>Tweaking liblttng-ust-comm/lttng-ust-comm.c:ustcomm_send_reg_msg() to send dummy credentials shows that the session daemon indeed trusts the application blindly.</p>

LTTng - Bug #1284 (Resolved): UST consumer daemon should handle SIGBUS
https://bugs.lttng.org/issues/1284 (2020-10-09T18:28:11Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>There is an issue with the security model of lib ring buffer (lttng-ust) vs SIGBUS handling by the consumer daemon. We do not trap SIGBUS in the consumer daemon. An application using ftruncate on a ring buffer shm could cause the consumer to be killed with SIGBUS.</p>

LTTng-modules - Bug #1280 (Resolved): _IOR should be _IOW for a few commands in lttng-modules ABI
https://bugs.lttng.org/issues/1280 (2020-08-11T20:14:39Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>Recently added commands are passing data from user-space to the kernel. According to Documentation/userspace-api/ioctl/ioctl-number.rst, this means:</p>
<pre>
_IO an ioctl with no parameters
_IOW an ioctl with write parameters (copy_from_user)
_IOR an ioctl with read parameters (copy_to_user)
_IOWR an ioctl with both write and read parameters.
</pre>
<p>Those commands should use _IOW rather than _IOR. A quick review comes up with this list of offenders:</p>
<p>- LTTNG_KERNEL_SESSION_SET_NAME<br />- LTTNG_KERNEL_SESSION_SET_CREATION_TIME<br />- LTTNG_KERNEL_SESSION_TRACK_ID<br />- LTTNG_KERNEL_SESSION_UNTRACK_ID<br />- LTTNG_KERNEL_SESSION_LIST_TRACKER_IDS</p>
<p>Another weird pair is:</p>
<p>- LTTNG_KERNEL_SESSION_TRACK_PID<br />- LTTNG_KERNEL_SESSION_UNTRACK_PID</p>
<p>These take an int32_t as _IOR, but receive it by directly casting the ioctl argument to int32_t rather than treating it as a pointer, as _IOR would suggest.</p>
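The direction convention can be illustrated with the kernel's own ioctl macros. The command names and magic number below are hypothetical, for illustration only; the real lttng-modules ABI uses its own values:

```c
#include <linux/ioctl.h>

/* Hypothetical magic number and command value, not the LTTng ABI. */
#define DEMO_IOCTL_MAGIC	0xF6

/* Wrong: data flows user->kernel (copy_from_user), yet declared _IOR. */
#define DEMO_SESSION_SET_NAME_OLD	_IOR(DEMO_IOCTL_MAGIC, 0x50, char[256])
/* Right: _IOW means the caller writes data that the kernel reads. */
#define DEMO_SESSION_SET_NAME_NEW	_IOW(DEMO_IOCTL_MAGIC, 0x50, char[256])
```

Because the direction bits are encoded into the command number itself (_IOC_DIR() extracts them), the OLD and NEW values above differ, which is exactly why flipping _IOR to _IOW is an ABI break.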
<p>Fixing this is non-trivial, because changing _IOR to _IOW changes the ioctl number (AFAIU), so we need to be smart about fixing it without introducing an ABI break.</p>

LTTng-tools - Bug #1271 (Resolved): testsuite should use bt2 python imports
https://bugs.lttng.org/issues/1271 (2020-05-29T16:26:27Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>The test regression/kernel/validate_select_poll_epoll.py is the only user of the babeltrace python import.</p>
<p>We should move this to the babeltrace2 python API and introduce a dependency on babeltrace2 for the python tests within lttng-tools.</p>

LTTng-tools - Bug #1270 (Resolved): testsuite should use babeltrace2 binary if found
https://bugs.lttng.org/issues/1270 (2020-05-28T20:46:40Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>On a fresh system install, installing all lttng projects (master branches) and babeltrace (master branch) leads to an lttng-tools make check failure, because the testsuite is hardwired to use the "babeltrace" binary. However, the babeltrace master branch now installs the "babeltrace2" binary.</p>
<p>One way to fix this would be to use the "babeltrace2" binary if found, and otherwise fall back to the "babeltrace" binary.</p>

LTTng-modules - Bug #1245 (Resolved): file descriptor statedump should iterate over all processes...
https://bugs.lttng.org/issues/1245 (2020-03-10T15:42:20Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>If we look at the output of lsof, we observe that it prints file descriptors for all processes/threads, not just processes.</p>
<p>Currently, the LTTng-modules statedump simply iterates over all processes in the system and assumes all threads share the same file descriptor table, which is only true if the threads were created with the clone(2) flag CLONE_FILES.</p>
<p>Directly invoking clone(2) without CLONE_FILES creates threads which belong to the same process but have their own file descriptor table.</p>
<p>Therefore, model-wise, we cannot assume that all threads in a process have the same fd table content.</p>
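The clone(2) semantics described above can be demonstrated directly. The helper below is purely illustrative (not lttng-modules code) and Linux-specific: a clone child opens a file, and the parent then checks whether that fd exists in its own table. It does only when CLONE_FILES was passed.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

/* Child: open a file; the fd lands either in the shared table or in
 * the child's private copy, depending on the clone flags. */
static int child_open(void *arg)
{
	*(int *)arg = open("/dev/null", O_RDONLY);
	return 0;
}

/* Returns 1 if an fd opened by a clone child with the given extra
 * flags is visible in the parent's fd table, 0 otherwise. */
int child_fd_visible(int extra_flags)
{
	enum { STACK_SIZE = 64 * 1024 };
	char *stack = malloc(STACK_SIZE);
	int child_fd = -1;
	int visible;
	pid_t pid;

	/* CLONE_VM so the child can report its fd number through *arg;
	 * the stack pointer passed to clone is the top of the block. */
	pid = clone(child_open, stack + STACK_SIZE,
		    CLONE_VM | SIGCHLD | extra_flags, &child_fd);
	waitpid(pid, NULL, 0);
	visible = child_fd >= 0 && fcntl(child_fd, F_GETFD) != -1;
	if (visible)
		close(child_fd);	/* it was in our table too */
	free(stack);
	return visible;
}
```

Without CLONE_FILES the child gets a copy of the parent's table at clone time, so its new fd occupies a slot that remains free in the parent, and the parent's fcntl() fails with EBADF.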
<p>Fixing this would involve changing the statedump to iterate over all processes/threads, and dump the fd table for each tid.</p>

LTTng-UST - Bug #1238 (Resolved): AddressSanitizer detects a global buffer overflow when we iter...
https://bugs.lttng.org/issues/1238 (2020-02-17T23:46:06Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>Building lttng-ust with:</p>
<p>CC="clang-9" CFLAGS="-g -O0 -fsanitize=address" LDFLAGS="-fsanitize=address -fno-omit-frame-pointer" ./configure && make</p>
<p>then running tests/hello/hello :</p>
<p>=================================================================<br />==5772==ERROR: AddressSanitizer: global-buffer-overflow on address 0x7f1e9d589270 at pc 0x7f1e9d16103f bp 0x7fff2e875ff0 sp 0x7fff2e875fe8<br />READ of size 4 at 0x7f1e9d589270 thread T0<br /> #0 0x7f1e9d16103e in hashlittle /home/efficios/git/lttng-ust/liblttng-ust/./jhash.h:129:15<br /> #1 0x7f1e9d160322 in jhash /home/efficios/git/lttng-ust/liblttng-ust/./jhash.h:256:9<br /> #2 0x7f1e9d1657a2 in add_callsite /home/efficios/git/lttng-ust/liblttng-ust/tracepoint.c:422:9<br /> #3 0x7f1e9d15d518 in lib_register_callsites /home/efficios/git/lttng-ust/liblttng-ust/tracepoint.c:552:3<br /> #4 0x7f1e9d15ca91 in tracepoint_register_lib /home/efficios/git/lttng-ust/liblttng-ust/tracepoint.c:903:2<br /> #5 0x7f1e9d4a124e in __tracepoints__ptrs_init /home/efficios/git/lttng-ust/liblttng-ust/../include/lttng/tracepoint.h:485:3<br /> #6 0x7f1e9d4a2d8d in lttng_ust_statedump_init /home/efficios/git/lttng-ust/liblttng-ust/lttng-ust-statedump.c:645:2<br /> #7 0x7f1e9d409a85 in lttng_ust_init /home/efficios/git/lttng-ust/liblttng-ust/lttng-ust-comm.c:1846:2<br /> #8 0x7f1e9d7e0732 (/lib64/ld-linux-x86-64.so.2+0x10732)<br /> #9 0x7f1e9d7d10c9 (/lib64/ld-linux-x86-64.so.2+0x10c9)</p>
<p>0x7f1e9d589270 is located 48 bytes to the left of global variable '__tp_strtab_lttng_ust_lib___build_id' defined in './ust_lib.h:55:1' (0x7f1e9d5892a0) of size 23<br /> '__tp_strtab_lttng_ust_lib___build_id' is ascii string 'lttng_ust_lib:build_id'<br />0x7f1e9d589273 is located 0 bytes to the right of global variable '__tp_strtab_lttng_ust_lib___load' defined in './ust_lib.h:42:1' (0x7f1e9d589260) of size 19<br /> '__tp_strtab_lttng_ust_lib___load' is ascii string 'lttng_ust_lib:load'<br />SUMMARY: AddressSanitizer: global-buffer-overflow /home/efficios/git/lttng-ust/liblttng-ust/./jhash.h:129:15 in hashlittle<br />Shadow bytes around the buggy address:<br /> 0x0fe453aa91f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00<br /> 0x0fe453aa9200: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00<br /> 0x0fe453aa9210: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00<br /> 0x0fe453aa9220: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00<br /> 0x0fe453aa9230: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00<br />=>0x0fe453aa9240: 00 00 00 00 00 00 00 00 00 00 00 00 00 00[03]f9<br /> 0x0fe453aa9250: f9 f9 f9 f9 00 00 07 f9 f9 f9 f9 f9 00 00 00 01<br /> 0x0fe453aa9260: f9 f9 f9 f9 00 00 05 f9 f9 f9 f9 f9 00 00 00 02<br /> 0x0fe453aa9270: f9 f9 f9 f9 00 00 00 05 f9 f9 f9 f9 00 00 00 05<br /> 0x0fe453aa9280: f9 f9 f9 f9 00 00 00 07 f9 f9 f9 f9 00 00 00 05<br /> 0x0fe453aa9290: f9 f9 f9 f9 00 00 00 f9 f9 f9 f9 f9 00 00 07 f9<br />Shadow byte legend (one shadow byte represents 8 application bytes):<br /> Addressable: 00<br /> Partially addressable: 01 02 03 04 05 06 07 <br /> Heap left redzone: fa<br /> Freed heap region: fd<br /> Stack left redzone: f1<br /> Stack mid redzone: f2<br /> Stack right redzone: f3<br /> Stack after return: f5<br /> Stack use after scope: f8<br /> Global redzone: f9<br /> Global init order: f6<br /> Poisoned by user: f7<br /> Container overflow: fc<br /> Array cookie: ac<br /> Intra object redzone: bb<br /> ASan internal: fe<br /> Left alloca redzone: ca<br /> Right alloca redzone: cb<br /> Shadow gap: cc<br />==5772==ABORTING</p>

LTTng-tools - Bug #1191 (Invalid): extras/ subdir excluded from dist tarball on ./configure --dis...
https://bugs.lttng.org/issues/1191 (2019-08-05T14:31:41Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>We should add extras to the top level DIST_SUBDIRS, because it is not considered for the tarball if configured with extras disabled:</p>
<pre>
if BUILD_EXTRAS
SUBDIRS += extras
endif
</pre>
<p>When this kind of conditional is used, we should always make sure the target is also picked up by a DIST target.</p>

LTTng - Bug #1183 (Resolved): LTTng-UST and modules ring buffers timestamp_end value may not incl...
https://bugs.lttng.org/issues/1183 (2019-04-29T20:10:47Z, Mathieu Desnoyers <mathieu.desnoyers@efficios.com>)
<p>In the following scenario, lttng-ust and lttng-modules ring buffer code may not strictly respect the guarantee about packet header [timestamp_begin, timestamp_end] range including timestamps of all events contained in the packet.</p>
<p>Indeed, if the prior-to-last event written in the packet (with timestamp T) is interrupted or preempted before it increments the commit counter, and the last event of the packet (with timestamp T+1) is then written and committed, it is the commit counter increment of the event with timestamp T that performs the packet delivery, and therefore sets the timestamp_end value.</p>
<p>However, that timestamp is taken from the event reservation, and will not cover T+1.</p>
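The interleaving can be replayed with a toy model of the reserve/commit scheme. This is an illustration only, far simpler than the real lttng ring buffer: the packet holds exactly two events, and whichever commit makes the packet full performs delivery using its own reservation timestamp.

```c
#include <stdint.h>

struct packet {
	uint64_t timestamp_end;
	int committed;
	int delivered;
};

/* Each reservation samples a timestamp, remembered until commit. */
struct reservation {
	uint64_t tsc;
};

static void commit(struct packet *p, struct reservation *r)
{
	if (++p->committed == 2) {
		/* Packet full: this commit performs delivery and records
		 * ITS OWN reservation timestamp as timestamp_end. */
		p->timestamp_end = r->tsc;
		p->delivered = 1;
	}
}

/* Replays the scenario from the report: event A (timestamp T) reserves
 * but is preempted before committing, event B (T+1) reserves and
 * commits, then A's commit delivers the packet. Returns the resulting
 * timestamp_end: T, which fails to cover B's T+1. */
uint64_t replay_scenario(uint64_t T)
{
	struct packet p = { 0 };
	struct reservation a = { .tsc = T };		/* prior-to-last event */
	struct reservation b = { .tsc = T + 1 };	/* last event */

	commit(&p, &b);	/* B commits first... */
	commit(&p, &a);	/* ...so A's commit delivers the packet */
	return p.timestamp_end;
}
```

The packet thus ends with timestamp_end = T while containing an event at T+1, violating the [timestamp_begin, timestamp_end] inclusion guarantee.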
<p>This issue was introduced in LTTng 2.2:</p>
<pre>
commit 969771a1536069d8f3f05e4836f5ef746d9b9a11
Author: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Date: Sun Nov 24 04:11:16 2013 -0500
Fix: eliminate timestamp overlap between packets
By using the timestamp sampled at space reservation when the packet is
being filled as "end timestamp" for a packet, we can ensure there is no
overlap between packet timestamp ranges, so that packet timestamp end <=
following packets timestamp begin.
Overlap between consecutive packets becomes an issue when the end
timestamp of a packet is greater than the end timestamp of a following
packet, IOW a packet completely contains the timestamp range of a
following packet. This kind of situation does not allow trace viewers
to do binary search within the packet timestamps. This kind of situation
will typically never occur if packets are significantly larger than
event size, but this fix ensures it can never even theoretically happen.
The only case where packets can still theoretically overlap is if they
have equal begin and end timestamps, which is valid.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
</pre>