Bug #764


Perf UST in a VM can make the traced application crash

Added by Julien Desfossez almost 11 years ago. Updated over 10 years ago.

Status: Resolved
Priority: Normal
Start date: 03/16/2014
% Done: 0%

Description

When enabling certain perf counter contexts on UST events in a virtual machine (where not all perf counters are available), the traced application can crash.
Example in a KVM guest running Ubuntu 12.04 (Precise):
$ lttng create
$ lttng enable-event -u -a
$ lttng add-context -u -t perf:thread:cache-misses
$ lttng start
$ ./tests/utils/testapp/gen-ust-events/gen-ust-events 1 1
Aborted (core dumped)
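
A quick way to see why this environment triggers the bug is to probe the counter directly with perf_event_open(2). The following minimal sketch is illustrative and not part of the report; it opens the hardware counter that perf:thread:cache-misses maps to:

#include <errno.h>
#include <linux/perf_event.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CACHE_MISSES;	/* perf:thread:cache-misses */

	/* perf_event_open() has no glibc wrapper; pid 0 = this thread, cpu -1 = any. */
	int fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		/* Typical in a KVM guest without a virtualized PMU. */
		fprintf(stderr, "cache-misses counter unavailable: %s\n",
			strerror(errno));
		return 1;
	}
	printf("cache-misses counter available (fd %d)\n", fd);
	close(fd);
	return 0;
}

In a guest without a virtualized PMU this call fails (typically with ENOENT), which is the condition lttng-ust runs into here. The full backtrace of the abort follows.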

(gdb) bt full
#0 0x00007f2f357d7425 in raise () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#1 0x00007f2f357dab8b in abort () from /lib/x86_64-linux-gnu/libc.so.6
No symbol table info available.
#2 0x00007f2f35d9f54d in add_thread_field (perf_thread=<optimized out>,
perf_field=<optimized out>) at lttng-context-perf-counters.c:231
oldmask = {__val = {0, 18446744073709551615 <repeats 15 times>}}
ret = <optimized out>
thread_field = <optimized out>
newmask = {__val = {18446744067267100671, 18446744073709551615 <repeats 15 times>}}
#3 get_thread_field (field=0x7f2f2c008f50) at lttng-context-perf-counters.c:251
perf_thread = <optimized out>
thread_field = <optimized out>
#4 wrapper_perf_counter_read (field=<optimized out>) at lttng-context-perf-counters.c:261
perf_field = 0x7f2f2c008f50
perf_thread_field = 0x1f58640
#5 0x00007f2f35d9f620 in perf_counter_record (field=<optimized out>, ctx=0x7fff6eae1400,
chan=0x7f2f2c008d20) at lttng-context-perf-counters.c:272
value = <optimized out>
#6 0x00007f2f35d87eb0 in ctx_record (bufctx=0x7fff6eae1400, chan=0x7f2f2c008d20,
ctx=0x7f2f2c008fb0) at lttng-ring-buffer-client.h:94
No locals.
#7 0x00007f2f35da1d6e in ctx_record (ctx=<optimized out>, chan=0x7f2f2c008d20,
bufctx=0x7fff6eae1400) at lttng-ring-buffer-client.h:91
No locals.
#8 lttng_write_event_header_slow (event_id=1, ctx=0x7fff6eae1400, config=<optimized out>)
at lttng-ring-buffer-client.h:294
event = 0x1f584d0
config = 0x7f2f35fc6700
#9 lttng_write_event_header (event_id=<optimized out>, ctx=0x7fff6eae1400,
config=0x7f2f35fc6700) at lttng-ring-buffer-client.h:228
lttng_chan = 0x7f2f2c008d20
event = 0x1f584d0
#10 lttng_event_reserve (ctx=0x7fff6eae1400, event_id=<optimized out>)
at lttng-ring-buffer-client.h:583
ret = 738232832
func = "lttng_event_reserve"
#11 0x000000000040121b in event_probe__tptptest (tp_data=0x1f584d0, anint=0, netint=0,
values=0x7fff6eae14f0, text=0x7fff6eae1510 "test", textlen=<optimized out>, doublearg=2,
floatarg=2222) at ././tp.h:22
__event = 0x1f584d0
__chan = 0x7f2f2c008d20
__ctx = {chan = 0x7f2f2c008a00, priv = 0x1f584d0, handle = 0x7f2f2c008da0,
data_size = 99, largest_align = 1, cpu = 0, buf = 0x7f2f36342000, slot_size = 120,
buf_offset = 81, pre_offset = 68, tsc = 2062361541160, rflags = 1, padding1 = 0,
ip = 0x400e5c, padding2 = '\000' <repeats 11 times>}
__dynamic_len_idx = 0
__stackvar = {
_dynamic_len = {4, 4, 5, 139840750280704, 139840750280720, 16,
72340172838076673, 139840730935097, 139840750280704, 139840730937568, 0, 0},
__filter_stack_data = "\004\000\000\000\000\000\000\000\004\000\000\000\000\000\000\000\005\000\000\000\000\000\000\000\000\340>6/\177\000\000\020\340>6/\177\000\000\020\000\000\000\000\000\000\000\001\001\001\001\001\001\001\001\071\257\027\065/\177\000\000\000\340>6/\177\000\000\340\270\027\065/\177", '\000' <repeats 113 times>}
__ret = <optimized out>
#12 0x0000000000400e5c in __tracepoint_cb_tp_tptest (floatarg=2222, doublearg=2, textlen=4,
text=0x7fff6eae1510 "test", values=0x7fff6eae14f0, netint=0, anint=0) at tp.h:22
__tp_cb = <optimized out>
__tp_data = <optimized out>
__tp_probe = 0x1f585f0
#13 main (argc=<optimized out>, argv=<optimized out>) at gen-ust-events.c:74
i = <optimized out>
netint = 0
values = {1, 2, 3}
text = "test\000\000\000\000\000"
nr_iter = <optimized out>
nr_usec = <optimized out>
tmp_file_path = <optimized out>
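
Frames #2 to #4 show the failure mode: the perf counter context performs its per-thread counter setup lazily, on the first tracepoint hit in each thread, and add_thread_field() calls abort() when that setup fails. The sketch below reproduces the pattern in isolation; all names and the lazy-setup details are assumptions based on the backtrace, not the actual lttng-context-perf-counters.c code:

#include <linux/perf_event.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical stand-in for the lazy per-thread setup done by
 * get_thread_field()/add_thread_field() in the backtrace above. */
static int get_thread_counter_fd(void)
{
	static __thread int fd = -2;	/* -2: not attempted yet */

	if (fd == -2) {
		struct perf_event_attr attr;

		memset(&attr, 0, sizeof(attr));
		attr.size = sizeof(attr);
		attr.type = PERF_TYPE_HARDWARE;
		attr.config = PERF_COUNT_HW_CACHE_MISSES;
		fd = syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
		if (fd < 0)
			abort();	/* the pattern behind frame #2 */
	}
	return fd;
}

int main(void)
{
	/* On a host with a PMU this prints an fd; in a KVM guest without
	 * one it aborts, mirroring the crash in the report. */
	printf("fd = %d\n", get_thread_counter_fd());
	return 0;
}

Because this runs inside the traced application's tracepoint path, the abort() takes the application itself down rather than merely failing the tracing session.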


Files

sessiond-logs.txt (37.4 KB), Julien Desfossez, 04/21/2014 01:36 PM
ust-debug.txt (6.69 KB), Julien Desfossez, 04/21/2014 01:36 PM

#1

Updated by Julien Desfossez over 10 years ago

As of UST commit 496b67e0b746dab2830e3c2d3f3fb99e1be37e9d, the traced application no longer crashes, but sessiond reports the following error and the trace is empty:
Error: UST app create channel context failed for app (pid: 24376) with ret -1024

Complete sessiond/consumerd and UST debug logs are attached.

#2

Updated by Mathieu Desnoyers over 10 years ago

This behavior (error reported and channel not activated) is expected, since the session daemon receives an error from the application when it sends the "add context" command.
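
In other words, the failure is now propagated as an error code instead of aborting the application. A minimal sketch of that shape, where the function name and the reuse of the -1024 value from the log above are purely illustrative, not the actual lttng-ust/lttng-tools code:

#include <stdio.h>

/* Application side: context creation fails cleanly and the error is
 * reported back to the session daemon (names/values are illustrative). */
static int app_add_context(int counter_available)
{
	if (!counter_available)
		return -1024;
	return 0;
}

int main(void)
{
	/* Session daemon side: log the error and leave the channel
	 * inactive instead of the application crashing. */
	int ret = app_add_context(0);
	if (ret < 0)
		fprintf(stderr,
			"Error: UST app create channel context failed for app (pid: %d) with ret %d\n",
			24376, ret);
	return 0;
}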

#3

Updated by Mathieu Desnoyers over 10 years ago

  • Status changed from New to Resolved