Feature #852 (closed)

babeltrace python bindings do not expose the "hostname" field

Added by craig harmer over 9 years ago. Updated about 4 years ago.

Status: Invalid
Priority: Normal
Assignee: -
Category: -
Target version: -
Start date: 10/29/2014
Due date: -
% Done: 0%
Estimated time: -
Description

I'm using babeltrace 1.2.2.

The python bindings for babeltrace do not make the "hostname" field available, even though it's present in the trace file.

Here is the first event in the trace file as printed by the babeltrace CLI (note that the hostname, "ch3", does appear in this output):

[17:09:08.849944204] (+?.?????????) ch3 pcs_paxosclient:sync_publish_entry: { cpu_id = 6 }, { vpid = 12882, vtid = 12882, procname = "md_test", pthread_id = 139715261524352 }, { }

I wrote a python script that gets all the fields in each scope and prints the contents of each field. It also prints event.get("hostname") directly for good measure. The output for the first event looks like:

[1414541348849944204]: pcs_paxosclient:sync_publish_entry:
hostname is: None
scope 3 field 'vpid' <type 'long'>: '12882'
field 'vtid' <type 'long'>: '12882'
field 'procname' <type 'str'>: 'md_test'
field 'pthread_id' <type 'long'>: '139715261524352'
scope 2 field 'id' <type 'str'>: 'extended'
field 'v' <type 'dict'>: '{'timestamp': 15985732118075L, 'id': 169L}'
scope 1 field 'timestamp_begin' <type 'long'>: '15985672518003'
field 'timestamp_end' <type 'long'>: '16011668489057'
field 'content_size' <type 'long'>: '8388272'
field 'packet_size' <type 'long'>: '8388608'
field 'events_discarded' <type 'long'>: '0'
field 'cpu_id' <type 'long'>: '6'
scope 0 field 'magic' <type 'long'>: '3254525889'
field 'uuid' <type 'list'>: '[76L, 42L, 65L, 101L, 66L, 68L, 79L, 161L, 179L, 59L, 78L, 234L, 84L, 76L, 22L, 68L]'


Files

viewtrace (3.02 KB): script to pretty-print a trace (under development). Added by craig harmer, 10/29/2014 08:59 PM
Actions #1

Updated by craig harmer over 9 years ago

I've attached the script I wrote to generate the above output. The part that generated the output looks like:

    for event in traces.events:
            # Report any fields in the per-event context scope, if present.
            ctx_fields = event.field_list_with_scope(babeltrace.CTFScope.EVENT_CONTEXT)
            if len(ctx_fields) > 0:
                    fields = " ".join(ctx_fields)
                    print("NOTICE: event has event_context: [{}] {} fields: {}".format(
                                    event.timestamp, event.name, fields))

            print("[{}]: {}:".format(event.timestamp, event.name))
            print("hostname is:", event.get("hostname"))

            # Walk the remaining scopes and dump each field's type and value.
            for scope in [babeltrace.CTFScope.EVENT_FIELDS,
                          babeltrace.CTFScope.STREAM_EVENT_CONTEXT,
                          babeltrace.CTFScope.STREAM_EVENT_HEADER,
                          babeltrace.CTFScope.STREAM_PACKET_CONTEXT,
                          babeltrace.CTFScope.TRACE_PACKET_HEADER]:

                    scope_str = "scope " + str(scope)
                    for field in event.field_list_with_scope(scope):
                            value = event.field_with_scope(field, scope)
                            print("{:8} field '{}' {}: '{}'".format(
                                            scope_str, field, type(value), value))
                            scope_str = ""


And here is a prettier version of the output (with the leading whitespace retained):

[1414541348849944204]: pcs_paxosclient:sync_publish_entry:
hostname is: None
scope 3  field 'vpid' <type 'long'>: '12882'
         field 'vtid' <type 'long'>: '12882'
         field 'procname' <type 'str'>: 'md_test'
         field 'pthread_id' <type 'long'>: '139715261524352'
scope 2  field 'id' <type 'str'>: 'extended'
         field 'v' <type 'dict'>: '{'timestamp': 15985732118075L, 'id': 169L}'
scope 1  field 'timestamp_begin' <type 'long'>: '15985672518003'
         field 'timestamp_end' <type 'long'>: '16011668489057'
         field 'content_size' <type 'long'>: '8388272'
         field 'packet_size' <type 'long'>: '8388608'
         field 'events_discarded' <type 'long'>: '0'
         field 'cpu_id' <type 'long'>: '6'
scope 0  field 'magic' <type 'long'>: '3254525889'
         field 'uuid' <type 'list'>: '[76L, 42L, 65L, 101L, 66L, 68L, 79L, 161L, 179L, 59L, 78L, 234L, 84L, 76L, 22L, 68L]'
         field 'stream_id' <type 'long'>: '0'
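
For context, here is a minimal sketch of the setup the loop above assumes, using the babeltrace 1.x TraceCollection API (a sketch, not the attached script; the trace path is the one from this trace session):

    import babeltrace

    # Open the trace collection that the loop above iterates over.
    traces = babeltrace.TraceCollection()
    handle = traces.add_trace(
        "/home/charmer/lttng-traces/escale-session-20141028-170835/ust/uid/1307/64-bit",
        "ctf")
    if handle is None:
        raise RuntimeError("failed to open trace")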

Actions #2

Updated by craig harmer over 9 years ago

I thought I'd be able to track this down myself, but the C API and the python bindings to it seem to
be beyond my comprehension.

However, I did verify that the "hostname" field is parsed out of the trace's metadata file and
collected into the resulting parse tree: I set a breakpoint in ctf_env_declaration_visit() and
confirmed that it is called to collect the hostname, so "hostname" is picked up in the parse tree
(and the parse tree seems to drive which fields are available for each trace event).

More detail on that below.

Stack traceback:

(gdb) bt
#0  ctf_env_declaration_visit (fd=0x7ffff731b880, depth=<optimized out>, node=0xaa59ce, 
    trace=0xa8f700) at ctf-visitor-generate-io-struct.c:2731
#1  0x00007ffff694463e in ctf_env_visit (fd=0x7ffff731b880, depth=depth@entry=1, 
    node=0xa8e260, node@entry=0xaa634a, trace=0xa8f700)
    at ctf-visitor-generate-io-struct.c:2905
#2  0x00007ffff6944aae in ctf_visitor_construct_metadata (fd=0x7ffff731b880, 
    depth=depth@entry=0, node=0xa000e0, trace=trace@entry=0xa8f700, 
    byte_order=<optimized out>) at ctf-visitor-generate-io-struct.c:3036
#3  0x00007ffff692bf90 in ctf_trace_metadata_read (td=td@entry=0xa8f700, 
    metadata_fp=metadata_fp@entry=0x0, scanner=scanner@entry=0x8f63d0, 
    append=append@entry=0) at ctf.c:1300
#4  0x00007ffff692f407 in ctf_open_trace_read (metadata_fp=0x0, 
    packet_seek=0x7ffff692d590 <ctf_packet_seek>, flags=0, 
    path=0xa6c66c "/home/charmer/lttng-traces/escale-session-20141028-170835/ust/uid/1307/64-bit", td=0xa8f700) at ctf.c:2111
#5  ctf_open_trace (
    path=0xa6c66c "/home/charmer/lttng-traces/escale-session-20141028-170835/ust/uid/1307/64-bit", flags=0, packet_seek=0x7ffff692d590 <ctf_packet_seek>, metadata_fp=0x0)
    at ctf.c:2206
#6  0x00007ffff5c01a06 in bt_context_add_trace (ctx=ctx@entry=0x9b5e20, 
    path=path@entry=0xa6c66c "/home/charmer/lttng-traces/escale-session-20141028-170835/ust/uid/1307/64-bit", format_name=format_name@entry=0x7ffff7e7220c "ctf", packet_seek=0, 
    stream_list=stream_list@entry=0x0, metadata=<optimized out>) at context.c:95
#7  0x00007ffff6b6c1b4 in _wrap__bt_context_add_trace (self=<optimized out>, 
    args=<optimized out>) at babeltrace_wrap.c:3985
#8  0x00000000004ac5ce in PyEval_EvalFrameEx ()
#9  0x00000000004acde0 in PyEval_EvalFrameEx ()
...

(gdb) frame
#0  ctf_env_declaration_visit (fd=0x7ffff731b880, depth=<optimized out>, node=0xaa59ce, 
    trace=0xa8f700) at ctf-visitor-generate-io-struct.c:2731
2731            if (!left)

(gdb) l
2726        case NODE_CTF_EXPRESSION:
2727        {
2728            char *left;
2729    
2730            left = concatenate_unary_strings(&node->u.ctf_expression.left);
2731            if (!left)
2732                return -EINVAL;
2733            if (!strcmp(left, "vpid")) {
2734                uint64_t v;
2735    

(gdb) p left
$88 = 0x95bd00 "hostname" 

A few breakpoints later I can show that trace->hostname is set to "ch3":

(gdb) c
Continuing.

Breakpoint 25, ctf_env_declaration_visit (fd=0x7ffff731b880, depth=<optimized out>, 
    node=0xaa5c88, trace=0xa8f700) at ctf-visitor-generate-io-struct.c:2731
2731            if (!left)

(gdb) up
#1  0x00007ffff694463e in ctf_env_visit (fd=0x7ffff731b880, depth=depth@entry=1, 
    node=0xa8e2a0, node@entry=0xaa634a, trace=0xa8f700)
    at ctf-visitor-generate-io-struct.c:2905
2905            ret = ctf_env_declaration_visit(fd, depth + 1, iter, trace);

(gdb) p *trace
$90 = {
  parent = {
    path = "/home/charmer/lttng-traces/escale-session-20141028-170835/ust/uid/1307/64-bit", '\000' <repeats 4018 times>, 
    ctx = 0x0, 
    handle = 0x0, 
    collection = 0x0, 
    clocks = 0xa2a980, 
    single_clock = 0x9d34b0
  }, 
  root_declaration_scope = 0xa00220, 
  declaration_scope = 0x9196a0, 
  definition_scope = 0x0, 
  streams = 0xa8e100, 
  metadata = 0xa98b10, 
  metadata_string = 0xab85c0 "typealias integer { size = 8; align = 8; signed = false; } := uint8_t;\ntypealias integer { size = 16; align = 8; signed = false; } := uint16_t;\ntypealias integer { size = 32; align = 8; signed = false"..., 
  metadata_packetized = 1, 
  callsites = 0xa2a640, 
  event_declarations = 0xa8e120, 
  packet_header_decl = 0x9595a0, 
  scanner = 0x0, 
  restart_root_decl = 0, 
  major = 1, 
  minor = 8, 
  uuid = "L*AeBDO\241\263;N\352TL\026D", 
  byte_order = 1234, 
  env = {
    vpid = -1, 
    procname = '\000' <repeats 127 times>, 
    hostname = "ch3", '\000' <repeats 124 times>, 
    domain = "ust", '\000' <repeats 124 times>, 
    sysname = '\000' <repeats 127 times>, 
    release = '\000' <repeats 127 times>, 
    version = '\000' <repeats 127 times>
  }, 
  field_mask = 15, 
  dir = 0xa90ad0, 
  dirfd = 9, 
  flags = 0
}

Actions #3

Updated by Jérémie Galarneau over 9 years ago

  • Tracker changed from Bug to Feature
  • Status changed from New to Confirmed

This is indeed a limitation of the current API; the environment fields are not made available to users. I am currently performing a refactor of Babeltrace's internals which will make this possible.

This should be supported in the next version.
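
As a sketch of what this could look like once environment fields are exposed, assuming the Babeltrace 2 bt2 Python bindings where a trace object carries an environment mapping (the class and property names here are assumptions about the bt2 API; the trace path is the one from this report):

    import bt2

    # Iterate the trace; read the environment of the first event's trace.
    msg_it = bt2.TraceCollectionMessageIterator(
        "/home/charmer/lttng-traces/escale-session-20141028-170835/ust/uid/1307/64-bit")

    for msg in msg_it:
        if type(msg) is bt2._EventMessageConst:
            trace = msg.event.stream.trace
            print("hostname is:", trace.environment["hostname"])
            break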

Actions #4

Updated by craig harmer over 9 years ago

Thank you. Let me know if there's something I can do to help with this.

Actions #5

Updated by craig harmer over 9 years ago

Fixing this bug will only be useful if bug #790 is fixed as well.

Bug #790 manifests when adding traces from different hosts to the same collection, because the hosts have different clocks. It can (I think) be avoided with the "--clock-force-correlate" option, but there is no way to set that option via the python bindings (at least none that I can see).
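
For reference, a sketch of that CLI workaround (the trace paths are placeholders):

    # Merge traces from two hosts, forcing babeltrace to treat their
    # clocks as correlated.
    babeltrace --clock-force-correlate /path/to/host1-trace /path/to/host2-trace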

Actions #6

Updated by Srinivas Manem over 8 years ago

Is there any update on this feature? Please let me know.
(This is indeed a limitation of the current API; the environment fields are not made available to users.)

Actions #7

Updated by Jonathan Rajotte Julien about 4 years ago

  • Status changed from Confirmed to Invalid

The state of Babeltrace has moved a lot since then.

Closing this ticket as invalid. Reopen it if it still applies to Babeltrace 2.
