
Tracing syscalls with trace-cmd and bpftrace

In my ongoing quest to understand more about how GNU/Linux works under the hood, I took upon myself a courageous assignment: figure out what a simple Go program does from the kernel's point of view - which syscalls are called and what data is passed down there.

Here is the program

package main

import (
	"log"
	"os"
	"strconv"
	"syscall"
)

func main() {
	// Create an empty file and close it right away.
	emptyFile, err := os.Create("example.txt")
	if err != nil {
		log.Fatal(err)
	}
	emptyFile.Close()

	// Read the current time via the syscall package.
	var currentTime syscall.Timeval
	syscall.Gettimeofday(&currentTime)
	println("Hello world! " + strconv.FormatInt(currentTime.Sec, 10))
}

So a simple hello world, which prints out the famous words and creates an empty file. To figure out which syscalls are triggered, I decided to use the trace-cmd package. It is a frontend to ftrace that helps, for example, with filtering out the syscalls related to a single binary or PID.

After some intense googling, watching a couple of YouTube videos, and fiddling around, I ran this command:

sudo trace-cmd record -p function_graph --max-graph-depth 3 -e syscalls -g do_syscall_64 -F ./gohello

It records a list of kernel functions practically in the order they are called, with a maximum graph depth and some filtering, so it shows only syscalls and not, for example, the system interrupts that clutter the output and are outside this exercise's scope. I won't go into too much detail since, frankly, I lack the competence to decipher everything, but roughly, per the man page: -p function_graph selects the function_graph tracer, --max-graph-depth 3 limits how deeply nested calls are recorded, -e syscalls enables the syscall trace events, -g do_syscall_64 starts each call graph at the 64-bit syscall entry point, and -F traces only the given binary.

The output looks like this

cpus=1
gohello-3501 [000] 39140349.534544: sys_exit_write: 0x1
gohello-3501 [000] 39140349.534561: funcgraph_entry: | do_syscall_64() {
gohello-3501 [000] 39140349.534562: funcgraph_entry: | syscall_trace_enter() {
gohello-3501 [000] 39140349.534562: sys_enter_execve: filename: 0x7ffdbdffd848, argv: 0x7ffdbdffbcc0, envp: 0x7ffdbdffbcd0
gohello-3501 [000] 39140349.534563: funcgraph_exit: 0.874 us | }
gohello-3501 [000] 39140349.534563: funcgraph_entry: | SyS_execve() {
gohello-3501 [000] 39140349.534564: funcgraph_entry: 1.178 us | getname();
gohello-3501 [000] 39140349.534565: funcgraph_entry: ! 338.228 us | do_execveat_common.isra.34();
gohello-3501 [000] 39140349.534905: funcgraph_exit: ! 341.439 us | }
gohello-3501 [000] 39140349.534905: funcgraph_entry: | syscall_slow_exit_work() {
gohello-3501 [000] 39140349.534906: sys_exit_execve: 0x0
gohello-3501 [000] 39140349.534907: funcgraph_exit: 1.049 us | }
gohello-3501 [000] 39140349.534907: funcgraph_entry: | exit_to_usermode_loop() {
gohello-3501 [000] 39140349.534907: funcgraph_entry: 3.493 us | task_work_run();
gohello-3501 [000] 39140349.534911: funcgraph_entry: 0.071 us | mem_cgroup_handle_over_high();
gohello-3501 [000] 39140349.534912: funcgraph_exit: 4.624 us | }
gohello-3501 [000] 39140349.534912: funcgraph_exit: ! 351.032 us | }
gohello-3501 [000] 39140349.534949: funcgraph_entry: | do_syscall_64() {
gohello-3501 [000] 39140349.534949: funcgraph_entry: | syscall_trace_enter() {
gohello-3501 [000] 39140349.534950: sys_enter_arch_prctl: option: 0x00001002, arg2: 0x00576550
gohello-3501 [000] 39140349.534950: funcgraph_exit: 1.029 us | }
gohello-3501 [000] 39140349.534951: funcgraph_entry: | SyS_arch_prctl() {
gohello-3501 [000] 39140349.534951: funcgraph_entry: 0.374 us | do_arch_prctl_64();
gohello-3501 [000] 39140349.534952: funcgraph_exit: 0.920 us | }
gohello-3501 [000] 39140349.534952: funcgraph_entry: | syscall_slow_exit_work() {
gohello-3501 [000] 39140349.534953: sys_exit_arch_prctl: 0x0
gohello-3501 [000] 39140349.534953: funcgraph_exit: 0.447 us | }
gohello-3501 [000] 39140349.534954: funcgraph_exit: 4.572 us | }
gohello-3501 [000] 39140349.535058: funcgraph_entry: | do_syscall_64() {
gohello-3501 [000] 39140349.535058: funcgraph_entry: | syscall_trace_enter() {
gohello-3501 [000] 39140349.535059: sys_enter_sched_getaffinity: pid: 0x00000000, len: 0x00002000, user_mask_ptr: 0x7ffd2d8d45a8
gohello-3501 [000] 39140349.535059: funcgraph_exit: 0.685 us | }
gohello-3501 [000] 39140349.535060: funcgraph_entry: | SyS_sched_getaffinity() {
gohello-3501 [000] 39140349.535060: funcgraph_entry: 0.650 us | __kmalloc_node();
gohello-3501 [000] 39140349.535061: funcgraph_entry: 0.697 us | sched_getaffinity();
gohello-3501 [000] 39140349.535063: funcgraph_entry: 0.434 us | __check_object_size();
gohello-3501 [000] 39140349.535064: funcgraph_entry: 0.114 us | kfree();
gohello-3501 [000] 39140349.535064: funcgraph_exit: 4.272 us | }
gohello-3501 [000] 39140349.535064: funcgraph_entry: | syscall_slow_exit_work() {
gohello-3501 [000] 39140349.535065: sys_exit_sched_getaffinity: 0x8
gohello-3501 [000] 39140349.535065: funcgraph_exit: 0.357 us | }
gohello-3501 [000] 39140349.535066: funcgraph_exit: 7.165 us | }
gohello-3501 [000] 39140349.535067: funcgraph_entry: | do_syscall_64() {
gohello-3501 [000] 39140349.535067: funcgraph_entry: | syscall_trace_enter() {
gohello-3501 [000] 39140349.535067: sys_enter_openat: dfd: 0xffffff9c, filename: 0x00561aa0, flags: 0x00000000, mode: 0x00000000
gohello-3501 [000] 39140349.535067: funcgraph_exit: 0.372 us | }
gohello-3501 [000] 39140349.535068: funcgraph_entry: | SyS_openat() {
gohello-3501 [000] 39140349.535068: funcgraph_entry: + 21.088 us | do_sys_open();
gohello-3501 [000] 39140349.535090: funcgraph_exit: + 21.673 us | }
gohello-3501 [000] 39140349.535090: funcgraph_entry: | syscall_slow_exit_work() {
gohello-3501 [000] 39140349.535090: sys_exit_openat: 0x3
gohello-3501 [000] 39140349.535091: funcgraph_exit: 0.386 us | }
gohello-3501 [000] 39140349.535091: funcgraph_exit: + 24.228 us | }
gohello-3501 [000] 39140349.535092: funcgraph_entry: | do_syscall_64() {
gohello-3501 [000] 39140349.535092: funcgraph_entry: | syscall_trace_enter() {
gohello-3501 [000] 39140349.535093: sys_enter_read: fd: 0x00000003, buf: 0x7ffd2d8d6594, count: 0x00000014
gohello-3501 [000] 39140349.535093: funcgraph_exit: 0.351 us | }
gohello-3501 [000] 39140349.535093: funcgraph_entry: | SyS_read() {
gohello-3501 [000] 39140349.535093: funcgraph_entry: 0.126 us | __fdget_pos();
gohello-3501 [000] 39140349.535094: funcgraph_entry: 6.717 us | vfs_read();
gohello-3501 [000] 39140349.535101: funcgraph_exit: 7.832 us | }
gohello-3501 [000] 39140349.535101: funcgraph_entry: | syscall_slow_exit_work() {
gohello-3501 [000] 39140349.535102: sys_exit_read: 0x8
gohello-3501 [000] 39140349.535102: funcgraph_exit: 0.282 us | }
gohello-3501 [000] 39140349.535102: funcgraph_exit: + 10.270 us | }
gohello-3501 [000] 39140349.535103: funcgraph_entry: | do_syscall_64() {
gohello-3501 [000] 39140349.535103: funcgraph_entry: | syscall_trace_enter() {
......
......
......

Even with this single program and the filtering, there is a lot of output - thousands of lines. The example shows one interesting syscall, SyS_write, which is the call that writes the "Hello world" plus timestamp to stdout.

We can actually take a closer look at the full stack of calls related to SyS_write

sudo trace-cmd record -p function_graph -g SyS_write  -F ./gohello

I tried to look for clock_gettime or similar kernel calls to see which kernel function is used to get the current time, but it turns out there is a thing called the vDSO, a shared object that maps parts of kernel memory so they are directly accessible from userspace, eliminating the need for a syscall. The Go runtime reads the timestamp through it directly. Check the implementation here.
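One way to see the vDSO for yourself is to look at a process's memory map - it is mapped into every process on Linux. A minimal check:

```shell
# The vDSO shows up as its own mapping in the process memory map.
grep vdso /proc/self/maps
```

This is also why strace on a program that only reads the clock typically shows no clock_gettime syscall at all.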



The file operation calls show up in the above output. It all starts with reading the file path "./example.txt", then opening the file, creating a file descriptor, piping data to it, and closing the file and the fd. More on the functions can be found on their corresponding man pages.


Ok, cool, we now know which syscalls are the most interesting ones. Obviously, there are more system calls - like a bunch of SyS_mmaps for mapping virtual memory pages, etc. - but the next step is to take a look inside a selected few. For that, I use eBPF, or rather a frontend for it called bpftrace.
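Before diving into individual calls, bpftrace can also give a quick overview of how often each syscall fires. A sketch, assuming bpftrace is installed and the binary is still called ./gohello:

```shell
# Count every syscall entry made while ./gohello runs, grouped by tracepoint.
# bpftrace prints the counts map when the traced command exits.
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_* { @[probe] = count(); }' -c ./gohello
```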

I have watched a number of presentations about the fascinating topic of eBPF and have played with bpftrace occasionally, and I admit this is one of the main reasons for writing this post.

First up, let's see the file writing operations, starting with the file open. Step one: find the tracepoint

sudo bpftrace -l "*openat"

Ok, we have a tracepoint. Next, let's check which arguments we can trace
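The plain -l listing only gives tracepoint names; adding -v lists the arguments too. On my kernel the fields match the dfd/filename/flags/mode quartet already visible in the trace-cmd output above - the exact listing may vary by kernel and bpftrace version:

```shell
# List the fields available to a bpftrace program attached to this tracepoint.
sudo bpftrace -lv tracepoint:syscalls:sys_enter_openat
```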


We can see the filename in the tracepoint arguments. The tracing can be done with, for example, this one-line bpftrace program

sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat { printf("%s %s\n", comm, str(args->filename)); }' -c ./gohello


In the output, you can see all the calls made to sys_enter_openat during the program execution, and voilà, there is example.txt

An interesting finding is that readlinkat is not related to the file path of example.txt but resolves a link to the process itself.
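To confirm what readlinkat is actually resolving, the same one-liner pattern works. Note that on my kernel this tracepoint calls the argument pathname rather than filename - verify the field name with bpftrace -lv if your kernel differs. A sketch:

```shell
# Print the path each readlinkat call tries to resolve while ./gohello runs.
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_readlinkat { printf("%s %s\n", comm, str(args->pathname)); }' -c ./gohello
```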


We can also trace the sys_write calls, and whoo boy, are there plenty of those. We can attempt to filter those a bit with a PID constraint

sudo bpftrace -e 'tracepoint:syscalls:sys_enter_write /pid > 7000/ { printf("<%s>\n", str(args->buf, args->count)); }'  -c ./gohello | less
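The PID threshold is a bit fragile since it depends on what PIDs happen to be in use at the time. Filtering by process name is an alternative; a variant of the same one-liner, assuming the binary is still named gohello:

```shell
# Only print writes made by processes whose command name is "gohello".
sudo bpftrace -e 'tracepoint:syscalls:sys_enter_write /comm == "gohello"/ { printf("<%s>\n", str(args->buf, args->count)); }' -c ./gohello | less
```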



And there you have it, a hello world amid all the mess!


