
Flame graph from a Scala app

Apologies for the large SVG




I was inspired by a Devoxx talk about flame graphs and how they can visualize what is happening in a JVM process.

Getting a graph is actually quite simple. You only need a recent enough Java 8 JDK, a subject JVM process running on Linux, perf (which is part of the kernel tools in most distributions) and a couple of simple open-source profiling tools. Detailed information can be found in a blog post by Nitsan Wakart here http://psy-lob-saw.blogspot.fi/2017/02/flamegraphs-intro-fire-for-everyone.html
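As a concrete illustration, here is a minimal sketch of that capture pipeline, driven from Scala with sys.process (the same steps can of course be typed straight into a shell). The paths to perf-map-agent and Brendan Gregg's FlameGraph scripts are placeholders, and the target JVM is assumed to have been started with -XX:+PreserveFramePointer so that perf can walk the Java stacks; that is what the "recent enough Java 8" requirement is about.

```scala
import java.io.File
import scala.sys.process._

object CaptureFlameGraph extends App {
  // Placeholder locations - point these at your own checkouts of the tools.
  val perfMapAgent = "/opt/perf-map-agent/bin"
  val flameGraph   = "/opt/FlameGraph"
  val pid          = args.headOption.getOrElse(sys.error("usage: CaptureFlameGraph <pid>"))

  // 1. Sample the target process 100 times a second for 40 seconds,
  //    recording user and kernel stacks (-g).
  s"perf record -F 100 -g -p $pid -- sleep 40".!

  // 2. Dump a symbol map for the JIT-compiled methods so perf can name Java frames.
  s"$perfMapAgent/create-java-perf-map.sh $pid".!

  // 3. Fold the stacks and render the SVG.
  ("perf script" #| s"$flameGraph/stackcollapse-perf.pl" #|
    s"$flameGraph/flamegraph.pl --color=java" #> new File("flames.svg")).!
}
```

Note that perf usually needs to run as root (or with perf_event_paranoid relaxed) to capture the kernel-side stacks.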

So what is in that SVG?

It illustrates what was happening in a Scala app I run on a DigitalOcean droplet, sampled 100 times a second over a 40-second period. The bars represent call stacks, and the topmost frame is always the one running on the CPU. More details on how to read the graph can be found here: http://www.brendangregg.com/flamegraphs.html

In this case the stacks are divided by thread. The leftmost thread contains a lot of GC operations, although on closer inspection you can see those happening on other threads as well. To the right of that thread there are a lot of samples spent in JVM methods related to JIT compilation (at least I assume so from the method names, which contain the C1 and C2 keywords). As you can see, quite a lot of the total samples were related to JIT compilation, which most probably means the application was still warming up, so to speak.

On the other hand, when looking at the application code, some inlining is already visible, marked with an aqua color. I'm not 100% sure how to interpret those stacks, though. I assume that the methods stacked on top of each other, bounded by green (Scala) method calls, have been inlined into one another.

What can I learn from this flame graph? I'm certainly not an expert in the JVM or in performance tuning, but some of my suspicions about where I could improve performance are visible. I could reduce the amount of disk I/O by not reading the configs from disk each time. I could also look into caching the results of the XML transform operations that this particular application does a lot of. In another graph, where the threads are squashed together, that becomes more visible. Obviously I knew beforehand that those operations are relatively slow, and I could have just used VisualVM or some other more familiar tool to see that as well.
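To make that concrete, here is a minimal sketch of what such caching could look like. It is built on assumptions rather than on the app's actual code: the config path, the use of XSLT via javax.xml.transform, and all the names are hypothetical. The idea is simply to read the configuration once and to memoize the compiled stylesheets so repeated transforms stop touching the disk; caching the transform results themselves would follow the same pattern with a map keyed by the input.

```scala
import java.io.{File, StringReader, StringWriter}
import java.util.concurrent.ConcurrentHashMap
import javax.xml.transform.{Templates, TransformerFactory}
import javax.xml.transform.stream.{StreamResult, StreamSource}

object TransformCache {
  // Read the configuration from disk only once, on first access ("app.conf" is a placeholder).
  lazy val config: String = scala.io.Source.fromFile("app.conf").mkString

  private val factory   = TransformerFactory.newInstance()
  private val templates = new ConcurrentHashMap[String, Templates]()

  // Compiled Templates are thread-safe and reusable; only the per-call Transformer
  // is created fresh, which is cheap compared to re-parsing the stylesheet on every request.
  def transform(xsltPath: String, xml: String): String = {
    val tpl = templates.computeIfAbsent(
      xsltPath,
      path => factory.newTemplates(new StreamSource(new File(path)))
    )
    val out = new StringWriter()
    tpl.newTransformer().transform(
      new StreamSource(new StringReader(xml)),
      new StreamResult(out)
    )
    out.toString
  }
}
```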

Finding hotspots in the Scala code is not necessarily the most interesting part of this graph. What I found fascinating is the visibility into the whole Linux process, kernel calls included. Interestingly enough, in the graph the embedded RocksDB stack presents itself as a surprisingly minor CPU consumer. I have to say it is also surprising that almost 40% of the samples are not related to the application code itself.

https://jompanakumpana.fi/flames.svg

