
Elixir - first impressions

Like any developer heading towards a burnout, I spent my summer vacation learning a new programming language. I chose Elixir because it is a functional language with an actor-like programming style. I thought that at least the latter would help me get started.

Actually, the first thing I ended up figuring out was how the Erlang runtime works. I was interested in how Erlang processes relate to the operating system. It turns out the concurrency model is not based on spawning an OS-level thread per task but on runtime abstractions: running code is isolated into Erlang processes that can communicate with each other only via message passing. The runtime has schedulers that run these processes concurrently on a small, fixed pool of OS threads (typically one per available CPU thread).
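To make that concrete, here is a minimal sketch of what those lightweight processes look like from Elixir. Each `spawn` creates a BEAM process (not an OS thread), and the only way the two processes interact is through `send` and `receive`:

```elixir
# Spawn a lightweight Erlang process and exchange messages with it.
parent = self()

pid =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(pid, {:ping, parent})

receive do
  :pong -> IO.puts("got pong from #{inspect(pid)}")
end
```

Spawning even hundreds of thousands of these processes is cheap, because the scheduler multiplexes all of them over that small pool of OS threads.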

This is hardly surprising after reading about high-performance applications written in Erlang. OS threads can be expensive, especially when execution switches between them. Context switches can, for example, lead to increased processor cache misses (or even flush the cache entirely), which can cause significant performance losses, since the cache is arguably the most important factor in CPU performance.

The syntax itself is elegant. The only thing I did not really like is the dynamic typing. I do understand it, though, as it removes some clutter from the code. To prevent unexpected type errors, Elixir provides guards (checks on function arguments, such as their types) and solid pattern matching.
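A small sketch of what I mean (the `Area` module here is just an illustrative example): guards and pattern matching push type checks to the function head, so a wrong argument fails loudly at the call site instead of producing a confusing error deep inside the body.

```elixir
defmodule Area do
  # The guard restricts this clause to numeric arguments.
  def rectangle(w, h) when is_number(w) and is_number(h), do: w * h

  # Pattern matching on a tagged tuple selects the clause by shape.
  def of({:circle, r}) when is_number(r), do: :math.pi() * r * r
end

Area.rectangle(3, 4)      #=> 12
Area.of({:circle, 1.0})   #=> 3.141592653589793
# Area.rectangle("3", 4)  #=> raises FunctionClauseError
```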

Concurrency is the way to go with Elixir, and it is emphasized throughout the official documentation. The standard library provides abstractions like GenServer for managing processes and the message passing between them. I like the approach; I know from experience that writing raw send and receive calls is cumbersome and can be very confusing to developers not familiar with the actor model. Testing is also much easier with, for example, GenServer, because it keeps state functionally. That makes the state much easier to understand at any given point of execution than in, say, Akka actors, where state changes are based mostly on imperative-style side effects (of course, you can do functional state in Akka actors as well).
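Here is a minimal GenServer sketch (a hypothetical `Counter` module, not from any real project) showing what "functional state" means in practice: every callback receives the current state and returns the next one, so the state at each step is explicit rather than mutated as a side effect.

```elixir
defmodule Counter do
  use GenServer

  # Client API
  def start_link(initial \\ 0), do: GenServer.start_link(__MODULE__, initial)
  def increment(pid), do: GenServer.cast(pid, :increment)
  def value(pid), do: GenServer.call(pid, :value)

  # Server callbacks: state goes in, new state comes out.
  @impl true
  def init(initial), do: {:ok, initial}

  @impl true
  def handle_cast(:increment, state), do: {:noreply, state + 1}

  @impl true
  def handle_call(:value, _from, state), do: {:reply, state, state}
end

{:ok, pid} = Counter.start_link(0)
Counter.increment(pid)
Counter.value(pid)  #=> 1
```

Because the callbacks are plain functions from state to state, they can also be unit-tested directly without starting a process at all.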
