
Ousterhout's law


Back in the day, everyone used Winamp, a music player with the user interface of a mixing console. The bloody thing had an equalizer on the front page!

As an amateur music producer, I know that EQ is a powerful tool, but making changes that sound good is difficult, to say the least. Mixing engineers spend considerable effort with the artist to balance the frequency ranges and arrive at the desired musical outcome. I bet the 15-year-old me butchered a lot of songs with that thing.

Now we are using Spotify, a player with basically a search bar and a play button.

I ran into something called Ousterhout's Law in the book Operating Systems: Three Easy Pieces. Here is a quote from the book:

TIP: AVOID VOO-DOO CONSTANTS (OUSTERHOUT’S LAW) Avoiding voo-doo constants is a good idea whenever possible. Unfortunately, as in the example above, it is often difficult. One could try to make the system learn a good value, but that too is not straightforward. The frequent result: a configuration file filled with default parameter values that a seasoned administrator can tweak when something isn’t quite working correctly. As you can imagine, these are often left unmodified, and thus we are left to hope that the defaults work well in the field. This tip brought to you by our old OS professor, John Ousterhout, and hence we call it Ousterhout’s Law.

A classic example of this kind of configuration is the JVM's GC parameters. First you have to pick your garbage collector, and after that there are dozens of parameters. The number of possible permutations can overwhelm a human, and it has forced some of the big boys like Netflix and Twitter to build tools that tune the parameters automatically, empirically, based on measured results.
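To give a feel for it, here is a sketch of what a hand-tuned G1 invocation might look like. The flags are real HotSpot options, but the values and the `app.jar` name are illustrative placeholders, not recommendations:

```shell
# Pick a collector, then hand-tune a few of its dozens of knobs.
# Values below are arbitrary examples.
java -XX:+UseG1GC \
     -XX:MaxGCPauseMillis=200 \
     -XX:InitiatingHeapOccupancyPercent=45 \
     -XX:G1HeapRegionSize=8m \
     -Xms4g -Xmx4g \
     -jar app.jar
```

And this is only G1; each collector comes with its own family of flags, which is exactly the permutation explosion the tooling tries to tame.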

I'm sure all of them are needed when you want to squeeze the last millisecond out of a trading-algorithm behemoth. Yet most users are like me with Spotify: I just want to press play. The one-knob tuner in the Go garbage collector is enough for me!

I often wonder which way a knob or a setting should point in the software I make. If I do this, the app is faster but may sometimes produce incorrect results. Should I add a user setting that reduces the amount of visible stuff to help performance, at the cost of information density?

The answer is mostly no. I can't force the decision onto the user. As the product designer, I have to make the decisions based on how the users behave and feel, and keep the configuration file under the hood.
