Posts

Showing posts from 2025

Making dashboards more usable for LLMs

I’ve been looking into agentic workflows to act as an operations assistant for the SaaS I'm working on. A big part of that work is getting the assistant to make sense of all the alert and monitoring data that pours in every day. Passing a bunch of raw time-series data to an LLM generally doesn’t work well: you need to tell the LLM to aggregate the data and give it the means to do so. Working from aggregates will often lead to better insights from the LLM, which is well known to anyone who has tinkered with this (at least at the time of writing). Humans, of course, like to build visualizations and dashboards to solve this issue (yes, yes, dashboards are often useless, but complaining about that is another blog post). LLMs can analyze them as well, and in fact are pretty good at it, so the aggregate can be something both humans and LLMs can digest. I’ve been tinkering with the idea of appending some LLM-only content to a dashboard: for example, additional context, specific d...
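The aggregation step can be as simple as collapsing raw samples into summary statistics before they ever reach the prompt. A minimal sketch with the stdlib (the metric name and choice of statistics are illustrative, not from any particular monitoring stack):

```python
from statistics import mean, median, quantiles

def summarize_series(name: str, points: list[float]) -> str:
    """Collapse raw time-series samples into a compact, prompt-friendly line."""
    p95 = quantiles(points, n=20)[-1]  # last cut point ~ 95th percentile
    return (
        f"{name}: n={len(points)}, min={min(points):.1f}, "
        f"median={median(points):.1f}, mean={mean(points):.1f}, "
        f"p95={p95:.1f}, max={max(points):.1f}"
    )

# A handful of latency samples with two outliers; the summary surfaces
# the skew (mean far above median) without shipping every raw point.
latencies = [12, 14, 13, 11, 250, 15, 12, 13, 900, 14]
print(summarize_series("api_latency_ms", latencies))
```

A line like this per metric gives the LLM the same shape of information a dashboard panel gives a human, at a fraction of the token cost.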

"You are a friendly breadwinner"

A recent blog post by Pete Koomen about how we still lack truly "AI-native" software got me thinking about the kinds of applications I’d like to see. As the blog post says, AI should handle the boring stuff and leave the interesting parts for me. I listed a few tasks I've dealt with recently and wrote some system prompts for potential agentic AIs: Check that the GDPR subprocessor list is up to date. Also, ensure we have a signed data processing agreement in place with the necessary vendors. Write a summary of what you did and highlight any oddities or potentially outdated vendors. Review our product’s public-facing API. Ensure the domain objects are named consistently. Here's a link to our documentation describing the domain. Conduct a SOC 2 audit of our system and write a report with your findings. Send the report to Slack. Once you get approval, start implementing the necessary changes. These could include HR-related updates, changes to cloud infras...

PydanticAI + evals + LiteLLM pipeline

I gave a tech talk at a Python meetup titled "Overengineering an LLM pipeline". It's based on my experience building production-grade stuff with LLMs. I'm not sure how overengineered it actually turned out; "experimental" might be a better term, as it uses the PydanticAI graph library, which is in its very early stages as of writing this, although arguably already better than some of the pipeline libraries. Anyway, here is a link to it. It is a CLI poker app where you play one hand against an LLM. The LLM (theoretically) gets better through a self-correcting mechanism based on an evaluation score from another LLM, using annotated past games as additional context to potentially improve its decision-making. https://github.com/juho-y/archipylago-poker
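The self-correction loop can be sketched in a few lines. This is a hypothetical skeleton, not the project's actual code: both LLMs are stubbed out as plain functions, and all names here are made up for illustration.

```python
def play_hand(context: list[str]) -> str:
    """Stand-in for the player LLM: decides an action given past annotations."""
    return "raise" if context else "fold"

def evaluate(action: str) -> tuple[float, str]:
    """Stand-in for the evaluator LLM: scores the action and annotates it."""
    score = 0.9 if action == "raise" else 0.2
    return score, f"action={action}, score={score}"

def self_correcting_session(hands: int) -> list[str]:
    context: list[str] = []
    actions: list[str] = []
    for _ in range(hands):
        action = play_hand(context)
        score, annotation = evaluate(action)
        # Low-scoring hands are annotated and fed back as context, so later
        # decisions can (theoretically) improve on earlier mistakes.
        if score < 0.5:
            context.append(annotation)
        actions.append(action)
    return actions

print(self_correcting_session(3))  # → ['fold', 'raise', 'raise']
```

The real pipeline expresses this loop as a PydanticAI graph rather than a plain for-loop, but the feedback structure is the same: evaluate, annotate, feed back.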

Async with Django

Working with Django in ASGI mode can benefit applications with long-running requests, where the synchronous mode may be a bit wasteful, as you would likely need a ton of workers, especially if those workers are processes (e.g., Gunicorn). Your app may also run some fancy new LLM framework, which is often async-only. That said, async mode isn't a silver bullet for scaling. In Django 5.1, you still need threads to offload blocking database operations, which remain synchronous by design. Django provides async query methods like aget, acount, and afirst, but these are just syntactic sugar for offloading the operation to a worker thread via asgiref.sync_to_async. You can use the asgiref package's sync_to_async utility to move blocking executions to a worker. By default, that is a ThreadPoolExecutor with one worker thread, shared by all sync_to_async calls during the request context, effectively meaning that each request spawns one worker thread for all its sync work. So, to summar...
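The single-shared-worker behaviour can be sketched with just the stdlib. This is a simplified stand-in for asgiref's sync_to_async (the real thing additionally handles thread-sensitivity and context propagation), and blocking_query here is a made-up placeholder for a synchronous ORM call:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

def sync_to_async(fn, executor):
    """Wrap a blocking function so it runs on the given executor when awaited."""
    async def wrapper(*args, **kwargs):
        loop = asyncio.get_running_loop()
        return await loop.run_in_executor(executor, lambda: fn(*args, **kwargs))
    return wrapper

def blocking_query():
    time.sleep(0.05)  # stand-in for a synchronous database call
    return "row"

async def handle_request():
    # One single-threaded executor per "request": every wrapped call in this
    # scope shares the same worker thread, so the two queries below run
    # one after the other even though they are awaited concurrently.
    with ThreadPoolExecutor(max_workers=1) as executor:
        aquery = sync_to_async(blocking_query, executor)
        return await asyncio.gather(aquery(), aquery())

print(asyncio.run(handle_request()))  # → ['row', 'row']
```

The takeaway: offloading keeps the event loop responsive, but within a request the sync work is still serialized on that one thread, so long chains of ORM calls gain no parallelism from going async.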