
Posts

Is your next SaaS product just a textarea and a canvas?

There has been a lot of talk about SaaS platforms whose front page is essentially just a textarea for collecting the user's intent, followed by a canvas for the outcome. Call these agentic or AI-native startups, but I think there might be something to this pattern. The most famous examples are Lovable and ChatGPT, but this will obviously spread to other domains like legal, and I happen to know that at least one medical startup is working on this as well. So what's the big deal? Firstly, we can eliminate the endless form filling and table layouts. Not that there's much wrong with those - we are all used to them, and they play a part in making websites familiar and easy to use. It's more that the UIs become more personalized, in the sense that you don't need to squeeze all users through a funnel of forms with tons of fields, or tables and visualizations with dozens of variables. The textarea approach flips this. Instead of asking use...
Recent posts

Making dashboards more usable for LLMs

I’ve been looking into agentic workflows to act as an operations assistant for the SaaS I'm working on. A big part of that work is getting the assistant to make sense of all the alert and monitoring data that pours in every day. Passing a bunch of raw time-series data to an LLM generally doesn’t work that well. You need to tell the LLM to aggregate the data and give it the means to do so. Using aggregates will often lead to better insights from the LLM. This is well known to anyone who has tinkered with LLMs (at least at the time of writing). Humans, of course, like to build visualizations and dashboards to solve this issue (yes, yes, dashboards are often useless, but complaining about that is another blog post). LLMs can analyze these as well, and are in fact pretty good at it, so the aggregate can be something both humans and LLMs can digest. I’ve been tinkering with the idea of appending some LLM-only content to a dashboard—for example, additional context, specific d...
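
As a concrete illustration of the aggregation step (my own sketch, not from the post), here is roughly what "hand the LLM aggregates instead of raw points" can look like, assuming the raw alert data fits in a pandas DataFrame with hypothetical column names:

```python
# Hypothetical helper: collapse raw time-series rows into per-service hourly
# aggregates before putting them in the prompt. Column names are assumptions.
import pandas as pd


def summarize_metrics(df: pd.DataFrame) -> str:
    """df has columns: timestamp (datetime), service (str), value (float)."""
    hourly = (
        df.set_index("timestamp")
        .groupby("service")["value"]
        .resample("1h")
        .agg(["mean", "max", "count"])
        .reset_index()
    )
    # A small, named table is something both a human and an LLM can digest.
    return hourly.to_string(index=False)
```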

"You are a friendly breadwinner"

A recent blog post by Pete Koomen about how we still lack truly "AI-native" software got me thinking about the kinds of applications I’d like to see. As the blog post says, AI should handle the boring stuff and leave the interesting parts for me. I listed down a few tasks I've dealt with recently and wrote some system prompts for potential agentic AIs: Check that the GDPR subprocessor list is up to date. Also, ensure we have a signed data processing agreement in place with the necessary vendors. Write a summary of what you did and highlight any oddities or potentially outdated vendors. Review our product’s public-facing API. Ensure the domain objects are named consistently. Here's a link to our documentation describing the domain. Conduct a SOC 2 audit of our system and write a report with your findings. Send the report to Slack. Once you get approval, start implementing the necessary changes. These could include HR-related updates, changes to cloud infras...
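
For a sense of how thin the wiring can be, here is a minimal sketch (my own, not from the post) of one of these prompts dropped into a PydanticAI agent; the model name and the input text are placeholders:

```python
from pydantic_ai import Agent

# Hypothetical agent for the subprocessor-list task above; the model name is a placeholder.
subprocessor_agent = Agent(
    "openai:gpt-4o",
    system_prompt=(
        "Check that the GDPR subprocessor list is up to date and that a signed "
        "data processing agreement exists with each vendor. Summarize what you "
        "did and highlight any oddities or potentially outdated vendors."
    ),
)

result = subprocessor_agent.run_sync("Current subprocessor list: ...")
# Recent PydanticAI versions expose the reply as .output (older ones used .data).
print(result.output)
```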

PydanticAI + evals + LiteLLM pipeline

I gave a tech talk at a Python meetup titled "Overengineering an LLM pipeline". It's based on my experiences building production-grade stuff with LLMs; I'm not sure how overengineered it actually turned out. Experimental would be a better term, as it uses the PydanticAI graph library, which is in its very early stages as of writing this, although arguably already better than some of the pipeline libraries. Anyway, here is a link to it. It is a CLI poker app where you play one hand against an LLM. The LLM (theoretically) gets better through a self-correcting mechanism based on an evaluation score from another LLM. It uses annotated past games as additional context to potentially improve its decision-making. https://github.com/juho-y/archipylago-poker
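
The self-correction loop, boiled down to a rough sketch (my own simplification, not the repo's actual code; agent names, prompts, and the scoring scale are made up):

```python
from pydantic import BaseModel
from pydantic_ai import Agent


class Evaluation(BaseModel):
    score: int      # 1-10, assigned by the evaluator model
    critique: str


player = Agent(
    "openai:gpt-4o",
    system_prompt="You are playing a hand of poker. Decide your action.",
)
evaluator = Agent(
    "openai:gpt-4o",
    output_type=Evaluation,  # called result_type in older PydanticAI releases
    system_prompt="Score the poker decision from 1 to 10 and explain briefly.",
)

annotated_games: list[str] = []  # past hands plus critiques, fed back in as context


def play_hand(state: str) -> str:
    context = "\n".join(annotated_games) or "none yet"
    decision = player.run_sync(
        f"Annotated past hands:\n{context}\n\nCurrent hand:\n{state}"
    ).output
    evaluation = evaluator.run_sync(f"Hand:\n{state}\nDecision:\n{decision}").output
    annotated_games.append(
        f"hand={state!r} decision={decision!r} "
        f"score={evaluation.score} critique={evaluation.critique}"
    )
    return decision
```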

Async with Django

Working with Django in ASGI mode can benefit applications with long requests, where synchronous mode may be a bit wasteful, as you would likely need a ton of workers—especially if those workers are processes (e.g., Gunicorn). Your app may also run some fancy new LLM framework, which is often async-only. That said, async mode isn't a silver bullet for scaling. In Django 5.1, you still need threads to offload blocking database operations, which remain synchronous by design. Django provides async query wrappers like aget, acount, and afirst, but these are just syntactic sugar for offloading the operation to a worker thread via asgiref.sync_to_async. You can use the asgiref package's sync_to_async utility to move blocking executions to the worker yourself. By default, that is a ThreadPoolExecutor with one worker. This thread is shared by all sync_to_async calls during the request context, effectively meaning that each request spawns one worker thread for all sync work. So, to summar...
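
A minimal sketch of what this looks like in a view, assuming a hypothetical Article model and app: aget handles the query, and sync_to_async wraps anything else that still blocks.

```python
from asgiref.sync import sync_to_async
from django.http import JsonResponse

from myapp.models import Article  # hypothetical app and model


def build_report(article):
    # Stand-in for blocking work: more ORM queries, file I/O, a sync SDK call...
    return {"title": article.title}


async def article_view(request, pk):
    # aget runs the sync query in the per-request worker thread via sync_to_async.
    article = await Article.objects.aget(pk=pk)
    # Other blocking code has to be offloaded explicitly in the same way.
    report = await sync_to_async(build_report)(article)
    return JsonResponse(report)
```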

Careful with externalTrafficPolicy

A project I am working on is hosted in an EKS cluster with the NGINX ingress controller (the one maintained by the Kubernetes project). It is deployed using its official Helm chart, which I realized, after a lengthy debugging session, was a mistake. The initial setup I aimed to improve had several flaws. Firstly, we were using the AWS Classic Load Balancer in front of the nginx ingress in the cluster, which has been deprecated for some time (years?). Continuing to use it made little sense for us. The second issue was that we were only running one(!) nginx pod, which is quite sketchy since the exposed web services had essentially no high availability. I switched to the Network Load Balancer (NLB), which was straightforward - I just needed to change the ingress-nginx service annotation to specify the load balancer type as NLB: service.beta.kubernetes.io/aws-load-balancer-type: nlb However, increasing the replica count turned out to be tricky. When I bumped it up to two, I began to ...
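
For reference, here is roughly where both knobs live in the official ingress-nginx chart values (a sketch, not our actual values file; the right externalTrafficPolicy value depends on the rest of the story above, which is truncated here):

```yaml
controller:
  replicaCount: 2
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Cluster load-balances across all nodes (with SNAT); Local only routes to
    # nodes that actually run an ingress pod and preserves the client source IP.
    externalTrafficPolicy: Cluster
```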

Vanta is a pretty good tool

On a project I've been working on, I've been preparing for SOC 2 Type II certification. My responsibilities have mostly been on the engineering/IT side, ensuring that our SaaS is deployed and developed according to SOC 2 processes. This isn't something a developer would willingly or enthusiastically take on, right? I can’t believe I’m saying this out loud, but actually ... it hasn’t been that bad. The biggest reason for this has definitely been Vanta. Vanta has distilled the rather hard-to-decipher process descriptions into actionable items. As far as I know, SOC 2 isn’t a one-size-fits-all (SaaS provider) certification; it differs according to the stack - which makes sense. Plugging in all our services, from cloud providers to issue trackers, spits out a tailored task list. The task list can even include literal Terraform code examples, which you can copy-paste with minor changes. You could kind of get a similar list from AWS Audit Manager or AWS Security ...