
Making dashboards more usable for LLMs

I’ve been looking into agentic workflows that can act as an operations assistant for the SaaS I'm working on. A big part of that work is getting the assistant to make sense of all the alert and monitoring data that pours in every day.

Passing a bunch of raw time-series data to an LLM generally doesn’t work that well. You need to tell the LLM to aggregate the data and give it the means to do so.

Using aggregates often leads to better insights from the LLM, as anyone who has tinkered with this has probably noticed (at least at the time of writing).
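
As a minimal sketch of what "giving it the means" could look like, assuming the raw data arrives as (timestamp, value) pairs; the bucketing scheme and field names here are my own invention, not anything from a specific monitoring stack:

```python
from statistics import mean

# Hypothetical input: raw (unix_timestamp, value) samples from a monitoring system.
Sample = tuple[float, float]

def summarize(samples: list[Sample], bucket_seconds: int = 3600) -> list[dict]:
    """Collapse raw points into per-bucket summary rows an LLM can actually reason about."""
    buckets: dict[int, list[float]] = {}
    for ts, value in samples:
        buckets.setdefault(int(ts // bucket_seconds), []).append(value)
    return [
        {
            "bucket_start": bucket * bucket_seconds,
            "count": len(values),
            "mean": round(mean(values), 2),
            "max": max(values),
        }
        for bucket, values in sorted(buckets.items())
    ]

# A handful of summary rows goes into the prompt instead of thousands of raw points.
```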

Humans, of course, like to build visualizations and dashboards to solve this issue (yes, yes, dashboards are often useless, but complaining about that is another blog post). LLMs can analyze them too, and are in fact pretty good at it, so the aggregate can be something both humans and LLMs can digest.

I’ve been tinkering with the idea of appending some LLM-only content to a dashboard—for example, additional context, specific details, or even some do’s and don’ts for the analysis.

Humans can ignore that stuff, but LLMs can use it to yield better results.

(Screenshot: additional context for the LLM added to the bottom of the chart.)
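
As a rough sketch of how such a footer could be produced, assuming the panel is rendered as an image somewhere in the pipeline; the chart data and the notes below are made up for illustration:

```python
import matplotlib.pyplot as plt

# Hypothetical panel data and LLM-only analysis notes.
values = [120, 135, 128, 410, 132, 127]
llm_notes = (
    "LLM analysis notes (humans can ignore this):\n"
    "- Values are p95 request latencies in ms, sampled hourly.\n"
    "- A single spike above 300 ms is expected during the nightly batch job.\n"
    "- Don't flag the spike unless it repeats within 24 h."
)

fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(values, marker="o")
ax.set_title("API latency (p95)")

# Draw the notes below the chart so they end up in every screenshot of the panel.
fig.text(0.02, 0.02, llm_notes, fontsize=7, va="bottom")
fig.subplots_adjust(bottom=0.3)
fig.savefig("dashboard_panel.png")
```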

I’m not really sure this is a good idea, though. You could, of course, just include this information in the prompts.

But what if you have a lot of different graphs and no clear "just analyze this" prompt that applies to all?

Also, adding it directly to the dashboard (and therefore to the screenshot the agent captures via a tool call) decouples the analysis agents, which contain the prompts, from the dashboards themselves. That may prove useful in organizations where the infrastructure or "dashboard-builder" team is too slow to react to the AI agent team, or vice versa.
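
A sketch of that decoupling, assuming the agent's screenshot tool is a headless browser (Playwright here, and the dashboard URL is hypothetical): the agent's own prompt stays generic, and whatever the dashboard team appends to the page rides along in the captured image.

```python
from playwright.sync_api import sync_playwright

def capture_dashboard(url: str) -> bytes:
    """Tool the analysis agent can call: screenshot a dashboard, LLM-only footer included."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1600, "height": 1200})
        page.goto(url, wait_until="networkidle")
        screenshot = page.screenshot(full_page=True)
        browser.close()
    return screenshot

# Hypothetical URL; the PNG bytes would be passed to a vision-capable model.
png = capture_dashboard("https://dashboards.example.com/ops-overview")
```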

 
