I’ve been looking into agentic workflows to act as an operations assistant for the SaaS I'm working on. A big part of that work is getting the assistant to make sense of all the alert and monitoring data that pours in every day.
Passing a bunch of raw time-series data to an LLM generally doesn’t work that well. You need to tell the LLM to aggregate the data and give it the means to do so.
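To make that concrete, here's a minimal sketch of what I mean. The metric names, the hourly window, and the sample data are all made up for illustration, not from any particular stack:

```python
# A minimal sketch: collapse raw time-series rows into a compact,
# prompt-friendly summary before handing anything to the LLM.
import pandas as pd

# Illustrative stand-in for raw monitoring data.
raw_df = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01", periods=120, freq="1min"),
    "latency_ms": [100 + (i % 7) * 15 for i in range(120)],
})

def summarize_metrics(df: pd.DataFrame) -> str:
    """Aggregate per-minute points into hourly stats the LLM can reason over."""
    hourly = (
        df.set_index("timestamp")
          .resample("1h")["latency_ms"]
          .agg(["mean", "max", "count"])
    )
    return "\n".join(
        f"{ts:%Y-%m-%d %H:%M} mean={row['mean']:.1f}ms "
        f"max={row['max']:.0f}ms n={int(row['count'])}"
        for ts, row in hourly.iterrows()
    )

# A handful of summary lines goes into the prompt instead of
# thousands of raw data points.
prompt = "Analyze these hourly latency aggregates:\n" + summarize_metrics(raw_df)
```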
Using aggregates often leads to better insights from the LLM. That's well known to anyone who has tinkered with this kind of setup (at least at the time of writing).
Humans, of course, like to build visualizations and dashboards to solve this issue (yes, yes, dashboards are often useless, but complaining about that is another blog post). LLMs can analyze them too, and are in fact pretty good at it, so the aggregate can be something both humans and LLMs can digest.
I’ve been tinkering with the idea of appending some LLM-only content to a dashboard—for example, additional context, specific details, or even some do’s and don’ts for the analysis.
Humans can ignore that stuff, but LLMs can use it to yield better results.
Additional context for the LLM added to the bottom of the chart
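Here's a rough sketch of how that could be done, assuming the charts are rendered with matplotlib. The notes themselves are just examples of the kind of context I have in mind:

```python
# A minimal sketch: render LLM-only analysis notes below a chart, so they
# end up in the screenshot an agent captures. All text here is illustrative.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(range(24), [100 + (h % 7) * 15 for h in range(24)])
ax.set_title("p95 latency (ms), last 24h")

llm_notes = (
    "LLM context: the 02:00-04:00 spike is a known nightly batch job; ignore it.\n"
    "Do: compare against the weekly baseline. Don't: flag single-point spikes."
)
# Reserve space at the bottom and put the notes there; humans can skim past them.
fig.subplots_adjust(bottom=0.3)
fig.text(0.02, 0.02, llm_notes, fontsize=8, va="bottom", wrap=True)
fig.savefig("latency_dashboard.png")
```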
I’m not really sure this is a good idea, though. The obvious alternative is to just include this information in the prompts.
But what if you have a lot of different graphs and no clear "just analyze this" prompt that applies to all of them?
Also, adding the context directly to the dashboard (and therefore to the screenshot the agent captures via a tool call) can decouple the analysis agents, which hold the prompts, from the dashboards themselves. That may prove useful in organizations where the infrastructure or "dashboard-builder" team is too slow to react to the AI agent team, or vice versa.
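For completeness, here's a sketch of the tool-call side, assuming Playwright for the screenshot. The dashboard URL is a placeholder; how the base64 image is then attached to a vision-capable model's message depends on your LLM client:

```python
# A minimal sketch: the agent's screenshot tool. Because the dashboard-specific
# guidance is baked into the image, the analysis prompt itself can stay generic.
import base64
from playwright.sync_api import sync_playwright

def capture_dashboard(url: str) -> str:
    """Screenshot a dashboard page and return it base64-encoded for the LLM."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page(viewport={"width": 1600, "height": 900})
        page.goto(url, wait_until="networkidle")
        png = page.screenshot(full_page=True)
        browser.close()
    return base64.b64encode(png).decode()

# Hypothetical internal URL; the chart plus its LLM-only footer is what
# the agent ends up analyzing.
image_b64 = capture_dashboard("https://dashboards.example.internal/latency")
```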