
HACKS.md

The most valuable comments I find in any given codebase look like this:

Hack! This thing is weird because of this and that reason. I tried to implement a more elegant solution, but due to X and Y constraints, I failed.
Hack! This is weird because there is a bug in library X that we depend on. See https://github.com/library/issues/420
Note! I tried options A, B, and C and decided to do this weird thing because, while it looks wrong, it turned out to be the best solution at the time of writing.

These comments do not explain what the code does. They explain why the code looks the way it does. They bring into light historical context, failed attempts, and external constraints that are otherwise invisible.
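To make the what/why distinction concrete, here is a small sketch; the function and the upstream bug are invented purely for illustration:

```python
def get_display_name(user: dict) -> str:
    # A "what" comment restates the code and adds nothing:
    # get the name from the user dict, falling back to username.
    #
    # A "why" comment explains the weirdness:
    # Hack! The legacy auth service sometimes sends the name under
    # "username" instead of "name". I tried fixing it upstream, but
    # that service is frozen until the vendor migration lands.
    return user.get("name") or user.get("username", "unknown")
```

The code alone tells you there is a fallback; only the second comment tells you it is deliberate and why it cannot be removed yet.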

We all occasionally fail to communicate our intent to the next developer. That is normal and unavoidable. What matters is leaving a clear mark when something non-obvious or hacky is done on purpose.

Increasingly, the “next developer” is a metal-headed clanker: an LLM. LLMs are actually quite good at reading and understanding these comments. The problem is not comprehension; it is behavior. They tend to remove these comments, and when they introduce new hacks of their own, whether because they were instructed to or because they acted autonomously, they fail to leave the valuable "hack warning" traces behind.

Anyone who works with LLM-generated code has seen the opposite failure mode as well.

By default, LLMs are extremely verbose with comments. They happily pollute a codebase with low-value noise: restating what the code already says, line by line. Writing comments appears to be a behavior that is surprisingly hard to disable.

To add insult to injury, they often overwrite or delete the few comments that actually matter, the why comments, while adding commentary explaining trivial stuff that is self-evident from the code itself. The result is strictly worse than either a well-commented or an uncommented codebase: the signal is gone, the noise remains.

This led me to wonder whether hack documentation should live somewhere else entirely. Should there be a HACKS.md? Or would it be enough to give the agent explicit instructions about how to treat this class of comments?

Since there is no established HACKS.md convention, I went with the latter and added a short, explicit rule to AGENTS.md:

If a hack or workaround is required, leave a comment explaining why it was done. Do not remove these comments unless you have confirmed they are no longer valid. Prefix such comments with NB; (nota bene).
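In practice, a comment following that rule might look like this; the function and the upstream quirk are hypothetical, sketched only to show the NB; prefix in use:

```python
def normalize_path(path: str) -> str:
    # NB; the upstream export tool emits Windows-style backslashes even
    # on Linux. Normalizing here was simpler than patching the
    # dependency; remove this workaround once the upstream fix ships.
    return path.replace("\\", "/")
```

The prefix makes these comments greppable, and the wording gives the agent (or the next human) a concrete condition for when the comment, and the hack, can be deleted.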
