The most valuable comments I find in any given codebase look like this:

Hack! This thing is weird because of this and that reason. I tried to implement a more elegant solution, but due to X and Y constraints, I failed.

Hack! This is weird because there is a bug in library X that we depend on. See https://github.com/library/issues/420

Note! I tried options A, B, and C and decided to do this weird thing because, while it looks wrong, it turned out to be the best solution at the time of writing.

These comments do not explain what the code does. They explain why the code looks the way it does. They bring to light historical context, failed attempts, and external constraints that are otherwise invisible.

We all occasionally fail to communicate our intent to the next developer. That is normal and unavoidable. What matters is leaving a clear mark when something non-obvious or hacky is done on purpose. Increasingly, the “next developer” is a metal-headed clanker: an LLM. ...
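To make the shape concrete, here is a minimal, hypothetical sketch of such a "why" comment attached to real code. The library name, issue link, and function are made up for illustration; only the structure of the comment matters.

```typescript
// HACK: We round-trip the payload through JSON before sending it because a
// (hypothetical) serializer bug in the `fast-transport` dependency drops
// nested fields that are set to undefined.
// See https://github.com/example/fast-transport/issues/123 (hypothetical link).
// Tried: a custom replacer (the library ignores it) and patching the dependency
// (it broke internal caching). This looks redundant, but it was the least bad
// option at the time of writing. Remove once the upstream bug is fixed.
export function serializeEvent(payload: Record<string, unknown>): string {
  const withoutUndefined = JSON.parse(JSON.stringify(payload));
  return JSON.stringify(withoutUndefined);
}
```

A comment like this does not make the hack any less of a hack, but it tells the next reader, human or LLM, that the weirdness is intentional and what it would take to remove it.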
There has been a lot of talk about SaaS platforms whose front page is essentially just a textarea for collecting the user's intent, with everything after that being a canvas for the outcome. Call them agentic or AI-native startups, but I think there might be something to this pattern. The most famous examples are Lovable and ChatGPT, but these will obviously spread to other domains like legal, and I happen to know that some medical startup is also working on this.

So what's the big deal here?

Firstly, we can eliminate the endless form filling and table layouts. Not that there is much wrong with them - we are all used to them, and they play a part in making websites familiar and easy to use. It's more that the UIs become more personalized: you no longer need to squeeze every user through the same funnel of forms with tons of fields, or tables and visualizations with dozens of variables. The textarea approach flips this. Instead of asking users...