When I started at my current job 3ish years ago, we had nothing but a few PowerPoint slides and a Figma prototype. I recall talking with one of the founders, whom I had worked with some 10 years earlier, and pointing out that the very next step after PowerPoint and Figma was still the same as it had always been: translating those slides and mockups into CRUDs, forms, and other basic building blocks. Sure, all the fancy new tools and frameworks (we had jQuery back then) help, but I still had to tinker with very basic primitives like tables and forms. I could not just say, "here is a table, here is a form." Well, yes, technically I could, but I still needed to write a lot of code to make a working form or table. This was around the time of GPT-3, the Copilot beta, and very early Cursor. I did attempt to use them, but they could only perform very basic tasks and were pretty much useless for frontend work. We, being one of the first-wave "ChatGPT wrapper" startups, had naturally pus...
The most valuable comments I find in any given codebase look like this:

Hack! This thing is weird for this and that reason. I tried to implement a more elegant solution, but due to X and Y constraints, I failed.

Hack! This is weird because there is a bug in library X that we depend on. See https://github.com/library/issues/420

Note! I tried options A, B, and C and decided to do this weird thing because, while it looks wrong, it turned out to be the best solution at the time of writing.

These comments do not explain what the code does. They explain why the code looks the way it does. They bring to light historical context, failed attempts, and external constraints that are otherwise invisible. We all occasionally fail to communicate our intent to the next developer. That is normal and unavoidable. What matters is leaving a clear mark when something non-obvious or hacky is done on purpose. Increasingly, the "next developer" is a metal-headed clanker: an LLM. ...
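To make this concrete, here is a minimal, hypothetical sketch of what such a "why" comment might look like attached to real code. The function and its trade-offs are invented for illustration; the point is the shape of the comment, not the algorithm:

```python
def stable_dedupe(items):
    # Note! Tried set() (option A) and dict.fromkeys() (option B) first;
    # both raise TypeError when the list contains unhashable entries
    # (we get raw lists and dicts from the import pipeline). So we fall
    # back to a linear scan. It looks quadratic and wrong, but input
    # lists here are short, and this was the best trade-off at the time
    # of writing.
    seen = []
    out = []
    for item in items:
        if item not in seen:
            seen.append(item)
            out.append(item)
    return out
```

The comment says nothing about *what* the loop does, which is obvious. It records the failed alternatives and the constraint that forced the odd-looking choice, which is exactly the context a future reader, human or LLM, cannot recover from the code alone.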