
Leveraging LLMs for what LLMs are worth

December 08, 2025 ∙ 4 minute read

Those who work with me know two things about me for sure:

  1. I despise LLMs. I don’t think they should replace or even try to replace people.
  2. I am a big fan of well-written RFCs and specifications.

The first point is not a surprise; being part of this industry before the advent of LLMs gave me enough perspective to compare how things used to work with how they work now. The field is clearly divided, and I see it as binary as ever: engineers either love LLMs or despise them. I feel that people are still in awe of having something else capable of, for instance, writing unit tests for a feature, but they fail to understand that the one who wrote the implementation must also be the one who writes the tests: however manual, the process is still important, since it forces one to exercise what they implemented. In this step, several extremely relevant points can still be observed: edge cases the implementation misses, APIs that turn out to be awkward to call, and mismatches between the code and its documentation.

Those are just a glimpse of what writing both tests and documentation can provide to the one implementing them, but the list goes on. Now, delegating the job to a tool whose reasoning is not even visible blinds implementors to those nuances.

I also see how this is deeply affecting those who are entering the field. When mentoring anyone, the most important thing I repeat over and over is: read and understand error messages, and do not ignore warnings. I often see people acknowledging message boxes on UIs, hitting OK without even reading them, and then asking themselves “Why isn’t this working?”; the same applies to developers. One must be able to understand what an error message means and how to read a stack trace. Now, imagine someone at university, or in an internship, blindly writing code and asking an LLM to interpret and fix any errors. What will this person be able to accomplish without it?

But this is not only affecting newcomers; I often see interesting cases coming from people who have been part of the field for a while.

Sometimes I feel like people can’t even read and maintain their own code, let alone code generated by an LLM and copy-and-pasted into the codebase; I don’t need to say how dangerous that is, not only from a security point of view, but from a business-continuity one.

This ties directly into my second point about well-written specifications, which exist precisely to prevent this kind of ambiguity and dependency on individual authors.

The second point is also not a surprise. I’m a big fan of LaTeX, and a bigger fan of the IETF and everything it has achieved. I work with the premise that every RFC and specification must be written in a fashion that allows anyone reading it to fully understand what it is about and, even more importantly, to be able to implement it no matter the language or technology they want to use. This way, even if I leave the company I’m working at, the documentation is there, all the design decisions are still there, navigable and clear to the point where they don’t need me, the original author, in order to be updated or reimplemented: everything lives within a single documentation unit.

Now, where can LLMs help us here? Not in designing documentation, not in writing code, but in reviewing the documents. It turns out that LLMs are great reviewers: they can keep a lot in their context and comprehend documents in a matter of seconds. However, there’s an important pitfall in leveraging hosted LLMs (like ChatGPT or Claude): one must be absolutely sure of their data policies and, even more importantly, that the company will not use anything you provided to train future versions. So that is how I use LLMs: for reviewing what I write.

Given their ability to cross-examine a whole document in seconds, I mostly use them as part of my writing process. First, I draft the document, put in everything I need, and do the first review myself. When I’m happy with the result, it’s time to bring in a critic. I let the LLM read the draft and ask for feedback: sections that need development, points that are unclear or ambiguous, contradictions, and typos. And they are excellent at this. They can pinpoint which section makes a contradictory statement against another, potentially incomplete definitions, examples or small details that are present in an ABNF section but explained incorrectly in the body of the text, along with other small nits and inconsistencies whose removal truly elevates the level of the document. But do notice how this didn’t replace the creation process: it is not replacing the brain, only augmenting it, catching the moment one forgets that a small sentence ten sections earlier no longer matches what’s written at the end of the document.
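To make this concrete, here is a minimal sketch of how that review step could be scripted. It assumes the OpenAI Python SDK purely for illustration (the data-policy caveat above applies in full), and both the model name and the review prompt are mine, not a prescription:

```python
# review_draft.py — a sketch of an LLM-as-reviewer step.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
import sys

from openai import OpenAI

REVIEW_PROMPT = (
    "You are reviewing a technical specification. Do NOT rewrite it. "
    "Instead, list: (1) sections that contradict each other, "
    "(2) ambiguous or incomplete definitions, "
    "(3) mismatches between ABNF rules and the prose describing them, "
    "(4) typos and small inconsistencies. "
    "Quote the exact sentence for every finding."
)


def review(path: str) -> str:
    """Send the draft to the model and return its list of findings."""
    with open(path, encoding="utf-8") as f:
        draft = f.read()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; pick whatever your data policy allows
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review(sys.argv[1]))
```

The prompt deliberately constrains the model to critique rather than generate: the draft stays mine, and the output is a list of findings I can accept or reject one by one.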

That’s what people should be using LLMs for: augmenting our abilities, not replacing them. A few months ago, my partner wrote a post with a provocative title, LLMs are the Lobotomy of the 21st Century, and time and time again I fear that it may be true.

This post diverged from what I’m used to writing, but I’ve had this in my head for so long that I thought it was worth putting out.

For now, let’s continue using our brains as we should, and avoid atrophying them; if we use LLMs thoughtfully, they can strengthen (rather than weaken) our engineering practices.