AI is inherently conservative. I don’t mean this in the political sense: here, ‘conservative’ has a lowercase c. Rather, I mean that, because of how they’re architected, large language models favor and perpetuate long-established frameworks and ideas over upstarts. This has implications for how you use AIs to develop your information infrastructure.

I won’t go into how LLMs work here. The main point is that models are trained on existing data. On day one, they only “know” what’s in the training corpus. The more information there is about a particular topic, the better the model will do with queries about that topic.

Naturally, the corpus includes more information about older mainstream subjects than newer niche subjects. As a result, models do better on older stuff. For example, it’s more likely that an LLM has been trained on the full text of Bleak House than on a more recent novel. An LLM-generated summary of the former will likely be more accurate than one of the latter.

Yes, newer chatbots include research modes that let them search the web. But even so, LLMs tend to do better when the core model “knows” more about the subject. Of course, this applies to more than just prose: LLMs also produce better answers to questions about older, more established ideas in other domains.

A few real-world examples

A few months ago, I asked the then-new ChatGPT o3 “reasoning” model for help troubleshooting my old-school Panasonic Micro Four-Thirds camera. The device was unresponsive after I’d plugged it into an unfamiliar charger. I wanted to know if there was anything I could do to unbrick it.

At the time, folks were saying o3 had near-AGI capabilities. In my case, it produced a textbook hallucination: long, authoritative, and repeatedly mistaken explanations of how to disassemble the camera and which parts to order. Fortunately, I had enough sense to doubt its recommendations.

The problem? The parts and procedure it suggested were indeed for a Panasonic Micro Four-Thirds camera, just not my model. My sense is that when dealing with a niche product within a niche category, the LLM didn’t have enough to go on. It did the best it could, but the result was worse than saying “I don’t know.”

I’ve also experienced this issue when using LLMs for software development. Recently, I asked both Claude and ChatGPT for help implementing a workflow in Langflow, a relatively new system for developing agentic applications. Both chatbots suggested I try nonexistent features or produced broken code. (Yes, even though I put GPT-5 into “thinking” mode.)

In this case, both Claude and ChatGPT were likely hampered by the fact that 1) Langflow is relatively new and 2) it has a visual (rather than text-based) interface. Interactions with the chatbots consisted of me pasting screenshots of the dev environment and the LLMs offering instructions about which UI elements to ‘wire up.’ Less than ideal.

Conversely, both LLMs have succeeded brilliantly for me at writing Emacs Lisp config files, Unix shell scripts, and Python applications. Not only are these platforms text-based, but they’ve also been used widely for a long time. There are decades’ worth of material online on how to solve problems with Python, Elisp, and Bash.
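To make this concrete, consider the sort of bread-and-butter scripting request that LLMs handle reliably. The sketch below is my own illustration, not output from an actual session, and the directory path is hypothetical: a short Python script that renames photos based on their modification dates, a task the training corpus has seen thousands of times.

    #!/usr/bin/env python3
    """Rename photos to a date-based scheme, the kind of
    well-trodden task an LLM tends to nail on the first try."""
    import datetime
    import pathlib

    # Hypothetical directory; adjust to taste.
    photo_dir = pathlib.Path("~/Pictures/inbox").expanduser()

    for photo in sorted(photo_dir.glob("*.jpg")):
        # Build a name like 2024-05-17_143210.jpg from the file's mtime.
        mtime = datetime.datetime.fromtimestamp(photo.stat().st_mtime)
        target = photo.with_name(mtime.strftime("%Y-%m-%d_%H%M%S") + photo.suffix)
        if not target.exists():  # don't clobber files on timestamp collisions
            photo.rename(target)

Every call here is plain standard library, the stuff of countless tutorials and Stack Overflow answers, which is exactly why a model rarely fumbles it.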

But this isn’t just about longevity. I’ve had poor experiences using LLMs with another long-lived programming language: AppleScript. My guess is there aren’t enough examples in the training corpus of how to solve problems with AppleScript. Basically, LLMs do better with systems supported by lots of Stack Overflow posts.

Entrenching established technologies

And here’s where we come to the word ‘perpetuate’ in the opening. Although I don’t have traffic stats, my sense is LLMs are replacing the Stack Overflows of the world. As more developers turn to LLMs for answers, they eschew the types of interactions — blog posts, forum questions, etc. — that would produce the next generation’s training corpus.

This will place new systems at a disadvantage. We’ll get worse suggestions for a new development framework or language if the LLMs don’t know enough about it. And LLMs won’t know about it if people don’t post about it — which they won’t do if LLMs are answering all their one-off dev questions.

A corollary: applications written with older languages and frameworks will be easier to develop and maintain than those created using newer systems. As a result, established programming languages such as Python, Perl, and Lisp will become more entrenched. A vicious (?) cycle ensues.

I added the question mark because I’m not convinced this is a bad thing. Standing on the shoulders of giants is a time-honored way to build higher and faster. (Especially if the giants are unlikely to jerk you around. Open source technologies like Python, Elisp, and Perl are trustworthy and predictable.)

Implications for tech choices

The idea that AI entrenches incumbents has counter-intuitive implications. First, even though LLMs themselves are an exciting new technology, you should favor older, established, mainstream technologies over newer, unproven alternatives — especially when you use LLMs to develop software.

Even amazing new features (e.g., Langflow’s visual dev environment) must be weighed against older systems’ overwhelming advantages in an AI-augmented world. Put simply, new applications built atop established technologies will be easier to develop and maintain — and not just by humans, but also by AIs and AI/human centaurs.

Second, the Lindy effect (the longer a technology has been around, the longer it’s likely to survive) is in play here. You don’t want to build atop technologies that might soon become obsolete. Ironically, in an AI-augmented world, older technologies stand a better chance of sticking around. New entrants face a formidable disadvantage because LLMs don’t know as much about them — and perhaps never will.

Third, there’s a reason why ‘language’ is one of the Ls in LLM: these systems are trained on text and fare best when dealing with text queries about natively text-based systems such as novels and Python code. You’ll get better results when solving problems in systems that use plain text (e.g., a directory full of .py files) rather than fancy UIs.

Toward a conservative approach to AI

LLMs are among the most disruptive technologies of our lifetime. But structurally, they’re inherently conservative. Their training leads them to favor established frameworks and ideas, often leaving new or niche technologies and ideas at a disadvantage.

This has implications for your tech choices, especially when you’re using AI to develop information systems. Favoring older, more established programming languages and frameworks will lead to more efficient development, easier maintenance, and better outcomes.

As is often the case with innovation, the challenge is balancing novel approaches with the reliability of proven solutions. The goal is harnessing the power, speed, and scale of LLMs while building on solid foundations. Ultimately, building wisely with AI calls for adopting the new while recognizing the value of the old.