For as long as we’ve had computers, they’ve produced predictable outputs. But AI – in the form of large language models – represents a new kind of unpredictable computing. The key to implementing useful AI solutions is making the most of both paradigms.
One of the oldest known computers is the Antikythera mechanism, an ancient Greek device for predicting astronomical events. Given certain inputs, it computed the positions of celestial bodies based on logic hard-coded in its gears.
Traditional software is kind of like that: it determines what to do based on pre-defined conditions. You give the computer input and get predictable outcomes.
If a program produces unexpected results, it’s either because the programmer introduced randomness or because there are bugs. Either way, the outcome can be reproduced by recreating the exact conditions that led to it. Because of this, traditional computation is deterministic.
Modern AI, such as large language models, represents a new computing paradigm. If you’ve used ChatGPT or Claude, you know you seldom get the same results given the same input.
Unlike traditional programs, LLMs don’t follow explicit instructions. Instead, they generate responses by weighing probabilities across a vast network of linguistic relationships. Many responses are plausible for any given prompt, and the model may take a different path to one each time. This is a new kind of probabilistic computing.
Much of what we value about computers is due to their predictability. That’s one reason why so many people find LLMs baffling or objectionable: probabilistic behavior breaks our mental models for how computers work.
Probabilistic computing is good for some tasks but not others. Brainstorming is a good use case, since you’re explicitly asking for divergent thinking. On the flip side, math requires a deterministic approach. LLMs can still handle math, but only by offloading the actual computation to deterministic systems such as Wolfram Alpha.
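To make the offloading idea concrete, here’s a minimal sketch in Python. It’s not a real Wolfram Alpha integration; the ask_llm helper is a hypothetical stand-in for whatever model API you use. The probabilistic step translates a question into a formal expression, and an ordinary deterministic evaluator produces the actual answer:

```python
def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a language model API."""
    raise NotImplementedError


def answer_math_question(question: str) -> float:
    # Probabilistic step: let the model extract a plain arithmetic expression.
    expression = ask_llm(
        f"Rewrite this as a plain arithmetic expression, nothing else: {question}"
    )
    # Deterministic step: evaluate the expression with ordinary code.
    # (A real system would use a proper parser or an external engine.)
    return float(eval(expression, {"__builtins__": {}}, {}))
```

The model handles the fuzzy part (understanding the question); the arithmetic itself is done by code that gives the same answer every time.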
Prompt engineering is an attempt to constrain probabilistic processing to make LLMs behave more predictably. But it only goes so far: you can’t force LLMs to behave like traditional programs.
A better approach is building deterministic software that uses AI at particular junctures for specific tasks. An example is my approach to re-categorizing blog posts: a deterministic program iterates through files, offloading pattern matching to an LLM. The LLM is used only for stuff probabilistic systems do well – the inverse of the Wolfram Alpha approach.
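Here’s a rough sketch of that pattern, not my actual script: the directory layout, the category list, and the ask_llm helper are placeholders. The shape is the point: a plain loop does the deterministic bookkeeping, and the model only answers the fuzzy “which category fits?” question.

```python
from pathlib import Path

CATEGORIES = ["design", "strategy", "tools", "books"]  # placeholder taxonomy


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to a language model API."""
    raise NotImplementedError


def recategorize(posts_dir: str) -> dict[str, str]:
    assignments = {}
    # Deterministic part: walk the files in a predictable order.
    for post in sorted(Path(posts_dir).glob("*.md")):
        excerpt = post.read_text(encoding="utf-8")[:2000]  # keep the prompt short
        # Probabilistic part: ask the model for the fuzzy judgment call.
        category = ask_llm(
            f"Pick the single best category from {CATEGORIES} for this post:\n{excerpt}"
        ).strip().lower()
        # Deterministic guardrail: never accept a category outside the taxonomy.
        if category not in CATEGORIES:
            category = "uncategorized"
        assignments[post.name] = category
    return assignments
```

Everything that needs to be repeatable (which files get touched, what happens when the model returns something unexpected) stays in ordinary code; only the judgment call is probabilistic.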
This new paradigm offers unprecedented opportunities. But taking advantage of probabilistic systems requires adding some determinism to the mix. You can’t ask ChatGPT to re-organize a website, but you can build deterministic scaffolding around it that takes advantage of what each paradigm does best.
If you work with content, it behooves you to learn how to combine AI’s probabilistic approach with the traditional deterministic approach. That’s what I’ll be teaching in my hands-on workshop at the IA Conference in Philadelphia in late April. Join me there to learn how to do it.