Like many other people, I’m grappling with how to best use AI. I’ve already run a few successful experiments around organizing and making sense of information. I’ve also used it to support my personal note-taking. In both cases, I’m using large language models to augment my thinking — but not to think for me.

“AI” is a misnomer for the current technologies. The phrase oversells LLMs’ capabilities while also failing to do them justice. The word “intelligence” raises expectations LLMs can’t meet: they have no theory of mind and seem incapable of deep conceptual reasoning.

That’s only a problem if you expect LLMs to be “intelligent” — which a lot of people do, given how they’ve been packaged and sold. The blame lies partly in the tools’ dominant UI paradigm, i.e., chatbots. When we interact with LLMs via chat, we attribute to them capabilities they don’t actually have. (Not a new problem; see ELIZA for a mid-1960s precedent.)

We’re far from being able to replace humans at many of the tasks now being delegated to large language models. That doesn’t mean LLMs can’t be helpful; it just means they’re being used for the wrong things. The key is applying them to tasks they’re well suited to, such as analyzing, synthesizing, and manipulating data, rather than to making important decisions on behalf of people.

An essential distinction is whether AIs are helping humans produce things and experiences, or generating the end product or experience directly. In many cases, the former is both more feasible and more desirable. Even if an LLM did a perfect job, the process of writing a book or designing a navigation structure can be as valuable as the output, or more so. In these cases, skipping the process entails a big loss. (A favorite Neil Peart lyric: “The point of the journey is not to arrive.”)

I see confusion about this key point in the two disciplines I track most closely: user experience design and personal knowledge management. In both, many people value the output (the product) more highly than the process of getting to it. That’s a mistake. Often, the process has as much value as the final product, and in the case of PKM, more. Let’s examine how this applies to each discipline.

The point of personal knowledge management isn’t capturing and managing information; it’s living better by thinking better. Notes are a medium for thinking, not its replacement. (Another way to put it: the person with the most notes doesn’t “win” at the end.)

And yet, while researching Duly Noted, I came across people who seemed confused about the difference between building knowledge repositories and building knowledge. They adopt tools like Obsidian or Roam assuming the tools will “think” for them by sparking unexpected insights or connecting ideas in fantastic new ways.

This mindset often ends in disappointment. But more importantly, it misses the point. Notes are evidence that thinking happened — but they are not the thinking itself. If you only get the outcome (i.e., a set of notes and connections between them), you haven’t learned. The tools and their output aren’t the point; the point is how they let you think and learn better.

To make this clearer, imagine an AI agent that parses all your books and outputs a five-bullet summary of each. The result is a comprehensive overview of your library. That might be useful: it may let you understand the collection differently or help you decide what to read next. But the overview won’t give you the insights, knowledge, pleasure, and sense of achievement you’ll get from reading the books yourself. If all you want is an overview, fine. But if what you want is to learn, having the AI “read” the books in your stead won’t cut it.

Something similar happens in UX. Here, AI can be used in two ways: as a production tool (i.e., generating end-user experiences on the fly) or as an aid to design. In both cases, it’s possible to confuse the outcome with the process. I’m currently most interested in AI as an aid to design, so let’s focus on that.

Imagine a (near-)future website redesign project. The team uses an AI-powered tool that parses a set of web pages and produces a new navigation structure. The (human) operator’s role is limited to feeding the tool content, saving the team lots of time and money.

Great, right? Well, only if you assume the primary value of the nav design process is its output, i.e., the new nav structure. But anyone who’s worked on such a project knows that redesigning navigation entails lots of conversations with stakeholders, users, and subject matter experts. The project is a rare opportunity for people across the org to collaborate on a deeply introspective undertaking.

Which is to say, information architecture is a great MacGuffin for alignment. Yes, there’s value in the final nav, but there’s at least as much value in the conversations that happen on the way to designing it. Everyone comes out the other end with a different understanding of who they are as an organization, how they provide value, and for whom. Eschewing this process entails a great loss.

That isn’t to say AI has no role to play. For example, the AI-produced nav structure could serve as a first draft for critique and improvement, expediting the process. AI-powered tools could also cut costs by streamlining tedious, resource-intensive steps such as content audits. But these activities are in service of the process, not a replacement for it.

I’m excited to use AI in these capacities, both in my IA consulting work and in my personal knowledge management. There are many scenarios in which AI can make work faster, cheaper, and more fun. But it’s a mistake to think it’ll get you straight to the “final” output with no downsides. The current hype is leaving many people confused about this. Still, I’m convinced that, in time, we’ll find more effective ways to put these tools to good use.