Recently, I wrote about using AI to solve taxonomy drift — the all-too-common problem of lists of terms (tags, categories) falling out of sync with the content they describe. A response to that post raised an important distinction worth clarifying: the difference between creating taxonomies and applying them.

First, a bit of context. I’m talking specifically about taxonomies for organizing web content. CMSs like Drupal and WordPress allow authors to tag content items, helping users find information later. But both content and taxonomies evolve over time, and tagging consistently is a challenge — especially for small, resource-strapped teams.

In my earlier post, I suggested AI can help with this challenge. But what, exactly, should AI be doing? Here’s where confusion arises.

The approach I’m working with uses AI to tag content with terms from a predefined taxonomy. This is different from having AI generate new taxonomy terms. Put simply: I’m using AI to apply taxonomies, not create them. These are different challenges.
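
To make the distinction concrete, here’s a rough sketch of what “applying, not creating” might look like in code. It assumes an OpenAI-style chat API; the taxonomy terms, model name, and function names are placeholders for illustration, not a recommendation.

```python
# A rough illustration of "apply, don't create": the taxonomy is fixed up
# front, and the model may only choose from it. Terms, model name, and
# function names are placeholders.
from openai import OpenAI

TAXONOMY = ["accessibility", "content strategy", "design systems",
            "information architecture", "user research"]

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def tag_content(body: str, max_tags: int = 3) -> list[str]:
    """Ask the model to pick terms from the predefined taxonomy only."""
    prompt = (
        "Tag the following content using ONLY terms from this list:\n"
        f"{', '.join(TAXONOMY)}\n\n"
        f"Return at most {max_tags} terms, one per line.\n\n"
        f"CONTENT:\n{body}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    raw = response.choices[0].message.content or ""
    # Keep only terms that appear in the predefined taxonomy; the model
    # applies the list, it doesn't get to extend it.
    candidates = (line.strip(" -•").lower() for line in raw.splitlines())
    return [t for t in candidates if t in TAXONOMY][:max_tags]
```

The final filter is the important bit: whatever the model suggests, only terms already in the taxonomy make it onto the content.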

AIs are better than humans at spotting patterns in text at scale, which makes them useful for applying terms consistently. But humans are still better at the work of defining taxonomies — deciding which terms to include and how they relate to each other.

A taxonomy isn’t just a set of words and phrases; it’s also a model that reflects how users understand the domain, aligns with an organization’s strategic goals, and fits within broader cultural norms. Defining this model requires, among other things, judgment, contextual awareness, and an understanding of strategic priorities and organizational politics. Doing these things well is beyond the grasp of current AI systems.

Which isn’t to say AI can’t help; of course it can. LLMs excel at processing large volumes of content, finding patterns, summarizing, and outlining. In this capacity, they can be an invaluable aid to taxonomists and information architects. But there’s another way AI can help: one that’s both more profound and more exciting.

Taxonomies aren’t static or defined in the abstract. Gaps become apparent only during application. You might realize you’re missing important terms when you start tagging content at scale. Or you might find that part of the model isn’t quite right and needs adjustment. The problem is that tagging content at scale takes time, so the effects of these tweaks aren’t immediately visible.

This isn’t how other creative disciplines work. When painters daub oil on a canvas, they can see how colors spread and combine in real time. They alter the mix and strokes to achieve particular effects. They can do this because they get immediate feedback on the effects of their actions.

Taxonomists have lacked this level of feedback at scale — until now. AI makes it possible to see how distinctions apply across large sets of content much faster. It helps reveal gaps, inconsistencies, and opportunities for refinement at a different scale and speed, pointing to a new way of working.
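
As a sketch of what that faster feedback loop might look like, reusing the tag_content function from the earlier example: tag everything, count how often each term is used, and flag content that nothing in the taxonomy fits. The corpus here is invented for illustration; in practice it would come from the CMS.

```python
# A toy feedback loop, reusing tag_content from the sketch above.
# The corpus is invented; in practice it would come from the CMS.
from collections import Counter

corpus = {
    "post-101": "Notes on running moderated usability sessions remotely...",
    "post-102": "Why our navigation labels confuse first-time visitors...",
    # ...the rest of the site's content
}

term_usage = Counter()
unmatched = []

for item_id, body in corpus.items():
    tags = tag_content(body)
    term_usage.update(tags)
    if not tags:
        unmatched.append(item_id)  # no term in the taxonomy fit this item

print("How often each term was applied:", term_usage.most_common())
print("Content no term fits:", unmatched)  # candidates for new or revised terms
```

Unused terms and unmatched content are exactly the kinds of gaps that used to surface only after months of manual tagging.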

So, even though defining and applying taxonomies are different tasks, AI will likely blur the line between them. And that might be a good thing — so long as humans are still wielding the brushes and palettes.