Karma police, arrest this man
He talks in maths
He buzzes like a fridge
He’s like a detuned radio
— Radiohead, Karma Police

Have you ever met someone who “talks in maths”? You know, someone who uses undecipherable jargon? It’s frustrating. The person comes across as aloof or worse. They can make you feel inadequate and disrespected. No wonder Thom Yorke wants them arrested.

It’s not just people who speak in maths. Products and websites do it too. You’ve likely run across products or sites that used terminology you didn’t understand; they’re especially common in government and in highly regulated industries such as finance and healthcare.

It’s bad when jargon seeps into content. But it’s worse when it also infects navigation structures and taxonomies. Unclear choices don’t just make it difficult for people to move around the site, they also keep them from creating good mental models about the space.

‘Usability’ has long been a key concern for people who design interactive systems. Designers have publications, conferences, and professional associations dedicated to making systems more usable. Their focus has mostly been on ergonomics: ensuring people can interact with the system to complete certain tasks. But there’s more to usability than improving UI interactions.

Principles such as Fitts’s law have enabled us to design more usable systems. Modern design systems have internalized these principles, so it’s easier than ever to design ‘usable’ systems — at least from this ‘ergonomic’ angle. The exceptions (e.g., open source applications with idiosyncratic UIs) prove the rule.
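For readers unfamiliar with it, Fitts’s law predicts how long it takes to acquire a target (say, a button) from its distance and size. In its common Shannon formulation:

```latex
T = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where T is the movement time, D the distance to the target, W the target’s width along the axis of motion, and a and b are empirically fitted constants. The practical upshot: larger, closer targets are faster to hit — which is why design systems favor generous tap areas and edge- or corner-anchored controls.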

But a product can implement ‘good’ UI patterns and still show cryptic choices to users. Which is to say, designers must also aim for semantic clarity. Does the user understand the intent behind a particular element or label, or is it ambiguous? Does the UI allow users to build good mental models?

The goal is reducing cognitive load. Users should understand what they’re looking at as simply as possible. You wouldn’t make customers in a physical retail store jump through hoops to buy stuff, so why do it in interactive systems? As Steve Krug’s memorable title puts it, Don’t Make Me Think.

Note that I’m not talking about reducing friction, since friction can be good. Consider a case where the user is about to take an irreversible or dangerous action: they should pause and think before pressing that button. Such situations call for a bit of friction. What you don’t want is unnecessary friction.

I’m also not talking about making things ‘intuitive.’ This is one of the least helpful distinctions in UI design. As the old saw has it, for humans, the only intuitive interface is the nipple. Everything else must be learned.

To this end, you want semantic labels that meet people where they are. If they know a lot about the domain, then by all means, go ahead and use jargon. (Otherwise, advanced users might reject ‘dumbed down’ systems.) But don’t inadvertently foist unclear language on users.

Understandability has very real business consequences. Reducing cognitive load is a prerequisite to engagement and conversion. People won’t buy or use things they don’t understand. Poor understandability affects brand value.

For example, yesterday, I tried pairing my phone to a Kenwood car radio. After a couple of minutes, I started thinking the product’s designers must have assumed users would have a manual at hand when doing this common task.

The radio’s menu system and processes were so poorly considered — and the experience of using the system so frustrating — that I will never consider buying a Kenwood product in the future. The radio sounds good. (Yes, I finally got it to work — although I still don’t understand how!) But that’s table stakes. The experience sucked.

And it sucked despite a workable display and relatively nice, tactile buttons. That is, the system’s usability problems are not (just) in its ergonomics but in its understandability. The radio spoke in maths. Its semantic structures — the combination of labels and how they related to each other to enable particular tasks — confounded my expectations.

Which is to say, even though the interaction mechanics could be improved, the underlying problem was with the system’s information architecture, at least as manifested in this UI. (People never experience IA in the abstract; UI affects how they perceive the IA.)

Fortunately, we know how to architect systems to make them more understandable:

  1. Use clear, recognizable terms. This principle might seem trite. After all, who would want to design a system that uses obscure terminology? But there are forces working against clarity. For one, product teams tend to internalize jargon that is unfamiliar to their customers. Over time, insiders forget these terms don’t register with most people. For another, marketing teams often push for using novel proprietary terms in UIs. In either case, you must strive to understand the domain from your users’ perspective.
  2. Provide good hierarchies and taxonomies. It’s not enough to use clear labels: relationships between them also matter. For example, users expect some concepts will ‘contain’ other concepts in the domain. They also expect particular options to be present; their absence will cause users to doubt themselves or the system. Placing one phrase next to another changes how users understand both. The whole is greater than the sum of the parts.
  3. Provide visual cues and feedback. Yes, I’m talking about UI. Like I said, nobody experiences IA in the abstract. When users click or tap on an option, they must clearly perceive the effects of that action. These effects must be predictable, if not with the first interaction, at least when done repeatedly. Some interactions will fail; communicate such failures gracefully.
  4. Balance unfamiliar ‘teaching’ elements with more recognizable ones. Some tasks require unfamiliar terminology. Alternatively, you may want to induct users into a particular conceptual model. (E.g., Apple brands Apple Watch widgets as ‘complications.’) These are ‘didactic’ elements in the architecture: they aim to teach the user about the system. Don’t go overboard with these; balance them out (and surround them) with more familiar concepts.
  5. Simplify.

The key idea is that you are not your users. To design understandable systems, you must first understand how your users understand the system’s domain. In particular, you must grok their mental models:

  • What terms do users use when talking about this space?
  • What distinctions do they make?
  • How do they expect concepts to relate to each other?
  • What choices do users expect to have?
  • Etc.

Do the research. Interview people. That said, don’t expect them to spell out their mental models. This stuff is too meta; most people don’t go around thinking about how they think about things. Instead, look for indirect approaches.

For example, when working on a product’s conceptual model, ask users to sketch a new version of its settings screen. You likely won’t get any surprising insights about its UI, but participants will unwittingly reveal how they expect the system should work.

When you have a clear idea of users’ mental models about the domain and a clear system conceptual model (i.e., the concepts it must expose to users so they can accomplish their tasks), then you can bridge the two. It’s a translation challenge.

Then, test and iterate. Do it again and again and again. Don’t assume things are clear; let users demonstrate clarity through successful interactions. Tweak and tweak again. Again, the usability disciplines have given us wonderful methodologies to hash these things out.

Anything can be made more understandable. And more understandable systems are more used and loved. But it’s not easy. The biggest obstacle is familiarity: you must overcome the notion that your view of the system is ‘correct.’ The only correct model is the one users can work with — even if only as a step toward better understanding.