What I Learned about Labeling from Making Hot Cocoa

I love hot cocoa. A friend taught me a great recipe: cocoa powder + maple syrup + homemade cashew cream + hot water. I add a pinch of cayenne pepper for bite. (Cashew cream: soak unroasted/unsalted cashews overnight in water, then liquefy them in a Vitamix.)

Before you try to make this, you need to be aware of an important distinction. In American grocery stores, you’ll find two kinds of cocoa: cocoa mix and cocoa powder. They’re not the same.

Based on the selection of brands and varieties, cocoa mix seems to be more popular. You’ll find it in the same aisle as coffee and tea — i.e., the store assumes that if you want to drink cocoa, you want cocoa mix.

It’s a safe assumption. If you want a cup of hot cocoa, the mix is more convenient: it includes powdered sweetener, creamer, and (in some cases) frills such as freeze-dried marshmallows. You simply add hot water, et voilà — a sweet cup of cocoa.

That’s not what you want for this recipe. Instead, you want cocoa powder, which is just the primary ingredient without the extra stuff.

It may seem subtle, but this distinction matters. Here’s how Wikipedia describes cocoa powder:

Continue reading

Don’t Eat the Menu

Once, I was almost killed while walking in downtown Oakland. I'd waited for the light to change so I could cross the street. When the walk signal came on, I started across. Just then, a car sped through the intersection, missing me by inches.

I’d done everything “right”: I was paying attention (i.e., not looking at my phone), using the crosswalk, and had waited until the light said it was OK for me to go… and I still almost got hit. What happened?

I was “eating the menu,” a phrase I picked up from Anthony de Mello and J. Francis Stroud’s book Awareness. It appears in the context of a rhetorical question:

Continue reading

Everything in its Right Place

Cutlery

At its core, information architecture is about making meaningful distinctions. We set things apart from each other — categorize, group, and sort them — to find and understand them more easily. We do this all the time, and not just with digital information.

For example, you’ll find a particular pair of socks more quickly if your sock drawer is organized than if you dump them there in a loose mess. And categorizing and archiving your receipts up front can save you headaches come tax time.

Continue reading

Meeting the User

Early in my career, a support incident taught me a lesson about mental models. Here’s what happened: I was contracted to create a small promotional app for executive assistants who used Windows PCs. Many didn’t have CD drives, so the app was designed to fit on a floppy disk.

To install the app, users would slide the disk into their computer and double-click on a file called something like INSTALL.EXE. Then they’d follow the onscreen prompts. The disk came with printed instructions that spelled out the process.

Shortly after we released the app, I got a message from the client. A user was having trouble installing the app. Would I mind taking a look? So I drove to the user’s office and asked her to show me what she was doing. What I saw blew me away.

Continue reading

Shipping the Org Chart

While reorganizing my library a few weeks ago, I came across a handout from a 2003 workshop by my friend Lou Rosenfeld titled Enterprise Information Architecture: Because Users Don’t Care About Your Org Chart.

Lots of ideas quickly become obsolete in tech. But after 18 years, the idea that users don’t care about your org chart is still relevant. Teams still ship systems that reflect their internal structures. IA is still crucial to addressing the issue.

Few teams set out to design inwardly-focused systems. Instead, they inadvertently arrive at solutions that feel “natural” — i.e., that mirror their structures. Subtly, the systems they design come to reflect distinctions inherent in their orgs.

Continue reading

Design as an Effective Agent of Change

As software continues to eat the world, digital systems’ conceptual structures matter more than ever. It’s easy to nudge users towards particular choices by making them more prominent. We can use this power for good or bad.

For example, are we helping people eat healthier? Or addicting them to unnecessary services? Alas, choices aren’t always so clear-cut. And even in “clear” cases, we may not be the best arbiters of “good.” Often, the lines between good and bad are blurry.

For example, some retailers tweak search results towards commercial goals. Is that wrong? It depends. Are customers still seeing relevant results? Will they benefit? Same with navigation: It’s easy to bury “undesirable” choices deep in menus.

Continue reading

How to Work with Tension in Design

The ultimate purpose of a design project is to change something. It might be kickstarting sales, making stuff more findable, or addressing a competitive challenge. Whatever it is, the project exists because someone wants something to be different.

Changes reveal tensions. Often, teams are invested in the status quo. For example, sales may want product to introduce new features, while product wants a simpler experience. More capabilities increase complexity, so the two are in tension.

Projects are rife with such tensions — and they often go unacknowledged. Not surprising, since dealing with tensions can be uncomfortable. If you’ve ever been in a meeting with a surly stakeholder, you know how awkward these situations can be.

Continue reading

Clarity vs. Confidence: Starting Conceptual Models Right

Few things are as powerful as a good model of a complex domain. A clear representation of the domain’s key elements and their relationships creates alignment. The model becomes a shared point of reference and shorthand for decision-making.

Good models eschew some complexity, but complex domains aren’t simple. A model that aims to encompass a domain’s full complexity will likely fail at building shared understanding; a model that over-simplifies won’t be useful.

Continue reading

The Key to Understanding Why Things Happen

When a systems thinker encounters a problem, the first thing he or she does is look for data, time graphs, the history of the system. That’s because long term behavior provides clues to the underlying system structure. And structure is the key to understanding not just what is happening, but why.

— Donella H. Meadows, Thinking in Systems

Every year, I introduce systems students to the iceberg model. The model is a helpful way of understanding situations by looking ‘beneath the surface’ of the things we experience, to the structures and mental models they manifest.

In case you’re unfamiliar with the iceberg model, it’s a framework that encourages you to think about situations at four levels:

  1. Events, or the tangible manifestations of the situation; the things we can see, hear, and record — “just the facts.”
  2. Patterns we perceive in events; outcomes that happen not just once but manifest time and again.
  3. Structures that may be causing the patterns we perceive; these could include rules, regulations, incentives, etc.
  4. Mental models that bring these structures into being.

Notice the fourth level is more abstract than the first: we can ascertain events, but we must hypothesize mental models. There’s also a causal relationship between levels: mental models elicit structures that elicit patterns of events.

As a result, events are easier to grok than mental models. But as with pace layers, the deep levels are where the true power lies. A change at the level you can see has less impact than tweaking the mental models that bring it forth. The ability to change minds is an incredibly powerful lever.

The iceberg model is helpful when doing research. Research produces lots of data points: Google Analytics and search logs tell you about usage, landscape analyses tell you about competitors and analogs, user interviews tell you about intent, etc.

But research doesn’t stop with data. Insights only emerge once you spot patterns in data. If lots of people enter the same term into the search box and do not get good results, that tells you something important about your system.
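As a rough sketch of what this kind of pattern-spotting can look like in practice — the log format and query data here are hypothetical, and a real analysis would pull from your analytics export:

```python
from collections import Counter

# Hypothetical search-log records as (query, result_count) pairs.
# In practice, these would come from your search analytics.
search_log = [
    ("return policy", 0),
    ("return policy", 0),
    ("refund", 12),
    ("return policy", 0),
    ("shipping", 8),
    ("return policy", 0),
]

# Count how often each query comes back empty-handed.
zero_result_queries = Counter(
    query for query, result_count in search_log if result_count == 0
)

# Queries that repeatedly fail point at a gap between users' vocabulary
# and the system's content -- a pattern worth investigating.
for query, count in zero_result_queries.most_common():
    print(f"{query!r} returned no results {count} times")
```

A tally like this only surfaces the pattern; the deeper work is hypothesizing why those queries fail — which structures and mental models produce the mismatch.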

But you can go deeper still. Patterns only tell you what is happening, not why. You should at least have a hypothesis about why things are happening. This calls for understanding the underlying structures and the mental models that enable them.

Collaborating on these levels can be uncomfortable since the work is speculative. Acknowledge the awkwardness upfront. Allow the team to speculate. You’re not making anything normative yet, just understanding why things might be happening.

Knowing causes helps produce better outcomes. You might not know causes precisely, but you can test hypotheses. Ultimately, a better understanding of the system’s structures and underlying mental models will lead to more skillful interventions.

Cover image: NOAA’s National Ocean Survey (CC BY 2.0)


Subscribe to my newsletter

If you find this post useful, you may also like my newsletter. Every other Sunday, I share ideas and resources about strategic design, systems thinking, and information architecture. Join us!
