Elegant Simplicity

When defining design principles for a project, someone in the design team will invariably suggest “simplicity.” The drive towards simplicity is understandable: simplicity is often cast as a desirable trait. Simple sells. Simplicity is the ultimate sophistication. Keep it simple, stupid.

But simplicity per se isn’t a good principle. Things can be simple and also inadequate — if you leave out the wrong things. Some things are inherently complex; reducing them to a simpler state can compromise their usefulness or sacrifice their essence.

In most cases what you want isn’t plain simplicity but a simplicity that is appropriate to the problem at hand. You want elegant simplicity: to do the most with the minimum resources (or components) necessary to achieve a particular outcome.

Elegant simplicity is graceful. It embodies efficiency and skill. It’s also hard, since it requires that you understand the system you’re working on and its intended outcomes. Once you do, you can ask questions: What’s essential here? Which components are critical? Where do I focus my efforts?

Appeals for elegant simplicity abound. Saint-Exupéry: “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” Lao Tse: “To attain knowledge, add things every day. To attain wisdom, remove things every day.” (Attributed to) Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

These aren’t calls for us to hack away arbitrarily at problems. Instead, they speak to the intelligent use of materials and ideas; to understanding the point beyond which simplification compromises desired outcomes. Elegant simplicity is a central principle for good design — and for life.

How to Understand a Complex Subject

Sometimes you need to understand a complex subject. When first getting into it, you’re faced with lots of new concepts and ideas, unfamiliar language, unexpected connections between terms, etc. There’s lots of information to digest. Where do you start? How do you make sense of it all?

Understanding complex subjects is a meta-skill: a skill that helps you become better at acquiring other skills. When you hone your ability to understand, learning new things becomes easier. Improving your sense-making skills is a powerful boost for your effectiveness.

Concept mapping is the best practice I’ve found for making sense of complex subjects. A concept map is a visual representation of the relationships between concepts that affect a particular problem or domain. In contrast to a linear exposition of the subject, a concept map lets you pick the starting point for your investigation and allows you to see details in the context of the big picture. A well-crafted map achieves the goal Richard Saul Wurman laid out for information architects: to help others find their own paths to knowledge.

The best conceptual mapper I know is Hugh Dubberly. The Dubberly Design Office website has an entire section dedicated to showcasing their beautiful and insightful maps. These maps are inspiring — and also a bit intimidating. But concept maps needn’t be elaborate or polished to be valuable.

A post on the DDO blog shows you how to create your own concept maps. I use this approach with my students and in my professional work; it’s the best way I’ve found to understand complex subjects.
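Under the hood, a concept map is just a labeled graph: concepts as nodes, relationships as labeled edges, with each triple readable as a sentence. Here is a minimal sketch in Python; the concepts and relationship labels are invented for illustration, not taken from the DDO method:

```python
# A concept map is a graph: concepts (nodes) joined by labeled
# relationships (edges). Each (concept, relationship, concept) triple
# reads as a plain-English proposition.
concept_map = [
    ("information architecture", "structures", "information environments"),
    ("information environments", "are experienced through", "user interfaces"),
    ("concept maps", "make visible", "relationships between concepts"),
    ("relationships between concepts", "reveal", "the big picture"),
]

def readable(triples):
    """Render each triple as a sentence you can sanity-check by reading it aloud."""
    return [f"{a} {rel} {b}." for a, rel, b in triples]

for sentence in readable(concept_map):
    print(sentence)
```

Reading each edge back as a sentence is a useful test of the map itself: if the sentence sounds wrong, the relationship label (or the map) needs work.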

Upcoming IA Workshops — Spring 2019

Information architecture is more important today than ever before. However, many digital designers don’t realize there’s an area of practice dedicated to structuring information to make it easier to find and understand. That’s why I created my Information Architecture Essentials workshop. It’s a great way to get started making more relevant and valuable digital products and services.

I’ll be teaching the workshop in Zurich in February as part of World IA Day Switzerland, and in Orlando in March as part of the IA Conference. Can’t join us for either of those events? Well, I can also lead bespoke instances of the IA Essentials workshop to help internal design teams come up to speed quickly on information architecture. Please get in touch if you’d like to have me teach the workshop at your organization.

The Role of Structure in Digital Design

Andy Fitzgerald, in A List Apart:

design efforts that focus on creating visually effective pages are no longer sufficient to ensure the integrity or accuracy of content published on the web. Rather, by focusing on providing access to information in a structured, systematic way that is legible to both humans and machines, content publishers can ensure that their content is both accessible and accurate in these new contexts, whether or not they’re producing chatbots or tapping into AI directly.

Digital designers have long considered user interfaces to be the primary artifacts of their work. For many, the structures that inform these interfaces have been relegated to a secondary role — that is, if they’ve been considered at all.

Thanks to the revolution sparked by the iPhone, today we experience information environments through a variety of device form factors. Thus far, these interactions have mostly happened on screen-based devices, but that’s changing too. And to top things off, digital experiences are becoming ever more central to our social fabric.

Designing an information environment in 2019 without considering its underlying structures — and how they evolve — is a form of malpractice.
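One concrete form of the machine-legible structure Fitzgerald describes is schema.org-style metadata. A minimal sketch in Python follows; the keys come from the real schema.org Article vocabulary, but the article values are invented:

```python
import json

# Content stored as labeled fields rather than as a styled page. The keys
# follow the schema.org "Article" vocabulary; the values are made up.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Role of Structure in Digital Design",
    "author": {"@type": "Person", "name": "Jane Doe"},
    "datePublished": "2019-01-15",
    "articleBody": "Structure, not visual styling, carries the meaning.",
}

# The same record can feed a web page, a voice assistant, or a chatbot:
# the structure travels with the content, independent of any one rendering.
print(json.dumps(article, indent=2))
```

Because the meaning lives in the fields rather than in the layout, any consumer that understands the vocabulary — human or machine — can repurpose the content.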

Conversations with Robots: Voice, Smart Agents & the Case for Structured Content

TAOI: Adding More Context to Tweets

The architecture of information:

According to a report on The Verge, Twitter will soon start testing new ways of displaying tweets that should give them more context. Some features clarify messages’ positions in conversations using reply threads:

I’m more intrigued by two other features: availability indicators and context tags. The former are green bubbles next to the user’s name that indicate whether s/he is online and using the app at any given time. (Much like other chat systems do.) The latter are tags that allow users to indicate what a tweet refers to. Having a bit more context on what a tweet is about should help avoid non-sequiturs. (I assume it would also make it easier to filter out things you don’t want to bother with.)

Images: Twitter

Features like these should drive engagement on Twitter and add clarity for users; a case of alignment between the company’s goals and those of its users.

Twitter is rolling out speech bubbles to select users in the coming weeks

The Urgent Design Questions of Our Time

George Dyson, from his 2019 EDGE New Year’s Essay:

There is now more code than ever, but it is increasingly difficult to find anyone who has their hands on the wheel. Individual agency is on the wane. Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around. The digital revolution has come full circle and the next revolution, an analog revolution, has begun.

For a long time, the central objects of concern for designers have been interfaces: the touchpoints where users interact with systems. This is changing. The central objects of concern now are systems’ underlying models. Increasingly, these models aren’t artifacts designed in the traditional sense. Instead, they emerge from systems that learn about themselves and the contexts they’re operating in, adapt to those contexts, and in so doing change them.

The urgent design questions of our time aren’t about the usability or fitness-to-purpose of forms; they’re about the ethics and control of systems:

  • Are the system’s adaptation cycles virtuous or vicious?
  • Who determines the incentives that drive them?
  • How do we effectively prototype emergent systems so we can avoid unintended consequences?
  • Where, when, and how do we intervene most effectively?
  • Who intervenes?

Childhood’s End: The digital revolution isn’t over but has turned into something else

A Bold Example of Semantic Pollution

Sometimes language changes slowly and inadvertently. The meaning of words can change over time as language evolves. That’s how many semantic environments become polluted: little by little. But sometimes change happens abruptly and purposefully. This past weekend, AT&T gave us an excellent example of how to pollute a semantic environment in one blow.

Today’s mobile phone networks work on what’s known as 4G technology. It’s a standard that’s widely adopted by the mobile communications industry. When your smartphone connects to a 4G network, you see a little icon on your phone’s screen that says either 4G or LTE. These 4G networks are plenty fast for most uses today.

However, the industry is working on the next-generation network technology called — you guessed it — 5G. The first 5G devices are already appearing on the market. That said, widespread rollout won’t be immediate: the new technology requires new hardware on phones, changes to cell towers, and a host of other upgrades. It’ll likely be a couple of years before the new standard becomes mainstream.

Despite these technical hurdles, last weekend AT&T started issuing updates to some Android phones on its network that change the network label to 5G. Nothing else is different about these devices; their hardware is still the same and they still connect using the same network technology. So what’s the reason for the change? AT&T has decided to label some advanced current-generation technologies “5G E.” When the real 5G comes around, they’ll call that “5G+.”

This seems like an effort to make the AT&T network look more advanced than those of its competitors. The result, of course, is that this change confuses what 5G means. It erodes the usefulness of the term; afterward, it’ll be harder for nontechnical AT&T customers to know what technology they’re using. It’s a bold example of how to co-opt language at the expense of clarity and understanding.

AT&T decides 4G is now “5G,” starts issuing icon-changing software updates

The Role of Paper in Learning

How do you learn a new subject? Let’s say you’re starting work on a new project, one where you have expertise in the craft but not the domain. You’ll be working alongside subject matter experts. Their time is limited; you don’t want to waste it by asking lots of newbie questions. It’s up to you to come up to speed fast so you can ask relevant questions and help structure the problem.

As a strategic designer, I find myself in this situation often. For example, a few years ago I worked on the design of a system that was meant to be used by neurosurgeons and radiologists. While I’d designed user interfaces for complex systems before, I didn’t know much about neurology. Working in this space required that I get up to speed quickly on a complicated subject matter. (No “it ain’t brain surgery!” jokes on this project!)

Over the years I’ve developed techniques for learning that work for me. I’ve written before about the three-stage model I use. To recap: when learning a new subject, I aim to 1) contextualize it, 2) draw out the distinctions in it, and 3) explore its implications. I strive to make each stage actionable: to make things with the new information I’m learning.

What kinds of things? It depends on the stage of the process I’m in. In the very early stages, it’s mostly scribbles, sketches, and various notes-to-self. Further on in the process, I look to share with others — especially with people who know the subject matter. In both cases, I’m looking to establish a feedback loop. Seeing the ideas out in the world changes my relation to them. I’m reminded of the tagline on Field Notes notebooks: “I’m not writing it down to remember it later, I’m writing it down to remember it now.” Spot-on. The act of putting pen to paper changes my relationship to the idea; the act of articulating it nudges me to give it structure and coherence.

I deliberately chose the phrase “putting pen to paper”; this process doesn’t work as well for me with digital tools. I’ve been experimenting for years with digital sketchbooks, but keep coming back to pen and paper for speed, reliability, and ease of use. That said, digital tools play an essential role. My (continuously evolving) learning ecosystem includes software like OneNote, Ulysses, OmniFocus, and Tinderbox. Recently I’ve also started experimenting with DEVONthink. These tools all serve specific needs and do things that paper can’t do.

There’s lots of overlap between these tools, so why have so many of them? It’s tempting to want to cut down the ecosystem to as few tools as possible. But putting everything into a single tool means sacrificing important functionality; the ones that do lots of things don’t do any one of them as well as dedicated tools. For example, OneNote, Tinderbox, and DevonThink can capture reminders, but none of them do it as well as OmniFocus, which is designed for that purpose. (Having OS-level, cross-app search functionality such as macOS’s Spotlight is a boon, since it means not having to remember which app you put stuff into.)

A paper notebook could be the ultimate “does everything” tool. People have been taking notes on paper for many hundreds of years. There are lots of frameworks around that allow you to use plain paper to track commitments (e.g., bullet journals), separate signal from noise (e.g., Cornell notes), etc. Paper is super flexible, so there’s always the temptation to do more with it. But paper is far from perfect for some learning activities. For example, capturing lots of long texts and finding patterns in them (what I’m using DEVONthink for) is best done with digital tools.

While the form of my learning ecosystem keeps evolving, it’s increasingly clear what role my paper sketchbook plays: It’s a scratchpad where raw thoughts and ideas emerge. It’s not for capturing long texts. It’s not for sharing with others — not even with future me (i.e., “to remember it later”). Instead, it’s an extension of my mind; a sandbox where I shape for myself — thus internalizing — the things I’m learning.

In practice, this entails jumping back and forth between digital tools and paper. I once aspired to consolidate these steps into a “smart sketchbook” (see here and here) that would allow me to eschew paper. However, I increasingly value the role my physical sketchbook plays in my learning ecosystem. Its limitations are an advantage: using it requires a shift in modality that keeps ideas flowing, vibrant, and malleable.

Framing the Problem

Jon Kolko on problem framing:

The goal of research is widely claimed to be about empathy building and understanding so we can identify and solve problems, and that’s not wrong. But it ignores one of the most important parts of research as an input for design strategy. Research helps produce a problem frame.

A conundrum: The way we articulate design problems implies solutions. At the beginning of a project, we often don’t know enough to communicate the problem well. As a result, we could do an excellent job of solving the wrong thing.

Addressing complex design problems — “solving” them — requires that we define them; that we put a frame around the problem space. This frame emerges from a feedback loop: a round of research leads to some definition, which in turn focuses the next round of research activities, which leads to more definition, etc.

Framing the problem in the way described by Mr. Kolko — by using research to define boundaries and relevant context, and using the resulting insights to guide further research — is a practical way to focus ill-structured problems. It’s an often overlooked part of the design process, and — especially in complex problems — a critical one.

Problem Framing, Not Problem Finding