Elegant Simplicity

When defining design principles for a project, someone on the design team will invariably suggest “simplicity.” The drive towards simplicity is understandable: simplicity is often cast as a desirable trait. Simple sells. Simplicity is the ultimate sophistication. Keep it simple, stupid.

But simplicity per se isn’t a good principle. Things can be simple and also inadequate — if you leave out the wrong things. Some things are inherently complex; reducing them to a simpler state can compromise their usefulness or sacrifice their essence.

In most cases what you want isn’t plain simplicity but a simplicity that is appropriate to the problem at hand. You want elegant simplicity: to do the most with the minimum resources (or components) necessary to achieve a particular outcome.

Elegant simplicity is graceful. It embodies efficiency and skill. It’s also hard, since it requires that you understand the system you’re working on and its intended outcomes. Once you do, you can ask questions: What’s essential here? Which components are critical? Where do I focus my efforts?

Appeals for elegant simplicity abound. Saint-Exupéry: “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” Lao Tse: “To attain knowledge, add things every day. To attain wisdom, remove things every day.” (Attributed to) Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

These aren’t calls to hack away at problems arbitrarily. Instead, they speak to intelligent use of materials and ideas; to understanding the point beyond which simplification compromises desired outcomes. It’s a central principle for good design — and for life.

The Urgent Design Questions of Our Time

George Dyson, from his 2019 EDGE New Year’s Essay:

There is now more code than ever, but it is increasingly difficult to find anyone who has their hands on the wheel. Individual agency is on the wane. Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around. The digital revolution has come full circle and the next revolution, an analog revolution, has begun.

For a long time, the central objects of concern for designers have been interfaces: the touchpoints where users interact with systems. This is changing. The central objects of concern now are systems’ underlying models. Increasingly, these models aren’t artifacts designed in the traditional sense. Instead, they emerge from systems that learn about themselves and the contexts they’re operating in, adapt to those contexts, and in so doing change them.

The urgent design questions of our time aren’t about the usability or fitness-to-purpose of forms; they’re about the ethics and control of systems:

  • Are the system’s adaptation cycles virtuous or vicious?
  • Who determines the incentives that drive them?
  • How do we effectively prototype emergent systems so we can avoid unintended consequences?
  • Where, when, and how do we intervene most effectively?
  • Who intervenes?

Childhood’s End: The digital revolution isn’t over but has turned into something else

The Role of Paper in Learning

How do you learn a new subject? Let’s say you’re starting work on a new project, one where you have expertise in the craft but not the domain. You’ll be working alongside subject matter experts. Their time is limited; you don’t want to waste it by asking lots of newbie questions. It’s up to you to come up to speed fast so you can ask relevant questions and help structure the problem.

As a strategic designer, I find myself in this situation often. For example, a few years ago I worked on the design of a system that was meant to be used by neurosurgeons and radiologists. While I’d designed user interfaces for complex systems before, I didn’t know much about neurology. Working in this space required that I get up to speed quickly on a complicated subject matter. (No “it ain’t brain surgery!” jokes on this project!)

Over the years I’ve developed techniques for learning that work for me. I’ve written before about the three-stage model I use. To recap: when learning a new subject, I aim to 1) contextualize it, 2) draw out the distinctions in it, and 3) explore its implications. I strive to make each stage actionable: to make things with the new information I’m learning.

What kinds of things? It depends on the stage of the process I’m in. In the very early stages, it’s mostly scribbles, sketches, and various notes-to-self. Further on in the process, I look to share with others — especially with people who know the subject matter. In both cases, I’m looking to establish a feedback loop. Seeing the ideas out in the world changes my relation to them. I’m reminded of the tagline on Field Notes notebooks: “I’m not writing it down to remember it later, I’m writing it down to remember it now.” Spot-on. The act of putting pen to paper changes my relationship to the idea; the act of articulating it nudges me to give it structure and coherence.

I deliberately chose the phrase “putting pen to paper”; this process doesn’t work as well for me with digital tools. I’ve been experimenting for years with digital sketchbooks, but keep coming back to pen and paper for speed, reliability, and ease of use. That said, digital tools play an essential role. My (continuously evolving) learning ecosystem includes software like OneNote, Ulysses, OmniFocus, and Tinderbox. Recently I’ve also started experimenting with DevonThink. These tools all serve specific needs and do things that paper can’t do.

There’s lots of overlap between these tools, so why have so many of them? It’s tempting to want to cut down the ecosystem to as few tools as possible. But putting everything into a single tool means sacrificing important functionality; the ones that do lots of things don’t do any one of them as well as dedicated tools. For example, OneNote, Tinderbox, and DevonThink can capture reminders, but none of them do it as well as OmniFocus, which is designed for that purpose. (Having OS-level, cross-app search functionality such as macOS’s Spotlight is a boon, since it means not having to remember which app you put stuff into.)

A paper notebook could be the ultimate “does everything” tool. People have been taking notes on paper for many hundreds of years. There are lots of frameworks around that allow you to use plain paper to track commitments (e.g., bullet journals), separate signal from noise (e.g., Cornell notes), etc. Paper is super flexible, so there’s always the temptation to do more with it. But paper is far from perfect for some learning activities. For example, capturing lots of long texts and finding patterns in them (what I’m using DevonThink for) is best done with digital tools.

While the form of my learning ecosystem keeps evolving, it’s increasingly clear what role my paper sketchbook plays: It’s a scratchpad where raw thoughts and ideas emerge. It’s not for capturing long texts. It’s not for sharing with others — not even with future me (i.e., “to remember it later”). Instead, it’s an extension of my mind; a sandbox where I shape for myself — thus internalizing — the things I’m learning.

In practice, this entails jumping back and forth between digital tools and paper. I once aspired to consolidate these steps into a “smart sketchbook” (see here and here) that would allow me to eschew paper. However, I increasingly value the role my physical sketchbook plays in my learning ecosystem. Its limitations are an advantage: using it requires a shift in modality that keeps ideas flowing, vibrant, and malleable.

Trusting a Software Ecosystem

Digital products aren’t monolithic. They depend on systems and infrastructure that aren’t controlled by the team responsible for the product. (Thinking about these things as “products” oversimplifies them. But I digress…) Consider a mobile app that relies on knowing its user’s location. That functionality likely won’t be developed internally by the app’s creators. Instead, they’ll use frameworks provided by mobile operating systems such as Android and iOS.

These operating systems in turn also leverage complex systems that their creators — Google and Apple, respectively — didn’t build themselves. For example, these companies didn’t create (or operate) the Global Positioning System on which location services depend; the U.S. government did. OS providers trust the providers of these systems; app developers trust the OS providers; users trust app developers. It’s a chain of trust.
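To make the shape of this chain concrete, here’s a minimal sketch in TypeScript, using the browser’s Geolocation API as an analogue to the mobile frameworks described above (the locateUser function is my own illustration, not from any particular app):

    // A sketch of the dependency chain from the app developer's side.
    // The app's responsibility ends at this call; everything beneath it
    // (browser, OS location service, GPS) belongs to other links in the
    // chain of trust.
    function locateUser(): Promise<GeolocationPosition> {
      return new Promise((resolve, reject) => {
        if (!("geolocation" in navigator)) {
          reject(new Error("Location services unavailable"));
          return;
        }
        // The platform, not the app, prompts the user for permission,
        // queries the OS location service, and ultimately relies on
        // infrastructure the platform vendor didn't build, such as GPS.
        navigator.geolocation.getCurrentPosition(resolve, reject, {
          enableHighAccuracy: true, // ask for GPS-grade rather than coarse location
          timeout: 10_000, // give the chain ten seconds to respond
        });
      });
    }

    // The app trusts whatever the chain reports back; if any link fails,
    // the user will blame the app, not the link.
    locateUser()
      .then((pos) => console.log(pos.coords.latitude, pos.coords.longitude))
      .catch((err) => console.error("A link in the chain failed:", err.message));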

When you use a ride-sharing app such as Lyft or Uber, you’re trusting this chain. Most people aren’t aware of it being a chain at all; they experience the app as a singular thing. If something goes wrong, they’ll reach out to the party responsible for the means through which they interact with this complex ecosystem: the developers of the app.

If you’re on such a team, you must understand how your product functions as a system and develop a good sense for its interdependencies. It behooves you to understand the business models that underlie each link in the chain. What drives the providers of these services? What could they do with the information you’re providing them? Are their interests aligned with yours? Are they aligned with those of your app’s users?

Structuring the Problem

Designers are dealing with increasingly complex problems. The systems we work with often span both digital and physical domains. Requirements and constraints are more abundant and difficult to articulate. More stakeholders are affected. The workings of the system may be opaque to our clients and us.

One of the biggest challenges of working on such projects is that the problem we’re solving for isn’t apparent. This is not out of ill will or incompetence; some problems are just difficult to pin down. In The Art of Systems Architecting, Mark W. Maier and Eberhardt Rechtin define what they call ill-structured problems:

An “ill-structured” problem is a problem where the statement of the problem depends on the statement of the solution. In other words, knowing what you can do changes your mind about what you want to do. A solution that appears correct based on an initial understanding of the problem may be revealed as wholly inadequate with more experience.

Facing an ill-structured problem is difficult and frustrating. It’s also not uncommon. Complex design projects often start with a vague understanding of the problem we’re designing for, or perhaps we’re solving for several problems that appear incompatible. Solutions are often implicit in the way these problems are articulated.

To do a good job, you must clearly understand and articulate the problem(s) you’re seeking to solve. Stating the problem is the starting point for all that follows; it frames the work to be done. Poorly structured problems lead to poorly structured solutions.

Structuring the problem isn’t something you can expect stakeholders to do. It’s up to you, the designer, to ensure the problem is structured correctly. How do you do it? First, you acknowledge that the initial problem statement will be vague and/or poorly structured. You assume your initial understanding of the problem will be flawed. You then move to develop a better understanding as quickly as possible.

This requires iterating through artifacts that allow both designers and stakeholders to grasp new dimensions of the problem so you can set off in the right direction. The forms these artifacts take vary depending on the type of project you’re dealing with. (Concept maps work well for the types of systems I work on.) You want to establish processes that allow these artifacts to evolve towards greater clarity and focus.

This takes courage. Stakeholders and clients want answers, not vague abstractions. The process of clarifying the problem may point away from initial project directions. Because of this, delving into the problem-definition stage of a project can produce tension. But the alternative — getting to a high degree of fidelity/tangibility prematurely — can lead folks to fall in love with solutions to the wrong problems.

Book Notes: “The Evolution of Everything”

The Evolution of Everything: How New Ideas Emerge
By Matt Ridley
HarperCollins, 2015

Designers are called to tackle increasingly complex problems. This requires that we understand how systems function and how they came to have the configurations we experience. I put it this way because complex systems (at least those that stand the test of time) don’t come into the world fully formed. Instead, they evolve step-by-step from earlier, simpler systems. (See Gall’s Law.) Because of this, it’s essential that we understand the distinction between top-down and bottom-up structuring processes.

That distinction is what drew me to The Evolution of Everything. While not written specifically for designers, the book addresses this subject directly. Per its jacket, the book aims to “definitively [dispel] a dangerous, widespread myth: that we can command and control our world.” It pitches “the forces of evolution” against top-down forces for systems definition. What sorts of systems? Any and all of them: the universe, morality, life, genes, culture, the economy, technology, the mind, personality, education, population, leadership, government, religion, money, the internet.

There’s a chapter devoted to how top-down vs. bottom-up approaches have played out for each of these complex subjects. Mr. Ridley aims to demonstrate that advances in all of them have been the result of evolutionary forces, and the hindrances the result of intentional, planned actions. I don’t think I’m doing the author a disservice by describing it in such binary terms. In the book’s epilogue, Mr. Ridley states his thesis in its “boldest and most surprising form”:

Bad news is man-made, top-down, purposed stuff, imposed on history. Good news is accidental, unplanned, emergent stuff that gradually evolves. The things that go well are largely unintended, the things that go badly are largely intended.

Examples given of the former include the Russian Revolution, the Nazi regime, and the 2008 financial crisis, while examples of the latter include the eradication of infectious diseases, the green revolution, and the internet.

While the whole is engaging and erudite, the earlier chapters, which deal with the evolution of natural systems, are stronger than the later ones, which deal with the evolution of social systems. The book’s political agenda becomes increasingly transparent in these later chapters, often at the expense of the primary top-down vs. bottom-up thesis.

If you already buy into this agenda, you may come away convinced. I wasn’t. Sometimes bottom-up forces enable command-and-control structures and vice-versa. But you’ll find no such nuance here; the book offers its subject as an either-or proposition. This leads to some weak arguments. (E.g., “While we should honour individuals for their contributions, we should not really think that they make something come into existence that would not have otherwise.”)

Understanding the difference between top-down vs. bottom-up structuring is essential for today’s designers. The Evolution of Everything doesn’t entirely dispel the myth that we can command-and-control the world, but it does provide good examples of bottom-up emergence — especially in its earlier chapters. Still, I’d like a more nuanced take on this critical subject.

Buy it on Amazon.com

How to Measure Network Effects

Li Jin and D’Arcy Coolican, writing for Andreessen Horowitz:

Network effects are one of the most important dynamics in software and marketplace businesses. But they’re often spoken of in a binary way: either you have them, or you don’t. In practice, most companies’ network effects are much more complex, falling along a spectrum of different types and strengths. They’re also dynamic and evolve as product, users, and competition changes.

They go on to outline sixteen ways in which network effects can be measured, grouped into five categories:

Acquisition

  • Organic vs. paid users
  • Sources of traffic
  • Time series of paid customer acquisition cost

Competitors

  • Prevalence of multi-tenanting
  • Switching or multi-homing costs

Engagement

  • User retention cohorts
  • Core action retention cohorts
  • Dollar retention & paid user retention cohorts
  • Retention by location/geography
  • Power user curves

Marketplace metrics

  • Match rate (aka utilization rate, success rate, etc.)
  • Market depth
  • Time to find a match (or inventory turnover, or days to turn)
  • Concentration or fragmentation of supply and demand

Economics-related

  • Pricing power
  • Unit economics

I love it when somebody adds granularity and nuance to a concept I previously understood only in binary terms. This post does that for network-centric businesses.
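As an illustration of what computing one of these metrics might involve, here’s a minimal sketch of user retention cohorts (from the Engagement category) in TypeScript. The event shape and function names are my own, not from the a16z post:

    // Group users into cohorts by their first active month, then compute
    // the share of each cohort still active in every subsequent month.
    // Strengthening network effects should show up as newer cohorts'
    // curves flattening out at higher levels.
    interface ActivityEvent {
      userId: string;
      month: string; // zero-padded "YYYY-MM", so string comparison sorts correctly
    }

    function retentionCohorts(
      events: ActivityEvent[],
    ): Map<string, Map<string, number>> {
      // Each user's cohort is their earliest active month.
      const cohortOf = new Map<string, string>();
      for (const e of events) {
        const seen = cohortOf.get(e.userId);
        if (seen === undefined || e.month < seen) cohortOf.set(e.userId, e.month);
      }

      // Cohort size: how many users first appeared in that month.
      const cohortSize = new Map<string, number>();
      for (const m of cohortOf.values()) {
        cohortSize.set(m, (cohortSize.get(m) ?? 0) + 1);
      }

      // Distinct active users per (cohort, month) pair.
      const active = new Map<string, Set<string>>();
      for (const e of events) {
        const key = `${cohortOf.get(e.userId)}|${e.month}`;
        let users = active.get(key);
        if (users === undefined) {
          users = new Set();
          active.set(key, users);
        }
        users.add(e.userId);
      }

      // Retention: active users in a month divided by the cohort's size.
      const retention = new Map<string, Map<string, number>>();
      for (const [key, users] of active) {
        const [cohort, month] = key.split("|");
        let row = retention.get(cohort);
        if (row === undefined) {
          row = new Map();
          retention.set(cohort, row);
        }
        row.set(month, users.size / cohortSize.get(cohort)!);
      }
      return retention; // e.g. retention.get("2019-01")?.get("2019-03")
    }

Plotting each cohort’s row over time yields the retention curves the authors describe; the same scaffolding could presumably extend to core-action or dollar retention by filtering or weighting the events.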

16 Ways to Measure Network Effects

Designing a Better System

One of the occupational hazards systems thinkers face is equating the understanding that something is a system with understanding the system itself. Knowing that the outcomes you see result from complex interactions between myriad components doesn’t endow you with the ability to tweak them skillfully.

In complex systems — the weather, the economy — the interactions between the parts often produce counter-intuitive behavior in the whole. You must observe the functioning system for a long time to develop a useful mental model. (Useful in that it helps you make reasonable predictions about what’s coming next.)

Developing good models of complex systems is very difficult. Even after observing the system for a long time — as people have done with the weather and the economy — what makes it tick may elude us. The larger and more complex the system is, the more there is to take in. Such systems are continually changing; often the best you can do is procure a snapshot of their state at any given time. Also, such systems are often the only ones of their kind. (The sample size for atmospheres precisely like the one that envelops our planet is one.)

The ultimate hazard is hubris. Having understood that something is a system — and perhaps even having developed a good snapshot model of the system — we start to believe we could do as well or better if allowed to start over. You’ll recognize the trap as a violation of Gall’s Law. When dealing with complex systems, no individual — no matter how smart or clairvoyant — can design a better system than one that has evolved to suit a particular purpose over time.

Cybernetics and Planned Economies

Great New Yorker story on Project Cybersyn, Salvador Allende’s failed effort to create a cybernetic system to centralize control of Chile’s economy. Stafford Beer—an important figure in the cybernetics world—consulted on the design of the system. He left Chile before Allende was overthrown in the coup that led to the Pinochet dictatorship. But Beer had become disillusioned with the project before then:

One of the participating engineers described the factory modelling process as “fairly technocratic” and “top down”—it did not involve “speaking to the guy who was actually working on the mill or the spinning machine.” Frustrated with the growing bureaucratization of Project Cybersyn, Beer considered resigning. “If we wanted a new system of government, then it seems that we are not going to get it,” he wrote to his Chilean colleagues that spring. “The team is falling apart, and descending to personal recrimination.” Confined to the language of cybernetics, Beer didn’t know what to do. “I can see no way of practical change that does not very quickly damage the Chilean bureaucracy beyond repair,” he wrote.

This doesn’t surprise me. While I can see the allure of using computer technology to gather real-time feedback on the state of an economy, the control aspect of the Cybersyn project eludes me—especially given the enormous practical and technical constraints the team was facing. What would lead intelligent people to think they could 1) build an accurate model of an entire country’s economy, 2) that would be flexible enough to adapt to the changes that invariably happen in such a complex system, 3) that they could control, 4) using early 1970s computing technology? Hubris? Naïveté? Wishful thinking? More to the point, what lessons does this effort hold for today’s world, which we are wiring up with feedback and control mechanisms Cybersyn’s designers could only dream of? (I’ve long had the book Cybernetic Revolutionaries, which deals with the project’s history, on my reading queue. This New Yorker story has nudged me to move it up a couple of places in the line.)

The Planning Machine: Project Cybersyn and the origins of the Big Data nation