Towards Greater Diversity in Design Teams

The websites and apps you interact with are parts of systems. These systems are often commercial organizations with responsibilities to various stakeholders, including the owners of the business, its employees and managers, its customers, and — more broadly — the rest of us who live in the society where the organization operates.

The people who “own” these digital products and services — product owners, business line managers, etc. — are tasked with being good stewards of these systems. They’re called to steer them towards greater value for stakeholders in the short and long term even as conditions around the systems change. Design decisions will change these systems — even if slightly. For example, the team could develop a new feature, fix an existing (and underperforming) feature, or address an entirely new user audience.

These are systemic interventions. Their effects are seldom limited to the task at hand; a seemingly minor alteration could have a large impact downstream. As a result, product owners must look out for second- and third-order effects; they’re looking to intervene skillfully as the system faces perturbations in its context.

To do this, product owners must become aware of the possible options open to them and their potential effects. Their ultimate goal is to achieve dynamic stability: for the system to continue serving its intended purposes as it evolves over time to address changing conditions. This calls for these folks to become systems thinkers.

One of the central tenets of cybernetics — the science of systems — is the Law of Requisite Variety, formulated by W. Ross Ashby. It’s relevant to anyone who aims to control a system. In cybernetics, the word variety has a special meaning: it refers to the number of possible states of a system. The Law of Requisite Variety says that skillful control of a system requires at least as many possible responses as the system has possible states. This is usually articulated as a maxim: only variety can destroy variety.

Translation into humanspeak: a system with few possible states requires a small range of responses, whereas a system with many possible states requires a broad range of responses. This idea has proven useful in fields as varied as sports, ecology, management, and medicine. The more complex the system you’re dealing with, the more states it can be in. Controlling such systems requires at least as much flexibility in how you can respond to changes.
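To make the law concrete, here’s a minimal sketch (my own illustration, not from the original post). It assumes a toy system in which the outcome is simply the disturbance plus the regulator’s chosen response, modulo the number of possible disturbance states, and it counts how few distinct outcomes a regulator can achieve with a given repertoire of responses:

```python
# A toy illustration of the Law of Requisite Variety (an assumption-laden
# sketch, not from the post). The system's outcome depends on a disturbance
# and the regulator's response:
#   outcome = (disturbance + response) % num_disturbances
# The regulator tries to keep the outcome pinned to a single target value.

def best_outcome_variety(num_disturbances, responses):
    """Smallest number of distinct outcomes the regulator can achieve when,
    for each disturbance, it may pick any response from `responses`."""
    target = 0
    outcomes = set()
    for d in range(num_disturbances):
        # Choose the response that lands this disturbance closest to the target.
        best = min(((d + r) % num_disturbances for r in responses),
                   key=lambda o: abs(o - target))
        outcomes.add(best)
    return len(outcomes)

N = 12  # the environment can disturb the system into 12 different states

# With as many responses as disturbance states, every disturbance can be
# cancelled: outcome variety collapses to 1.
print(best_outcome_variety(N, list(range(N))))  # -> 1

# With only 3 responses, the regulator can do no better than 4 distinct
# outcomes (12 / 3): only variety can destroy variety.
print(best_outcome_variety(N, [0, 4, 8]))       # -> 4
```

With twelve possible disturbances and twelve possible responses, the regulator can cancel every disturbance and hold the outcome steady; with only three responses, the best it can manage is four distinct outcomes, exactly the ratio the law predicts.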

Of course, not all digital products and services aim to serve the same purposes. Some are simpler — and less ambitious — than others. Simpler systems will have — and require — less variety. But many digital products and services are very complex and can have many possible states. A digital system that aspires to become the de facto environment where we interact — socially, commercially, civically, etc. — will have a huge range of possible states. The folks who design and manage these systems face a great deal of variety. To intervene skillfully, they need a larger range of possible responses. Among other things, this calls for greater diversity in their teams.

Purposeful Governance

Some systems are best left alone. For example, a rainforest can function perfectly well without human intervention. That’s a natural system that evolved into its current configuration over a long time, and it’s likely to continue adapting to changing conditions. (Barring some major environmental disruption.)

Most human-made systems haven’t had as much time to adapt; they’re aggregates of design decisions that may or may not effectively serve their intended purposes. Some of these interventions may truly be in service to the systems’ goals, but others may be driven by political motivations. (That’s one reason why you should think small when designing a system from scratch.)

As with the rainforest, conditions around the human-made system will change over time. How will the system address these changes? Designing the system itself is not enough; the design team must also design the system that continues the ongoing design of the system. We call this governance. Governance, government, governing: they all have to do with ongoing interventions aimed at keeping systems functioning as intended. All three derive from the Greek word kubernan (“to steer”), which is also the root of the word cybernetics. Governing is a quintessentially systemic activity.

When do you intervene? How do you intervene? With how much force? How frequently? Who intervenes? If the intent is to keep systems functioning for a long time, these questions are essential. They also imply a corollary: you must know what you’re governing towards. What’s the purpose of the system? What are its intended outcomes? You can’t steer effectively if you’re unclear on the destination.

The Limits of the Ethical Designer

Curt Arledge, writing on his company’s blog:

As our discourse about design ethics matures, we need better models for understanding this big, squishy subject so that we’re not talking about everything all at once. What does it really mean to be an ethical designer? What is most important, and what should we care about the most? What power do we really have to make a difference, and how should we use it?

Mr. Arledge offers a model that divides the areas of concern into three layers:

  • Interface
  • Business
  • Infrastructure

The stack goes from specific and concrete at the top to systemic and abstract at the bottom. This seems like a useful way of understanding the domain — and especially the parts where designers have the most influence on the problem.

That said, design work is medium-agnostic. There’s no reason why designers should constrain themselves to only the layers that have to do with the interface. There are many problems at the business and infrastructure layers that would be well-served by strategic design.

This is one of the central points in Living in Information, where I present a similar model. It’s encouraging to see other designers thinking along these lines.

Design Ethics and the Limits of the Ethical Designer

Three Lessons From the Work of Charles & Ray Eames

Last weekend I had the opportunity to take my kids to see an exhibit of the work of Charles and Ray Eames at the Oakland Museum of California. The Eameses are among the most famous designers ever, so little of the work on display was unfamiliar to me. Still, seeing so much of it together in one place was inspiring and enlightening.

The Eameses had a compelling mix of rigor and joie de vivre that has universal appeal. The show captures the playfulness of the resulting work. (My kids were a bit apprehensive about going to see a museum exhibit but got into it once they realized some of the items on display were toys they could play with.)

Three ideas stood out to me during this visit that I thought were worth sharing. They apply to design in all domains.

Framing is a creative act

Careful composition and selection — determining what to leave out of a problem domain — opens up new ways of understanding and approaching familiar problems. As Brian Eno has written, “A frame is a way of creating a little world round something… Is there anything in a work that is not frame, actually?”

So much of the Eameses’ work was about creative framing of ordinary things. In their myriad photographs, framing was the central (and literal) creative gesture; Powers of Ten moves the frame up and down levels of granularity to change our understanding of our place in the universe; the Case Study Houses re-frame the materials, construction techniques, and aesthetic of housing.

Accommodate a range of experiences

In the part of the show that presented the Eameses’ Mathematica exhibit, a quote from Charles Eames stood out to me; it reflected their aspirations for the exhibit. He said, “[Mathematica] should be of interest to a bright student and not embarrass the most knowledgeable.”

A physical model of a Möbius strip, part of the Eameses’ Mathematica exhibit. Image by Ryan Somma, CC BY-SA 2.0, via Wikimedia Commons

The idea of accommodating a range of experiences is very important and, in some cases, challenging. Sometimes we must design for users who have very different perspectives and degrees of experience. This calls for 1) a solid understanding of the problem domain, 2) a beginner’s mind, and 3) testing and iterating.

It’s structure all the way down

I’ve always been inspired by the breadth of the Eames Office’s output. They excelled in film, graphic design, industrial design, architecture, exhibit design, and more. Beyond the obvious joy the Eameses got from experimenting with media, materials, techniques, and craft, the unifying conceptual drive behind all of this work was an acknowledgment that it was all underpinned by structure.

Photo by Cliff Hutson on Flickr

A building has structure. House of Cards — a delightful toy consisting of playing cards with carefully placed slits that allow them to be interconnected — has structure. So does a chair, and so does a film. Even given the wide scope of their work — and the fact that most people saw them as “designers” — Charles Eames saw himself as an architect. “I can’t help but look at the problems around us as problems of structure,” he said, “and structure is architecture.”

The World of Charles and Ray Eames runs at the Oakland Museum of California through February 18, 2019.

Elegant Simplicity

When defining design principles for a project, someone in the design team will invariably suggest “simplicity.” The drive towards simplicity is understandable: simplicity is often cast as a desirable trait. Simple sells. Simplicity is the ultimate sophistication. Keep it simple, stupid.

But simplicity per se isn’t a good principle. Things can be simple and also inadequate — if you leave out the wrong things. Some things are inherently complex; reducing them to a simpler state can compromise their usefulness or sacrifice their essence.

In most cases what you want isn’t plain simplicity but a simplicity that is appropriate to the problem at hand. You want elegant simplicity: to do the most with the minimum resources (or components) necessary to achieve a particular outcome.

Elegant simplicity is graceful. It embodies efficiency and skill. It’s also hard, since it requires that you understand the system you’re working on and its intended outcomes. Once you do, you can ask questions: What’s essential here? Which components are critical? Where do I focus my efforts?

Appeals for elegant simplicity abound. Saint-Exupéry: “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.” Lao Tse: “To attain knowledge, add things every day. To attain wisdom, remove things every day.” (Attributed to) Albert Einstein: “Everything should be made as simple as possible, but not simpler.”

These aren’t calls for us to hack away at problems arbitrarily. Instead, they speak to the intelligent use of materials and ideas; to understanding the point beyond which simplification compromises desired outcomes. It’s a central principle for good design — and for life.

The Urgent Design Questions of Our Time

George Dyson, from his 2019 EDGE New Year’s Essay:

There is now more code than ever, but it is increasingly difficult to find anyone who has their hands on the wheel. Individual agency is on the wane. Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around. The digital revolution has come full circle and the next revolution, an analog revolution, has begun.

For a long time, the central objects of concern for designers have been interfaces: the touchpoints where users interact with systems. This is changing. The central objects of concern now are systems’ underlying models. Increasingly, these models aren’t artifacts designed in the traditional sense. Instead, they emerge from systems that learn about themselves and the contexts they’re operating in, adapt to those contexts, and in so doing change them.

The urgent design questions of our time aren’t about the usability or fitness-to-purpose of forms; they’re about the ethics and control of systems:

  • Are the system’s adaptation cycles virtuous or vicious?
  • Who determines the incentives that drive them?
  • How do we effectively prototype emergent systems so we can avoid unintended consequences?
  • Where, when, and how do we intervene most effectively?
  • Who intervenes?

Childhood’s End: The digital revolution isn’t over but has turned into something else

Framing the Problem

Jon Kolko on problem framing:

The goal of research is widely claimed to be about empathy building and understanding so we can identify and solve problems, and that’s not wrong. But it ignores one of the most important parts of research as an input for design strategy. Research helps produce a problem frame.

A conundrum: The way we articulate design problems implies solutions. At the beginning of a project, we often don’t know enough to communicate the problem well. As a result, we could do an excellent job of solving the wrong thing.

Addressing complex design problems — “solving” them — requires that we define them; that we put a frame around the problem space. This frame emerges from a feedback loop: a round of research leads to some definition, which in turn focuses the next round of research activities, which leads to more definition, etc.

Framing the problem in the way described by Mr. Kolko — by using research to define boundaries and relevant context, and using the resulting insights to guide further research — is a practical way to focus ill-structured problems. It’s an often overlooked part of the design process, and — especially in complex problems — a critical one.

Problem Framing, Not Problem Finding

Four Types of Prototypes

Prototypes are central to the design process; they’re the means by which designers establish the feedback loops that allow artifacts to evolve. But there’s a pitfall when discussing prototypes: there are many different types and uses, but only one word (“prototype”) to describe them all. This unacknowledged variety can cause confusion and false expectations. Talking about prototype resolution and fidelity is as far as many designers go, but often that’s not far enough.

In a paper (PDF) published in the Handbook of Human-Computer Interaction (2nd ed., 1997), Apple designers Stephanie Houde and Charles Hill address this issue by identifying four categories of prototypes, based on the dimension of the design project they’re meant to illuminate:

  • Role prototypes
  • Look and feel prototypes
  • Implementation prototypes
  • Integration prototypes

(Following quotes are from the paper.)

Role prototypes

Role prototypes are those which are built primarily to investigate questions of what an artifact could do for a user. They describe the functionality that a user might benefit from, with little attention to how the artifact would look and feel, or how it could be made to actually work.

In other words, this dimension is meant to cover the “jobs to be done” angle of the product or system. How will people use it? Would it be useful in its intended role? What other purposes could it serve for them?

Look and feel prototypes

Look and feel prototypes are built primarily to explore and demonstrate options for the concrete experience of an artifact. They simulate what it would be like to look at and interact with, without necessarily investigating the role it would play in the user’s life or how it would be made to work.

This dimension doesn’t require much explanation; these are prototypes that are meant to explore UI possibilities and refine possible interaction directions. (I sense that many people focus on this aspect of prototyping above the others; look and feel is perhaps the most natural target for critique.)

Implementation prototypes

Some prototypes are built primarily to answer technical questions about how a future artifact might actually be made to work. They are used to discover methods by which adequate specifications for the final artifact can be achieved — without having to define its look and feel or the role it will play for a user. (Some specifications may be unstated, and may include externally imposed constraints, such as the need to reuse existing components or production machinery.)

These prototypes often seek to answer questions about feasibility: Can we make this? What challenges will we face in producing such a thing? How will a particular technology affect its performance? Etc.

Integration prototypes

Integration prototypes are built to represent the complete user experience of an artifact. Such prototypes bring together the artifact’s intended design in terms of role, look and feel, and implementation.

As its name suggests, this final category includes prototypes that are meant to explore the other three dimensions. Their comprehensive nature makes them more useful in simulating “real” conditions for end users, but it also makes them more difficult and expensive to build.

The authors illustrate all four categories with extensive examples. (Mostly charming 1990s-era software projects, some of them prescient and resonant with today’s world.) A running theme throughout is the importance of being clear on who the audience is for the prototype and what purpose it’s meant to serve. A prototype meant to help the internal design team explore a new code library will have a very different form than one meant to excite the public-at-large about the possibilities afforded by a new technology.

The paper concludes with four suggestions for design teams that acknowledge this empathic angle:

  • Define “prototype” broadly.
  • Build multiple prototypes.
  • Know your audience.
  • Know your prototype; prepare your audience.

Twenty-plus years on, this paper remains a useful articulation of the importance of prototypes, and a call to use them more consciously to inform the design process.

What do Prototypes Prototype? (PDF)

Structuring the Problem

Designers are dealing with increasingly complex problems. The systems we work with often span both digital and physical domains. Requirements and constraints are more abundant and difficult to articulate. More stakeholders are affected. The workings of the system may be opaque to our clients and to us.

One of the biggest challenges of working on such projects is that the problem we’re solving for isn’t apparent. This is not out of ill-will or incompetence; some problems are just difficult to pin down. In The Art of Systems Architecting, Mark W. Maier and Eberhardt Rechtin define what they call ill-structured problems:

An “ill-structured” problem is a problem where the statement of the problem depends on the statement of the solution. In other words, knowing what you can do changes your mind about what you want to do. A solution that appears correct based on an initial understanding of the problem may be revealed as wholly inadequate with more experience.

Facing an ill-structured problem is difficult and frustrating. It’s also not uncommon. Complex design projects often start with a vague understanding of the problem we’re designing for, or with several problems that appear incompatible. Solutions are often implicit in the way these problems are articulated.

To do a good job, you must clearly understand and articulate the problem(s) you’re seeking to solve. Stating the problem is the starting point for all that follows; it frames the work to be done. Poorly structured problems lead to poorly structured solutions.

Structuring the problem isn’t something you can expect stakeholders to do. It’s up to you, the designer, to ensure the problem is structured correctly. How do you do it? First, you acknowledge that the initial problem statement will be vague and/or poorly structured. You assume your initial understanding of the problem will be flawed. You then move to develop a better understanding as quickly as possible.

This requires iterating through artifacts that allow both designers and stakeholders to grasp new dimensions of the problem so you can set off in the right direction. The forms these artifacts take vary depending on the type of project you’re dealing with. (Concept maps work well for the types of systems I work on.) You want to establish processes that allow these artifacts to evolve towards greater clarity and focus.

This takes courage. Stakeholders and clients want answers, not vague abstractions. The process of clarifying the problem may point away from initial project directions. Because of this, spending time in the problem-definition stage of a project can produce tension. But the alternative — getting to a high degree of fidelity and tangibility prematurely — can lead folks to fall in love with solutions to the wrong problems.