“Every system is perfectly designed to get the results it gets.”
— W. Edwards Deming
Every system serves at least one purpose. That’s what a system is: a set of elements that work in interrelated ways towards a purpose. Your body is a system; its primary purpose is to keep you alive. Your body’s constituent elements compose various subsystems that support this purpose. For example, your stomach is part of your digestive (sub)system, whose purpose it is to bring energy into the body.
Systems that have evolved into their current configuration (such as your body) are well-fitted to serving their purpose within the environment they exist in. (Those that weren’t well-fitted aren’t around to read blog posts.) The particular elements that compose your body — and the ways they relate to each other — are the result of hundreds of thousands of years of small experiments that led towards ever tighter form-context-purpose fit.
Design is, in a sense, an attempt to accelerate this process. Your business doesn’t have six hundred thousand years to launch a new product; it has six months. So you assemble novel configurations of elements and test them. Not all possibilities, mind you: a tiny set. “Intelligent design” is a redundant phrase; design is intelligent by definition. The alternative is an undirected process. In either case, the goal is good fit.
The flip side is that a currently existing system that’s producing “bad” results is working as intended. If it hasn’t destroyed itself (or its environment) yet, then it’s functioning “well” towards its purpose — or at least retains the ability to adapt further. Now, you may look at what the system is doing and be horrified. You may deem its purpose to be undesirable. You can then do something about it: either tweak its configuration or shut it down altogether. (That said, there aren’t many systems that are under your exclusive control, so you’ll have to build consensus to intervene.)
But effective interventions call for clarity; for understanding what’s really going on with the system. Are you sure you know how it works and towards what ends? How do you know? Complex systems often serve more than one purpose. How do you know that an intervention meant to tweak one outcome won’t inadvertently affect another? (Possibly with catastrophic results.) Complex systems that have achieved good fit have done so for reasons, some of which won’t be obvious on superficial examination. Tread mindfully, with humility and genuine curiosity.
What does it mean to have a systemic approach to design? It’s not just about striving for a comprehensive understanding of the key components and actors in the system and how they relate to each other. For the complex problems and environments we’re facing today, that’s table stakes. Beyond this, designers must also understand the conditions that brought the system about to begin with. What key forces precipitated the need for the design intervention? Often, the problem we’re being asked to work on is a symptom of a deeper issue.
For example, imagine somebody in your organization has discovered an inefficiency in the way service personnel interact with customers. You’re being asked to design a system that allows service reps to get a more comprehensive picture of interactions with customers. It’s great if the system you design can resolve the problem, but it’s even better if the process of doing so also helps resolve the underlying organizational issues that brought it about to begin with.
Often these issues emerge not from technical deficiencies, but from social/political/organizational/interpersonal ones. You won’t find this stuff spelled out in RFPs! Discovering the underlying issues requires you to ask difficult questions. (The five whys framework is useful for this.) It also requires keen observation. Designing in such projects often calls for working with multiple stakeholders, people from groups that may not interact with each other day-to-day. What have you noticed happening among them? Where are the disconnects? Are they using different names to describe the same things — or worse, using the same names to describe different things? Why have these disconnects come about? What contextual conditions led to the situation? Are these conditions still relevant?
On the surface, even a complex system will address a set of requirements. Resolving them will add value to the organization, and (ideally) to society in general. But addressing the underlying issues that brought about those requirements to begin with will create even more value — especially if they’re resolved with a generative perspective that accounts for their ongoing evolution.
“Can I give you some feedback?” You hear the words, and immediately get a sinking feeling in your belly. “Uh oh,” you think. “What did I do wrong?…” For many of us, the word feedback has negative connotations. It’s become a polite euphemism for criticism; something we offer up only when our expectations aren’t being met. But feedback is not inherently negative.
Feedback refers to the means by which a system can alter its behavior. (The outputs of the system “feed back” into it as inputs, which the system then acts on.) Considered in this light, feedback can be seen as a steering mechanism: a way of keeping things within bounds. Too much of a good thing can be as bad as too little; knowing where you are relative to the bounds you’ve set allows you to correct course. If you step on the gas, you see the needle on the car’s speedometer creeping up. Past a certain point, you know you’re exceeding the speed limit, so you take your foot off the gas. The speedometer is one of the car’s mechanisms for giving you feedback.
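The speedometer loop can be sketched as code: measure the output, compare it to the bound you’ve set, and feed the difference back in as a correction. This is a minimal illustrative sketch, not a real controller; the gain and speeds are made-up numbers.

```python
# A minimal sketch of feedback as a steering mechanism: a cruise-control-style
# loop that nudges speed toward a target. All names and numbers are illustrative.

def step(speed, target, gain=0.5):
    """One feedback cycle: measure the error, then correct course."""
    error = target - speed          # the "speedometer reading" vs. the bound
    return speed + gain * error     # ease on or off the gas proportionally

speed = 80.0   # current speed, over the limit
target = 65.0  # the bound we want to stay within
for _ in range(10):
    speed = step(speed, target)

print(round(speed, 2))  # after a few cycles, speed has converged near the target
```

Each pass through the loop shrinks the error, which is the essence of a balancing feedback loop: the output steers the next input.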
Feedback applies to relationships as well as other systems (such as cars). If you’re managing someone who is performing below expectations, part of your job is to act as the speedometer; you must let them know. (Hence, the sinking feeling.) But the speedometer doesn’t only tell you when you’re going too fast; it also lets you know how fast you’re going in general. Sometimes going too slow is not good either. Giving (and receiving) clear, frequent feedback is essential; it allows team members to assess how they’re doing and whether or not they’re on the right track.
At the beginning of every new endeavor, there is chaos: A jumble of disparate ideas, people, and things that only hint at possible directions; a mess pregnant with latent value. Manifesting that value calls for coherence. It calls for us to bring order to the chaos.
A new order establishes new distinctions between things and new relationships between them. What exactly are we dealing with? What is it? What is it not? How is it different from things that precede it? What are its constituent parts? How does part A affect part B? (Is part A subservient to part B? Its peer? A container?) How do the people who will be impacted understand them? And so on.
We use language to give names to things; to set them apart from other things. We describe how they act, how they influence each other. We cut some bonds and establish others. We create cognitive constructs that allow the new endeavor to manifest as a real, practical thing in the world. (Charles Eames: “The quality of the connections is the key to quality itself.”)
The new order brings coherence to a small part of the universe. It gives you a new understanding of your health, your job, your diet, your marriage, your relationship to society. Or maybe it’s something of less consequence. (A compelling new way to whittle away your remaining time, perhaps?)
Whatever it is, the new order changes how you understand a part of the world, and therefore your behavior. How do you know it works? It produces results: People adopt the new model and use it to decide and act. An effective model requires no coercion: the new framework itself is compelling and useful enough to drive change.
At least that’s the ideal. Most new orders are a messy combination of some things that work and others that don’t. Remember: this is all emerging from chaos. By definition, the first draft will be rough. Over time, you’ll iterate towards a more precise set of distinctions and connections; towards a progressively clearer direction. (By “precise” we mean distinctions and connections that are crisp enough to achieve the results you want without compromising the society that makes the whole thing possible to begin with.)
Steve Jobs famously said that “Design is not just what it looks like and feels like. Design is how it works.” An important and useful distinction that has helped design move beyond the futility of mere aesthetics. Alas, a distinction that still presupposes the going concern is an it. The true power of design doesn’t manifest in ever-more compelling doohickeys; it manifests in the conceptual frameworks that make it possible for such things to come into being — or to question whether it’s even desirable for them to do so in the first place.
Have you ever been lost? I don’t mean it metaphorically, as in “I don’t follow what you’re saying.” I mean have you ever not known where you are — physically — and not known how to get to where you want to be? It’s terrifying. I’ve been lost a couple of times. Once when I was a kid, I got separated from my family while we were skiing. I had no map, and it was the middle of a whiteout, so I couldn’t have seen where I was going even if I’d had one. I’d also lost one of my skis. (Don’t ask.) Like I said, terrifying. I got out of the situation by following a simple heuristic: “keep going down the mountain.”
In situations such as this one, you run the risk of panicking and freezing. (In this case, literally.) You must find ways of taking skillful action — that is, acting in ways that get you closer to your goal. (In this case, reconnecting with my family at the lodge. And ideally, hot cocoa.) You act based on your understanding of your current situation: where you are, what options are open to you. But often we can’t see these things clearly, as happened to me on that mountain.
Lou [Rosenfeld] pitched the idea of an information architecture book to Lorrie LeJeune at O’Reilly in 1996. She didn’t bite. But a year later, she called us back. At industry conferences, Lorrie kept hearing web developers complain about a pain with no name. Users couldn’t find things. Sites couldn’t accommodate new content. It wasn’t a technology problem. It wasn’t a graphic design problem. It was an information architecture problem, we explained, and so began the book.
A pain with no name. The phrase has stuck with me. I’ve seen many teams experiencing this strange affliction. You know it’s the pain with no name because sufferers don’t know how to describe it. They know how to talk about usability issues, accessibility issues, and technology issues. They know when the logo needs to be bigger, or when pages aren’t loading fast enough. But the pain with no name eludes them. Something isn’t right, but they don’t know how to describe it. They may tell you people aren’t using the system because it’s missing key features, or that they spent a fortune on the redesign and users aren’t responding as expected. But they don’t know how to point to exactly what’s wrong with the thing.
Often, what’s wrong is that they skipped a critical step in the design process: they didn’t work through a conceptual model of the system before moving on to creating its user interface. A conceptual model is an abstract representation of:
the main concepts the system will expose to its users,
how those concepts relate to each other to help users accomplish their purposes within the system, and
the language that will be used to describe those concepts so users can understand them.
In other words, it’s an articulation of the system as a whole (as opposed to its components) as users will experience it. Producing solid conceptual models requires that designers understand the components of the system, the causal relationships between them, and the mental models users bring to the interaction. Therefore, they require research and iteration.
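Before any screens exist, a conceptual model can live as a simple artifact: concepts, the language that describes them, and the relationships between them. The sketch below captures one as plain data; the domain (a photo library) and all names are hypothetical examples, not a prescribed notation.

```python
# A minimal sketch of a conceptual model captured as plain data, before any
# user interface work. The domain and names are hypothetical examples.

# The main concepts the system will expose, with the language used to
# describe them to users.
concepts = {
    "Photo": "A single image you've captured or imported.",
    "Album": "A collection of photos you curate.",
    "Tag": "A label you apply to describe a photo.",
}

# How the concepts relate to each other, independent of any screen design.
relationships = [
    ("Album", "contains", "Photo", "one-to-many"),
    ("Tag", "describes", "Photo", "many-to-many"),
]

summary = [f"{subj} {verb} {obj} ({card})" for subj, verb, obj, card in relationships]
print("\n".join(summary))
```

Even this small artifact forces the questions the user interface will otherwise answer by accident: what are the things, what do we call them, and how do they relate?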
Conceptual models make stakeholders nervous. Why? Because they’re not user interfaces. Stakeholders want to see progress. They expect designers to produce artifacts that look like screens. Conceptual models don’t look like screens; they’re “boxes and arrows” diagrams. Getting to the stage where you can produce a conceptual model takes lots of work, and at the end of that effort, you have… a diagram. Developers can’t do anything with abstract diagrams; they want specs they can build against. Executive Vice Presidents can’t do anything with them either; they want comps to drop into PowerPoint decks. Conceptual models don’t help them do their jobs. They’re also difficult to validate on their own because many people have a hard time mapping abstract diagrams to user interfaces.
As a result, conceptual models are often seen by both designers and stakeholders as a burden. They resist calls to work them out. But conceptual models are the opposite of a burden. A good one will clarify hidden complexity and highlight overlooked opportunities. It’ll guide the team to produce user interfaces that are coherent, clear, and solve real problems for the user. Conceptual models help teams avoid the pain with no name. Alas, as with the pain with no name, many people don’t even know what to call them or how important they are. But now that you do, you can help relieve their pain.
Photographers have a catchphrase: “It’s not the camera you have, it’s what you do with it.” (You sometimes hear this variation: “It’s not the camera, it’s the photographer.”) What this means is that the tools you use are less important to outcomes than your degree of mastery over the subject. An experienced photographer will make excellent images with a crappy camera, whereas someone who doesn’t know what he or she is doing will mess things up even with top-of-the-line equipment.
My systems class involves making lots of diagrams. Diagramming software (e.g., OmniGraffle, Visio, and Adobe Illustrator) is often intimidating to students who don’t have design backgrounds. Some assume they must master one of these tools before they can create clear, elegant diagrams. I disagree; the lowest common denominator — PowerPoint — will do fine in a pinch.
You don’t need to master “big” software to create great diagrams. Instead, you need:
An understanding of who the diagram is for. Are you the audience, or is it someone else? Do they have particular ways of understanding the space that will influence representations?
An understanding of the purpose of the diagram. What are you trying to explore or convey? What is the framing question the diagram seeks to answer?
An understanding of how to break things down into their constituent elements. What elements should be included/left out of the diagram? Will all elements be represented at similar levels of granularity, or will some be broader than others?
An understanding of how to represent relationships between elements. Do some influence others? Are relationships one-way? Two-way? One-to-one? One-to-many? Many-to-many? Are some elements containers for others?
Feedback. Are people getting it? Is it clear? What can be improved?
None of these things require software; you can explore them using pen and paper. The more you do it, the better you become at it — and the better you become at it, the better you will be when it comes time to wield the big diagramming tools. Practice is the key; becoming good at it using simple tools will keep you focused on the things that truly matter. Then you can learn the more powerful tools with the confidence that you know what you’re doing.
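To drive the point home, a boxes-and-arrows diagram is ultimately just elements and relationships; even a few lines of plain text can express one. The sketch below emits Graphviz DOT, a widely supported plain-text diagram format; the elements and relationships shown are hypothetical.

```python
# A sketch showing that a clear diagram needs thinking, not "big" software:
# emit a boxes-and-arrows diagram as plain Graphviz DOT text.
# The elements and relationships here are hypothetical examples.

elements = ["Customer", "Service Rep", "CRM"]
relationships = [
    ("Customer", "Service Rep", "requests help"),   # one-way influence
    ("Service Rep", "CRM", "looks up history"),
    ("CRM", "Service Rep", "returns records"),
]

lines = ["digraph concept_map {"]
for name in elements:
    lines.append(f'  "{name}" [shape=box];')
for src, dst, label in relationships:
    lines.append(f'  "{src}" -> "{dst}" [label="{label}"];')
lines.append("}")

dot = "\n".join(lines)
print(dot)  # paste into any Graphviz renderer to see the diagram
```

The hard work — choosing the elements, their granularity, and the direction of each arrow — happens before a single line of this text is written, which is exactly the point.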
Systems theory can be quite abstract. I’ve set for myself the challenge of making the subject come alive for my students at CCA. This week, we covered simple dynamic systems. (“Clockworks” in Ken Boulding’s classification scheme.) To help make the subject tangible, I led students on a field trip to Fort Mason.
After the visit to The Interval, we strolled to another part of the Fort, where I delivered a short lecture while students sat (unbeknownst to them) on Christopher Alexander’s bench facing Alcatraz. It was a beautiful setting, and towards the end of the lecture, I revealed the bench’s backstory. The students had read Alexander’s A City Is Not a Tree before class; discussing the subject on a structure designed by the architect (albeit a modest one) was a rare privilege. (I’m grateful to my friend Dan Klyn for making me aware of the existence of this bench.)
We ended class with a brief group juggling exercise that illustrated the importance of structure when dealing with simple dynamic systems. The setting was perfect — a clearing among the buildings, facing the Bay — if a little cold. But the chilly breeze gave the exercise a sense of urgency that would’ve been hard to simulate otherwise.
It was a great, content-packed — and fun — afternoon. I consider myself lucky to be able to teach this subject here. While systems are indeed everywhere, the most enticing ones are not evenly distributed. The Bay Area has some of the finest examples in the world, if you know where to look.
Very few people are perfectly at ease. Most have something that worries them. Their kids’ education. Their job security. A big upcoming expense. A pending medical procedure. The success of a big project. You get the idea.
Their degree of agency over these things varies. Some they can’t do much about. (For example, their job could go away through no fault of their own.) But others — many, in fact — give them some agency. They can devote more time to that project, or skip that big night out so they can save money.
At some point, they’ll need to decide. How do they choose what to do? Experience plays a role; they’ve probably faced similar decision points in the past. But they must also understand the criteria that will allow them to choose one way or the other. They can inform themselves by asking friends, reading the news, or searching Google. Whatever the case, they need information.
They need more than this, though. They also need a causality model: a way to predict the repercussions of the decision. “If I do A, then Z is likely to result.” Their models needn’t be perfect (they can’t be); they only need to offer some degree of confidence.
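One simple way to turn “if I do A, Z is likely to result” into a degree of confidence is to run the model many times under uncertainty. This is a minimal Monte Carlo sketch; the 70% likelihood is a made-up number standing in for whatever the real model estimates.

```python
# A minimal sketch of an imperfect causality model: simulate a decision many
# times under uncertainty to get a confidence estimate rather than a guarantee.
# The probability used here is a made-up illustrative number.

import random

random.seed(0)  # fixed seed so the sketch is repeatable

def outcome_if_a():
    """Hypothetical model: doing A yields outcome Z about 70% of the time."""
    return random.random() < 0.7

trials = 10_000
hits = sum(outcome_if_a() for _ in range(trials))
confidence = hits / trials
print(f"Estimated chance that A leads to Z: {confidence:.0%}")
```

The point isn’t the arithmetic; it’s that an explicit, runnable model exposes its own assumptions, which is where unintended consequences tend to hide.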
Leaders in organizations face many such choices, often with big stakes. These people are among the best decision-makers in the world. But creating solid models is very difficult in the complex contexts they work in. For example, a product feature may inadvertently create a national security problem. We call these unintended consequences, and they’re one of the things that keep leaders up at night.
In our complex world, we experience unintended consequences all the time. (Sometimes with disastrous results.) But there are ways we can improve our ability to predict outcomes with some degree of certainty. This is why you need to understand how systems work. Doing so allows you to learn how to model — and have some control over — complex, volatile situations. It helps you sleep better.