We Are Tradesmen

“I call myself a tradesman, but any good tradesman should work only on problems that come within his genuine interest, and you solve a problem for your client where your two interests overlap. You would do an injustice both to yourself and your client to work on a problem of interest to the client but not to you — it just wouldn’t work. In that spirit, we are tradesmen. If your work is good enough it can be art, but art isn’t a product. It’s a quality.”

— Charles Eames

The Client-Designer Relationship

Designers gather input from various sources that affect the direction of a project: There’s bespoke research around the problem space, relevant case studies, regulators (both external and internal), subject matter experts, validation sessions with end users, etc. But there’s one entity that tends to have more influence on the direction of the project than others: the client.

By “client” I mean the entity that has commissioned the design project — i.e., the person or team who is paying the designer to focus his or her attention on the problem at hand. The client has money and reputation at stake; the designer has a contractual obligation to deliver results. The client is tasked with changing the state of whatever is being designed. “We are at point A, and need to get to point Z by X date.” The designer is there to shepherd that transformation through designerly means; that is, by manifesting key decisions in ways that reflect intended changes so they can be tested against reality.

The designer has an important responsibility in creating these feedback loops, but the client ultimately owns the results. This is obvious when the designer is engaged as a consultant (i.e., not an employee of the client’s organization), but it is no less true when the designer is an “innie.” Many internal design teams don’t “own” the things they’re designing; they work with counterparts in other parts of the organization who have bottom-line responsibility for the thing being designed.

Charles Eames’s sketch of the design process. Image: Eames Office. http://www.eamesoffice.com/the-work/charles-eames-design-process-diagram/

The client-designer relationship is central to the design process. Understanding the dynamic of this relationship, and knowing what each party is expected to bring to the process, is key to success. That said, it is up to the designer to ensure that directions are clear. In architectural projects, these directions often take the form of a brief, or architectural program: a document that lays out the requirements for the project.

The content for this brief must ultimately come from the client, but it is formulated in close collaboration with — and often led by — the architect. Architects shepherd an initially vague set of requirements towards something more specific and actionable, much as they shepherd design artifacts. The brief is thus a sort of meta-design artifact: one that is itself designed. Given its importance to the project, the designer and the client must develop it together.

Successful design projects call for relationship-building among all parties involved. Few relationships are as important to the success of the project as that between the client and the designer. At best, these relationships are true partnerships, with both parties having a healthy respect for what the other brings to the project. That said, the two parties can’t be expected to understand this dynamic equally well. While this may be the only time in his or her career that the client works with a designer, the designer will work with many clients over time. Because of this, it behooves designers to understand the client-designer dynamic and create the conditions necessary for these relationships to be fruitful.

A Data Primer for Designers

My friend Tim Sheiner, writing for the Salesforce UX blog:

demand is high for designers who can create experiences that display data in useful and interesting ways. In my personal experience this became much, much easier to do once I’d learned to speak the crisp, precise and slightly odd language used by technical people for talking about data.

What follows is a phenomenal post that clearly explains much of what you need to know to understand and speak competently about data. A must-read for anybody involved in designing for digital information environments.

Designer’s Field Guide to Data

The Illusion of Explanatory Depth

“The only true wisdom is in knowing you know nothing.”
— Socrates

You know less than you think you do. We all do. Consider an object you interact with every day: a flushing toilet. You know how to operate this device. Depending on where you live, you activate it by either pushing a button or pulling on a small lever, which causes water to flush away the waste. Fine, but how does it do this? Knowing how to operate a thing doesn’t mean you understand how it works. You probably have a rough mental model of how the toilet does its thing, but if asked to draw a diagram that explains it in detail, you’d likely have to do a bit of research.

This is an example of a cognitive bias called the Illusion of Explanatory Depth. Although it’s an old principle (as evidenced by Socrates’s quote), it was first named by cognitive scientists Leonid Rozenblit and Frank Keil. In their 2002 paper, Rozenblit and Keil explained that most of us think we know how things work when in fact our understanding is incomplete. Our “folk theories” offer explanations that lead us to believe we know more than we actually do. We become overconfident, and our mental models remain inadequate.

When we interact with complex systems, we often experience only a small part of the system. Over time we develop an understanding of cause-effect relationships through the elements we experience directly. While this understanding may correspond to the way the subsystem actually works, it doesn’t necessarily correspond to the way the whole works. Our understanding of the subsystem leads us to think we understand the whole. This is a challenge when interacting with systems where we can directly experience cause-effect relationships (e.g., we pull the flush lever and see and hear water rushing through the toilet), but it’s an even greater challenge in systems where such mechanics are hidden away from the user.

I’ve owned my Apple Watch for four years, and I still don’t understand why the device’s battery sometimes lasts all day, while at other times it’s completely depleted shortly after midday. At first, I was confident about my understanding of the problem: surely the Watch worked like an iPhone, a device I had some experience with, and for which I therefore had a reliable mental model of energy usage. I tried tweaking the Watch in the same ways I do the iPhone, but nothing worked as I expected. Eventually, I had to admit to myself that my model of how the Watch uses energy was flawed. I’ve since adopted a Socratic mindset with regard to the Apple Watch: I just don’t know what triggers greater energy consumption on the device. The only thing I know for sure on this subject is that I don’t know.

The Illusion of Explanatory Depth leads us to make less-than-optimal decisions. Intervening in a complex system while thinking you know more than you actually do about the system’s workings can lead to disastrous results. Designers — people who intervene in systems for a living — must adopt a “beginner’s mind” attitude towards the workings of those systems. Even if (especially if) we think we understand what’s going on, we must assume we don’t really.

Designers should also aspire to create systems that are easy to use but offer some degree of transparency: systems that allow their users to create mental models that correspond to how the thing works. The first time I opened a toilet tank was a revelation: I could clearly see the chain of interactions that led from my pulling the lever to having water rush from the tank and down the tubes. Opening the tank isn’t something you do in your day-to-day use of the toilet, but it’s an ability the system affords. I can’t lift the lid on my Apple Watch to examine how it uses energy.

Increasingly, the systems we design look more like an Apple Watch than a flush toilet: they’re extremely complex and driven by algorithms and data models that are opaque and often emergent. When we design an “intuitive” user interface to such a system, we run the risk of making people overconfident about how it works. We can’t build good models of systems if we can’t see how they do what they do. While this may not be an issue for some classes of systems, it could be extremely problematic for others. As we move key social interactions to some of these systems, our inability to build good models of how they work, coupled with their “easy-to-use” UIs, can cause serious challenges for our societies at every level.

The Informed Life With Gretchen Anderson

Episode 2 of The Informed Life podcast features my friend Gretchen Anderson. Our conversation focused on how Gretchen wrote her new book, Mastering Collaboration: Make Working Together Less Painful and More Productive.

One of the interesting aspects of Gretchen’s workflow is how she moves between digital and analog information environments:

I am a real analog person. Even writing, I find that that motion of the hand is what gets my brain engaged. And so even the first time I make an outline, I’m often doing that by hand. And I love whiteboards because — again, I like to be able to fit everything in one canvas that I can take in at one time. And I don’t think I’m alone in that. I’ve designed robotic surgery suites and I’ve done genetic analysis equipment. Like I’ve done really complicated things, but I think the goal is: you should be able to grok the system in one go.

I’m aiming to make incremental improvements to the show with each episode. The big new feature with this one is a full transcript, which should aid findability. Hope you enjoy it!

The Informed Life Episode 2: Gretchen Anderson

Brexit Explained

Are you confused by Brexit? I am. This short video from the folks at Information is Beautiful clarified for me the conundrum the UK finds itself in now:

Information is the lifeblood of democracy. People can’t effectively govern the system if they don’t understand the choices before them. I wonder how many people who voted in the Brexit referendum truly understood the implications of their decision.

Brexit Explained

Towards Greater Diversity in Design Teams

The websites and apps you interact with are parts of systems. These systems are often commercial organizations with responsibilities to various stakeholders, including the owners of the business, its employees and managers, its customers, and — more broadly — the rest of us who live in the society where the organization operates.

The people who “own” these digital products and services — product owners, business line managers, etc. — are tasked with being good stewards of these systems. They’re called to steer them towards greater value for stakeholders in the short and long term even as conditions around the systems change. Design decisions will change these systems — even if slightly. For example, the team could develop a new feature, fix an existing (and underperforming) feature, or address an entirely new user audience.

These are systemic interventions. Their effects are seldom limited to the task at hand; a seemingly minor alteration could have a large impact downstream. As a result, product owners must look out for second- and third-order effects; they’re looking to intervene skillfully as the system faces perturbations in its context.

To do this, product owners must become aware of the possible options open to them and their potential effects. Their ultimate goal is to achieve dynamic stability: for the system to continue serving its intended purposes as it evolves over time to address changing conditions. This calls for these folks to become systems thinkers.

One of the central tenets of cybernetics — the science of systems — is the Law of Requisite Variety, formulated by W. Ross Ashby. It’s relevant to people who aim to control systems. In cybernetics, the word variety has a special meaning: it refers to the number of possible states of a system. The Law of Requisite Variety says that skillful control of a system requires at least as many possible responses as the system has possible states. This is usually articulated as a maxim: only variety can destroy variety.

Translation into humanspeak: a system with few possible states requires a small range of responses, whereas a system with many possible states requires a broad range of responses. This idea has proven to be useful in a variety of fields, including sports, ecology, management, medicine, and more. The more complex the system you’re dealing with, the more states it can be in. Controlling such systems requires at least an equal amount of flexibility in your ability to respond to changes.
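
To make the counting argument concrete, here’s a minimal sketch in Python. It’s a toy model, not something from the cybernetics literature: the disturbance names and the `can_regulate` helper are hypothetical, and “regulation” is reduced to checking whether every disturbance the system can face has a matching response in the controller’s repertoire.

```python
# A toy illustration of the Law of Requisite Variety (hypothetical example).
# The system can be perturbed into one of several states (its variety);
# the controller holds a repertoire of responses, keyed by the disturbance
# each one is meant to neutralize.

def can_regulate(disturbances, responses):
    """Return True if every possible disturbance has a matching response."""
    return set(disturbances) <= set(responses)

# A system that can be disturbed in three ways...
disturbances = {"demand spike", "server outage", "bad deploy"}

# ...faced with a controller that knows only two responses.
responses = {
    "demand spike": "autoscale",
    "server outage": "failover",
}
print(can_regulate(disturbances, responses))  # False: "bad deploy" goes unanswered

# Adding a third response gives the controller requisite variety.
responses["bad deploy"] = "rollback"
print(can_regulate(disturbances, responses))  # True
```

However much detail you add to a model like this, the arithmetic stays the same: a controller with fewer responses than the system has states must leave some disturbance unanswered.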

Of course, not all digital products and services aim to serve the same purposes. Some are simpler — and less ambitious — than others. Simpler systems will have — and require — less variety. But many digital products and services are very complex and can have many possible states. A digital system that aspires to become the de facto environment where we interact — socially, commercially, civically, etc. — will have a huge range of possible states. The folks who design and manage these systems face a great deal of variety. To intervene skillfully, they need a correspondingly broad range of possible responses. Among other things, this calls for greater diversity in their teams.

Purposeful Governance

Some systems are best left alone. For example, a rainforest can function perfectly well without human intervention. That’s a natural system that evolved into its current configuration over a long time, and it’s likely to continue adapting to changing conditions. (Barring some major environmental disruption.)

Most human-made systems haven’t had as much time to adapt; they’re aggregates of design decisions that may or may not effectively serve their intended purposes. Some of these interventions may truly be in service to the systems’ goals, but others may be driven by political motivations. (That’s one reason why you should think small when designing a system from scratch.)

As with the rainforest, conditions around the man-made system will change over time. How will the system address these changes? Designing the system itself is not enough; the design team must also design the system that continues the ongoing design of the system. We call this governance. Governance, government, governing: they all have to do with ongoing interventions aimed at keeping systems functioning as intended. These terms all derive from the Greek word kubernan (“to steer”), which is also the root of the word cybernetics. Governing is a quintessential systemic activity.

When do you intervene? How do you intervene? With how much force? How frequently? Who intervenes? If the intent is to keep systems functioning for a long time, these questions are essential. They also imply a corollary: you must know what you’re governing towards. What’s the purpose of the system? What are its intended outcomes? You can’t steer effectively if you’re unclear on the destination.