Working With Ambiguity

Design requires comfort with ambiguity: making progress even when requirements are unclear, uncertain, or unspecified. Good designers are unfazed by a lack of clarity, without being foolhardy. They understand that their job is to make the possible tangible. If possibilities were already evident, there would be no need for their help; others would simply make the thing.

But possibilities are never definite. Nobody has perfect clairvoyance. Stakeholders discuss the new thing conceptually, but what will it actually be? They don’t know. Yes, it’ll be a user interface for a new medical imaging system. But that statement is an abstraction. There are hundreds — if not thousands — of decisions to be made before such a thing is concrete enough to be built. Making those decisions is part of the designer’s remit.

Not that these decisions are ultimately the designer’s responsibility; stakeholders must decide whether or not the designer’s choices meet requirements. (The logo may indeed need to be bigger.) Articulating the concept with artifacts that help stakeholders understand what they’re actually talking about is, by definition, an act of reducing ambiguity.

Making sense of ambiguous situations requires having the right attitude. It calls for self-confidence, playfulness, and entrepreneurial drive. Although these traits can be improved, they come more naturally to some designers than others. Some folks are less willing than others to be made vulnerable.

That said, working successfully with ambiguity is not just about attitude; context also plays an important part. The problem with uncertainty is that you may get things wrong; the thing you produce may be partially (or wholly) inadequate. Time is lost. Money is lost. What then? What are the consequences?

Some project environments are more tolerant of mistakes than others. Because they’re the ones making things tangible and they often lack political power in their organizations, designers can easily become scapegoats for bad directions. Environments that punish mistakes will make exploration difficult.

Some problem domains also lend themselves more to making mistakes than others. The consequences for failing to capture the essence of a new brand are different than the consequences for failing to keep a bridge upright. It’s more challenging to deal with ambiguity when designing high-stakes systems, such as those that put lives at risk.

Ultimately, design calls for working with ambiguity. This requires a combination of the right attitude within the right context. When considering your work, how easy is it for you to deal with unclear or uncertain directions? What are the consequences of getting things wrong? And more importantly, what can you do about these things?

The End of Engagement

Mobile operating system vendors are starting to give us the ability to become more aware of (and limit) the time we spend using our devices. For example, the Screen Time feature in Apple’s iOS 12 will make it possible for users of iPhones and iPads to define how long they want to spend using specific apps or entire app categories.

If adopted widely, these capabilities will impact the way many information environments are designed. Today, many apps and websites are structured to increase the engagement of their users. This is especially true of environments that are supported by advertising, since more time spent in them translates directly into more exposure, and hence more money.

The novelty of carrying always-connected supercomputers in our pockets has fostered a cavalier attitude towards how we apportion our attention in the presence of these things. The time we spend online has more than doubled over the past decade.

As digital designers, we have the responsibility to question the desirability of using engagement as the primary measure of success for our information environments. While it may be appropriate for some cases, engagement is overused today. This is because engagement is easy to measure, easy to design for, and in many cases (such as advertising) it translates directly to higher revenues.

But the drive towards user engagement is a losing proposition. It’s a zero-sum game; you have a limited amount of time in the day — and ultimately, in your life as a whole. Whatever time you spend in one app will come at the expense of time spent engaging with other apps — or worse, spent engaging with other people in your life. Google and Apple’s “digital wellbeing” and “digital health” initiatives are an admission that this has become an issue for many people. With time, we will become more sophisticated about the tradeoffs we’re making when we enter these environments.

So if not engagement, what should we be designing for? My drive is towards designing for alignment between the goals of the user, the organization, and society. When your goals are aligned with the goals your environment is designed to support, you will be more willing to devote your precious time to it. You will enter the environment consciously, do what you need to do there, and then move on to something else. You’ll aim for “quality time” in the environment, rather than the information benders that are the norm today.

Designing for alignment is both subtler and more difficult than designing for engagement. It’s not as easy to measure progress or ROI on alignment. It also requires a deeper understanding of people’s motivations and having a clear perspective on how our business can contribute to social well-being. It’s a challenge that requires that we take design to another level at a time when design is just beginning to hit its stride within organizations. But we must do it. Only through alignment can we create the conditions that produce sustainable value for everyone in the long term.

Controlling Screen Time

Yesterday, Apple publicly presented the 2018 updates to its operating systems. As happens every year, we got a glimpse of many new software features coming to the Mac, iPads, Apple Watches, Apple TVs, and iPhones. One feature coming to iOS — the system that runs iPhones and iPads — stands out not because of what it allows us to do with our devices, but because of what it doesn’t allow: mindlessly consuming our time with them.

The new feature, called Screen Time, allows users to examine the time they’ve spent using apps and websites, and set constraints on that time. For example, somebody could decide she only wanted to spend a maximum of thirty minutes every day using the Instagram app on her phone. The phone would keep track of the time she spends on the app, notify her when she was approaching her limit, and ultimately turn off access to the app altogether when she exceeded her allotted time. She could do this not just for herself, but also for her kids.
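The limit-tracking behavior described above can be sketched in a few lines of Python. This is a hypothetical model for illustration only, not Apple’s actual implementation; the class and method names (`AppLimit`, `record_usage`) and the 90% warning threshold are my own invention.

```python
# Illustrative sketch of a per-app daily time limit, loosely modeled on
# the Screen Time behavior described above. All names and thresholds
# here are hypothetical, not Apple's API.

class AppLimit:
    def __init__(self, app_name, daily_limit_minutes, warn_at_fraction=0.9):
        self.app_name = app_name
        self.daily_limit = daily_limit_minutes
        self.warn_threshold = daily_limit_minutes * warn_at_fraction
        self.used_minutes = 0.0

    def record_usage(self, minutes):
        """Add usage time and report the resulting state of the limit."""
        self.used_minutes += minutes
        if self.used_minutes >= self.daily_limit:
            return "blocked"  # allotted time exceeded: turn off access
        if self.used_minutes >= self.warn_threshold:
            return "warn"     # approaching the limit: notify the user
        return "ok"

limit = AppLimit("Instagram", daily_limit_minutes=30)
print(limit.record_usage(20))  # "ok" (20 of 30 minutes used)
print(limit.record_usage(8))   # "warn" (28 minutes, past the 27-minute warning threshold)
print(limit.record_usage(5))   # "blocked" (33 minutes, limit exceeded)
```

The same object could be created on behalf of a child’s device rather than one’s own; the tracking logic is identical.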

Apple is not the first to do this; Google has announced similar features for Android as part of its Digital Wellbeing program, and there are also third-party apps that accomplish similar goals. That said, Apple’s announcement is significant because of the company’s cultural pull and the prominence they’re giving this feature in their flagship OS.

Three thoughts come to mind right away. The first is that the existence of this feature is an acknowledgment that something is not right with the way we’re currently using our devices. The time you spend engaged with information environments comes at the expense of the time you spend engaged in your physical environment. When companies compete with each other for your attention, and you have a device with you that gives you instant access to all of them at any time, a race ensues in which you and your loved ones lose. By establishing “digital wellbeing” and “digital health” (Apple’s phrase) programs, the operating system vendors are admitting that this has become a problem.

The second thought is that as platform vendors, neither Google nor Apple can directly control the form of the information environments their systems host; what they can control is the amount of time users can spend in those environments. You can think of the OS vendors as managing cities. Formerly, the city’s spaces — parks, buildings — were open 24×7, but now they can have operating hours. This is especially useful when some of the buildings contain casinos; some folks need a nudge to go home and sleep once in a while.

The third thought is that the OS vendors are giving users the tools to examine their behavior in these environments and the power to define their operating hours for themselves. This gives us as individuals the ability to engage more consciously with the information environments where we spend our time. I hope the push towards providing us more control over our attention will help steer companies away from business models that drive us towards continuous engagement.

I see the development of the platform vendors’ digital wellbeing initiatives as an encouraging sign. That said, it doesn’t relieve the organizations that design and produce websites and apps from the responsibility of ensuring those environments support the needs and aspirations of their users and society at large. Ideally the most addictive of these digital places will now look for ways to better align their business goals with the goals of their users.

The Cone of Uncertainty

Erratum: an earlier version of this post made it sound as though this concept was my idea. My friend Christina Wodtke rightly called me out on it; the cone of uncertainty is a concept with a long history in project planning. I must have read about it at some point and buried it in my subconscious. Apologies for any confusion this may have caused; I’ve edited the post to clear it up.

“The beginning is always today.”
— Mary Wollstonecraft

Imagine you’re ramping up to work on a new project that will keep you and your team busy for many months. Before you start, you must define a plan for how you will tackle the work. For example, you must figure out what resources you will need and by when. This requires that you make predictions about the future state of the project: “By the fourth week, we should have already produced high-level design directions. At that point, we’ll be ready to engage a UI designer and a prototyper.”

The problem is that you can’t predict the future with certainty; the best you can do is make educated guesses based on previous experience and best-practices. And of course, reality has a way of messing with things: By week three, the team may uncover a vital requirement they initially missed that forces them to re-think their direction.

Because of this, the team’s confidence in their plans should drop the farther they project into the future. They can be certain about the activities they need to undertake immediately, but must remain doubtful about the things needed in the more distant future. I often visualize this as a circle of uncertainty around the team:
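In the software-estimation literature, this widening doubt is often quantified with estimate-error multipliers that narrow as a project progresses (figures commonly attributed to Barry Boehm and popularized by Steve McConnell). A quick sketch using those commonly cited multipliers; treat the exact numbers, phase names, and the `estimate_range` helper as illustrative assumptions, not authoritative values:

```python
# Classic cone-of-uncertainty multipliers from the project-planning
# literature (values commonly attributed to Boehm/McConnell). The exact
# figures are illustrative, not authoritative.
CONE = [
    ("Initial concept",       0.25, 4.0),
    ("Approved definition",   0.50, 2.0),
    ("Requirements complete", 0.67, 1.5),
    ("Design complete",       0.80, 1.25),
    ("Detailed design done",  0.90, 1.10),
]

def estimate_range(nominal_weeks, low_factor, high_factor):
    """Return the plausible (min, max) duration for a nominal estimate."""
    return nominal_weeks * low_factor, nominal_weeks * high_factor

# For a nominal 20-week project, the plausible range shrinks as the
# team learns more: from 5-80 weeks at the concept stage to 18-22
# weeks once detailed design is done.
for phase, lo, hi in CONE:
    low, high = estimate_range(20, lo, hi)
    print(f"{phase:22s}: {low:5.1f} to {high:5.1f} weeks")
```

The point of the exercise is not the specific numbers but the shape: early commitments (like booking that UI designer for week four) sit in the widest part of the cone.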


If These Walls Had Ears

In early 1896, the Lumière brothers exhibited one of the first motion pictures ever made: THE ARRIVAL OF A TRAIN AT LA CIOTAT. With a run time of less than a minute, THE ARRIVAL OF A TRAIN isn’t long. It also has a straightforward premise: the movie consists of a single stationary shot of a steam train pulling into a station, and the subsequent disembarkment of passengers. The shot is composed so the camera points down the track, with the locomotive coming towards it. You can see the film here:

THE ARRIVAL OF A TRAIN is famous not just because it was the first movie shown in public; it’s also famous because of the legend that’s grown around it. Supposedly, the first showings caused audiences to panic, with some people scrambling to the exits. Unaccustomed to moving pictures, these early movie-goers somehow thought there was a real train barreling towards them, and ran for their lives.

Whether this happened exactly as described is inconsequential. The story speaks to the power of the motion picture medium to conjure illusions and has therefore become enshrined as the founding myth of cinema. It also speaks to how information can alter our sense of place, especially when we’re interacting with it in novel ways. As such, it’s a good analog for some uncanny experiences we are encountering today.

Recently, a Portland woman named Danielle received a call from one of her husband’s employees. “Unplug your Alexa devices right now,” this person said. “You’re being hacked.” The employee then described in detail a conversation that had happened earlier inside Danielle’s home. Apparently, the family’s Amazon Echo device was recording their conversations and sharing them with others.

In the subsequent investigation of the incident, Amazon’s engineers concluded that somebody had uttered a particular set of phonemes during the conversation that the Echo interpreted as its activation command, followed by a command to send a message to the person who then received the recordings. In other words, it wasn’t a hack; it was an unintentional triggering of one of the Echo’s features. (You can read about this story on The Verge.)

I can’t help but wonder how this incident has altered this family’s relationship with the physical environment of their home. When people first experienced THE ARRIVAL OF A TRAIN at the end of the 19th Century, they had never seen anything like it — except in “real life.” The first audiences were inexperienced with the new information delivery medium, so it’s understandable that they felt confused or even panicky. Whatever their reaction was, undoubtedly their experience of being in a particular place was radically transformed by the experience.

Even now, over 120 years later, it still is. Think about the last time you went to a movie theater. The experience of sitting in a theater is very different before and after the movie starts. How long does it take for you to stop being conscious of the physical environment of the theater as you become engrossed in the film? (This is one of the reasons contemporary movies are preceded by reminders to turn off your electronic devices; you’re there to have your attention drawn away from physical reality for a couple of hours, and you don’t want anything yanking it back.)

Always-on smart devices such as the Echo, Google Home, and Apple HomePod change the nature of our physical environments: They add an information interaction layer to the place that wasn’t there before you turned on the cylinder in the room. Unlike a movie, however, these devices aren’t designed to capture your attention. In fact, these devices are designed to be unobtrusive; you’re only meant to be aware of their presence when you summon them by issuing a verbal command.

One can only assume that the form of these things is a compromise with the constraints imposed by current technology and the laws of physics. The ideal form for this class of devices is complete invisibility; we want them to be perceived not as devices at all, but as a feature of the environment. But is this really the ideal? Is it desirable for our physical environments to be always listening to us in the background?

Partly due to their design, we’re responding to these smart cylinders in a way that stands in stark contrast to how we received THE ARRIVAL OF A TRAIN. Instead of panicking and running out of the room, we’re placidly deploying these instruments of contextual collapse into our most intimate environments. What does the possibility of inadvertent broadcast do to our ability to speak frankly with each other, to rage with anger, to say sweet, corny things to each other, to share with our kids the naughty delight of “pull-my-finger” jokes?

Those panicky Parisians of 1896 would run out of the theater to a perfectly ordinary street, no threatening locomotive in sight. I bet they initially felt like fools. Soon enough, the novelty would pass; eventually, they’d be able to sit through — and enjoy — much longer, more exciting film entertainments. What about us? Is panic merited when we discover our rooms have ears and that others can listen to anything we say? Will we be able to run out of these rooms? How will we know?

Information Architecture as MacGuffin

SALLAH: Indy, you have no time. If you still want the ark, it is being loaded onto a truck for Cairo.
INDIANA: Truck? What truck?

This exchange from RAIDERS OF THE LOST ARK (1981) leads to one of the most thrilling car chases in movie history, in which our hero, Indiana Jones, fights his way onto the vehicle mentioned above. Onboard the truck is the Ark of the Covenant, which Nazis are trying to smuggle out of Egypt so their boss — Adolf Hitler — can use it to take over the world.

Sounds like a pretty important thing, right? Well, it isn’t. (Spoiler alert!) By the end of the movie, the crated ark is wheeled into a nondescript government warehouse packed with similar crates as far as the eye can see. The implication: this thing, which we’ve just spent a couple of hours obsessing about, will soon be forgotten — as it should be. You don’t want the audience to go home thinking about the implications of having something as powerful as the ark out and about in the world.


Places Are Making You Stupid

There are great tacos in the San Francisco Bay Area. My family and I are lucky to live near a small restaurant that makes good ones. It’s run by a family who knows what they’re doing when it comes to tacos. They also know what they’re doing when it comes to pricing, hospitality, and ambiance, so the place is always packed. It’s one of our favorite restaurants. Alas, as good as the tacos are, I have a beef with the place: it makes us stupid.

You see, one of the things about this restaurant that makes it popular is its cornice lined with televisions, always tuned to soccer matches. This feature of the place makes it difficult for my family to do what we want to do when we hang out: focus on each other. I’m a middle-aged man, and I find it difficult to keep my gaze from wandering to the screens. For my young children, it’s almost impossible. As a result, our conversations in this place seldom get deep; they’re jagged and scattered. (Until the food arrives — then conversation stops altogether. They are good tacos.)

You could say it’s not a big deal. We’re not at the taco place to do anything “mission critical,” right? But what if we are? What if we miss an opportunity to do a small kindness for each other, or fail to mention something that matters a great deal? (Or worse — what if we do say it but the other person misses it because somebody just scored a goal?) These little moments are the stuff our relationships — our lives — are made of. And this place snatches them from us. Its unstated policy is that the tribal experience of organized sports matters more than the experience of an intimate conversation.

Still, we’ve made a conscious decision to be there. Sometimes we’re not given a choice. For example, a friend of mine always complains about having to work in an open office “cube farm” where her co-workers make constant noises that destroy her concentration. The quality of her work in that environment is different than it’d be in a place that allowed her greater control over her attention. She can’t help but work there, and her work suffers. I, on the other hand, can choose where to work. I’m writing these words in my local public library. I find it easier to work here; the arrangement of furniture, the levels of light, the silence — all are conducive to helping get into a state of flow with my writing. This place is the converse of the taco restaurant or the open plan office: it makes me smarter.

So places can either augment or degrade your cognitive abilities. Some physical environments — such as the taco place — don’t let you do much about it; a quality conversation requires you to go elsewhere. In a noisy cube farm, you can shield your attention by putting on noise-isolation earphones. (Suggestion: Philip Glass’s Music in Twelve Parts.) Other places, like the library, augment some abilities (thinking, reading, writing) but not others (conversing).

You can improve your cognitive abilities by re-configuring your physical environment — or moving altogether. That said, it’s worth noting that if you’re like most of us you’re also subject to interruptions from your electronic devices. Often, the configuration of these information environments will have as much of an impact on your performance as the configuration of your physical environments. So for a quick cognitive boost when you need to get things done, switch your devices to “do not disturb” mode. It’ll make you smarter, wherever you are.

I Fight for the Balance

Hang around long enough with UX designers, and you’ll hear someone say it: “I’m an advocate for the users.” If the designer is especially nerdy, she’ll quote Tron: “I fight for the users.” She’ll go on to explain she’s the one who brings the users’ voice into “the room.” (A euphemism to describe the project team.)

This is an alluring stance for designers to take. (I know — I’ve said it myself earlier in my career.) For one thing, it sounds heroic. (Again, cue the image of Tron holding his disc over his head, ready to sacrifice himself for what is just and good and true.) For another, it clarifies designers’ position vis-a-vis the tough decisions ahead. Or so they think.

© Disney

As compelling as it may be, “I fight for the user” is a misguided position for designers to adopt. Yes, it’s important to consider the needs and expectations of the people who use the organization’s products and services. But user needs aren’t the only forces that influence design.

The subtext to “I fight for the user” is that in this context (in “the room”) the user needs a feisty advocate — perhaps because others don’t care. This sets up a false duality: if I’m here for the user, you’re here for other reasons: making money, saving money, reducing call center volume, etc.

This framing isn’t healthy. Everyone should come to the room with the understanding that user needs will be important. It’s table stakes. If this attitude is not present from the start, then the designer should strive to bring it into the room — but as a way of building alignment with colleagues, not drawing distinctions between them.

So if designers aren’t in the room to “fight for the user,” what are they there for? Designers are there to move the project towards alignment between forces that could otherwise pull it apart. These forces include (but aren’t limited to):

  • Deadlines
  • Budgetary constraints
  • Regulatory/legal constraints
  • Production constraints
  • Business goals
  • Customer needs
  • User needs
  • Social needs

Striking the correct balance between these forces requires understanding their relative importance, which varies from project to project. (For example, healthcare projects have different regulatory constraints than those in entertainment.)

The team may get the initial balance wrong. That’s why we test prototypes in real-world conditions: We establish feedback loops that move the product or service towards ever-better fit with its context or market. Design’s role in this process is making the possible tangible, progressively moving from abstraction to concreteness as the team iterates through increasingly better prototypes.

Eventually, the product or service will be in good enough shape to put into production. Design’s role then shifts to translating the intended direction into artifacts that guide the people who will build it. This requires understanding what developers need to do their work effectively. (It’s worth noting that this doesn’t need to happen in a strictly sequential “waterfall” manner.)

Shepherding this process calls for clarity and nuance. Good designers understand the relevance and directionality of all the forces shaping the project. User needs are an essential force, but not the only one. To pretend otherwise is to do a disservice to ourselves, our organizations, and design itself.

Designing for “Smart” Agents Among Us

Earlier this week, Google demonstrated Duplex, an astonishing advance in human-computer interaction. If you haven’t seen the relevant part of Sundar Pichai’s presentation, please watch it now:

If you understand how computers work, you’ll know how difficult it is for computers to do what Duplex is doing in this video. The system seems to be forming accurate models of the evolving contexts it’s participating in. It also shows nuance in communicating back to its human interlocutors, injecting “ums” and “ahs” at the right moments. Again, all of this is very difficult. (That’s why the audience laughs at several points in the presentation; they know how improbable this thing they’re hearing is.)

It’s worth noting this demonstration doesn’t suggest an artificial general intelligence like HAL 9000 or C-3PO; Duplex seems to be modeling a relatively narrow area of human interaction. (Namely, making an appointment.) Still, the system sounds convincingly human, and that raises deep questions. The (human) interlocutors seem not to be aware that they’re talking to an artificial entity. Are they being manipulated? (Google has said the system will be transparent to the people who interact with it, but this didn’t come across in the demonstration.) What would widespread availability of such technology do to relations between human beings, to our ability to empathize with others? What would it do to social inequity?

We’re not far from the day when interactions with convincingly-sounding artificial agents are commonplace. We will both deploy agents to do our bidding, and interact with agents that have been deployed by others to do theirs. Both scenarios will play out in information environments. What affordances and signifiers are required? How will we balance transparency and seamlessness? (And how will this balance evolve as we become accustomed to engaging with these agents?) How will we structure the information environments where these human-agent encounters happen so they augment (rather than erode) human-human interactions? How will we know such erosion isn’t happening? Interesting challenges ahead.