Designing for the Brilliant Cacophony

Mike Monteiro writing for the Adobe Blog:

When I was a little baby designer I was taught that good design meant simplifying. Keep it clean. Keep it simple. Make the system as efficient as possible. As few templates as possible. I’m sure the same goes for setting up style sheets, servers, and all that other shit we do. My city would run more efficiently if we simplified everything.

But I wouldn’t want to live there.

My city is a mess. My country is a mess. The internet is a mess. But in none of those cases is the answer to look for efficiencies, but rather to celebrate the differences. Celebrate the reasons the metro stops aren’t all the same. Celebrate the crooked streets. Celebrate the different voices. Celebrate the different food smells. Understand that other people like things you don’t. And you might like things they don’t. And it’s all cool! That’s what makes this city, and all cities, a blast. And when all these amazing people, some of them who we don’t understand at all, go online they are going to behave as inefficiently in there as they do out there. And that is awesome.

And your job, the glorious job you signed up for when you said you wanted to be a designer, is to support all of these people. Make sure none of these incredible voices get lost. And to fight against those who see that brilliant cacophony as a bug and not the greatest feature of all time.

You are our protection against monsters.

The call for diversity resonates with me. (It’s the subject of the keynote I’ll be delivering at World IA Day 2019.) Being aware of the distinctions we are creating (or perpetuating) is particularly important for designers who are working on the information architecture of these systems, since the structures we create tend to be longer-lived than other parts of the information environment.

That said, it’s impossible for the systems we create—and the structures that underlie them—to represent every point of view. Designers must make choices; we must take positions. How do we determine what voices to heed among the cacophony? In order to know, we must ask another set of questions: what is this information environment ultimately in service to? What am I in service to? Are the two aligned?

Who Do Designers Really Work For

Folder-centric to App-centric Workflows

Yesterday I had a busy day that had me shuttling between Berkeley, Oakland, and San Francisco. On days like these, I prefer to work from my iPad (as opposed to a traditional laptop computer). The iPad takes up less space, which makes it easier to use in cramped public transport. It also has an LTE modem, so I can remain connected to the internet when I’m out and about. Its smaller screen also encourages focus, which helps in distracting environments. I love it, and on days like these, I wonder when I’ll be able to do most of my work from an iPad.

That said, working from the iPad requires that I shift how I think about the structure of my work. I’ve written before about how I keep all my project materials organized using folders in the file system of my Mac. While iOS includes a Files app that allows interacting with such file structures, the system encourages an app-centric (rather than project-centric) way of working. Rather than thinking “I’m now working on project x, and all the stuff for project x is in this folder,” context switching calls for remembering what app I was working in: “I was editing the document for project x in Google Docs; hence I must open Google Docs.”

Many of the productivity apps in iOS allow for arbitrary document groupings. Hence, I find myself replicating my file structure in the various apps. I end up with a project x folder in Google Drive, another in Pages, another in Keynote, another in OneNote, etc. This adds to my workload and requires that I keep track of which app I used for what. I find it a less natural way of working than keeping everything grouped in a single folder. It’s one of the challenges of working in iOS that I’m continually looking to overcome.

Wikipedia as Information Infrastructure

Wikipedia is more than a publication. As I point out in Living in Information, Wikipedia is also the place where this publication is created. At its scale, it couldn’t happen otherwise. But Wikipedia is more than that: increasingly, it’s also a key part of our society’s information infrastructure. Other systems rely on it for the “authoritative” versions of particular concepts.

This works well most of the time. But it’s not perfect, and can lead to weird, unexpected consequences. For example, a Wikipedia entry is part of the reason why Google says I’m dead. More recently, a Wikipedia hack led to Siri showing a photo of a penis whenever a user asked about Donald Trump. While the former example is probably due to bad algorithms on Google’s part, the latter seems to be a fault with Wikipedia’s security mechanisms.

The people who manage Wikipedia are in an interesting situation. Over time they’ve created a fantastic system that allows for the efficient creation of organized content from the bottom up at tremendous scale. They’ve been incredibly successful. Alas, with success comes visibility and influence. The more systems there are that depend on Wikipedia content, the more of a target it becomes for malicious actors.

This will require that the team rethink some of the openness and flexibility of the system in favor of more top-down control. How will this scale? Who will have a say in content decisions? How will Wikipedia’s governance structures evolve? These discussions are playing out right now. Wikipedia is a harbinger of future large-scale generative information environments, so it behooves us all to follow along.

Seeing Clearly

The ultimate use of information is to help you make better decisions. You gather information through your senses; the better your reading of the situation corresponds to what is really there, the better positioned you’ll be to make good decisions.

Some decisions are more consequential than others. I chose to eat fried eggs with Brussels sprouts and chorizo for breakfast today. I could’ve picked something else, and it wouldn’t have mattered much. That’s a low-stakes decision; I’ll get another shot at breakfast tomorrow morning. Others have much higher stakes. Choosing to marry and start a family, for example, forever changes the course of your life.

Ultimately what you choose comes down to how well you understand the feasible options. When I opened the refrigerator this morning, I could see what ingredients were available to me. I also knew how much time I’d have to make breakfast, what utensils were available in the kitchen, and so on. This is information. I could’ve chosen to have a soufflé for breakfast instead, but I’ve never made one before. I would’ve had to go look up a recipe online, go to the supermarket to buy ingredients, block out most of my morning, etc. Fried eggs with Brussels sprouts and chorizo was an easier choice; my senses told me so.


Start With a Structure

Often, one of the biggest obstacles to getting started with something is your canvas’s initial blank state. It may be a white sheet of paper or a blinking cursor in the word processor. You stare at it, not knowing where to begin. When facing these conditions, I often find that adding a bit of structure does the trick. Having a framework frees you from having to pick a place to start. With a skeleton in place, your next step becomes clearer: all you must do is flesh it out.

Here’s an example. Many times in my life, I’ve tried to start a journal. Invariably, I’d sit down at the beginning of the day intent on writing a journal entry. Facing the blank document, I wouldn’t know where to start. What should I write about? For the first few days (while still in the rush of having started a journal), I’d slog through the indecision. But eventually, something would happen—I’d wake up late, or go on a business trip—that would disrupt my routine. Under time pressure, the blank document became too hard an obstacle to overcome. I’d give up on journaling for that one day, and that would set a precedent. Soon I’d give up altogether.
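
One way to put the idea into practice is to automate the skeleton. Here’s a minimal sketch (the prompts, folder name, and create_entry helper are all hypothetical, not something the post prescribes) of a script that pre-populates each day’s entry so the document is never blank:

```python
from datetime import date
from pathlib import Path

# Hypothetical prompts; any fixed skeleton removes the blank-page problem.
TEMPLATE = """# Journal for {today}

## What happened yesterday

## What I'm working on today

## One thing I'm grateful for
"""

def create_entry(journal_dir: str = "journal") -> Path:
    """Create today's journal entry from the template, if it doesn't exist yet."""
    folder = Path(journal_dir)
    folder.mkdir(exist_ok=True)
    entry = folder / f"{date.today().isoformat()}.md"
    if not entry.exists():
        entry.write_text(TEMPLATE.format(today=date.today().isoformat()))
    return entry

if __name__ == "__main__":
    print(create_entry())  # e.g., journal/2019-01-15.md
```

The specifics don’t matter; the point is that the headings give you a place to start, and all you must do is flesh them out.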


Project Focus Mode

For most of my career, I’ve worked on several projects simultaneously at any given time. This means lots of information coming from and going to different people, keeping track of documents and commitments, scheduling meetings, etc. Most of it happens on my computer, which for almost twenty years has been a laptop. (Meaning: it comes with me.) In the past few years, more mobile devices (e.g., iPhone, iPad) have also joined my toolkit. There’s a lot going on in these information environments. Keeping everything organized impacts my effectiveness; the time I spend looking for stuff isn’t valuable to my clients. Early on, I realized that the only way I’d be able to do this would be to develop organization systems and stick to them over time.

For example, I always have a “projects” folder on my computer. Each project I take on gets an individual subfolder in there. These folders use consistent naming schemes. These days it’s usually the client name, followed by a dash, followed by a (brief!) unique project name. Why not per-client folders? At one point I realized I had to strike a balance between depth and breadth: going n folders deep often meant not locating things as quickly. Of course, over time this folder can get crowded. Eventually, I determined the projects folder only needed to contain active projects; I set up a separate “archive” folder to which I move completed project folders.
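
As a concrete illustration, the convention might look like this in code. This is a minimal sketch with made-up client and folder names (not my actual setup), including the step of moving a wrapped-up project into the archive:

```python
from pathlib import Path
import shutil

# Hypothetical layout: active projects live in ~/projects as "<client>-<name>";
# completed projects get moved to a separate ~/archive folder.
PROJECTS = Path.home() / "projects"
ARCHIVE = Path.home() / "archive"

def project_folder(client: str, name: str) -> Path:
    """Create (if needed) and return a project folder named '<client>-<name>'."""
    folder = PROJECTS / f"{client}-{name}"
    folder.mkdir(parents=True, exist_ok=True)
    return folder

def archive_project(client: str, name: str) -> Path:
    """Move a completed project out of the active projects folder."""
    ARCHIVE.mkdir(parents=True, exist_ok=True)
    src = PROJECTS / f"{client}-{name}"
    dest = ARCHIVE / src.name
    shutil.move(str(src), str(dest))
    return dest

# Example: set up a new project, then archive it when it wraps up.
project_folder("acme", "intranet-ia")
# ... project work happens here ...
archive_project("acme", "intranet-ia")
```

The naming scheme keeps the active list flat and scannable, while the archive keeps the projects folder from getting crowded.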


A Bit of Structure Goes a Long Way

One of the most important lessons I learned in architecture school was the power of constraints. I’d always assumed that in creative work, complete freedom leads to better, more interesting results. After all, given more latitude you’re likely to try more things. But this turns out to be wrong.

The problem is twofold. For one, there’s the paralysis that sets in when facing a completely blank canvas. What to do? Where to start? Etc. For another, you never really have total freedom in the first place. All creative endeavors must grapple with constraints. There are time limits, budgets, the physical properties of paper, the force of gravity, the limits of your knowledge, the limits of what your society deems acceptable, and more. All of them narrow the scope of what you can do at any given time. Understanding the constraints that influence the project — and learning how to work creatively with them, rather than against them — is an essential part of learning to be a good designer.


Towards More Adaptive Information Environments

Atul Gawande has published a great piece in The New Yorker on why doctors hate their computers. The reason? Poorly designed software. Specifically, several of the examples in the story point to information architecture issues in the system. These include ambiguous distinctions between parts of the information environment and taxonomies that can be edited globally:

Each patient has a “problem list” with his or her active medical issues, such as difficult-to-control diabetes, early signs of dementia, a chronic heart-valve problem. The list is intended to tell clinicians at a glance what they have to consider when seeing a patient. [Dr. Susan Sadoughi] used to keep the list carefully updated—deleting problems that were no longer relevant, adding details about ones that were. But now everyone across the organization can modify the list, and, she said, “it has become utterly useless.” Three people will list the same diagnosis three different ways. Or an orthopedist will list the same generic symptom for every patient (“pain in leg”), which is sufficient for billing purposes but not useful to colleagues who need to know the specific diagnosis (e.g., “osteoarthritis in the right knee”). Or someone will add “anemia” to the problem list but not have the expertise to record the relevant details; Sadoughi needs to know that it’s “anemia due to iron deficiency, last colonoscopy 2017.” The problem lists have become a hoarder’s stash.

The bottom line? Software is too rigid, too inflexible; it reifies structures (and power dynamics) in ways that slow down already overburdened clinicians. Some problem domains are so complex that trying to design a comprehensive system from the top down is likely to result in an overly complex, overly rigid system that misses important things and doesn’t meet anybody’s needs well.

In the case of medicine (not an atypical one), the users of the system have a degree of expertise and nuance that can’t easily be articulated as a design program. Creating effective information environments to serve these domains calls for more of a bottom-up approach, one that allows the system’s structure to evolve and adapt to fit the needs of its users:

Medicine is a complex adaptive system: it is made up of many interconnected, multilayered parts, and it is meant to evolve with time and changing conditions. Software is not. It is complex, but it does not adapt. That is the heart of the problem for its users, us humans.

Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice—in medicine and in other fields—was all mutation and no selection. There was plenty of room for individuals to do things differently from the norm; everyone could be an innovator. But there was no real mechanism for weeding out bad ideas or practices.

Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn’t wreck some other, distant part of the system.

My take is there’s nothing inherent in software that would keep it from being more adaptive. (The notion of information architectures that are more adaptive and emergent is one of the core ideas in Living in Information.) It’s a problem of design — and information architecture in particular — rather than technology. This article points to the need for designers to think about the object of their work as systems that continuously evolve towards better fitness-to-purpose, and not as monolithic constructs that aim to “get it right” from the start.
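
To make the mutation-and-selection idea concrete in software terms, here’s a toy sketch (entirely hypothetical, not how any EHR or the systems Gawande describes actually work): clinicians record free-text labels (mutation), the system tracks how often each variant is used, and only the variants that enough people actually use get promoted into a shared, suggested vocabulary (selection).

```python
from collections import Counter

class AdaptiveVocabulary:
    """A toy model of a taxonomy that evolves from use.

    Mutation: anyone can record a free-text label.
    Selection: labels used often enough get promoted to the shared,
    suggested vocabulary; rarely used variants fade away.
    """

    def __init__(self, promotion_threshold: int = 3):
        self.usage = Counter()
        self.promotion_threshold = promotion_threshold

    def record(self, label: str) -> None:
        """Record one use of a label (mutation: anything goes)."""
        self.usage[label.strip().lower()] += 1

    def suggested(self) -> list[str]:
        """Return labels that have survived selection, most used first."""
        return [
            label
            for label, count in self.usage.most_common()
            if count >= self.promotion_threshold
        ]

# Example: several ways of recording the same diagnosis compete;
# only the variant that many people actually use gets suggested.
vocab = AdaptiveVocabulary(promotion_threshold=3)
for label in ["pain in leg", "osteoarthritis, right knee",
              "osteoarthritis, right knee", "osteoarthritis, right knee",
              "leg pain"]:
    vocab.record(label)
print(vocab.suggested())  # ['osteoarthritis, right knee']
```

The point isn’t this particular mechanism; it’s that the structure emerges from use and keeps adapting, rather than being fixed up front by a committee.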

Why Doctors Hate Their Computers

Top-down/Bottom-up

One of the most frequent objections I hear about approaching design work more architecturally is that architecture is “top-down.” By this, my interlocutor usually means that architects come to problems with a prescribed solution that they impose onto the situation, in contrast, of course, to a solution that emerges more fluidly from understanding the context and the people served by the thing being designed.

It’s understandable that they’d come to this conclusion, since many of the famous architects people know about produce work that doesn’t look intuitive or contextually relevant. It’s hard to see, for example, how Frank Gehry’s Guggenheim Museum in Bilbao is the result of a user-centered design approach. The worst offender here is perhaps Le Corbusier, whose urban Plan Voisin for Paris would’ve razed large portions of the city in exchange for a dehumanizing grid of skyscrapers.
