Tackling New Challenges

When starting something new, you either know what steps are required to bring the undertaking to life, or you don’t. The former is the case when you’ve done something similar before. Let’s say you’re a designer who’s just started designing a feature for a financial services system. If you’ve worked on a similar system in the past, you’ll have expectations about what you (and others) should do and in what sequence. You’ll mainly be looking for the places where your new project diverges from the patterns you’ve picked up through prior experience.

Other undertakings may be new to you but have been well documented by others. Perhaps you’ve never designed the specific type of feature this financial services system requires, but other people have. You can ask them, or read about it. It’ll take you a bit more time to get up to speed with such a project than if you had previous experience with something like it, but at least you have a framework to build on. Your challenge will be not just spotting instances where the project at hand varies from the pattern, but also understanding the pattern itself.

Still another class of undertaking is entirely new both to you and to others. This is obviously a greater challenge than either of the two previous classes: you’ll be grappling not only with the content and context of the challenge but also with the frameworks that inform them. You may even have to invent frameworks, along with mechanisms to keep them updated, which requires understanding what goals they serve. But perhaps even the goals are unclear, and all you have to go on is a hunch. Scary stuff, especially if you’re committing resources to the project.

While this last class of challenges is rare, it can lead to breakthroughs. When facing such a challenge, I try to look for frameworks I can leverage from other fields. (Early in my career, I used what I’d learned in architecture school to design websites.) The work will diverge fairly quickly as the specific character of the new challenge becomes evident, but starting with a dummy framework offers a point of departure and makes the undertaking less scary.

Book Notes: “Playing to Win”

Playing to Win: How Strategy Really Works
By A.G. Lafley and Roger L. Martin
Harvard Business Review Press, 2013

You can’t successfully design something as complex as an information environment if you’re not clear on the strategic direction it seeks to support. Unfortunately, the subject of strategy can be hard for designers to grasp, perhaps because people often explain it only at a very high level of abstraction.

That’s why Playing to Win is one of my favorite books on business strategy: it makes the subject concrete. The authors’ backgrounds strike the right balance between theory and practice: A.G. Lafley is a former CEO of Procter & Gamble, and Roger L. Martin was dean of the Rotman School of Management in Toronto. Together they crafted strategies that helped P&G win in several markets, and the book is chock-full of case studies.

So what is strategy, according to the authors?

Cybernetics and Planned Economies

Great New Yorker story on Project Cybersyn, Salvador Allende’s failed effort to create a cybernetic system to centralize control of Chile’s economy. Stafford Beer—an important figure in the cybernetics world—consulted on the design of the system. He left Chile before Allende was overthrown in the coup that led to the Pinochet dictatorship. But Beer had become disillusioned with the project before then:

One of the participating engineers described the factory modelling process as “fairly technocratic” and “top down”—it did not involve “speaking to the guy who was actually working on the mill or the spinning machine.” Frustrated with the growing bureaucratization of Project Cybersyn, Beer considered resigning. “If we wanted a new system of government, then it seems that we are not going to get it,” he wrote to his Chilean colleagues that spring. “The team is falling apart, and descending to personal recrimination.” Confined to the language of cybernetics, Beer didn’t know what to do. “I can see no way of practical change that does not very quickly damage the Chilean bureaucracy beyond repair,” he wrote.

This doesn’t surprise me. While I can see the allure of using computer technology to gather real-time feedback on the state of an economy, the control aspect of the Cybersyn project eludes me—especially given the enormous practical and technical constraints the team was facing. What would lead intelligent people to think they could 1) build an accurate model of an entire country’s economy, 2) keep it flexible enough to adapt to the changes that invariably happen in such a complex system, 3) control it, 4) all using early 1970s computing technology? Hubris? Naïveté? Wishful thinking? More to the point, what lessons does this effort hold for today’s world, which we are wiring up with feedback and control mechanisms Cybersyn’s designers could only dream of? (I’ve long had the book Cybernetic Revolutionaries, which deals with the project’s history, on my reading queue. This New Yorker story has nudged me to move it up a couple of places in the line.)

The Planning Machine: Project Cybersyn and the origins of the Big Data nation

Prototypes and the Used Universe

The first Star Wars movie—now known as EPISODE IV: A NEW HOPE—came out in 1977. It was a blockbuster, with crowds lining up for blocks to see it. Part of its success was due to its mythologically sound story. But its aesthetic was also essential to its popularity. Two elements in particular stand out: its excellent (for the time) special effects and the richness of its environments. I’m particularly interested in the second of these.

Before A NEW HOPE, most “space” movies looked “new”; their props and ships and clothes all looked clean and “modern.” Think of the most artistically successful pre-Star Wars space movie—2001: A SPACE ODYSSEY—and its antiseptic “NASA” aesthetic. Star Wars didn’t look clean; it looked crufty. Its sets, costumes, and props looked as though they’d been around for a long time. The movie’s creator, George Lucas, described it as a “used universe.”

Take a look at C-3PO, one of the two robots at the center of the movie:

Image: starwars.com

Even though he’s golden and reflective, the filmmakers covered him in dust and oil. The grime suggests there’s depth there. For example, the streaks running down his chest suggest something about how he’s built. They help suspend our disbelief; we no longer think we’re looking at a thin man inside an uncomfortable costume, but at a machine that’s leaking oil from its chest. Applying this bit of makeup to the costume was probably cheap—certainly much less expensive than actually building a functioning android.

I love this idea of adding depth to an artifact by touching it up with superficial details. When designing a prototype, you usually want to explore and convey specific ideas. The focus of the prototype should be on those. But paying attention to small details can give it depth, making it easier for users to believe in the world the prototype creates.

For example, the system you’re prototyping may include the concept of user accounts. It’s relatively common functionality; many people will be familiar with how account management features work. You don’t need to build out the parts of the prototype that give users access to those features; the mere presence of a strategically placed menu can suggest that they exist. Another example is notifications, something else that people have experienced in other systems. While notification features may not be the central idea you’re exploring with the prototype, hinting at them can add depth and realism to the prototype.
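To make this concrete, here’s a minimal sketch of what such a stub might look like, assuming a React-based prototype. Everything in it is hypothetical (the component name, the user, the menu items), and none of it needs to work. Like C-3PO’s oil streaks, it’s cheap surface detail that suggests functionality you don’t have to build:

```tsx
// A purely decorative account-menu stub. Nothing here works;
// the items exist only to suggest that account management and
// notification features live somewhere behind this menu.
import React from "react";

export function AccountMenuStub() {
  return (
    <nav aria-label="Account">
      <button type="button">Jane Doe ▾</button>
      <ul>
        <li>Profile</li>
        <li>Notifications (3)</li> {/* hints at a notifications feature */}
        <li>Settings</li>
        <li>Sign out</li>
      </ul>
    </nav>
  );
}
```

Dropped into a corner of the screen, a stub like this lets testers believe accounts and notifications exist without you having to build either.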

Creating a “used universe” prototype calls for balance. You don’t want to go overboard with this stuff, lest it distract users from the main ideas the prototype is exploring. That said, little details can go a long way towards making the prototype more believable—allowing testers to really “get into it”—which is what you want when they’re interacting with it.

Designing for the Brilliant Cacophony

Mike Monteiro writing for the Adobe Blog:

When I was a little baby designer I was taught that good design meant simplifying. Keep it clean. Keep it simple. Make the system as efficient as possible. As few templates as possible. I’m sure the same goes for setting up style sheets, servers, and all that other shit we do. My city would run more efficiently if we simplified everything.

But I wouldn’t want to live there.

My city is a mess. My country is a mess. The internet is a mess. But in none of those cases is the answer to look for efficiencies, but rather to celebrate the differences. Celebrate the reasons the metro stops aren’t all the same. Celebrate the crooked streets. Celebrate the different voices. Celebrate the different food smells. Understand that other people like things you don’t. And you might like things they don’t. And it’s all cool! That’s what makes this city, and all cities, a blast. And when all these amazing people, some of them who we don’t understand at all, go online they are going to behave as inefficiently in there as they do out there. And that is awesome.

And your job, the glorious job you signed up for when you said you wanted to be a designer, is to support all of these people. Make sure none of these incredible voices get lost. And to fight against those who see that brilliant cacophony as a bug and not the greatest feature of all time.

You are our protection against monsters.

The call for diversity resonates with me. (It’s the subject of the keynote I’ll be delivering at World IA Day 2019.) Being aware of the distinctions we are creating (or perpetuating) is particularly important for designers who are working on the information architecture of these systems, since the structures we create tend to be longer-lived than other parts of the information environment.

That said, it’s impossible for the systems we create—and the structures that underlie them—to represent every point of view. Designers must make choices; we must take positions. How do we determine which voices to heed among the cacophony? To answer that, we must ask another set of questions: What is this information environment ultimately in service to? What am I in service to? Are the two aligned?

Who Do Designers Really Work For?

Intentional Computing

Thanks to the generosity of my friend Alex Baumgardt—who gifted me a functioning logic board—yesterday I brought my old Mac SE/30 back to life. My kids spent an hour or so exploring old games on its 9-inch monochrome screen while I reminisced about the days when that Mac was my primary computing experience. (My daughter Julia is smitten with Zork; I’m giddy.)

The kids had lots of questions.

“Does it have color?” No, it only has black and white.

“Does it have sound?” It used to. Gotta look into that.

“Does it play [current game]?” No, alas.

“Was it expensive?” In its day, it was very expensive.

“Does it ‘do’ the internet?” No, this one doesn’t.

An artifact from a different world.

I put my iPhone 8 Plus next to the SE/30. The phone’s screen lit up instantly, as it always does. It’s always on, and always on me. I’ve stopped thinking of using the iPhone as something I do. Instead, it’s become a natural extension of my day-to-day being. I simply take it out of my pocket, sometimes mindlessly.

Using the old Mac, on the other hand, is an intentional act. It’s off most of the time. To turn it on, you must flip a large mechanical switch on its back. It makes a loud, satisfying “thunk!” Various noises follow: a fan spinning up, the faint chirping of the disk drive. Then the “happy Mac” icon on the screen. A little world coming to life. Eventually, a folder appears showing the software available on the system. There’s not much there: a few games, a paint program, perhaps a text editor. No web browser, of course. (Although this particular Mac once had Netscape installed on it; I’d use it to browse the early web through a dial-up modem.)

“What do I want to do now?” isn’t a question I ever asked of this system. If I’d gone to the trouble of turning it on, it was because there was something I needed to do: work on a history paper, sequence some music, create an architectural model. (Yes, on the 9-inch screen! Good times.) A more intentional—a more mindful—way of computing. Closer to using a fine tool than a television.

I’m writing this in Ulysses’s “distraction-free” mode. Many text editors today have a similar feature: a way of forcing our always-on, always-connected, always-beckoning devices into something that works more like an SE/30. But what I’m talking about here is more than cutting out distractions; it’s a different conception of the work and of the tools used to do it. It’s about computing as a discrete activity: something with a beginning, an end, and a goal, with no possibility of meandering off to random destinations. As wonderful as the iPhone is (and it is a technological wonder), revisiting this 30-year-old computer made me think George R.R. Martin (who famously writes his novels on an internet-free DOS machine) may be onto something.

Folder-centric to App-centric Workflows

Yesterday was a busy day that had me shuttling between Berkeley, Oakland, and San Francisco. On days like these, I prefer to work from my iPad (as opposed to a traditional laptop computer). The iPad takes up less space, which makes it easier to use in cramped public transport. It also has an LTE modem, so I can remain connected to the internet when I’m out and about. Its smaller screen also encourages focus, which helps in distracting environments. I love it, and on days like these I wonder when I’ll be able to do most of my work from an iPad.

That said, working from the iPad requires that I shift how I think about the structure of my work. I’ve written before about how I keep all my project materials organized using folders in the file system of my Mac. While iOS includes a Files app that allows interacting with such file structures, the system encourages an app-centric (rather than project-centric) way of working. Rather than thinking “I’m now working on project x, and all the stuff for project x is in this folder,” context switching calls for remembering which app I was working in: “I was editing the document for project x in Google Docs; hence I must open Google Docs.”

Many of the productivity apps in iOS allow for arbitrary document groupings. Hence, I find myself replicating my file structure in the various apps: I end up with a project x folder in Google Drive, another in Pages, another in Keynote, another in OneNote, etc. This adds to my workload and requires that I keep track of which app I used for what. I find it a less natural way of working than keeping everything grouped in a single folder. It’s one of the challenges of working in iOS that I’m continually looking to overcome.

New Keynote: “Designing Distinctions”

I’ve been invited to deliver the closing keynote at World Information Architecture Day Switzerland 2019, which will happen in Zurich in February. (You can sign up here.) The conference’s theme of “Design for Difference” prompted me to work on a new presentation, which I’m calling “Designing Distinctions.” This is the description:

Information architects design distinctions. We categorize things for a living—that is, we set off concepts against each other to make it easier for people to “find their personal paths to knowledge.”

As software “eats the world,” the distinctions we create in information environments grow ever more powerful. They come to frame how people understand themselves, their contexts, and the relationship between the two. As a result, information architects have greater responsibility today than ever before. We must strive to create systems that establish useful distinctions.

This presentation explores the tensions inherent in making distinctions. What are the responsibilities of professional distinction-makers in a world where their work has greater impact than ever before? How might information architecture lead to healthier societies in the long term?

I’ll be working on this talk over the next few weeks, and I’m curious to hear what you think about the subject. What thoughts does it spark? Any concerns or areas you think I should cover? Books or blogs I should be reading? Please send me a note to let me know.