Prototypes and the Used Universe

The first Star Wars movie—now known as EPISODE IV: A NEW HOPE—came out in 1977. It was a blockbuster, with crowds lining up for blocks to see it. Part of its success was due to its mythologically sound story. But its aesthetic was also an essential element in its popularity. Two elements in particular stand out: its excellent (for the time) special effects and the richness of its environments. I’m particularly interested in the second of these.

Before A NEW HOPE, most “space” movies looked “new”; their props and ships and clothes all looked clean and “modern.” Think of the most artistically successful pre-Star Wars space movie—2001: A SPACE ODYSSEY—and its antiseptic “NASA” aesthetic. Star Wars didn’t look clean; it looked crufty. Its sets, costumes, and props looked as though they’d been around for a long time. The movie’s creator, George Lucas, described it as a “used universe.”

Take a look at C-3PO, one of the two robots at the center of the movie:

Image: starwars.com

Even though he’s golden and reflective, the filmmakers covered him in dust and oil. The grime suggests there’s depth there. For example, the streaks running down his chest suggest something about how he’s built. They help suspend our disbelief; we no longer think we’re looking at a thin man inside an uncomfortable costume, but a machine that’s leaking oil from its chest. Applying this bit of makeup to the costume was probably cheap—certainly much less expensive than actually building a functioning android.

I love this idea of adding depth to an artifact by touching it up with superficial details. When designing a prototype, you usually want to explore and convey specific ideas. The focus of the prototype should be on those. But paying attention to small details can give it depth, making it easier for users to believe in the world the prototype creates.

For example, the system you’re prototyping may include the concept of user accounts. It’s relatively common functionality; many people will be familiar with how account management features work. You don’t need to build out the parts of the prototype that give users access to those features; the mere presence of a strategically placed menu can suggest that they exist. Another example is notifications, something else that people have experienced in other systems. While notification features may not be the central idea you’re exploring with the prototype, hinting at them can add depth and realism to the prototype.

Creating a “used universe” prototype calls for balance. You don’t want to go overboard with this stuff, lest it distract users from the main ideas the prototype is exploring. That said, little details can go a long way towards making the prototype more believable—allowing testers to really “get into it”—which is what you want when they’re interacting with it.

Designing for the Brilliant Cacophony

Mike Monteiro writing for the Adobe Blog:

When I was a little baby designer I was taught that good design meant simplifying. Keep it clean. Keep it simple. Make the system as efficient as possible. As few templates as possible. I’m sure the same goes for setting up style sheets, servers, and all that other shit we do. My city would run more efficiently if we simplified everything.

But I wouldn’t want to live there.

My city is a mess. My country is a mess. The internet is a mess. But in none of those cases is the answer to look for efficiencies, but rather to celebrate the differences. Celebrate the reasons the metro stops aren’t all the same. Celebrate the crooked streets. Celebrate the different voices. Celebrate the different food smells. Understand that other people like things you don’t. And you might like things they don’t. And it’s all cool! That’s what makes this city, and all cities, a blast. And when all these amazing people, some of them who we don’t understand at all, go online they are going to behave as inefficiently in there as they do out there. And that is awesome.

And your job, the glorious job you signed up for when you said you wanted to be a designer, is to support all of these people. Make sure none of these incredible voices get lost. And to fight against those who see that brilliant cacophony as a bug and not the greatest feature of all time.

You are our protection against monsters.

The call for diversity resonates with me. (It’s the subject of the keynote I’ll be delivering at World IA Day 2019.) Being aware of the distinctions we are creating (or perpetuating) is particularly important for designers who are working on the information architecture of these systems, since the structures we create tend to be longer-lived than other parts of the information environment.

That said, it’s impossible for the systems we create—and the structures that underlie them—to represent every point of view. Designers must make choices; we must take positions. How do we determine what voices to heed among the cacophony? In order to know, we must ask another set of questions: what is this information environment ultimately in service to? What am I in service to? Are the two aligned?

Who Do Designers Really Work For?

Intentional Computing

Thanks to the generosity of my friend Alex Baumgardt—who gifted me a functioning logic board—yesterday I brought my old Mac SE/30 back to life. My kids spent an hour or so exploring old games on its 9-inch monochrome screen while I reminisced about the days when that Mac was my primary computing experience. (My daughter Julia is smitten with Zork; I’m giddy.)

The kids had lots of questions.

“Does it have color?” No, it only has black and white.

“Does it have sound?” It used to. Gotta look into that.

“Does it play [current game]?” No, alas.

“Was it expensive?” In its day, it was very expensive.

“Does it ‘do’ the internet?” No, this one doesn’t.

An artifact from a different world.

I put my iPhone 8 Plus next to the SE/30. The phone’s screen lit up instantly, as it always does. It’s always on, and always on me. I’ve stopped thinking about using the iPhone as something I do. Instead, it’s become a natural extension of my day-to-day being. I simply take it out of my pocket, sometimes mindlessly.

Using the old Mac, on the other hand, is an intentional act. It’s off most of the time. To turn it on, you must flip a large mechanical switch on its back. It makes a loud, satisfying “thunk!” Various noises follow: a fan spinning up, the faint chirping of the disk drive. Then the “happy Mac” icon on the screen. A little world coming to life. Eventually, a folder appears showing the software available on the system. There’s not much there; a few games, a paint program, perhaps a text editor. No web browser, of course. (Although this particular Mac once had Netscape installed on it; I’d use it to browse the early web through a dial-up modem.)

“What do I want to do now?” isn’t a question I ever asked of this system. If I’d gone through the trouble of turning it on, it was because there was something I needed to do: work on a history paper, sequence some music, create an architectural model. (Yes, on the 9-inch screen! Good times.) A more intentional—a more mindful—way of computing. Closer to using a fine tool than a television.

I’m writing this in Ulysses’s “distraction-free” mode. Many text editors today have a similar feature: a way of forcing our always-on, always-connected, always-beckoning devices into something that works more like an SE/30. But what I’m talking about here is more than cutting out distractions; it’s about a different conception of the work and of the tools used to do it. It’s about computing as a discrete activity: something with a beginning, an end, a goal, and no possibility of meandering onto random destinations. As wonderful as the iPhone is (and it is a technological wonder), revisiting this 30-year-old computer made me think George R.R. Martin may be onto something.

Folder-centric to App-centric Workflows

Yesterday was a busy day that had me shuttling between Berkeley, Oakland, and San Francisco. On days like these, I prefer to work from my iPad (as opposed to a traditional laptop computer). The iPad takes up less space, which makes it easier to use in cramped public transport. It also has an LTE modem, so I can remain connected to the internet when I’m out and about. Its smaller screen also encourages focus, which helps in distracting environments. I love it, and on days like these, I wonder when I’ll be able to do most of my work from an iPad.

That said, working from the iPad requires that I shift how I think about the structure of my work. I’ve written before about how I keep all my project materials organized using folders in the file system of my Mac. While iOS includes a Files app that allows interacting with such file structures, the system encourages an app-centric (rather than project-centric) way of working. Rather than thinking “I’m now working on project x, and all the stuff for project x is in this folder,” context switching calls for remembering which app I was working in: “I was editing the document for project x in Google Docs; hence I must open Google Docs.”

Many of the productivity apps in iOS allow for arbitrary document groupings. Hence, I find myself replicating my file structure in the various apps. I end up with a project x folder in Google Drive, another in Pages, another in Keynote, another in OneNote, etc. This adds to my workload and requires that I keep track of which app I used for what. I find it a less natural way of working than keeping everything grouped in a single folder. It’s one of the challenges of working in iOS that I’m continually looking to overcome.
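For what it’s worth, the folder-centric structure I keep on the Mac can be sketched in a few lines of code (the project and folder names here are hypothetical, just for illustration):

```python
# Hypothetical sketch of a folder-centric project structure:
# everything for "project-x" lives under one folder, regardless of
# which app produced each document.
from pathlib import Path

project = Path("project-x")  # hypothetical project name
for sub in ("documents", "presentations", "notes"):
    (project / sub).mkdir(parents=True, exist_ok=True)

# Context switching is then just "open the project folder."
print(sorted(p.name for p in project.iterdir()))
# → ['documents', 'notes', 'presentations']
```

The point is that the folder—not the app—is the unit of context; the app-centric model forces you to rebuild this grouping inside each application.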

New Keynote: “Designing Distinctions”

I’ve been invited to deliver the closing keynote at World Information Architecture Day Switzerland 2019, which will happen in Zurich in February. (You can sign up here.) The conference’s theme of “Design for Difference” prompted me to work on a new presentation, which I’m calling “Designing Distinctions.” This is the description:

Information architects design distinctions. We categorize things for a living—that is, we set off concepts against each other to make it easier for people to “find their personal paths to knowledge.”

As software “eats the world,” the distinctions we create in information environments grow ever more powerful. They come to frame how people understand themselves, their contexts, and the relationship between the two. As a result, information architects have greater responsibility today than ever before. We must strive to create systems that establish useful distinctions.

This presentation explores the tensions inherent in making distinctions. What are the responsibilities for professional distinction-makers in a world in which the effects of their work have greater impact than ever before? How might information architecture lead to healthier societies in the long-term?

I’ll be working on this talk over the next few weeks, and am curious what you think about the subject. What thoughts does it spark? Any concerns/areas you think I should cover? Books or blogs I should be reading on the subject? Please send me a note to let me know.

Five Books I Enjoyed in 2018

I’ve previously posted lists of books I’ve liked during the year. I usually do this close to the New Year, since I’ll often get through a couple of additional books during the holidays. However, a recent “books I loved this year” post by Bill Gates made me realize that it may be better to share these lists before the holiday season—that way they can serve as gift ideas. (Either for yourself or others.)

In any case, here are five books I enjoyed this year, and that you and/or your friends may like. (They didn’t necessarily come out in 2018—that’s just when I got to them.)

Factfulness, by Hans Rosling, Ola Rosling, and Anna Rosling Rönnlund. Like many people, I first heard about Hans Rosling via his popular TED talk, where he showed evidence the world is getting better by using animated bubble charts. Factfulness is like a paper-based version of that presentation: It does, indeed, use data to explain how things are getting better. But it does more than that: It also explains why we find that so hard to believe. Read my book notes or buy it on Amazon.com.

Architectural Intelligence, by Molly Wright Steenson. A masterful examination of how architectural thinking and doing have shaped our current information environments. The book focuses on the work of four influential architects: Christopher Alexander, Richard Saul Wurman, Cedric Price, and Nicholas Negroponte. Read my book notes or buy it on Amazon.com.

Playing to Win, by A.G. Lafley and Roger L. Martin. Excellent book on corporate strategy, and one of the clearest and most compelling business books I’ve read. The authors are both experienced and respected business leaders with proven track records. (Mr. Martin is a former dean of the Rotman School of Management, and Mr. Lafley a former CEO of Procter & Gamble.) Buy it on Amazon.com.

Radical Candor, by Kim Scott. Even though I haven’t been anybody’s boss in a long time, I found this book very valuable. It’s about how to be more effective in team environments by being sincere and firm yet kind. Ms. Scott is a former manager at several high-profile Silicon Valley companies (e.g., Apple and Google), and the book is packed with real-world examples. Buy it on Amazon.com.

Lincoln in the Bardo, by George Saunders. I don’t read much fiction (not as much as I’d like, anyway), and when I do it’s usually as an audiobook. George Saunders’s debut novel is one you wouldn’t expect to work well in the medium (it features 166 narrators!), and is somewhat disorienting at first. But after a bit, I couldn’t stop listening. It still haunts me. Buy it on Amazon.com.

Wikipedia as Information Infrastructure

Wikipedia is more than a publication. As I point out in Living in Information, Wikipedia is also the place where that publication is created. At its scale, it couldn’t happen otherwise. But Wikipedia is more than that: increasingly, it’s also a key part of our society’s information infrastructure. Other systems rely on it for the “authoritative” versions of particular concepts.

This works well most of the time. But it’s not perfect, and can lead to weird, unexpected consequences. For example, a Wikipedia entry is part of the reason why Google says I’m dead. More recently, a Wikipedia hack led to Siri showing a photo of a penis whenever a user asked about Donald Trump. While the former example is probably due to bad algorithms on Google’s part, the latter seems to be a fault with Wikipedia’s security mechanisms.

The people who manage Wikipedia are in an interesting situation. Over time they’ve created a fantastic system that allows for the efficient creation of organized content from the bottom up at tremendous scale. They’ve been incredibly successful. Alas, with success comes visibility and influence. The more systems depend on Wikipedia content, the more of a target it becomes for malicious actors.

This will require that the team re-think some of the openness and flexibility of the system in favor of more top-down control. How will this scale? Who will have a say on content decisions? How will Wikipedia’s governance structures evolve? These discussions are playing out right now. Wikipedia is a harbinger of future large-scale generative information environments, so it behooves us all to follow along.

The Eponymous Laws of Tech

Dave Rupert has a great compendium of “Laws” we frequently encounter when working in tech. This includes well-known concepts like Moore’s Law, Godwin’s Law, and Dunbar’s Number alongside some I hadn’t heard of before, such as Tesler’s Law:

"Every application must have an inherent amount of irreducible complexity. The only question is who will have to deal with it."
Tesler’s Law, or the “Law of Conservation of Complexity,” explains that not every piece of complexity can be hidden or removed. Complexity doesn’t always disappear; it’s often just passed around. Businesses need to fix these complex designs, or that complexity is passed on to the user. Complex things are hard for users. One minute wasted by one million users is a lot of time, whereas it probably would have taken only a fraction of those minutes to fix the complexity. I cringe thinking about the parts of my products that waste users’ time by either being brokenly complex or by having unclear interactions.

Good to know!
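The arithmetic in the quote is worth making concrete. A quick back-of-the-envelope calculation (the numbers are the quote’s, not mine):

```python
# One minute wasted by each of a million users, expressed as person-time.
users = 1_000_000
wasted_minutes = users * 1  # one wasted minute per user

hours = wasted_minutes / 60
days = hours / 24
print(f"{wasted_minutes:,} minutes ≈ {hours:,.0f} hours ≈ {days:,.0f} person-days")
# → 1,000,000 minutes ≈ 16,667 hours ≈ 694 person-days
```

Nearly two person-years of cumulative time, from a single wasted minute—which is the point of the law: the complexity doesn’t vanish, it just lands on whoever wasn’t spared it.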
