Designing for Density and Sequence

In observance of the 50th anniversary of the Apollo 11 Moon landing, I’m reading Mike Collins’s memoir, Carrying the Fire: An Astronaut’s Journeys. I’m loving it. Collins is an engaging writer, and the book is packed with details about the Apollo program and the process of becoming a NASA astronaut in the early 1960s.

When discussing the challenges inherent in the design of Apollo’s cockpits and controls, Collins calls out one I’ve faced when designing complex systems UIs: In Apollo, “more information is available than can possibly be presented to the pilot at any one time, so each subsystem must be analyzed to determine what its essential measurements are.” The point is to give users the information they need to make decisions quickly without overloading them in an already stressful environment.

This challenge applies to many design problems here on Earth. When working on information-heavy, highly specialized systems (neurosurgery, energy management, etc.), nailing these critical choices and getting the density right calls for domain knowledge — and ideally, domain experience. Co-creation is useful for this. (In any case: research, research, research!)

The discussion includes this gem about the importance of getting the sequence of interactions right:

A classic case of poor cockpit design is the ejection procedure which used to be in one Air Force trainer. It was a placard listing half a dozen important steps, printed boldly on the canopy rail where the pilot couldn’t miss seeing it. The only flaw was that step 1 was “jettison the canopy.”

Don’t do that.


A Space for Collaboration

Yesterday I spent most of the afternoon working with a friend and colleague. We were synthesizing the results of a workshop we co-facilitated earlier in the week. It was fun, but I often felt constrained by the limitations of the space we were in and the technology we had available.

This type of work usually requires reviewing lots of photos of sketches and stickies posted on walls. My friend and I bounced ideas and memories from the workshop off each other; we spotted patterns in these materials and captured them in a presentation deck. It’s easier to do this sort of work if we can both see the photos and files we’re editing. We took over the living room in my house, where we had access to ample wall space and a projector. We projected photos from the workshop on one of the walls while we sat on the couch discussing their implications.

While this sounds like an ideal setup, it soon became apparent that there were limitations. For example, we were constrained to a single rectangular window of information on the wall. We could show the photos and the document we were editing, but only by splitting this rectangle, reducing our ability to see what we were doing. This was workable but not ideal.

A bigger issue was that only one of us could control what was being projected. I was examining the photos from my laptop while my friend was editing the presentation deck. If I was sharing the pictures on the wall, we couldn’t see changes to the presentation deck, and vice versa. Yes, there are workarounds to this problem. For example, we could’ve used Google Docs (or something equivalent), which would’ve allowed us to edit the deck jointly. But this wasn’t ideal either. We spent more time than I would’ve liked trying to figure out how best to collaborate in this setup.

What I wanted was for all of the walls in my living room to be “digitally active” — to allow us to arbitrarily distribute our work around the room and jointly control it. Current display technologies are based on a one user/one computer/one display paradigm; a projector is treated as a display that shows the output of one computer at a time.

Instead, I’d like to place various photos on the walls around the room — perhaps recreating the space of the workshop. My friend would put his presentation on another wall. Both of us could then annotate and edit these digital objects arbitrarily. We’d be inhabiting a physical space that was also digitally active, a shared computing environment that we could inhabit and manipulate together.

Something like this is already being built at Dynamicland. That project features a space that allows users to manipulate digital information with physical artifacts. The digital information is projected onto the environment, with cameras detecting the positions of objects in physical space. As you manipulate these objects, the information projected on them changes. It’s a fascinating environment, one pregnant with potential. However, Dynamicland’s objective isn’t to extend our current collaboration paradigms but to reinvent them.

What I’m describing here is conceptually different: I want the sort of stuff we’re used to moving around in windows on our laptops and desktop computers up on the walls, while transcending the current single-user paradigm. (It’s a much more conservative vision than Dynamicland’s.) Does such a thing exist? (Perhaps using augmented reality instead of projectors?) It seems like it should be feasible.

Discoverability in the Age of Touchscreens

When I was first getting started with computers, in the late 1970s, user interfaces looked like this:

Visicalc, the first spreadsheet program, required users to learn commands. Image: Wikipedia

Getting the computer to do anything required learning arcane incantations and typing them into a command line. While some — such as LOAD and LIST — were common English words, others weren’t. Moving to a new computer often meant learning new sets of incantations.

As with all software, operating systems and applications based on command-line interfaces implement conceptual models. However, these conceptual models are not obvious; the user must learn them either by trial and error — entering arbitrary (and potentially destructive) commands — or by reading the manual. (An activity so despised it’s acquired a vulgar acronym: RTFM.) Studying manuals is time-consuming, cognitively taxing, and somewhat scary. This held back the potential of computers for a long time.
