Project Focus Mode

For most of my career, I’ve worked on several projects simultaneously. This means lots of information flowing to and from different people, keeping track of documents and commitments, scheduling meetings, etc. Most of it happens on my computer, which for almost twenty years has been a laptop. (Meaning: it comes with me.) In the past few years, more mobile devices (e.g., iPhone, iPad) have also joined my toolkit. There’s a lot going on in these information environments. Keeping everything organized impacts my effectiveness; the time I spend looking for stuff isn’t valuable to my clients. Early on, I realized the only way I’d manage this was to develop organization systems and stick to them over time.

For example, I always have a “projects” folder on my computer. Each project I take on gets an individual subfolder in there. These folders use consistent naming schemes. These days it’s usually the client name, followed by a dash, followed by a (brief!) unique project name. Why not per-client folders? At one point I realized I had to strike a balance between depth and breadth. Going n-folders deep often meant not locating things as quickly. Of course, over time this folder can get crowded. Eventually, I determined the projects folder only needed to contain active projects; I set up a separate “archive” folder where I moved completed project folders.

This system works well for project documents, but that’s not the only information I deal with in my projects. I must also communicate with people. This entails using email apps, messaging clients, contacts lists, calendars, etc. I make commitments to these people (and to myself), and need ways to track those commitments. I’d love it if my computer had a “project mode” I could enter that would focus all of this information on a per-project basis. I’d only see the notes related to the project, the contacts for that project, the email for that project, etc.

There was a stretch in my career when I used an app that implemented this functionality, to a degree: Microsoft’s Entourage. Entourage was part of MS Office for the Mac, a replacement for MS Outlook. The two apps had some similarities; like Outlook, Entourage also offered unified access to email, contacts, calendar, to-dos, and notes. Unlike Outlook, however, Entourage offered a “Projects” tab that allowed me to filter information per-project. When operating in this mode, I could look at notes, emails, contacts, to-dos, and appointments for particular projects. I could also link a folder in the computer’s file system to the project inside Entourage, so I was always one click away from all the other project documents.

This setup was close to ideal for me. Alas, it also had drawbacks. For one thing, it wasn’t easy to sync project contexts between multiple devices. This wasn’t an issue when I was using just one computer, but it became an issue as phones started growing more capable. However, the biggest challenge with the Projects tab in Entourage was that it required a lot of maintenance work. If a new person joined the project, I had to set up the contact in the project. While there was some automation around tagging email per-project, it wasn’t perfect. The most natural thing to do here would be to tag all emails coming from one person as belonging to one project. However, sometimes I’d be working on a couple of projects with the same people. As a result, I’d end up having to do lots of per-email tagging to keep things organized. This created lots of overhead.
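The tagging problem is easy to illustrate. A sender-based rule only works when a sender maps to exactly one project; as soon as someone appears on two projects, you’re back to tagging each email by hand. A hypothetical sketch (the addresses and project names are made up):

```python
from typing import Optional

def auto_tag(sender: str, sender_projects: dict) -> Optional[str]:
    # Tag automatically only when the sender belongs to exactly one project.
    projects = sender_projects.get(sender, set())
    if len(projects) == 1:
        return next(iter(projects))
    return None  # ambiguous or unknown sender: tag the email by hand
```

Every `None` here is an email that needs manual attention, which is exactly the overhead that made the system expensive to maintain.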

Eventually, Microsoft discontinued Entourage in favor of Outlook for the Mac. Outlook was an improvement in many ways. Alas, the Projects tab didn’t make the cut. At this point, I migrated to Apple’s default Mail, Contacts, and Calendar apps, with OmniFocus thrown into the mix for task management. Instead of a monolithic app like Outlook, the default Mac apps follow the Unix “small pieces loosely joined” philosophy. This makes them much better for their intended tasks, but worse at keeping per-project focus.

Even with all the additional work it required, I loved working with Entourage. The ability to create rich contexts that encouraged focus allowed me to juggle projects more effectively and made me more productive. I wonder what a modern take on this concept would be like, given today’s more cloud-centric, AI-powered information environments. Such a system would surely be better at keeping things tidy without requiring so much input from the user, and sync between various devices. I’d love to try something like that—especially if it worked as a filtering layer over the apps I already use.

Blogging and Social Media

“Sooner or later, everything old is new again.”
— Stephen King

A little over a year ago, I completed the bulk of Living in Information. I’d found my voice, and wasn’t ready to put the microphone down. So I started blogging again. While it may seem old-fashioned—and perhaps a bit quixotic—I’m loving it.

I’m in service to ideas. Most aren’t original to me; I just give them a voice. Blogging helps me make them a thing in the world. It compels me to dig deeper than I could if I were writing exclusively in environments designed for other ends.

Facebook is great for finding out what your acquaintances are up to. It’s given the Web enough structure for your high school friends to share photos of their pets. Twitter is great for pithy, context-free hot takes. A disaster for discourse.

These environments prioritize novelty and engagement, not coherence and continuity. They’re designed to hold your attention, not to help you reason. I can’t point you to any viable ideas I’ve posted on Facebook or Twitter. They’re there but are now indistinguishable from the detritus.

Don’t get me wrong. I love these social networks and have no plans to leave them in the near term. But that’s because I know their role in my life. (Hopefully, they helped bring you here.)

It’s a cliché, but a true one: I’m often unsure of what I think until I’ve written about it. I’m thrilled to have a venue where I can “think out loud”; where I can give ideas life. And I’m thrilled that you’re here. This place isn’t bustling like the social networks, but that’s a good thing. Hopefully, you’ll find stuff of value without feeling nudged.

A Bit of Structure Goes a Long Way

One of the most important lessons I learned in architecture school was the power of constraints. I’d always assumed that in creative work, complete freedom leads to better, more interesting results. After all, given more latitude you’re likely to try more things. But this turns out to be wrong.

The problem is twofold. For one, there’s the paralysis that sets in when facing a completely blank canvas. What to do? Where to start? Etc. For another, you never really have total freedom in the first place. All creative endeavors must grapple with constraints. There are time limits, budgets, the physical properties of paper, the force of gravity, the limits of your knowledge, the limits of what your society deems acceptable, and more. All of them narrow the scope of what you can do at any given time. Understanding the constraints that influence the project — and learning how to work creatively with them, rather than against them — is an essential part of learning to be a good designer.

But it goes further than this. Sometimes doing good work calls for us to introduce constraints of our own. Think of the difference between great jazz playing and mere noodling around. The interesting improvisations happen against the constraints of rhythm and chord (or mode) changes. The musicians don’t have to respect these framing devices, but doing so makes the work come alive. The band’s rhythm section provides enough structure for the soloists to fly “free” — but always in dialog with the underlying structure, either pro or con.

I often think of the success of Facebook in this light. In some ways, Facebook is the apotheosis of the promise of the original World Wide Web. I remember thinking in the mid-1990s that one day everybody would have a web page of their own. However, the hurdles for doing so were too high at the time: you needed space on a web server to host your site, to learn HTML and web design, to understand all of these concepts. More importantly, you needed to have something to tell the world — a compelling reason to get you to overcome the inertia of not doing anything at all.

Over a third of humanity now has a presence online thanks to Facebook. No doubt this is because Facebook abstracted out the hosting and sharing bits. There’s no further need for you to learn HTML or design, or to find a web host! But of course, many other companies had been doing this before Facebook. What Facebook added to the mix was structure: you’re not just sharing arbitrary free-form stuff, you’re sharing the minutiae of everyday life: photos and short text updates. (Think of how limited text editing and presentation is on Facebook. You can’t even bold or italicize text!) And you’re not just sharing them with anybody who cares to read them; you’re sharing them with the one audience that may care: your friends and family.

This structure underlies the entire system. It provides rails to the experience that make onboarding and day-to-day usage easier. The structure you fill out when you join is the same one you expect to see on my profile; you don’t need to re-learn it when you visit my profile page. Instead, you focus on the differences: the content I’ve posted. Text, photos, metadata about who I am — at least the ones I choose to share using the system’s structural constraints. I can tell you about where I live, or where I went to school, for example.

Tell me more…

We come to expect these structural constructs in the environment, in much the same way jazz players expect rhythm. At scale, these structural constraints become normative. We play with them and around them but never break or transcend them. For a system such as Facebook (which is financed by advertising), the configuration of these structures is the result of a delicate balance between the things people would be enticed to do with such a system (e.g., share their lives with their friends) and how advertisers want to categorize us. Studying these structures reveals a complex picture of who we are, not just as individuals but as participants in a market economy.

Leaning towards overly prescribed structure, Facebook has successfully gotten lots of people online — using the infrastructure of the Web, but not its open-ended ethos. Given the importance of structure to bootstrapping creative endeavors, I wonder if it could’ve been otherwise.

The Complexity Gap

Timo Hämäläinen:

The historical evolution of civilisations has been characterised by growing specialisation and the division of physical and intellectual labour. Every now and then, this evolution has been interrupted by a governance crisis when the established organisational and institutional arrangements have become insufficient to deal with the ever-increasing complexity of human interactions.

Some complexity scientists use the term “complexity gap” for this situation. Today’s societies are, again, experiencing a complexity gap. There are serious governance problems at all levels of our societies: individuals suffer from growing life-management problems, corporations struggle to adapt their rigid hierarchies, governments run from one crisis to another and multinational institutions make very little progress in solving global problems. A transition to the next phase of societal development requires closing the complexity gap with new governance innovations. Or else societies may face disintegration and chaos.

According to Mr. Hämäläinen, one way to overcome this complexity gap is by practicing second order science.

Second order science comes to the rescue in a complex world

Towards More Adaptive Information Environments

Atul Gawande has published a great piece in The New Yorker on why doctors hate their computers. The reason? Poorly designed software. Specifically, several of the examples in the story point to information architecture issues in the system. These include ambiguous distinctions between parts of the information environment and taxonomies that can be edited globally:

Each patient has a “problem list” with his or her active medical issues, such as difficult-to-control diabetes, early signs of dementia, a chronic heart-valve problem. The list is intended to tell clinicians at a glance what they have to consider when seeing a patient. [Dr. Susan Sadoughi] used to keep the list carefully updated—deleting problems that were no longer relevant, adding details about ones that were. But now everyone across the organization can modify the list, and, she said, “it has become utterly useless.” Three people will list the same diagnosis three different ways. Or an orthopedist will list the same generic symptom for every patient (“pain in leg”), which is sufficient for billing purposes but not useful to colleagues who need to know the specific diagnosis (e.g., “osteoarthritis in the right knee”). Or someone will add “anemia” to the problem list but not have the expertise to record the relevant details; Sadoughi needs to know that it’s “anemia due to iron deficiency, last colonoscopy 2017.” The problem lists have become a hoarder’s stash.

The bottom line? Software is too rigid, too inflexible; it reifies structures (and power dynamics) in ways that slow down already overburdened clinicians. Some problem domains are so complex that trying to design a comprehensive system from the top-down is likely to result in an overly complex, overly rigid system that misses important things and doesn’t meet anybody’s needs well.

In the case of medicine (not an atypical one) the users of the system have a degree of expertise and nuance that can’t easily be articulated as a design program. Creating effective information environments to serve these domains calls for more of a bottom-up approach, one that allows the system’s structure to evolve and adapt to fit the needs of its users:

Medicine is a complex adaptive system: it is made up of many interconnected, multilayered parts, and it is meant to evolve with time and changing conditions. Software is not. It is complex, but it does not adapt. That is the heart of the problem for its users, us humans.

Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice—in medicine and in other fields—was all mutation and no selection. There was plenty of room for individuals to do things differently from the norm; everyone could be an innovator. But there was no real mechanism for weeding out bad ideas or practices.

Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn’t wreck some other, distant part of the system.

My take is there’s nothing inherent in software that would keep it from being more adaptive. (The notion of information architectures that are more adaptive and emergent is one of the core ideas in Living in Information.) It’s a problem of design — and information architecture in particular — rather than technology. This article points to the need for designers to think about the object of their work as systems that continuously evolve towards better fitness-to-purpose, and not as monolithic constructs that aim to “get it right” from the start.

Why Doctors Hate Their Computers

The Mother of All Demos at 50

On December 9, 1968, Doug Engelbart put a ding in the universe. Over 90 minutes, he and his colleagues at Stanford Research Institute demonstrated an innovative collaborative computing environment to an audience at the Fall Joint Computer Conference in San Francisco. This visionary system pioneered many of the critical conceptual models and interaction mechanisms we take for granted in today’s personal computers: interactive manipulation of onscreen text, sharing files remotely, hypermedia, the mouse, windows, and more. It blew everybody’s mind.

Apple’s Macintosh — introduced in 1984 — was the first computing system to bring the innovations pioneered by Mr. Engelbart and his team to the masses. Macs were initially dismissed as “toys” — everybody who was a serious computer user knew that terminal commands were the way to go. Until they weren’t, and windows-based UIs became the norm. It took about a decade after the Mac’s introduction for the paradigm to take over. Roughly a quarter of a century after The Demo, it’d become clear that’s how computers were to be used.

We’re now in the midst of another paradigm shift in how we interact with computers. Most computer users today don’t work in WIMP environments. Instead of the indirect mouse-pointer interaction mechanism, people now interact with information directly through touchscreens. Instead of tethered devices propped atop tables, most computers today are small glass rectangles we use in all sorts of contexts.

Still, fifty years on, The Demo resonates. The underlying idea of computing as something that creates a collaborative information environment (instead of happening as a transactional user-machine interaction) is still very much at the core of today’s paradigm. Every time you meet with a friend over FaceTime or write a Google Doc with a colleague, you’re experiencing this incredibly powerful vision that was first tangibly articulated half a century ago.

A website — The Demo @ 50 — is celebrating Mr. Engelbart’s pioneering work on this milestone anniversary. The site highlights events in Silicon Valley and Japan to commemorate The Mother of All Demos. If you aren’t in either location, there are several online activities you can participate in at your leisure. If you join online, you’ll be able to commemorate The Demo in a most meta way: by doing so in the type of interactive information environments presaged by The Demo itself.

Developing a Mental Model of a System

In order to develop proficiency in a system, you must develop a mental model of how it works. This model must map to how the system is structured; you develop the model by interacting with the system. First impressions matter, but your understanding becomes more nuanced over time as you encounter different situations and conditions in the system. You also bring expectations to these interactions that influence your understanding. The degree to which your understanding becomes more accurate over time depends on how transparent the system is.

The Apple Watch serves as a good illustration. I’d never owned a smartwatch before buying mine, but I came to the experience of wearing a wrist-worn computer with expectations that were set by two devices that provided similar functionality: analog wristwatches and smartphones. From the former I brought assumptions about the Apple Watch’s timekeeping abilities and fit on the wrist, and from the latter expectations about a host of other features such as communication abilities, battery duration, legibility under various lighting conditions, how to access apps in the system, the fact there are apps at all, and so on.

In the first days after buying the Watch, I realized I had to adjust my model of how the device works. It wasn’t like my previous analog watch or my iPhone; some things were particular to this system that were very different from those other systems. For example, I had to learn a new way of launching apps. The starting point for most iPhone interactions is in a “home” screen that lists your apps. While the Watch also has a screen that lists your apps, that’s not where most interactions start; on the Watch, the starting point is your watch face. Watch faces can have “complications,” small widgets that show snippets of critical information. Tapping on a complication launches its related app. Thus, it makes sense to configure your favorite watch face with complications for the apps you use most frequently. This is a different conceptual model than the one offered by the analog watch or the smartphone.

After some time of using the Apple Watch, I now understand how it is structured, and how it works — at least when it comes to telling time and using applications. There’s an aspect of the system that still eludes me: which activities consume the most energy. For a small battery-powered computer like the Apple Watch, power management is crucial. Having your watch run out of power before your day is over can be annoying. This often happens to me, even after a few years of using this device. I’ve tried many things, but I still don’t know why some days end with 20% of battery left on the watch while others end with a dead watch before 5 pm. If the Apple Watch were more transparent in showing how it’s using power, I’d be better at managing its energy usage.

The tradeoff with making the system more transparent is that doing so can increase complexity for end users. I’m not sure I’d get more enjoyment from my Apple Watch if I knew how much energy each app was consuming. Designers abstract these things so that users don’t have to worry about them. As users, the best we can do is deduce causal relationships by trying different things. However, after three years of Apple Watch ownership, I still don’t understand how it manages power. The system is inscrutable to me. While this frustrates me, it’s not a deal breaker in the same way not grokking the system’s navigation would be. Not all parts of the system need to be understandable to the same degree.

The Possible and the Practical

Rodney Brooks on why your promised flying cars (or hyperloops or autonomous vehicles) aren’t likely to happen any time soon:

The difference between the possible and the practical can only be discovered by trying things out. Therefore, even though the physics suggests that a thing will work, if it has not even been demonstrated in the lab you can consider that thing to be a long way off. If it has been demonstrated in prototypes only, then it is still distant. If versions have been deployed at scale, and most of the necessary refinements are of an evolutionary character, then perhaps it may become available fairly soon. Even then, if no one wants to use the thing, it will languish in the warehouse, no matter how much enthusiasm there is among the technologists who developed it.

The Rodney Brooks Rules for Predicting a Technology’s Commercial Success

A Strong Body for Strong Workshops

I’m in the midst of an intense week that has me facilitating co-creation workshops for six days back-to-back. Every day starts around 7 am and usually wraps with dinner around 9 pm. In the middle, there’s a lot of standing around putting stickies on walls, coordinating sketching exercises, and leading folks through various other activities. Facilitating workshops is hard work. Doing so day after day for more than a couple of days is very intense. It’s cognitively and physically exhausting, and only works if the facilitator takes care of his or her body. For me, this means:

  • Eating healthy foods in small amounts. Many workshops treat meal times as work/social activities, and many of the “social” foods in our culture (such as pizza and sandwiches) can be carbohydrate- and fat-rich. Eating lots of carb-rich foods in the middle of the day can lead to reduced performance in the afternoon; you don’t want to be crashing at a time when you’re supposed to be helping energize others. Also be wary of snack foods meant to keep team energy and morale up. (Our workshop features a big tub of animal crackers.) I keep healthy snacks in my backpack; it helps me resist the temptation of indulging in sweets.
  • Not drinking too much. There’s an important social component to working successfully with the same group of people over several days. Workshop participants often decompress by sharing a few drinks at the end of the day. A glass or two of wine or a cocktail may be ok, but be mindful of not over-indulging; you don’t want to try to lead a workshop while fighting a hangover.
  • Getting lots of sleep. Your body needs to recuperate after long days of work. Especially when drinks are involved, you may be tempted to hang out until the wee hours. Not getting enough sleep can seriously impair your effectiveness as a facilitator.
  • Meditating. I often start the day by sitting silently for 15-20 minutes, observing my breath. Doing so clears my mind and helps me prepare for the intense day ahead. Meditation requires no equipment; it’s an easy practice you can do anywhere at any time.
  • Working out. This is often harder to fit into workshop days, especially if schedules start early and end with dinner/drinks. However, it’s important to move your body—especially if you’ll be spending the rest of the day in a conference room. During workshops, I prefer to exercise by going for a run outdoors, even if it’s cold outside; I’ll be spending most of my time during the day inside an office or hotel and this is a good opportunity to get some fresh air.

Workshop facilitation is an intense cognitive activity; you must head into the day with a good idea of what you will be doing and what you expect to get out of it. But it also has an important physical component. The mind won’t be as effective if not supported by the body. When I’m leading workshops, I think of myself as a kind of athlete; as with other athletic endeavors, training, preparation, and discipline lead to better results. Taking care of your body is paramount if you aspire to lead workshops successfully.