A Bit of Structure Goes a Long Way

One of the most important lessons I learned in architecture school was the power of constraints. I’d always assumed that in creative work, complete freedom leads to better, more interesting results. After all, given more latitude you’re likely to try more things. But this turns out to be wrong.

The problem is twofold. For one, there’s the paralysis that sets in when facing a completely blank canvas. What to do? Where to start? Etc. For another, you never really have total freedom in the first place. All creative endeavors must grapple with constraints. There are time limits, budgets, the physical properties of paper, the force of gravity, the limits of your knowledge, the limits of what your society deems acceptable, and more. All of them narrow the scope of what you can do at any given time. Understanding the constraints that influence the project — and learning how to work creatively with them, rather than against them — is an essential part of learning to be a good designer.

But it goes further than this. Sometimes doing good work calls for us to introduce constraints of our own. Think of the difference between great jazz playing and mere noodling around. The interesting improvisations happen against the constraints of rhythm and chord (or mode) changes. The musicians don’t have to respect these framing devices, but doing so makes the work come alive. The band’s rhythm section provides enough structure for the soloists to fly “free” — but always in dialog with the underlying structure, whether playing with it or against it.

I often think of the success of Facebook in this light. In some ways, Facebook is the apotheosis of the promise of the original World Wide Web. I remember thinking in the mid-1990s that one day everybody would have a web page of their own. However, the hurdles for doing so were too high at the time: you needed space on a web server to host your site, to learn HTML and web design, to understand all of these concepts. More importantly, you needed to have something to tell the world — a compelling reason to get you to overcome the inertia of not doing anything at all.

Over a third of humanity now has a presence online thanks to Facebook. No doubt this is because Facebook abstracted out the hosting and sharing bits. There’s no further need for you to learn HTML or design, or to find a web host! But of course, many other companies had been doing this before Facebook. What Facebook added to the mix was structure: you’re not just sharing arbitrary free-form stuff, you’re sharing the minutiae of everyday life: photos and short text updates. (Think of how limited text editing and presentation is on Facebook. You can’t even bold or italicize text!) And you’re not just sharing them with anybody who cares to read them; you’re sharing them with the one audience that may care: your friends and family.

This structure underlies the entire system. It provides rails to the experience that make onboarding and day-to-day usage easier. The structure you fill out when you join is the same one you expect to see on my profile; you don’t need to re-learn it when you visit my profile page. Instead, you focus on the differences: the content I’ve posted. Text, photos, metadata about who I am — at least the parts I choose to share within the system’s structural constraints. I can tell you about where I live, or where I went to school, for example.

Tell me more…

We come to expect these structural constructs in the environment, in much the same way jazz players expect rhythm. At scale, these structural constraints become normative. We play with them and around them but never break or transcend them. For a system such as Facebook (which is financed by advertising), the configuration of these structures is the result of a delicate balance between the things people would be enticed to do with such a system (e.g., share their lives with their friends) and how advertisers want to categorize us. Studying these structures reveals a complex picture of who we are not just as individuals, but as participants in a market economy.

Leaning towards overly prescribed structure, Facebook has successfully gotten lots of people online — using the infrastructure of the Web, but not its open-ended ethos. Given the importance of structure to bootstrapping creative endeavors, I wonder if it could’ve been otherwise.

The Complexity Gap

Timo Hämäläinen:

The historical evolution of civilisations has been characterised by growing specialisation and the division of physical and intellectual labour. Every now and then, this evolution has been interrupted by a governance crisis when the established organisational and institutional arrangements have become insufficient to deal with the ever-increasing complexity of human interactions.

Some complexity scientists use the term “complexity gap” for this situation. Today’s societies are, again, experiencing a complexity gap. There are serious governance problems at all levels of our societies: individuals suffer from growing life-management problems, corporations struggle to adapt their rigid hierarchies, governments run from one crisis to another and multinational institutions make very little progress in solving global problems. A transition to the next phase of societal development requires closing the complexity gap with new governance innovations. Or else societies may face disintegration and chaos.

According to Mr. Hämäläinen, one way to overcome this complexity gap is by practicing second order science.

Second order science comes to the rescue in a complex world

Towards More Adaptive Information Environments

Atul Gawande has published a great piece in The New Yorker on why doctors hate their computers. The reason? Poorly designed software. Specifically, several of the examples in the story point to information architecture issues in the system. These include ambiguous distinctions between parts of the information environment and taxonomies that can be edited globally:

Each patient has a “problem list” with his or her active medical issues, such as difficult-to-control diabetes, early signs of dementia, a chronic heart-valve problem. The list is intended to tell clinicians at a glance what they have to consider when seeing a patient. [Dr. Susan Sadoughi] used to keep the list carefully updated—deleting problems that were no longer relevant, adding details about ones that were. But now everyone across the organization can modify the list, and, she said, “it has become utterly useless.” Three people will list the same diagnosis three different ways. Or an orthopedist will list the same generic symptom for every patient (“pain in leg”), which is sufficient for billing purposes but not useful to colleagues who need to know the specific diagnosis (e.g., “osteoarthritis in the right knee”). Or someone will add “anemia” to the problem list but not have the expertise to record the relevant details; Sadoughi needs to know that it’s “anemia due to iron deficiency, last colonoscopy 2017.” The problem lists have become a hoarder’s stash.

The bottom line? Software is too rigid, too inflexible; it reifies structures (and power dynamics) in ways that slow down already overburdened clinicians. Some problem domains are so complex that trying to design a comprehensive system from the top-down is likely to result in an overly complex, overly rigid system that misses important things and doesn’t meet anybody’s needs well.

In the case of medicine (not an atypical one) the users of the system have a degree of expertise and nuance that can’t easily be articulated as a design program. Creating effective information environments to serve these domains calls for more of a bottom-up approach, one that allows the system’s structure to evolve and adapt to fit the needs of its users:

Medicine is a complex adaptive system: it is made up of many interconnected, multilayered parts, and it is meant to evolve with time and changing conditions. Software is not. It is complex, but it does not adapt. That is the heart of the problem for its users, us humans.

Adaptation requires two things: mutation and selection. Mutation produces variety and deviation; selection kills off the least functional mutations. Our old, craft-based, pre-computer system of professional practice—in medicine and in other fields—was all mutation and no selection. There was plenty of room for individuals to do things differently from the norm; everyone could be an innovator. But there was no real mechanism for weeding out bad ideas or practices.

Computerization, by contrast, is all selection and no mutation. Leaders install a monolith, and the smallest changes require a committee decision, plus weeks of testing and debugging to make sure that fixing the daylight-saving-time problem, say, doesn’t wreck some other, distant part of the system.

My take is there’s nothing inherent in software that would keep it from being more adaptive. (The notion of information architectures that are more adaptive and emergent is one of the core ideas in Living in Information.) It’s a problem of design — and information architecture in particular — rather than technology. This article points to the need for designers to think about the object of their work as systems that continuously evolve towards better fitness-to-purpose, and not as monolithic constructs that aim to “get it right” from the start.

Why Doctors Hate Their Computers

The Mother of All Demos at 50

On December 9, 1968, Doug Engelbart put a ding in the universe. Over 90 minutes, he and his colleagues at Stanford Research Institute demonstrated an innovative collaborative computing environment to an audience at the Fall Joint Computer Conference in San Francisco. This visionary system pioneered many of the critical conceptual models and interaction mechanisms we take for granted in today’s personal computers: interactive manipulation of onscreen text, sharing files remotely, hypermedia, the mouse, windows, and more. It blew everybody’s mind.

Apple’s Macintosh — introduced in 1984 — was the first computing system to bring the innovations pioneered by Mr. Engelbart and his team to the masses. Macs were initially dismissed as “toys” — everybody who was a serious computer user knew that terminal commands were the way to go. Until they weren’t, and windows-based UIs became the norm. It took about a decade after the Mac’s introduction for the paradigm to take over. Roughly a quarter of a century after The Demo, it had become clear that this was how computers were to be used.

We’re now in the midst of another paradigm shift in how we interact with computers. Most computer users today don’t work in WIMP (windows, icons, menus, pointer) environments. Instead of the indirect mouse-pointer interaction mechanism, people now interact with information directly through touchscreens. Instead of tethered devices propped atop tables, most computers today are small glass rectangles we use in all sorts of contexts.

Still, fifty years on, The Demo resonates. The underlying idea of computing as something that creates a collaborative information environment (instead of happening as a transactional user-machine interaction) is still very much at the core of today’s paradigm. Every time you meet with a friend over FaceTime or write a Google Doc with a colleague, you’re experiencing this incredibly powerful vision that was first tangibly articulated half a century ago.

A website — The Demo @ 50 — is celebrating Mr. Engelbart’s pioneering work on this milestone anniversary. The site highlights events in Silicon Valley and Japan commemorating The Mother of All Demos. If you aren’t in either location, there are several online activities you can participate in at your leisure. If you join online, you’ll be able to commemorate The Demo in a most meta way: by doing so in the type of interactive information environment presaged by The Demo itself.

Developing a Mental Model of a System

In order to develop proficiency in a system, you must develop a mental model of how it works. This model must map to how the system is structured; you develop the model by interacting with the system. First impressions matter, but your understanding becomes more nuanced over time as you encounter different situations and conditions in the system. You also bring expectations to these interactions that influence your understanding. The degree to which your understanding becomes more accurate over time depends on how transparent the system is.

The Apple Watch serves as a good illustration. I’d never owned a smartwatch before buying mine, but I came to the experience of wearing a wrist-worn computer with expectations that were set by two devices that provided similar functionality: analog wristwatches and smartphones. From the former I brought assumptions about the Apple Watch’s timekeeping abilities and fit on the wrist, and from the latter expectations about a host of other features such as communication abilities, battery duration, legibility under various lighting conditions, how to access apps in the system, the fact there are apps at all, and so on.

In the first days after buying the Watch, I realized I had to adjust my model of how the device works. It wasn’t like my previous analog watch or my iPhone; some things were particular to this system that were very different from those other systems. For example, I had to learn a new way of launching apps. The starting point for most iPhone interactions is in a “home” screen that lists your apps. While the Watch also has a screen that lists your apps, that’s not where most interactions start; on the Watch, the starting point is your watch face. Watch faces can have “complications,” small widgets that show snippets of critical information. Tapping on a complication launches its related app. Thus, it makes sense to configure your favorite watch face with complications for the apps you use most frequently. This is a different conceptual model than the one offered by the analog watch or the smartphone.

After some time using the Apple Watch, I now understand how it is structured and how it works — at least when it comes to telling time and using applications. There’s an aspect of the system that still eludes me: which activities consume the most energy. For a small battery-powered computer like the Apple Watch, power management is crucial. Having your watch run out of power before your day is over can be annoying. This often happens to me, even after a few years of using this device. I’ve tried many things, but I still don’t know why some days end with 20% of battery left on the watch while others end with a dead watch before 5 pm. If the Apple Watch were more transparent in showing how it’s using power, I’d be better at managing its energy usage.

The tradeoff with making the system more transparent is that doing so can increase complexity for end users. I’m not sure I’d get more enjoyment from my Apple Watch if I knew how much energy each app was consuming. Designers abstract these things so that users don’t have to worry about them. As users, the best we can do is deduce causal relationships by trying different things. However, after three years of Apple Watch ownership, I still don’t understand how it manages power. The system is inscrutable to me. While this frustrates me, it’s not a deal breaker in the same way not grokking the system’s navigation would be. Not all parts of the system need to be understandable to the same degree.

Modeling in Design

Hugh Dubberly on why modeling is important for designers:

As designers increasingly focus on systems and communities of systems, we need to improve our modeling skills.

Without modeling, system design is not possible. Often service systems and computer-based applications are partly hidden or invisible, or they stretch across time and space and cannot be seen all at once or from a single vantage point. In such cases, models must stand in for systems during analysis, design, and even operation.

Using models, designers can unify otherwise separate artifacts and actions. Interaction models unify interface widgets. Service models unify customer touch points. Brand models unify messages. Platform models unify individual products.

As is usual with Mr. Dubberly, the whole article is comprehensive and lucid. Worth your time.

Models of Models

Mobile Computing at a Different Level

There are many ways in which people use computers. (I’m talking about all sorts of computers here, including smartphones and tablets.) Some people’s needs are very simple; they may use the machines merely to stay connected with their world. Other people have more complex needs. You can organize these various uses on a continuum that ranges from least to most powerful. Consider at least three levels:

  1. Accessing Content: The computer is used primarily to find information on the internet. Users at this level interact with others through email or social networks, but do so lightly. They spend the bulk of their on-device time accessing content created by others. Many casual users are on this level; it’s also where they have the least power.
  2. Creating Content: In addition to the previous level, the computer is also used as a content-creation device. While users at this level may spend a considerable amount of time accessing content created by others, they also produce lots of content of their own. Many people who use computers professionally are on this level.
  3. Tweaking Workflows: In addition to the previous two levels, the computer is also used to modify how the computer itself works. This includes enabling new workflows through programming or scripting. This level affords users the most power.

(There’s an even higher level, which is closer to the machine’s metal and affords a very small subset of people tremendous power. I’m not going to get into that here; I’m concerned with how most of us interact with these devices.)

Consider a transportation analogy. On level one, you are a passenger in public transport. On level two, you are driving a car. On level three, you are a mechanic, capable of making modifications to the vehicle to fix it or improve its performance. As with transportation, the higher the level, the more complexity the user must deal with.

Which level are you? If you’re like most people, you’re at either level 1 or 2. This is OK; very few people take advantage of level 3. Learning to program requires great effort, and for most uses the payoff may not seem worth the investment of time required.

I was around eight years old when I first interacted with a computer: a TRS-80 Model I. As with most machines of this vintage (late 1970s), when you sat down in front of a Model I you were greeted by a command prompt:

Image: a TRS-80 Model I displaying its command prompt (https://picclick.com/TRS-80-Radio-Shack-Tandy-Model-1-Video-Display-323191180180.html)

The computer could do very little on its own. You needed to give it commands, most often in the BASIC programming language (which, incidentally, just turned 50). So level 3 was the baseline for using computers at the time. We’ve come a long way since then. Most computers are now like appliances; you don’t need to know much about how they work under the hood in order to take advantage of them. However, knowing even a little bit about how they work can grant you superpowers.

Level 3 has come a long way from the days of the TRS-80. I’ve been messing around with the new Shortcuts functionality in iOS 12, and am very impressed with how easy it is to string together several apps to accomplish new things. For example, the Home ETA shortcut strings together the Apple Maps, Contacts, and Messages apps. When you install the shortcut, you configure it with your home’s street address and the contact information of the person you want notified. When you activate the shortcut (which you can do through various means, including an icon on your home screen), Apple Maps figures out your current location and uses it to calculate how far you are from home. It then passes that information to Messages, which sends your estimated time of arrival to your selected contact.

It’s not mind-blowing functionality, but the fact that iPhones and iPads can do this at all is impressive. iOS users can now create arbitrary connections between components of the system, opening up possibilities that were previously difficult or impossible. Shortcuts also promises to make these devices much better as productivity tools. It’s the old Unix “small pieces loosely joined” philosophy — but on a platform designed to be less of a computer than an appliance. It opens up level 3 possibilities for level 1 and 2 users, without asking that they become mechanics.
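To make the “small pieces loosely joined” idea concrete, here is a minimal sketch of the Home ETA flow in Swift. Everything in it (the function names, the fixed coordinates, the contact, the home address) is a hypothetical stand-in rather than the real Shortcuts or Apple Maps API; the point is only that the workflow is a chain of small, single-purpose steps.

    import Foundation

    // A stand-in for a point on the map.
    struct Location {
        let latitude: Double
        let longitude: Double
    }

    // Step 1: figure out where the user is (stubbed with a fixed coordinate).
    func currentLocation() -> Location {
        Location(latitude: 37.7749, longitude: -122.4194)
    }

    // Step 2: estimate the travel time home from that location.
    // A real shortcut would ask Apple Maps for directions; this stub just
    // pretends the trip takes 25 minutes.
    func travelTimeHome(from location: Location, to homeAddress: String) -> TimeInterval {
        25 * 60
    }

    // Step 3: turn the estimate into a message and "send" it (stubbed with print).
    func sendMessage(to contact: String, body: String) {
        print("To \(contact): \(body)")
    }

    // The "shortcut" itself is nothing more than these pieces chained together.
    let eta = travelTimeHome(from: currentLocation(), to: "123 Example St")
    sendMessage(to: "Alex", body: "I should be home in about \(Int(eta / 60)) minutes.")

Each step knows nothing about the others; the value comes from how easily they can be recombined. Shortcuts offers that kind of recombination to people who would never write the code themselves.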

How to Compromise a Product Vision

Great products start with a vision. Somebody — perhaps a small group of people — has an idea to change how something works in the world. On its way to becoming a real thing, the team tweaks and adjusts the idea; they make small compromises to the laws of physics, market demands, manufacturing constraints, user feedback, and so on. In the process, the idea goes from a “perfect” imagining of the vision to a pretty good embodiment that can be used by people in the real world.

At least that’s the ideal. However, sometimes a product changes so much that its original vision becomes compromised. One of the best examples I’ve seen of this happened to one of the attractions in the Magic Kingdom theme park at Walt Disney World: Walt Disney’s Carousel of Progress. This is one of the few Disney attractions that have Walt’s name on them. There’s a good reason for this. The Carousel was the highest expression of his particular genius: using new technologies to convey big ideas to the masses in ways that they could connect to at an emotional level. Some people say it was his favorite attraction.

Continue reading

Think Better, Fast

The quality of your thinking is the factor that will most impact your life. Thinking well is essential to getting anything done, and this is as true for teams as it is for individuals. The better your thinking, the better the decisions you’ll make. This, in turn, will make it more likely you’ll achieve your goals. What would it be worth to you if you could think better, both individually and collectively? The dividends would be manifold.

The first step to thinking better is understanding how you think. Many people believe thinking is something that only happens in the brain, which they see as some kind of meat computer. This is a misunderstanding. Cognition is more complicated than this. The brain is only one part of a complex system that extends outside of the body. As I write these words, I see them appear on the display of my MacBook Pro. My fingers move over the keyboard, and characters appear on the screen. The sentences I write don’t emerge fully-formed from my brain. Instead, the computer holds them for me in a temporary buffer where I can see and reflect on them. No, that’s not the right word; let’s try another one. I delete the word, type a new one. Over and over again. Little by little, the product of my thinking emerges from this brain-senses-fingers-keyboard-display system. The computer is part of my thinking apparatus, and not in a superficial way, but deeply. It would be more difficult for me if I had to craft the sentences exclusively in my brain and then transcribe them in “finished” form.

Of course, the extension of your brain doesn’t need to be a computer. You can also think with a paper-based notebook, a marker on a whiteboard, a stick on wet sand, etc. When you sketch or take notes in a journal, the notebook becomes a part of your thinking system. When you use a pile of stones to work through an arithmetic problem, the stones and the ground they’re lying on become part of your thinking system. You work through ideas by seeing them “out in the world.” There, you can explore relationships between elements in greater detail than you could if you had to hold everything in your mind. You change things, move them around, try variations, iterate, refine — much as I’m doing with the sentences I write here.

Continue reading