Seeing Clearly

You can’t act skillfully if you don’t have a good understanding of your context and situation. We say we want to “see clearly,” but this is obviously a metaphor; you don’t need the sense of sight to know what’s going on. What you do need is an open mind: a willingness to question things, especially the things you take for granted.

Seeing clearly isn’t easy. We have imperfect access to information. Ambiguity abounds. To make matters worse, we’re vulnerable to confirmation bias: the tendency to pay attention only to information that confirms or reinforces our beliefs. We see what we want to see. We grow suspicious of things — and people — that contradict our positions. We make decisions based on assumptions and — given enough skin in the game — double down even when additional evidence suggests we may be wrong. Our egos are powerful forces, and belief systems even more so. Nobody likes to lose face. But the effects of not seeing clearly can be disastrous.

So how can we see more clearly? When I look back on situations in which I succumbed to confirmation bias, I find that I subconsciously knew what was happening early on. Alas, I became ensnared, my mind tickled by the possibilities, my eyes gravitating towards the things I wanted to see. I’ve learned that in situations like these, seeing clearly requires that I keep my eyes open — but do so while heeding another organ: my gut.

Measuring Progress From the Trenches

These days I find myself taking a short-term perspective toward my work, even as I advocate more long-term thinking. While the irony of this situation isn’t lost on me, execution requires that I focus on the here, now, and next.

My current commitments — teaching a weekly class, speaking at various events, this daily blog, finishing my book — call for a constant focus on what’s immediately around the corner. It’s not like I don’t have any sense of what I’m doing in the long term; I’ve mapped out a higher-level plan for all of these projects. But I’m now in execution mode. “In the trenches,” to use the gory cliché. I suspect it’s analogous to the situation many busy teams and organizations find themselves in. They’ve planned their approach or had someone plan it out for them; now they’re doing the necessary work.

The question is: When, how, and how frequently do they stop to take stock of progress, to see how they’re doing? And is it necessary to stop? Or are there ways to bake feedback mechanisms into execution so that they can constantly adjust? (I sense this is where frameworks like OKRs provide value; they offer a means for short-cycle feedback.) I’m looking for ways of implementing such short-cycle feedback loops in my work. The obvious challenge is that this work must compete for time with the actual work to be done. I plan to run a series of experiments on this and write about them here. If I have time, of course.
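To make that parenthetical about OKRs more concrete: the convention I’ve seen most often grades each key result on a scale from 0.0 to 1.0 and averages the grades to score the objective, so a weekly check-in amounts to updating a few numbers. Here’s a toy sketch in Python (the key results and figures are invented for illustration):

    # Toy sketch of a short-cycle OKR check-in. Assumes the common
    # convention of grading each key result from 0.0 to 1.0 and
    # averaging the grades to score the objective.
    from dataclasses import dataclass

    @dataclass
    class KeyResult:
        description: str
        target: float
        current: float = 0.0

        def score(self) -> float:
            """Progress toward the target, capped at 1.0."""
            return min(self.current / self.target, 1.0)

    def objective_score(key_results: list[KeyResult]) -> float:
        """Average the key-result scores to grade the objective."""
        return sum(kr.score() for kr in key_results) / len(key_results)

    # A weekly check-in: update the `current` values, re-read the score.
    key_results = [
        KeyResult("book chapters drafted", target=12, current=5),
        KeyResult("classes taught", target=15, current=9),
    ]
    print(f"Objective progress: {objective_score(key_results):.0%}")  # 51%

The appeal for execution mode is that the feedback loop lives inside the work: update the numbers, read the score, adjust.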

Responding to Undetectable Signals in the Environment

Walking is my favorite exercise and preferred means of conveyance. I love experiencing places from the pedestrian’s perspective and at the pedestrian’s pace: thoughtfully, consciously. I have a fast gait, much to the annoyance of my friends and loved ones. My pace also works against me in two situations: when I’m walking on a crowded sidewalk and when I’m walking my dog, Bumpkin.

The issue with Bumpkin is that he wants to stop every few paces to sniff around. He sniffs utility poles, fire hydrants, bushes, fences, trees — any surface in his immediate vicinity. Well, not just any surface; he takes particular interest in vertical surfaces that have little puddles on the ground below. In other words, he likes to sniff where other dogs have peed.

Dogs use urine to send various signals to each other; they can detect territorial markers, hierarchy, and sexual availability. Of course, I can tell no such things from the little puddles — I’m just annoyed at being slowed down. But for Bumpkin, these spots provide information. His interpretation of these signifiers affects my behavior too, since I must slow down.

Some front yards in our neighborhood have little signs on them that say “no dog pooping.” I’m always on the lookout for these signs when I’m walking Bumpkin because I don’t want to deal with an irate homeowner. If Bumpkin indicates that he wants to poop in a yard with a sign on it, I pull on his leash to get him to another, unrestricted yard. Of course, Bumpkin cannot read these “no poop” signs — not even the ones that use obvious graphics. Still, his behavior is modified indirectly by another entity (me) who can derive meaning from them.

So Bumpkin and I traverse our neighborhood as a unit composed of two organisms with different sensory systems and cognitive abilities, our joint behavior influenced by signs one or the other can’t understand or even perceive. When I’m part of this unit, I must adjust my pace in the expectation that my counterpart is deriving information from the environment that is useful to him, even though I’m oblivious to its meaning.

Design and Implementation Trade-offs

A couple of days ago I wrote about how important it is for designers to know their materials. The material for interaction designers is code, so a baseline understanding of what code can and can’t do is essential for designers to be effective.

I learned this principle in one of my favorite books: The Art of Computer Game Design, by Chris Crawford (Osborne/McGraw Hill, 1984). Crawford was one of the early Atari game designers/implementors. (I use the slash because the distinction wasn’t as clearly drawn then as it is now.) His book lists seven design precepts for computer games. The seventh of these is titled “Maintain Unity of Design Effort,” and includes the following passage:

Games must be designed, but computers are programmed. Both skills are rare and difficult to acquire, and their combination in one person is rarer still. For this reason, many people have attempted to form design teams consisting of a nontechnical game designer and a nonartistic programmer. This system would work if either programming or game design were a straightforward process requiring few judicious trade-offs. The fact is that both programming and game design are desperately difficult activities demanding many painful choices. Teaming the two experts is rather like handcuffing a pole-vaulter to a high jumper; the resultant disaster is the inevitable result of their conflicting styles.

More specifically, the designer/programmer team is bound to fail because the design will make unrealistic demands on the programmer while failing to recognize golden opportunities arising during programming.

Crawford illustrates this with a couple of examples from his career. One that’s stuck with me comes from the development of EASTERN FRONT 1941, a war game for the early Atari 8-bit computers. While programming the game (which he’d also designed), Crawford spotted an opportunity: a simple addition to its calendar routines would allow color register values to change as game time progressed. This allowed the color of trees to change to reflect the seasons. A minor detail for sure, but one that added depth to the experience. (Keep in mind that programming for these early computers meant always optimizing for limited memory. This change cost only 24 bytes of memory: a “cost-effective improvement,” in Crawford’s words.)
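Crawford doesn’t reproduce the code in the book, but the mechanism is easy to sketch. Here’s a minimal illustration in Python (for readability; the weekly turn structure, season boundaries, and color values are my own guesses, not Crawford’s):

    # Sketch: a game calendar routine that derives the trees' display
    # color from elapsed game time. On the Atari 8-bit, playfield colors
    # lived in hardware color registers; here we just compute the value
    # such a routine would store in the register.
    SEASON_COLORS = {
        "summer": 0xC6,  # deep green
        "autumn": 0x28,  # orange-brown
        "winter": 0x0E,  # near-white, for snow
        "spring": 0xCA,  # light green
    }

    def season_for_turn(turn: int) -> str:
        """Map a weekly game turn (starting in summer) to a season."""
        return ("summer", "autumn", "winter", "spring")[(turn // 13) % 4]

    def tree_color(turn: int) -> int:
        """The value the calendar routine would write to the color register."""
        return SEASON_COLORS[season_for_turn(turn)]

The specific values don’t matter; the point is that only someone straddling design and implementation would notice that the calendar routine and the color registers could be connected this cheaply.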

Software development is much less painful today than it was in the late 1970s and early 1980s. Still, limited budgets and timeframes call for trade-offs. Knowing where the opportunities and constraints are helps when you must decide what to include in the work and what to leave out.

Team Moods

I was once part of a team that was going through a rough patch. We’d been through two reorganizations in eighteen months — not good for morale — and now we’d had a sudden change in leadership. (Which is to say: we found ourselves with no leadership.) There was no vision of the future, no clear lines of responsibility, no accountability. People were leaving — of their own volition and otherwise. It was a mess.

This team included some of the brightest people I’ve worked with. Yet all of us found it very difficult to get anything done. We’d spend more time talking about the state of the team than about the work. We were worried about the future of the company and — of course — our jobs. It was an unpleasant experience for all; I remember the sense of relief when it ended. (The team was dissolved.)

If you’d been able to travel back in time to when I first joined the team, you would’ve gotten a very different picture. We were cracking then! We had a clear vision of what we were doing and who was responsible for what. We had competent and committed leadership. We had deadlines. We had the support of the company. It was exciting work! I have vivid memories of a celebration party the night we launched our first release. Everyone was exuberant.

Same group, two very different situations. In one, we were paralyzed and ineffective; in the other, we were at the peak of our productivity. What changed?

How Do You Know?

Your mental models influence how you think about things and how you act. For example, a long time ago the primary model in medicine was that the body contained four basic fluids, known as humors. Humors were related to different temperaments, and imbalances between them caused different diseases:

[Image: Humorism, by Tom Lemmens via Wikimedia]

This model influenced medical practice for more than 2,000 years. If you were a doctor trying to save a dying patient during those times, your approach to treatment would be influenced by this model. You wouldn’t question it. In the 19th century, advances in medical research did away with humorism. The model was disproven and abandoned as new knowledge came in.

When trying to make important decisions, I examine the models that influence my thinking. There are many: models about interpersonal relationships, models about incentives, models about technology, etc. These models are imperfect by definition. (Norbert Wiener: “The best material model of a cat is another, or preferably the same, cat.”) Some may reflect reality better than others, which is to say, some are more useful than others. (By useful I mean they result in better predictions about the outcomes of decisions. Note usefulness is only evident in retrospect.) By default, I assume these models are incomplete, especially when dealing with messy situations.

The only way to refine the models is through knowledge, which you must consciously search out. If you don’t understand something, you ask questions; if the answers aren’t forthcoming — or, in more extreme cases, your questions are actively rejected — that also adds a data point. Little by little you build a more complete model, one informed by what you’re observing in the world. This model should be built on observation and contemplation, not just hearsay. If your model relies on authority, you must be sure you trust that authority. (This is yet another important model that influences behavior. How do you know people know what they say they know?)

The ultimate authority to be suspicious of is ourselves. We run the risk of becoming attached to our models, closing ourselves off from new information that might shake our foundations. Thus, the importance of being open-minded. Open-minded, with eyes wide open. Letting new knowledge in, with the expectation that all models are incomplete and up for revision as new information arrives.

Know Your Materials

One of the main things I took away from learning to paint with oils is how deeply the characteristics of the materials involved — canvas, gesso, oil, pigments, solvents — influence the form of the work. As a painter, you think differently when working with oils than when working with acrylics or watercolors.

The same is true for any creative endeavor. Whether you’re designing a new evening gown or the landscaping for a vacation home, your thinking about the subject (and hence the form of the final output) will be deeply informed by how well you understand the constraints and possibilities afforded by the materials.

The more mastery you have over the medium, the more effective you’ll be. And mastery calls for more than conceptual (or “book”) learning; you need hands-on time with the object of your design to understand what it can and can’t do.

One of the perennial discussions among UX design practitioners is whether or not they need to know how to code. While I don’t believe designers must know how to code, I’ve observed that designers with first-hand knowledge of what code can do — and what it takes to implement things in code — are more effective. Designers who know code communicate more clearly with developers, and have more realistic expectations of what the medium can accommodate. They also have a better understanding of the possibilities the medium affords and can suggest options that wouldn’t have occurred to them otherwise. The result is better outcomes, done faster and with less pain for everyone involved.

Reducing Ambiguity in Labels

When my family and I moved to the U.S., we left some bulky stuff behind in storage. Last year we contracted a company to ship it to our home in California. Shipping big, heavy things internationally requires a lot more paperwork than mailing a small package, so in the process I was exposed to lots of forms.

Earlier today I was filling out one of them, and hit a snag: there was a field labeled “Shipper Name.” I was confused. Why would the shipper send me a form with a field that required me to state who the shipper is? I emailed them, and their response floored me: I was supposed to enter my name in the field.

As an information architect, I’ve seen many ambiguous labels. But this one was special: here was a case where the label meant exactly opposite things to the two parties involved. To me, the shipper is the shipping agency; them. To them, the shipper is the customer who’s contracted the shipping service; me. Because it’s their form, they used the “Shipper Name” label expecting that it’d be clear to me, but it wasn’t — and couldn’t be.

What to do in such cases? It’s obvious: the form’s designers need to approach the problem from the perspective of the person who will be filling it out, not from the company’s perspective. It may then be less clear to the company’s people, but they aren’t the ones tasked with filling the thing out. Whenever you face an ambiguous label such as this one, rewrite it to make it clear to the person who will use it. In this case, something as simple as “Your Name” would have removed the ambiguity entirely.

Book Notes: “Planning for Everything”

Planning for Everything: The Design of Paths and Goals
By Peter Morville
Semantic Studios, 2018

Planning is essential, yet many of us don’t do it very well. Fear, uncertainty, and powerlessness hold us back. Our lives are messy, and the challenges we face are multi-faceted and complex. Who better than one of the world’s most prominent information architects to help make sense of the mess?

On the surface, Peter Morville’s new book, Planning for Everything, seems like a hands-on guide to making better plans. And it is; it includes practical frameworks that can help with your planning. But there’s much below the surface that makes this book special. It goes deep into the subject, examining how we envision future possibilities, set goals, decide among various compelling options, strategize, act, and reflect. Throughout, it weaves examples and stories both from the author’s personal experience — running marathons, leading a consultancy, parenting — and from literary sources that range from the Bhagavad Gita to Yuval Noah Harari. The result is not only practical, but also entertaining and inspiring.

This short book is long on wisdom; I left it feeling as though I’d just spent a calm afternoon with an insightful mentor. If you’re facing a major life decision (or even a minor one), it behooves you to read it.

Buy it on Amazon.com