Understanding Customer Mental Models

How well do you understand your customers? Do you know how they make decisions? How they see your business’s domain? What makes them tick?

Everyone understands things a bit differently. Nobody has a perfect, complete understanding of the whole of reality. A neurosurgeon may understand the human nervous system but be unable to successfully configure the security settings of her smartphone. Knowledge of one domain doesn’t necessarily translate to another.

You carry around in your mind internal representations of how things work. These representations are called mental models. Wikipedia has a “good enough” definition:

A mental model is an explanation of someone’s thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person’s intuitive perception about his or her own acts and their consequences.

The more accurately these representations mirror the way things really are, the more skillfully you can act. If you understand the distinctions between the components that define your phone’s security and how they relate to each other, you’ll be able to make good predictions about the consequences of your decisions.

Forming good mental models for complex domains isn’t easy. Modeling calls for thinking in abstract terms. You may be tempted to apply a model from one domain you understand well to another you don’t. (E.g., “I bet this works just like x.”) We aren’t formally trained to model the world. Instead, we form mental representations ad hoc, filling out the broader picture as we go along. Thus, we have imperfect models of much of reality.

Ideally, you want your customers to have good mental models of your business’s domain. This is easier in well-established domains than in new ones. For example, more people are likely to have a good mental model of renting a car than of securing their smartphone.

It’s important that you understand your customers’ mental models for your domain. This isn’t something you can ask them about in an interview. We don’t express our mental models overtly. Instead, they manifest indirectly in our actions. What to do?

One way to go about it is to observe customers interacting with prototypes, noting how they interpret the prototype’s major concepts and the relations between them. Another is to engage customers in co-creation sessions to design solutions for the domain.

In this second approach, we don’t expect the solutions that emerge to lead directly to products or features. Instead, the artifact functions as a MacGuffin that allows us to map the customers’ mental models of the domain. This approach is especially useful in early stages of the design process, when we don’t yet have a prototype to test.

With a better understanding of how customers see the domain, we can design solutions that allow them to make more skillful decisions. This may call for giving them means to adjust their mental models so they align more closely with reality. Or it may require that we adjust the system we’re designing to better match the models users bring with them.

In either case, we’re not starting from a blank slate: we must meet people where their understanding of the domain currently stands. That requires understanding their mental models.

The Informed Life With Thomas Dose

My guest in the latest episode of The Informed Life podcast is Thomas Dose. Thomas is the Head of Music Services for DR, the Danish Broadcasting Corporation. In this role, he works with a large collection of music:

The department I’m working in has been systematically collecting music since 1949, and the physical archives consist of roughly 900,000 physical units, that is, records: shellacs, vinyl, CDs, and so on. But obviously, for the last decade or so, we haven’t really added much to the physical archive. Only in those instances where a release is purely physical will we acquire it as such. Otherwise, it’s all digital now. But we’re still very happy with the physical archive. It’s not collecting dust, because the editorial units in DR are ordering digitization of older materials every day, and we handle those. We digitize them from vinyl and from shellac. And you would be surprised by the volume of music that is still not available on the mainstream streaming services. You would think that every piece of music ever recorded is on Spotify. It’s not nearly the case. So we’re still recording from our physical archives.

Such a massive collection requires mindful organization, and in this conversation we delved into how it can be structured to make particular pieces of music easier to find.

In our case, our data model basically supports two types of composition. One is, you could say, the normal type of composition, where you have a title for the composition and then you would have composers and lyricists related to that. The other type of composition supports sub-compositions. The obvious example is a symphony, which would have four movements; those movements are the sub-compositions. And we are then able to relate each of these sub-compositions, or movements, to all the different recordings of that movement and that work.
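
To make the distinction concrete, here’s a minimal sketch of the two composition types Thomas describes, written as Python dataclasses. All class and field names are hypothetical; this is a sketch, not DR’s actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class Recording:
    performer: str
    year: int

@dataclass
class Composition:
    title: str
    composers: list[str] = field(default_factory=list)
    lyricists: list[str] = field(default_factory=list)
    # A "normal" composition relates directly to its recordings...
    recordings: list[Recording] = field(default_factory=list)
    # ...while a work like a symphony holds sub-compositions (movements),
    # each of which relates to all recordings of that movement.
    sub_compositions: list["Composition"] = field(default_factory=list)

# Hypothetical example: a four-movement symphony modeled as
# sub-compositions, each carrying its own recordings.
symphony = Composition(
    title="Symphony No. 5",
    composers=["Ludwig van Beethoven"],
    sub_compositions=[
        Composition(title="I. Allegro con brio",
                    recordings=[Recording("Carlos Kleiber / VPO", 1974)]),
        Composition(title="II. Andante con moto"),
    ],
)
```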

We also discussed a problem I’ve had with my own music collection: how to organize pieces that originated before the era of recording technologies, and which don’t fit neatly into album-length containers. The show is worth your time — especially if you manage a lot of music.

The Informed Life Episode 18: Thomas Dose on Music Collections

Balancing Bottom-up and Top-down

A question I frequently encounter whenever I start a new information architecture project: What’s the right balance between defining an architecture from the top down and leaving room for bottom-up structures to emerge?

The answer is different for each project. Some require a great deal of structure upfront, while others need just enough for bottom-up structures to emerge. The answer hinges on the type of environment we’re structuring, the team’s skills, the organization’s needs, and myriad other factors.

But what if this isn’t a dichotomy? What if we could design top-down structures broad enough (or perhaps deep enough) to allow for bottom-up organization to emerge organically? What architecture would lead to an environment that could be a receptacle for serendipity, happy accidents, improvisation — for humanity — while also accomplishing its business outcomes?

The Synthesizers In Charge

From an insightful (and terrifying) article in The Atlantic by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher about the potential impact of AI on our civilization:

The challenge of absorbing this new technology into the values and practices of the existing culture has no precedent. The most comparable event was the transition from the medieval to the modern period. In the medieval period, people interpreted the universe as a creation of the divine and all its manifestations as emanations of divine will. When the unity of the Christian Church was broken, the question of what unifying concept could replace it arose. The answer finally emerged in what we now call the Age of Enlightenment; great philosophers replaced divine inspiration with reason, experimentation, and a pragmatic approach. Other interpretations followed: philosophy of history; sociological interpretations of reality. But the phenomenon of a machine that assists—or possibly surpasses—humans in mental labor and helps to both predict and shape outcomes is unique in human history. The Enlightenment philosopher Immanuel Kant ascribed truth to the impact of the structure of the human mind on observed reality. AI’s truth is more contingent and ambiguous; it modifies itself as it acquires and analyzes data.

The passage above reminded me of this gem by E.O. Wilson:

We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.

True but for the “people” bit?

The Metamorphosis

Suggested Searches in Apple Photos

One of the most amazing features enabled by machine learning algorithms is the ability to search photos and images using text. Search is one of Google Photos’s headline features. And Apple, too, has been working to improve search in the Photos app that comes with iPhones and other iOS devices.

As an iOS user, I’ve been watching Photos’s search functionality improve over the last couple of years. Although it’s a bit slower than I’d prefer, it’s still very useful. I can search by dates and common terms (e.g., “Halloween”) and often find what I’m looking for. However, sometimes the search yields no results at all — even when the term I’m searching for is a common word.

Recently I noticed a change in Photos’s search results UI that makes its operation more transparent:

Searching Apple Photos on the iPhone

What’s going on here? I’ve typed the word gorilla into the search box and Photos finds no results. (Yes, I do have photos of gorillas in my collection.) Rather than leave me with nothing, Photos offers to broaden the scope of the search. I’m offered two alternate searches:

gorilla → Mammal
gorilla → Elephant

There’s clearly some term mapping happening behind the scenes. That’s not unusual for search systems. What’s intriguing is how Apple has represented the mapping of terms, with the arrow pointing from my original search term to the suggested alternatives. Neither alternative is very useful to me in this particular case, but I understand why the system is suggesting these terms. (“Mammal” is a broader category of which gorillas are a member, and “elephant” is a sibling in that group.) That said, I appreciate the ability to change the scope of the search with one tap and the compact clarity of this UI.
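
Out of curiosity, here’s a minimal sketch of how this kind of taxonomy-based query broadening might work. The taxonomy and names are hypothetical; Apple hasn’t published how Photos’s search actually does this.

```python
# A sketch of taxonomy-based query broadening: when a term yields no
# results, suggest its broader category and that category's other
# members (siblings). The taxonomy below is a hypothetical toy.

TAXONOMY = {
    "mammal": ["gorilla", "elephant", "dog"],
    "bird": ["parrot", "penguin"],
}

def broaden(query: str) -> list[str]:
    """Return the broader term followed by the query's siblings."""
    suggestions = []
    for parent, children in TAXONOMY.items():
        if query in children:
            suggestions.append(parent)  # broader category, e.g. "mammal"
            suggestions.extend(c for c in children if c != query)  # siblings
    return suggestions

print(broaden("gorilla"))  # ['mammal', 'elephant', 'dog']
```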

Are Dating Apps Making Marriages Stronger?

In Living in Information, I wrote about the increase in the number of romantic relationships that start online as a signal that we’re moving key social interactions to information environments. I’ve often spoken of this fact with a tone of surprise. At some deep level, the romantic in me wants to believe that when it comes to love, the key information is best gathered by being in the same physical space as the other person. But recently the Wall Street Journal highlighted the results of a study that suggest otherwise:

According to the study, the rate of marital breakups for respondents who met their spouse online was 25% lower than for those who met offline.

Why would this be?

The researchers suggested that a greater pool of potential spouses might give users more options and allow them to be more selective.

They also found that more anonymous online communications produced greater self-disclosure, and stronger feelings of affection, than face-to-face communications, laying the foundation for more enduring relationships. A 2011 paper published in the journal Communication Research reached a similar conclusion. In a study of 85 participants conducted by researchers at Cornell University, opposite-sex participants were assigned to a face-to-face exchange, an online exchange with the addition of a webcam, or a text-only exchange. Researchers found that the text-only couples made more statements of affection than either of the other groups and were more comfortable sharing intimate information.

In the book I defined information as “anything that helps reduce uncertainty so you can make better predictions about outcomes.” It may turn out that when it comes to finding a mate, what we learn in structured information environments helps us make better long-term decisions.

Dating Apps Are Making Marriages Stronger – WSJ

From Monolithic to Distributed Architectures

Amazon CTO Werner Vogels on how the company transitioned from a monolithic application architecture to a distributed one:

We created a blueprint for change with our “Distributed Computing Manifesto.” This was an internal document describing a new architecture. With this manifesto, we began restructuring our application into smaller pieces called “services” that enabled us to scale Amazon dramatically.

But changing our application architecture was only half the story. Back in 1998, every Amazon development team worked on the same application, and every release of that application had to be coordinated across every team.

To support this new approach to architecture, we broke down our functional hierarchies and restructured our organization into small, autonomous teams, small enough that we could feed each team with only two pizzas. We focused each of these “two-pizza teams” on a specific product, service, or feature set, giving them more authority over a specific portion of the app. This turned our developers into product owners who could quickly make decisions that affected their individual products.

Breaking down our organization and application structures was a bold idea, but it worked. We were able to innovate for our customers at a much faster rate, and we’ve gone from deploying dozens of feature deployments each year to millions, as Amazon has grown. Perhaps more dramatically, our success in building highly scalable infrastructure ultimately led to the development of new core competencies and resulted in the founding of AWS in 2006.
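
To make the architectural shift concrete, here’s a minimal sketch contrasting an in-process call inside a monolith with a network call to an independently deployed service. The function names and endpoint are hypothetical, not Amazon’s actual code.

```python
import json
import urllib.request

# Monolith: ordering code calls inventory code directly. Both live in
# one codebase and must be deployed together.
def check_inventory_monolith(sku: str, inventory_db: dict) -> bool:
    return inventory_db.get(sku, 0) > 0

# Service-oriented: ordering code calls the inventory team's service
# over the network. Each team owns and deploys its piece independently.
def check_inventory_service(sku: str) -> bool:
    url = f"https://inventory.internal.example/v1/stock/{sku}"  # hypothetical endpoint
    with urllib.request.urlopen(url) as response:
        return json.load(response)["in_stock"]
```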

Technological change requires new ways of working — especially when the change is happening at the structural level. Decentralizing the implementation at the technical level isn’t enough; decision-making must be decentralized as well. I read Amazon’s transition to two-pizza teams as a push towards bottom-up systemic interventions.

This strikes me as a more appropriate response to today’s complex challenges than the top-down hierarchies of the past. Alas, many designers and product managers are still operating within organizational structures that emerged during the industrial revolution, and which don’t easily accommodate bottom-up decision-making.

Modern applications at AWS – All Things Distributed

The Informed Life With Rachel Price

The latest episode of The Informed Life podcast features an interview with Rachel Price, a Senior Information Architect at Microsoft. In addition to practicing and teaching IA, Rachel is a jazz saxophonist. In this episode, we discuss how opening space for improvisation can make us more effective at managing our information.

What does Rachel mean by improvisation?

[it’s] really making a series of choices about what note to play at a given time, but it’s in reaction to a bunch of other input… Improvisation is… Some sort of sensory input goes into the central nervous system. At that point, the player uses all these connections in their head: schemas that they know really well, patterns that they know really well, tools or tricks that they know really well. They make connections. They make a snap decision about what to play. Then they actually play it, and then the whole loop starts over again. So now they’ve created sensory input for someone else or for themselves, and it’s just this repeating cycle of iteration.

This can be a helpful analogy for designers doing user research. And when managing our own personal information environments, it’s useful to have an underlying framework while being mindful of not over-structuring things.

[the] idea that chord changes are enough is so cool. Right? It’s this idea that this pretty spare framework is just enough context to allow people to communicate with each other meaningfully with some shared intention, but with enough freedom for these incredible unpredictable moments to happen as well.

I had a great time talking with Rachel about this subject. Hope you enjoy it too!

The Informed Life Episode 17: Rachel Price on Improvisation

TAOI: Personalized Yelp Results

The architecture of information:

Per TechCrunch, Yelp announced earlier this week that it will allow users to personalize search results:

Once you’ve made your selections, those preferences will start affecting the search results you see. The personalization should be obvious because the results will be identified as having “many vegetarian options” or “because you like Chinese food.” The homepage will also start highlighting locations that it thinks you would like.

Seems like an obvious feature, especially for a system like Yelp, which aims to connect users with places they will like. A short video explains how it works.

A baseline 21st-century tech literacy skill: training the algorithms that personalize your search results. (For designers: watch for emerging user interface standards for such training mechanisms. I was intrigued by Yelp’s use of the heart icon to signify personalization.)
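
For illustration, here’s a minimal sketch of preference-based ranking with explanatory labels, in the spirit of the feature TechCrunch describes. All names and scoring are hypothetical; Yelp hasn’t published its ranking algorithm.

```python
from dataclasses import dataclass

@dataclass
class Place:
    name: str
    categories: set[str]
    base_score: float  # relevance from the query alone

def personalize(results: list[Place], liked: set[str]) -> list[tuple[Place, str]]:
    """Boost places matching stated preferences and attach a
    human-readable reason, like Yelp's result labels."""
    ranked = []
    for place in results:
        matches = place.categories & liked
        boost = 0.1 * len(matches)  # hypothetical boost per matched preference
        reason = f"because you like {sorted(matches)[0]} food" if matches else ""
        ranked.append((place, place.base_score + boost, reason))
    ranked.sort(key=lambda item: item[1], reverse=True)
    return [(place, reason) for place, _, reason in ranked]

results = [Place("Golden Wok", {"Chinese"}, 0.80),
           Place("Taco Sol", {"Mexican"}, 0.85)]
for place, reason in personalize(results, liked={"Chinese"}):
    print(place.name, reason)  # Golden Wok ranks first: 0.80 + 0.10 > 0.85
```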

Yelp will let users personalize their homepage and search results