Clarifying Meanings Through Mindful Set-making

In his classic book, Thinking, Fast and Slow, Nobel laureate Daniel Kahneman poses an interesting question:

How many animals of each kind did Moses take into the ark?

Kahneman explains:

The number of people who detect what is wrong with this question is so small that it has been dubbed the “Moses illusion.” Moses took no animals into the ark; Noah did.

I don’t know about you, but the Moses illusion fooled me. So what’s going on here?

Continue reading

Shipping the Org Chart

While reorganizing my library a few weeks ago, I came across a handout from a 2003 workshop by my friend Lou Rosenfeld titled Enterprise Information Architecture: Because Users Don’t Care About Your Org Chart.

Lots of ideas quickly become obsolete in tech. But after 18 years, the idea that users don’t care about your org chart is still relevant. Teams still ship systems that reflect their internal structures. IA is still crucial to addressing the issue.

Few teams set out to design inwardly-focused systems. Instead, they inadvertently arrive at solutions that feel “natural” — i.e., that mirror their structures. Subtly, the systems they design come to reflect distinctions inherent in their orgs.

Continue reading

Dark IA?

Tyler Sonnemaker, reporting for Insider:

Newly unredacted documents in a lawsuit against Google reveal that the company’s own executives and engineers knew just how difficult the company had made it for smartphone users to keep their location data private.

Google continued collecting location data even when users turned off various location-sharing settings, made popular privacy settings harder to find, and even pressured LG and other phone makers into hiding settings precisely because users liked them, according to the documents.

The report alleges internal stakeholders weren’t clear on the system’s structure:

Jen Chai, a Google senior product manager in charge of location services, didn’t know how the company’s complex web of privacy settings interacted with each other, according to the documents.

Sounds like a concept map would help. But perhaps these issues could be due to more than a lack of understanding:

When Google tested versions of its Android operating system that made privacy settings easier to find, users took advantage of them, which Google viewed as a “problem,” according to the documents. To solve that problem, Google then sought to bury those settings deeper within the settings menu.

Google also tried to convince smartphone makers to hide location settings “through active misrepresentations and/or concealment, suppression, or omission of facts” — that is, data Google had showing that users were using those settings — “in order to assuage manufacturers’ privacy concerns.”

I don’t know anything about this case other than what is in the media, nor do I have firsthand experience with Android’s privacy settings. That said, these allegations bring to mind “Dark IA” — the opposite of information architecture.

Information architecture aims to make stuff easier to find and understand — implicitly, in service of empowering users. The antithesis of IA isn’t an unwittingly disorganized system, but one organized to inhibit understanding and deprive users of control.

Unredacted Google Lawsuit Docs Detail Efforts to Collect User Location

Modeling for Automated Organization

Zach Winn, reporting in MIT News:

MIT alumnus-founded Netra is using artificial intelligence to improve video analysis at scale. The company’s system can identify activities, objects, emotions, locations, and more to organize and provide context to videos in new ways.

Netra’s solution analyzes video content to identify meaningful constructs in service of more accurate organization. This improves searchability and the pairing of video content with relevant ads. How does this work?

Netra can quickly analyze videos and organize the content based on what’s going on in different clips, including scenes where people are doing similar things, expressing similar emotions, using similar products, and more. Netra’s analysis generates metadata for different scenes, but [Netra CTO Shashi Kant] says Netra’s system provides much more than keyword tagging.

“What we work with are embeddings,” Kant explains, referring to how his system classifies content. “If there’s a scene of someone hitting a home run, there’s a certain signature to that, and we generate an embedding for that. An embedding is a sequence of numbers, or a ‘vector,’ that captures the essence of a piece of content. Tags are just human readable representations of that. So, we’ll train a model that detects all the home runs, but underneath the cover there’s a neural network, and it’s creating an embedding of that video, and that differentiates the scene in other ways from an out or a walk.”

This notion of ‘vectors’ is intriguing — and it sounds like an approach that might be applicable beyond videos. I imagine analyzing the evolution of such vectors over time is essential to deriving relevant contextual information from timeline-based media like video and audio. But I expect such meaningful relationships could also be derived from text.
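The math behind this idea is straightforward to sketch. Netra's actual models are proprietary, but the core move — comparing content by the angle between embedding vectors rather than by matching tags — can be illustrated with a toy example. (The three-dimensional vectors below are made up for illustration; real embeddings have hundreds or thousands of dimensions and come from a trained neural network.)

```python
import math

def cosine_similarity(a, b):
    """Compare two embedding vectors by the angle between them.

    Returns a value near 1.0 for vectors pointing the same way
    (similar content) and lower values for dissimilar content.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: two home-run scenes should have similar
# "signatures"; a strikeout scene should not.
home_run_1 = [0.90, 0.10, 0.30]
home_run_2 = [0.85, 0.15, 0.35]
strikeout = [0.10, 0.90, 0.20]

print(cosine_similarity(home_run_1, home_run_2))  # close to 1.0
print(cosine_similarity(home_run_1, strikeout))   # noticeably lower
```

A tag like "home run" would just be a human-readable label attached to a region of this vector space; the organizing work happens in the geometry.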

Systems that do this type of analysis could supplement (or eventually replace) the more granular aspects of IA work. Given the pace of progress in ML modeling, “big” IA (especially high-level conceptual modeling) represents the future of the discipline.

Improving the way videos are organized | MIT News | Massachusetts Institute of Technology

Stripping in Information

Charlotte Shane, reporting for The New York Times on the adoption of content subscription website OnlyFans by sex workers during the pandemic:

Gia [the Smutty Mystic] describes the environment as a virtual strip club, and as is true in an actual strip club, a majority of visitors aren’t forking over much. The cost of subscribing to an account — often less than $20 — is like the handful of dollars slipped into a dancer’s garter while she’s on the main stage: appreciated, but not why she shows up to work. But some customers spend thousands, or even tens of thousands, on their favorite accounts. Personalized product sales and interactions through messages and cam shows — the equivalent of lap dances and time in the champagne room — are how the real money is made. “Eighty percent of your income comes from 20 percent of your customers,” Gia, who goes by a stage name, told me. “I’ve learned that’s a rule of business across industries.”

OnlyFans has provided a venue for many sex workers to continue making money during this time of social distancing. But the site’s information architecture doesn’t help:

Several performers I spoke with attributed their success on OnlyFans to the site’s traffic, but that’s not exactly true. OnlyFans’ search function is so unhelpful that several third-party websites exist solely to help users thoroughly explore the platform’s offerings. Explicit accounts aren’t showcased among the suggested creators on OnlyFans’ home page or tweeted.

What’s more, this appears to be by design as the company looks to avoid legal complications:

The fragility of payment processing may explain why OnlyFans is so averse to discussing the sexual dimension of its site. (Representatives for the company declined to speak on the record for this article after learning of its focus.) The company must rely on the same deflection, euphemisms and implausible plausible deniability that many sex workers use to minimize the damage of pervasive persecution.

Of course, this won’t go over well with the people who depend on the system’s findability:

Sex workers deeply resent OnlyFans’ absence of a sitewide search function and menu of categories and tags to browse, not only because it makes their jobs harder but also because it seems like proof that the site is eager to jettison them entirely — as so many have done before. But Ashley, the organizer, surmised that this choice is a canny tactic for minimizing legal liability, thereby keeping the site up and running. In other words, adult creators are right that the site tries to hide them.

Information architectures aren’t designed in a vacuum; they’re always constrained by the realities of the context in which they exist. OnlyFans sounds like an example of a marketplace whose architecture is driven more by its regulatory environment than the needs or wants of its consumers or producers.

OnlyFans Isn’t Just Porn 😉 – The New York Times

Flexibility vs. Ease-of-use

Chris Welch, reporting in The Verge about a new Android tablet feature:

The simply named “Entertainment Space” will be a new section to the left of the home screen on tablets… It’s an all-encompassing hub that brings together video (TV shows, movies, and YouTube), games, and books.

In other words, the feature aggregates the user’s media, making it easier to access. Instead of having to open individual apps to find movies, TV shows, YouTube clips, etc., users can now access a single screen that puts content upfront.

Computers are universal devices — tools for making tools. Depending on what app you’re using, your computer can be a spreadsheet, a music player, a book, a video editor, etc. This flexibility is a big part of what makes computers powerful.

The tradeoff is complexity. Learning to use a single-purpose tool entails forming an accurate mental model of how it works. This can be hard enough. (I’ve been using Excel for decades and am still learning new things it can do.)

But when you’re using a platform, you must not only form a model of each tool but also of the means through which you manage tools — where to find them, how to install, launch, and configure them, where to save work-in-progress, etc.

There’s an inherent tension between flexibility and ease of use. System designers oscillate between these extremes. A new device may launch as a single-purpose appliance and evolve towards platformhood.

An example of this is Apple TV. Originally designed as a simple living room media player, today’s models offer a broad range of functions, including the ability to install apps like games and third-party media “stores.”

This flexibility makes the system more powerful but also more complex. In the earlier, simpler version, users could easily choose what content to experience. Now, they must keep track not just of what to experience, but where to do it.

Users of a single-purpose system must only understand a small set of taxonomies. For example, if they’re going to watch movies, they’ll expect to deal with genres, movie studios, directors, etc.

In contrast, a more complex system asks that users understand taxonomies of taxonomies: “this is the type of app where I can expect to see movie genres, whereas this other app over here has levels and health points.”

Features like Entertainment Space aim to square this circle by layering a simplified, content-first experience atop the platform. I expect their effectiveness depends on their discovery algorithms. It’s a tricky design challenge.

Google’s Entertainment Space makes Android tablets look like Google TV – The Verge

Building Bridges to Understanding

Some tasks are easy, like choosing a flavor of ice cream; other tasks are hard, like choosing a medical treatment. Consider, for example, an ice cream shop where the varieties differ only in flavor, not calories or other nutritional content. Selecting which ice cream to eat is merely a matter of choosing the one that tastes best. If the flavors are all familiar, such as vanilla, chocolate, and strawberry, most people will be able to predict with considerable accuracy the relation between their choice and their ultimate consumption experience. Call this relation between choice and welfare a mapping. Even if there are some exotic flavors, the ice cream store can solve the mapping problem by offering a free taste.

Richard H. Thaler, Cass R. Sunstein, Nudge

Thaler and Sunstein are describing part of what I understand as a mental model. New users aren’t blank slates. They approach interactions with a system using preconceptions shaped by prior experiences with analogous systems.

For example, imagine you encounter chocolate as a possible ice cream choice for the first time. (I know, it’s inconceivable. Everyone loves chocolate ice cream. Right? I know I do. Please bear with me.) If you’ve had chocolate candy and any other kind of ice cream before, you may have a rough idea of what to expect. Chocolate has a particular flavor, and ice cream is sweet, cold, and creamy.

Now consider an exotic ice cream flavor such as green tea. You may have had ice cream and green tea before, so you have reference points for both. However, your prior experiences confound your expectations of how green tea ice cream will taste and feel. Ice cream is sweet and cold; green tea is bitter and hot.

So, when choosing between chocolate or green tea ice cream, you’ll have a better model of the former. That is, your expectations of the taste of chocolate ice cream map more closely to your experience of eating it. If you’re feeling adventurous, you may pick green tea anyway. But it’s a gamble. Hence, those (obnoxiously small) free sample spoons in ice cream shops.

The primary function of information architecture is establishing meaningful distinctions. These distinctions appear as choices to users. Users understand those choices in relation to other choices (i.e., as sets of concepts) and in relation to prior interactions with similar choices (i.e., as individual concepts.)

Some of these concepts will be more obvious than others, much like chocolate is a more obvious choice of ice cream flavor than green tea. Users need help when choosing between unfamiliar or ambiguous concepts.

In other words, users need semantic analogs to those free ice cream samples. For example, each choice could include a clear label, plus an icon or a short phrase that clarifies its meaning in this particular context. Ideally, such aids give users a high-level preview of what they can expect to find when they choose that option. (I.e., they “give them a taste of what’s to come.”)

Much of the craft of IA consists of orchestrating the expectations of users as they’re inducted into new systems. This requires building nuanced bridges between users’ (imperfect) mental models and systems’ (complex, unfamiliar) conceptual models. When done successfully, a user’s confidence in making choices will increase as he or she interacts with the system.

Cover photo: Ruth Hartnupt (CC BY 2.0)


Subscribe to my newsletter

If you find this post useful, you may also like my newsletter. Every other Sunday, I share ideas and resources about strategic design, systems thinking, and information architecture. Join us!


Architectural Skeuomorphism

Sarah Barrett, writing in Medium:

While there is a lot that IA can learn from actual architecture or city planning, websites aren’t buildings or cities, and they don’t have to work like them. Instead, they should be designed according to the same principles that people’s brains expect from physical experiences.

We have innate skills that allow us to navigate and understand the ‘real’ world. Like physical places, information environments (i.e., websites and apps) are contexts where we can do and learn things.

As a result, it’s natural to want to layer real-world affordances onto digital places. But it’s a naive mistake. Digital can do things physical can’t and vice-versa. Thoughtlessly mimicking real-world affordances in information environments can lead to what Sarah calls “architectural skeuomorphism” — a plague of early web and app UIs.

Conversely, digital’s flexibility makes it easy to inadvertently confound our expectations of things when we experience them in more than one ‘place.’ Sarah offers a great example: a Google Doc document object offers different capabilities depending on where you’re interacting with it within Google’s app ecosystem.

To design more usable systems, we must understand how humans make sense of being in and operating within environments. Sarah offers four specific areas for exploration, and promises a longer-form treatment of each. If you’ve read Living in Information, you’ll know why I’m so excited to see where she’s taking this.

Websites are not living rooms and other lessons for information architecture

On the IA-Chess Analogy

Jessi Shakarian, writing in Medium:

When I picked up Information Architecture for the Web and Beyond, the so-called “polar bear” book, I didn’t expect to find a passion around chess. However, chess has become my lens of looking at information architecture in the real world.

In the book, the authors use chess as an analogy for information architecture — it’s a system of rules that doesn’t change based on where you play (on a wooden board in your living room, online against a friend across the country, or on an app on your phone).

The chess analogy is one of my favorite ways of explaining information architecture. As Jessi points out, the game has been around for a long time. Many people know about chess and — more importantly — are aware that it and its physical instantiation aren’t the same thing. As Jessi explains,

Continue reading