Two Approaches to Structure

There are at least two approaches to structuring a digital information environment: top-down or bottom-up.

In the top-down approach, a designer (or more likely, a team of designers) researches the context they’re addressing, the content that will be part of the environment, and the people who will be accessing it. Once they understand the domain, they sketch out possible organization schemes, usually in the form of conceptual models. Eventually, this results in sets of categories — distinctions — that manifest in the environment’s global navigation elements.

Top-down is by far the most common approach to structuring information environments. The team “designs the navigation,” which they often express in artifacts such as wireframes and sitemaps. This approach has stood the test of time; it’s what most people think of when they think about information architecture. However, it’s not the only way to go about the challenge of structuring an information environment.

The other possibility is to design the structure from the bottom up. In this approach, the team also conducts extensive research to understand the domain. However, the designers’ aim here is not to create global navigation elements. Instead, they’re looking to define the rules that will allow users of the environment to create relationships between elements on their own. This approach allows the place’s structures to emerge organically over time.

Consider Wikipedia. Much of the usefulness and power of that environment come from the fact that its users define the place. Articles and the links between them aren’t defined beforehand; what is predefined are the rules that allow people to define elements and the connections between them. Who will have access to change things? What exactly can they change? How will the environment address rogue actors? Etc.
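The contrast can be illustrated with a small sketch: instead of hard-coding a category tree, a bottom-up environment encodes rules governing who may create elements and connections. This is a toy illustration, not Wikipedia’s actual model; all names are invented.

```python
# A minimal sketch of "designing the rules, not the structure."
# The structure (pages and links) is empty at the start; only the
# rules about who may change things are defined up front.

class Environment:
    def __init__(self):
        self.pages = {}        # title -> set of linked titles (emerges over time)
        self.editors = set()   # who has access to change things

    def grant_access(self, user):
        # Rule: who will have access to change things?
        self.editors.add(user)

    def link(self, user, source, target):
        # Rule: only registered editors may create connections.
        if user not in self.editors:
            raise PermissionError(f"{user} may not edit")
        self.pages.setdefault(source, set()).add(target)
        self.pages.setdefault(target, set())

env = Environment()
env.grant_access("alice")
env.link("alice", "Cybernetics", "Systems theory")
print(env.pages["Cybernetics"])  # structure emerges from user actions
```

The designers never drew the resulting link graph; they only decided the rules under which it could grow.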

Bottom-up approaches are called for when dealing with environments that must grow and evolve organically, or when the domain isn’t fully known upfront. (Think Wikipedia.) Top-down approaches are called for when dealing with established fields, where both content and users’ expectations are thoroughly known. (Think your bank’s website.) Most bottom-up systems will also include some top-down structures in their midst. (Even Wikipedia has traditional navigation structures that were defined by its design team.)

So do you choose top-down or bottom-up? It depends on what problem you’re trying to solve. That said, I find bottom-up structures more interesting than top-down structures. For one thing, they accommodate change more elegantly — after all, they’re designed to change. This approach requires that the team think more carefully about governance issues upfront. Bottom-up structures are more challenging to design and implement. Designers need to take several leaps of faith. They and the organization they represent are ceding control over an essential part of the environment.

Most information environments today are designed to use top-down structures. Some have a mix of the two: predefined primary nav systems and secondary systems that are more bottom-up. (Think tagging schemes.) I expect more systems to employ more bottom-up approaches over time. Tapping the distributed knowledge of the users of a system is a powerful approach that can generate structures that better serve their evolving needs.

An Architecture + Systems Thinking Reading List

A friend asked me for a syllabus on architecture and cybernetics. I don’t have a comprehensive syllabus on the subject, but I did send him a short list of readings that have informed my thinking about architecting from a systemic perspective. I thought you may get value from this list as well, so I’m sharing it here. The resources are in no particular order.

What major resources have I missed? Please let me know.

Getting More Done With Information

My recent conversation with Fabricio Teixeira (Ep 3, The Informed Life podcast) focused on how Fabricio and his partner Caio Braga manage UX Collective, one of the most popular UX design publications in the world. Fabricio and Caio leverage a chain of tools that allows just the two of them to produce work that would’ve required a larger team in the past.

Much has been written about how social media and other information environments impair our cognitive abilities. (I touched on this myself in Living in Information.) But information environments can also augment our abilities. There are myriad easy-to-use information systems that allow us to get stuff done more efficiently.

As a small business owner, I can do much online that would’ve required outsourcing or hiring other people in the past. There are online systems available to automate everything from bookkeeping to marketing. It’s not that they do it all for you; automation isn’t quite that advanced yet. That said, these systems allow you to better leverage your time.

Take Buffer, one of the systems that came up in the conversation with Fabricio. Buffer allows you to pre-schedule social media posts; you can determine when you’d like specific messages to be published through Twitter, Facebook, LinkedIn, and Instagram. In essence, it allows you to create a personal marketing system. This means you can allocate your time more wisely: rather than having to post messages in real-time (with the potential distractions that entails), you can set time aside to plan out your messages in a batch.

APIs make the system work. Buffer wouldn’t be of much use if it couldn’t leverage social networks. It’s not a free-standing tool, but rather a way to bring together several other systems that provide particular functionality. Centralizing posting to several social networks creates great efficiencies. I’ve been using Buffer for years, and have found it useful. It allows my messages to have greater reach than they would’ve if I had to post individually to each social platform in real-time.
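As a rough illustration of what such a tool does (this is not Buffer’s actual API; the names here are invented), a scheduler simply holds messages with a target network and a publish time, and releases the ones that are due to each network’s posting API:

```python
# Illustrative sketch of a Buffer-like scheduler -- NOT Buffer's real API.
# Messages are queued per network ahead of time, then released in batches.
from datetime import datetime, timedelta

class Scheduler:
    def __init__(self):
        self.queue = []  # list of (publish_at, network, message)

    def schedule(self, message, network, publish_at):
        self.queue.append((publish_at, network, message))

    def due(self, now):
        # Messages ready to be handed off to each network's posting API.
        ready = [item for item in self.queue if item[0] <= now]
        self.queue = [item for item in self.queue if item[0] > now]
        return ready

s = Scheduler()
now = datetime(2019, 4, 1, 9, 0)
s.schedule("New post is up!", "twitter", now + timedelta(hours=1))
s.schedule("New post is up!", "linkedin", now + timedelta(hours=2))
print(s.due(now + timedelta(hours=1)))  # the Twitter message is released
```

The value lies in the batching: you plan the queue once, and the system handles the real-time posting for you.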

Buffer is one of many such systems. I’m sure there are many others I’m not aware of that could automate or augment my other workflows, or help me do things that I simply wouldn’t have been able to before. One of the reasons why I started The Informed Life is that I want to learn about such systems — and share what I learn with you. What’s working for folks? What isn’t? How might we configure our personal information ecosystems so we can thrive?

Seeing What’s Actually There

One of the most important things I learned at university was how to see. Architects communicate through drawing, so it’s important for them to learn to draw. Drawing well requires observing carefully; capturing what’s actually there as opposed to what you think is there. This is harder than it sounds. The mind keeps breaking in with shortcuts. “I know what this is. It’s the roof of a house. We know what the roof of a house looks like, don’t we? Just draw that.” The result is often an abstraction that has little to do with what’s actually there.

Knowing that your mind mediates between the world and what you’re trying to capture is an important lesson. If it isn’t pointed out to you, you may not know you’re doing it. You go along merrily introducing theories and abstractions that influence your perception of reality.

I’m teaching my students to observe systems in action. Systems are composed of various elements that relate to each other in particular ways. When these elements interact, the system exhibits particular behaviors. Understanding how the system works and what it does requires observing these elements and their behavior over time. What are the elements? How do they influence each other? What happens when they do?

When I ask the students to explain what they’re seeing, they invariably respond with a mix of observations and theories. Often, the theories have little to do with what’s actually happening. Interestingly, the observations they report are clearly influenced by their theories. The students make assumptions about what they’re seeing based on what they believe is happening.

We all do this. Observing with equanimity is difficult. Our chattering mind constantly breaks in with explanations. We pine for coherence; we want reality to correspond to our mental models, rather than the other way ‘round. We must practice seeing clearly and impartially in order to get better at it, much as we practice to get better at a sport. It’s an essential meta-skill that improves our ability to acquire other skills.

Twitter as a Public Square

Managing an information environment like Twitter must be very difficult. The people who run the system have great control — and responsibility — over what the place allows and encourages. In a conversation platform (which is what Twitter is at its core), the primary question is: How do you allow for freedom of expression while also steering people away from harmful speech? This isn’t an easy question to answer. What is “harmful”? For whom? How and where does the environment intervene?

Episode 148 of Sam Harris’s Making Sense podcast features a conversation with Jack Dorsey, Twitter’s CEO, that addresses some of these questions head-on. I was very impressed by how much thought Mr. Dorsey has given to these issues. It’s clear that he understands the systemic nature of the challenge, and the need for systemic responses. He expressed Twitter’s approach with a medical analogy:

Your body has an indicator of health, which is your temperature. And your temperature indicates whether your system more or less is in balance; if it’s above 98.6 then something is wrong… As we develop solutions, we can see what effect they have on it.

So we’ve been thinking about this problem in terms of what we’re calling “conversational health.” And we’re at the phase right now where we’re trying to figure out the right indicators of conversational health. And we have four placeholders:

1. Shared attention: What percentage of the conversation is attentive to the same thing, versus disparate.
2. Shared reality: This is not determining what facts are facts, but what percentage of the conversation are sharing the same facts.
3. Receptivity: Where we measure toxicity and people’s desire to walk away from something.
4. Variety of perspective.

What we want to do is get readings on all of these things, and understand that we’re not going to optimize for one. We want to try to keep everything in balance.
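Twitter hasn’t published formulas for these indicators, but one hypothetical way to “keep everything in balance” rather than optimize for one metric is to score a conversation by its weakest indicator, so that maximizing any single measure can’t mask a deficit in another:

```python
# Illustrative only: these are the four placeholder indicators from the
# conversation, with made-up values; Twitter's actual metrics are unknown.

def conversation_health(indicators):
    """indicators: dict of name -> value normalized to the range 0..1.
    Scoring by the minimum means no single indicator can be optimized
    at the expense of the others."""
    assert all(0.0 <= v <= 1.0 for v in indicators.values())
    return min(indicators.values())

sample = {
    "shared_attention": 0.8,
    "shared_reality": 0.7,
    "receptivity": 0.4,           # high toxicity drags the whole score down
    "variety_of_perspective": 0.9,
}
print(conversation_health(sample))  # 0.4
```

Whatever the real implementation, the stated goal is the same: a balanced reading across all four indicators, not a high score on any one of them.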

I’d expect the idea to be to incentivize “healthy” conversations over “unhealthy” ones. This would be implemented in the design of the environment itself, rather than at the policy level:

Ultimately our success in solving these problems is not going to be a policy success. We’re not going to solve our issues by changing our policy. We’re going to solve our issues by looking at the product itself, and the incentives that the product ensures. And looking at our role not necessarily as a publisher, as a host of content, but how we’re recommending things, where we’re amplifying, where we’re downranking content.

Twitter has a great responsibility to get this right, because in some ways the system is becoming key public infrastructure. As Mr. Dorsey acknowledged,

Ultimately, I don’t think we can be this neutral, passive platform anymore because of the threats of violence, because of doxxing, because of troll armies intending to silence someone, especially more marginalized members of society. We have to take on an approach of impartiality. Meaning that we need very crisp and clear rules, we need case studies and case law for how we take action on those rules, and any evolutions of that we’re transparent and upfront about. We’re not in a great state right now, but that is our focus. I do believe that a lot of people come to Twitter with the expectation of a public square. And freedom of expression is certainly one of those expectations. But what we’re seeing is people weaponize that to shut others’ right to that down. And that is what we’re trying to protect, ultimately.

As a Twitter user, I was pleased to see the depth of the thinking and care that is going into these issues. I learned a lot from this podcast about the reasons for some of Twitter’s controversial design decisions. (E.g. I now know why Twitter doesn’t have an “edit” button.)

Unfortunately, the conversation didn’t address the elephant in the room: Twitter’s business model. Ultimately, Twitter makes money by showing ads to its users. A good public square shouldn’t attempt to sway our opinions; it should provide the venue for us to form them through engagement with others. How might “conversational health” be used as a means for persuasion?

Making Sense Podcast #148 – Jack Dorsey

The Way to Proficiency in Complex Environments

Esko Kilpi, writing in Medium:

In complex environments, the way to proficiency is to recombine successful elements to create new versions, some of which may thrive.

As a result, not just the user interfaces, but the operating system of work is starting to change in a radical way. The traditional industrial approach to work was to require each worker to assume a predetermined responsibility for a specific role. The new approach represents a different logic of organizing based on neither the traditional market nor a process.

I’m drawn to systems that favor emergent structures over predefined top-down structures, for the same reasons Mr. Kilpi highlights in his post. Alas, important parts of our societies are still organized around somewhat rigid top-down structures.

Top-down structures can work when domains are simple, contingencies minimal, significant changes infrequent, and one has some degree of agency over the context. That’s the opposite of many current environments. Emergence — how natural structures come about — offers us an alternative approach to designing systems that address complex, evolving environments more skillfully.

The key is clarity on the purpose(s) the system is working towards. How do you achieve clarity of purpose in situations where multiple stakeholders have conflicting interests? You need leadership with vision. Top-down in service to emergence.

Collaborative and Competitive Creativity

The “Right” Way

Interacting with students is one of the privileges of teaching design at the graduate level. These budding designers are open-minded yet seriously focused on their chosen area of practice, a mindset that offers many opportunities for teaching and learning.

Many of the questions students ask are about the “right” way to do particular things. What’s the right way to diagram a system? What’s the right way to design an interaction? What’s the right way to present this? Is this how a conceptual map is supposed to look? Etc. My reply is often disappointing: There isn’t a “right” way to do it; it depends.

This answer seldom satisfies. But what’s the alternative? There aren’t right/wrong answers in design, only incremental approximations to improved conditions, some of which are preferable to others. Ambiguity comes with the territory, especially at the graduate level. (It certainly does when dealing with clients in “real-world” conditions.)

One of my aims is to help students realize that I’m not there to judge what’s wrong or right; they must develop this sense in themselves. What I can offer is a set of tools and practices that allow them to develop a particular skill: thinking-through-making.

Thinking-through-making is how a diverse group of smart people can come together to solve complex systems problems. These aren’t problems you can solve in your head or by talking with others; you must build models that allow you to externalize your understanding. The act of making the model prompts insights that won’t emerge otherwise. Doing so with others allows the entire group to tap into — and build — their pooled cognitive capacities in an incredibly powerful way.

Thinking-through-making is independent of any particular discipline; it’s evident in architecture, graphic design, interaction design, etc. The feedback loop at the center of the design process is a characteristic shared by all design disciplines. The designer facilitates this feedback loop.

Given the increasingly complex and multi-disciplinary challenges we face, it behooves us to think about design independently of our particular areas of practice. We can leverage our individual expertise in service of bringing diversity to the team and proposing alternative approaches that might otherwise have been missed. But at the core is design, a way of solving problems that doesn’t offer on-the-spot “right” answers but evolves incrementally towards better ones.

A Data Primer for Designers

My friend Tim Sheiner, writing for the Salesforce UX blog:

demand is high for designers who can create experiences that display data in useful and interesting ways. In my personal experience this became much, much easier to do once I’d learned to speak the crisp, precise and slightly odd language used by technical people for talking about data.

What follows is a phenomenal post that clearly explains much of what you need to know to understand and speak competently about data. A must-read for anybody involved in designing for digital information environments.

Designer’s Field Guide to Data

The Illusion of Explanatory Depth

“The only true wisdom is in knowing you know nothing.”
— Socrates

You know less than you think you do. We all do. Consider an object you interact with every day: a flushing toilet. You know how to operate this device. Depending on where you live, you activate it by either pushing a button or pulling on a small lever, which causes water to flush away wastes. Fine, but how does it do this? Knowing how to operate a thing doesn’t mean understanding how it does it. You probably have a rough mental model of how the toilet does its thing, but if asked to draw a diagram that explains it in detail, you’d likely have to do a bit of research.

This is an example of a cognitive bias called The Illusion of Explanatory Depth. Although it’s an old principle (as evidenced by Socrates’s quote), it was first named by cognitive scientists Leonid Rozenblit and Frank Keil. In a 2002 paper, Rozenblit and Keil explained that most of us think we know how things work, when in fact we have incomplete understandings. Our “folk theories” offer explanations that lead us to believe we know more than we actually do. We become overconfident, our mental models inadequate.

When we interact with complex systems, we often experience only a small part of the system. Over time we develop an understanding of cause-effect relationships through the elements we experience directly. While this understanding may correspond to the way the subsystem actually works, it doesn’t necessarily correspond to the way the whole works. Our understanding of the subsystem leads us to think we understand the whole. This is a challenge when interacting with systems where we can directly experience cause-effect relationships (e.g., we pull the flush lever and see and hear water rushing through the toilet) but it’s an even greater challenge in systems where such mechanics are hidden away from the user.

I’ve owned my Apple Watch for four years, and I still don’t understand why sometimes the device’s battery lasts all day, while at other times it’s completely depleted shortly after mid-day. At first, I was confident about my understanding of the problem; surely the Watch worked like an iPhone, a device I had some experience with, and for which I therefore had a reliable mental model of energy usage. I tried tweaking the Watch in the same way I do the iPhone, but nothing worked as I expected. Eventually, I had to admit to myself that my model of how the Watch uses energy was flawed. I’ve since adopted a Socratic mindset with regards to the Apple Watch: I just don’t know what triggers greater energy consumption on the device. The only thing I know for sure with regards to this subject is that I don’t know.

The Illusion of Explanatory Depth leads us to make less-than-optimal decisions. Intervening in a complex system while thinking you know more than you actually do about the system’s workings can lead to disastrous results. Designers — people who intervene in systems for a living — must adopt a “beginner’s mind” attitude when it comes to their workings. Even if (especially if) we think we understand what’s going on, we must assume we don’t really.

Designers should also aspire to create systems that are easy to use but offer some degree of transparency; that allow their users to create mental models that correspond to how the thing works. The first time I opened a toilet tank was a revelation: I could clearly see the chain of interactions that led from my pulling the lever to having water rush from the tank and down the tubes. Opening the tank isn’t something you do in your day-to-day use of the toilet, but it’s an ability the system affords. I can’t lift the lid on my Apple Watch to examine how it uses up energy.

Increasingly, the systems we design look more like an Apple Watch than a flush toilet: they’re extremely complex and driven by algorithms and data models that are opaque and often emergent. When we design an “intuitive” user interface to such a system, we run the risk of making people overconfident about how it works. We can’t build good models of systems if we can’t see how they do what they do. While this may not be an issue for some classes of systems, it could be extremely problematic for others. As we move key social interactions to some of these systems, our inability to build good models of how they work coupled with their “easy-to-use” UIs can cause serious challenges for our societies at every level.