In this conversation, we discussed his latest initiative, Oslo for AI, which seeks to design better processes for governing artificial intelligence. Michael started by explaining the initiative’s name, which refers to an unofficial, parallel negotiation effort that led to the Oslo Peace Accords in the 1990s:
The reason that I called it Oslo was that I had rewatched a film called Oslo, which is based on a play, which dramatizes the secret Oslo Peace Accords that were orchestrated by a couple of Norwegians, a husband and wife, during the early 1990s when the first negotiations between Israel and the PLO were happening, and Yitzhak Rabin, the Israeli Prime Minister at the time, and Yasser Arafat, the chairman of the PLO, were having their first-ever face-to-face talks and their first-ever serious negotiation at peace.
And the Norwegians saw or believed that process wasn’t going so well, and one of them had an idea about a very different way of doing negotiation and conversation and talked a couple of people from both sides into entertaining an experiment, and taking them away to Norway, just a couple of people from either side, and starting a very different kind of conversation: much smaller, and also one that involved, as part of a holistic approach, making relationships as well as doing the daily work of negotiation.
Drawing inspiration from this effort, Michael has designed a new conversation around AI governance.
What [Oslo for AI] wants to do is to create new possibilities for participation. And in particular, it describes a design process, which over the course of a year will create a set of serial design engagements, in the form of three-day retreats for small numbers of people, eight to twelve people at a time, in different places all over the world. And each time we bring those people together, we will ask those people to work together to contribute to the design of something that we’re calling a constitutional assembly, which is meant to answer the question, “What would it look like if we were to have a constitutional convention in the 21st century, and what would it look like if the context for that constitutional work were the governance of AI and pervasive technology?”
And so, the idea is that we’ll bring these small groups together, give them three intensive days of working on those questions and making a contribution to a design, and at the end of the project, we will assemble those designs into a demo that we can run, which will, as a practical matter, answer the question. We spent a year on this question of what it would look like to design a new way of running a participatory constitutional process. And this is what it looks like. And the hope is that that has something to contribute to the future of governance in the context of AI, not to be substitutional of other structures and institutions of governance but to be contributory and to perhaps add something new and maybe something missing from our institutions and our institutional constructs and their limitations.
The focus of this “constitutional convention” is the role and relation of various AI technologies in societies. Putting aside artificial general intelligence, which is still speculative, AI represents a major disruption to society. How will we manage it in ways that create the most good?
As Michael described the situation, there are many challenges to doing this from within existing structures and institutions, ranging from centralized control of AI by for-profit organizations to language that is rooted in cultures of dominance. Oslo for AI seeks to explore alternate ways of governance that overcome these challenges through inclusive design.
It’s a fascinating idea, and one that likely applies to other areas as well. AI governance just happens to be especially urgent and impactful. I hope you get as much value from this conversation as I did.
The Informed Life episode 136: Michael Anton Dila on Oslo for AI
I currently have two such conversations going.
The first is a “morning pages” journal where I reflect on what happened the previous day and think about the day ahead. I have a little ritual around these. Every day starts with a 20-minute meditation. Then, I make a cup of (decaf) coffee and write a journal entry. It’s good to do this when the house is quiet.
Journal entries follow a standard structure.
It takes less than ten minutes to complete these entries. Writing about my life helps me reflect on things differently than if I’d just thought about them. After all, writing is a way of thinking. This practice just applies it to the self.
As a side benefit, it also gives me a record of my life. I’ve been journaling for seventeen years. As a result, I can examine a significant portion of my life: successes, frustrations, mistakes, lessons learned, etc.
I recently heard someone say they’re using AI to extract insights from their journal. I haven’t tried this yet, but it’s such an interesting proposition — like having a personal biographer! (That said, I’d want to ensure the model isn’t being trained on my personal information.)
The second “self-conversation” is more recent. I’ve started using Apple’s Journal app to write down random thoughts and observations throughout the day. I also take photos of what I’m about to eat or what I’m doing.
The idea with this journal is to be more intentional about things. I’m framing it like sharing on social media, but for an audience of one: “future me.” While my morning pages journal is now an established practice, this running commentary is still very much an experiment. I still haven’t fully built habits around it. But so far, it’s helped me be more mindful.
A journal need not be a single, monolithic thing. With computers, you can have different journals for different purposes. The point is being more intentional and thoughtful about your life. Writing is a good way to go about it.
In particular, I’m looking to aid research. When starting a project, I aim to understand the system’s content, context, and users. Some of this entails interviewing users and stakeholders, but much of it is desk research: learning about the product, its subject matter, competitors, etc. by reading web pages, PDFs, presentation decks, and videos.
As a visual thinker, I often reflect on what I learn in concept maps. These visualizations serve as a shorthand to communicate models about the domain. I show maps to subject matter experts, stakeholders, and clients to align our understanding of what we’re working on. Their feedback leads to more accurate maps (and therefore, more accurate models.)
Given how quickly concept maps help teams get aligned on their understanding, I thought it might be useful to develop a tool to expedite the creation of concept maps. I don’t mean a tool to help you draw a map. After all, there are great diagramming tools on the market. I mean a tool that will draw concept maps for you.
Synthesis is one thing LLMs do well. Ask ChatGPT to summarize an article, and you’ll get a useful couple of paragraphs that get to the gist of whatever you want to understand. But the output is still text. What if it could do the same but render a diagram instead?
So that’s what I’ve been working on: an AI-assisted concept mapping tool. I call it LLMapper. In its current incarnation, it “reads” the contents of a web page and spits out a concept map, like this:
This is a concept map of the Wikipedia page for the first Star Wars movie. To get this result, all I needed to do was go to my Mac’s command line and pass the tool the URL of the corresponding Wikipedia page, like this:
./llmapper https://en.wikipedia.org/wiki/Star_Wars_\(film\)
Currently, the tool has several limitations. For one thing, it’s hardwired to only read Wikipedia pages. That should be easy to fix. (It could parse any text at all; it doesn’t need to be a web page.) It also has no error checking; you could pass it garbage and it would try to work with it. That, too, should be fixable, but I’ve focused my explorations elsewhere.
And that’s okay since this isn’t meant as a production tool. It’s a way to learn about AI. The best way to learn about a new technology is to do stuff with it. (Far too many people are developing passionate opinions about AI without venturing further than ChatGPT. This is a mistake.)
But there might be practical outcomes here beyond learning about AI. For visual thinkers, concept maps help elucidate complex subjects. It’s a skill I teach students in my systems course, and one that all systems-oriented designers should acquire. Having a concept mapping assistant could expedite the research stage of UX design projects.
Currently, LLMapper is a shell script that cobbles together several other command-line tools.
The key tool here is Simon Willison’s llm. I’ve written about this tool before since it’s at the heart of my other experiments with AI. Willison describes it as a “[command-line interface] utility and Python library for interacting with Large Language Models, both via remote APIs and models that can be installed and run on your machine.”
That means that it allows you to pass and receive information from a command-line interface (such as the one you access via your Mac’s Terminal app) to and from large language models. I’m using GPT-4, but llm allows you to use other models as well, including local open-source models.
By “passing and receiving” information, I mean you’re liberated from the constraints of the chat-based interface. Much of what you do with computers can be reduced to text. At that point, you can use the CLI to manipulate it in various ways. llm lets you add AI smarts to that ecosystem.
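For instance, here’s a minimal sketch of that kind of piping (assuming the llm CLI is installed and configured with an API key; the file name and prompt are illustrative):

cat article.txt | llm -m gpt-4 -s 'Summarize this text in one paragraph.'

The text flows in from the command line, and the model’s response flows back out, ready to be piped into the next tool.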
LLMapper uses llm to parse text from a particular section of a web page (i.e., a named DIV) and pass that to GPT-4 in three separate calls, each of which transforms it in particular ways.
There are a few other bells and whistles, but that’s the gist of it. Each “call” is a prompt that instructs GPT-4 to do various things, such as summarizing text, converting it to RDF, and converting it to DOT. Most of my “coding” time on this project has consisted of tweaking these prompts to get them to produce relatively reliable results.
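To make that structure concrete, here’s a hedged sketch of the chain of calls. This is not LLMapper’s actual code; the prompt file names are hypothetical, and the last line assumes Graphviz is installed to render the DOT code:

summary="$(llm -m gpt-4 -s "$(cat prompts/summarize.txt)" < page.txt)"  # 1: summarize and extract concepts
rdf="$(echo "$summary" | llm -m gpt-4 -s "$(cat prompts/to-rdf.txt)")"  # 2: convert the summary to RDF
echo "$rdf" | llm -m gpt-4 -s "$(cat prompts/to-dot.txt)" > map.dot     # 3: convert the RDF to DOT
dot -Tpng map.dot -o map.png

Each step reads the previous step’s output from a pipe, so the prompts can be developed and tested independently.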
I say “relatively” because the tool still has a lot of issues. The third step is reliable: GPT-4 is good at transforming RDF code into DOT code. But steps 1 and 2 are highly variable. Sometimes, the results are impressive, but often, they’re crap. ‘Scattered’ maps — those with no central focus — are a common problem. This is an example from the same web page as above:
Another common issue I’ve struggled with is relevance. Some LLMapper diagrams include lots of irrelevant details but miss the important stuff. This issue primarily affects step 1. I’ve noticed some Wikipedia pages produce better results than others. The problem compounds down the line: if the summarization and concept extraction step goes wrong, the RDF and DOT transformations will be wrong too. Garbage in, garbage out.
A realization I’ve had in working with this and other LLM-based tools is that claims about these tools having emergent intelligence are over-optimistic. LLMs really are super-powerful forms of autocomplete — and not much more. LLMapper makes mistakes that no human would. These systems don’t seem to have underlying models of the world — just predictions of what word is likely to come next in a sequence.
That turns out to be incredibly useful in a variety of scenarios. We have only caught small glimpses of these things’ capabilities. LLMs have already forever transformed how many knowledge workers — including me — create value. That said, I have a hard time seeing how they lead to artificial general intelligence.
But the main lesson I’ve learned while working with LLMapper is that rather than thinking of an LLM as some kind of omniscient superintelligence, it may be more useful to think of creating discrete ‘intelligences’ (or small agents) that collaborate with each other towards achieving a specific outcome. In this case, the three prompts require three kinds of ‘thinking’: summarizing text and extracting its key concepts, representing those concepts as RDF, and rendering them as a DOT diagram.
These are distinct processes. By breaking them up, I can develop them independently. Like I said, I feel good about step 3, but steps 1 and 2 can clearly be better. The RDF step is particularly primitive. (I’m learning about knowledge graphs as part of this project.) There are other projects that aim to process knowledge graphs with LLMs, and those could clearly help here.
Breaking free of the chat UI has been a boon. By using llm, I can pipe the output of one prompt into another, essentially allowing these little ‘agents’ to work with each other. There are many other problems beyond concept mapping where this approach could prove useful. If I get nothing else out of this experiment, this modular way of interacting with LLMs is something I’ll definitely use in other projects.
If you want to see more results from LLMapper, I’ve started a website called Modelor.ai to share outcomes from these experiments. It’s a running journal where I document tweaks to the tool and its prompts. Again, the point is learning how to work effectively with LLMs. Sharing the stumbles and successes is part of the process.
You can download LLMapper from GitHub; it’s published under an Apache license. As I mentioned above, this is a learning toy and not a production tool; it has nothing but rough edges. It assumes you have some command-line chops and has only been tested on Macs. (It should be easy to install on Linux systems. I’m not familiar enough with Windows to know what needs to happen there.)
If you do decide to try it, I’d love to hear how you’re playing with it and what you learn. I’m excited by the possibilities of LLMs and want to learn more; I’ve already learned a lot from the few folks who’ve used this tool.
As Dave described it, Gray Area is a combination maker space and venue for artistic expression that provides learning opportunities in advanced technologies. By ‘advanced technologies,’ I mean digital interventions in physical environments, such as augmented reality and projection mapping. Dave described it pithily: “It’s a play space and an empowering space for teaching and sharing various technologies.”
Such technologies have largely been out of reach for people outside of corporate contexts. Gray Area aims to democratize them, overcoming industry gatekeeping.
This manifests in various ways. One is public exhibitions of digital art, including guest exhibits by renowned artists, which set an example and motivate local students. Another is opportunities for hands-on experience and instruction on these technologies to make them accessible to a broad range of people.
The result is a community that provides access to firsthand experiences with these technologies, plus the know-how to use them, opening doors to new possibilities. As Dave put it,
[Gray Area] brings together such an interesting group of people: amazingly technically proficient people who have an inkling to do something creative but have been told or don’t have the opportunities to be creative. And on the other side, creative people who see a possibility to do something with technology, but they have this vibe, or they’ve heard that they don’t have the technical skills to make that possible.
I got the opportunity to visit Gray Area before the pandemic, and was impressed by the work. That was before I knew about their educational mission. It’s such an exciting idea — one that I wish were replicated in more places.
In 2019, my wife and I started homeschooling our kids. We were new at this, and the context in which children learn today is different from the one she and I grew up in. There are amazing information environments such as Khan Academy and Scratch where kids can focus on subjects they’re interested in and learn at their own pace. So we looked for tools and resources to help us.
Among these, we found a digital book called The Elements, by Theodore Gray. This is an iPad app that brings the chemical elements to life with the use of beautiful photography, text, and animation. It’s a boon for curious children and adults: an engaging guide to understanding a complex subject. There was nothing like this when I was a kid.
This app, of course, is a manifestation of the periodic table.
The periodic table should be familiar to you. It’s a matrix of the chemical elements, like gold, aluminum, hydrogen, and silicon — the stuff that stuff is made from. I learned about the table as a child. As with so many things we learn as children, I took it for granted. I thought it a staple of scientific knowledge, something that had always been there.
But the periodic table isn’t a fact of nature. It’s a designed structure, and it’s not really that old. It’s the work of a 19th century Russian chemist named Dmitri Mendeleev. Here’s a short video that explains how it came about. Please watch it now.
When I saw that video, I could relate to what Mendeleev was doing. I’ve been there. The resulting “beautiful new theoretical framework” is an information structure that brings order and understanding to a complex domain. In other words, the periodic table is a work of information architecture.
It’s also not the first time people have tried to organize the elements that make up the world. Long before people knew about sixty different elements or esoteric concepts like atomic weight, we thought everything was made up of four constituent elements:
Air, earth, fire, and water.
Although now we know better, this structure still influences aspects of our lives. You’ll occasionally see it come up in cultural artifacts such as the movie The Fifth Element. Information structures tend to stick around, even when the ways we experience them change.
The ancient Greeks grouped these elements into a matrix based on their shared characteristics, not unlike what Mendeleev did with his more complete set of elements.
Today we’d render this information structure in a 2x2 matrix.
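Using the two qualities the Greeks associated with the elements, hot/cold and wet/dry, as its axes:

        Hot    Cold
Dry     Fire   Earth
Wet     Air    Water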
That our ancient forebears did this shouldn’t surprise us. The drive to understand is deeply human, and understanding calls for order and structure.
Information architecture is the design discipline that wields order and structure in service to understanding. Our fast-changing world demands that we get better at it. To do so, we must look beneath the surface of IA itself, to seek out its constituent elements. I will call out four here, to reflect the four classical elements — and to acknowledge our ignorance. There is still much yet to discover.
The first of these elements is language.
Language is central to information architecture. Naming things is a big part of what we do. Architects of information define labels that allow people to move around in information environments such as websites and apps. We write section headers that make it possible for folks to understand what they’re looking at.
Most of the time we take for granted our ability to express ideas and experiences through sounds and symbols. But it’s a miracle. Think about these sentences. The fact that you can do so — and that you can act on my request to do so — is uniquely human. As far as we know, no other species has this ability.
Language expands our scope for action. But it can also constrain it. Wittgenstein said, “The limits of my language mean the limits of my world.” When we define an information architecture we shape a little world of language. We give people handles to manipulate and navigate that conceptual domain. We set its boundaries through the words we use — “this, you find and do here — no more, no less.”
Language is central not just to who we are but what we are: our cultures, our self-identity. And as we move more and more of our interactions to information environments — places made of language — language skills matter more and more.
So architects of information must master language. We must master its structure, vocabulary, and usage. We must know the contingent nature of language, how it changes over time.
This isn’t easy. As Andrew Hinton has reminded us, language is to people as water is to fish. We don’t think about it most of the time, because we exist in it. See what I mean? I mean, really see? These words have sparked concepts in your mind. The words themselves are transparent to you: you’re not thinking about them, but about the ideas they evoke.
IAs must look for ways of transcending the invisibility of language. An excellent way of doing so is by learning a second one. Learning a second language helps you understand that this wonderful thing you’ve taken for granted all your life isn’t a fact of nature, but a contingent framework that has evolved in particular cultures and times. You, too, can be an agent of its evolution.
Each element in the periodic table has a name. This name serves as a handle that allows us to discuss the concept.
We have words to talk about silicon and gold and hydrogen. These words also trigger powerful associations in our minds. Silicon Valley. Lead into gold. The hydrogen bomb.
When new elements have been discovered or synthesized, they’ve been granted names. In 2016, the International Union of Pure and Applied Chemistry settled on names for four new elements: Nihonium, Moscovium, Tennessine, and Oganesson.
These new labels are somewhat arbitrary. They don’t have the long cultural associations that elements such as gold and lead enjoy. It’s likely this is the first you’ve heard of them. Still, what matters is that they have names at all. They’re now things in the world, things we can discuss.
That’s the power of language, and it’s why it’s our first element.
Before we move on to the other three elements, I must issue a warning. I’ve been gushing about language, how amazing it is. But there’s a downside too: thinking in terms of language can make it difficult for us to see what is really there.
Languaging is abstracting. But reality isn’t abstract. Even as we master language, we must remember that these descriptions we layer on the world are not the world. As Alfred Korzybski said, “the map is not the territory.”
There’s a kind of fall from grace that happens when we language. We become unwitting reductionists and risk losing sight of the present whole. I’m reminded of a beautiful quote from Krishnamurti: “The day you teach the child the name of the bird, the child will never see that bird again.”
Because we are so immersed in language, and so adept at abstracting things, information architects must try extra hard to see what is really there.
One of the main things we do with language is making distinctions. That is the second element of information architecture.
By “distinction” I mean differences between things. People don’t experience reality as a continuous whole. Instead, we divide it into manageable concepts. This is not that, it’s something else. I am the writer, you are the reader. This is an article about information architecture. You’re reading it in a blog. These are all distinctions. In a sense, none of these things are real — yet we experience them very vividly in our minds, through language.
Most of the time, when we’re designing an information architecture, we’re looking to create crisp and clear distinctions. Ambiguity is our nemesis. I published a new podcast episode last Sunday. I shared the link to the show on LinkedIn, and one of my contacts loved it. Or at least I think he did.
LinkedIn, like other social networks, allows you to mark posts with one of several status flags. You can like a post, love it, celebrate it, or mark it as either insightful or curious. Ostensibly, the user knows the difference between these.
Note that you can’t say the post sucks. That’s not a distinction LinkedIn’s designers have deemed an appropriate response in that environment. As a result, I’m wary any time somebody marks one of my posts as “curious.” Are they really intrigued, or just dissing it in the only way LinkedIn makes possible: passive-aggressively?
Which is to say, the meaning of “curious” has shifted for me. I can’t help but do this, because distinctions are contextually dependent.
They also create contexts. The labels we use to describe system elements never stand on their own. We experience them in sets, which influence the way we understand them. Consider this list of labels from a website’s primary navigation bar: scores, schedule, stats.
Taken out of the list, any one of these terms triggers images in your mind — they evoke distinctions. But as a set, they establish this as a particular place. It’s different from other places online, a new distinction.
Of course, careful distinction-making isn’t just the domain of website navigation structures. All successful information architectures make deliberate use of distinctions.
The elements aluminum and silicon are next to each other in the periodic table. They aren’t the same — by definition. They’ve been set off from each other and given labels. Some of these identifiers, like the elements’ atomic numbers, matter a lot in the world of chemistry. Others, like their names and symbols, are more arbitrary. They’re all marks of distinction.
If you search for the periodic table in Google, you’ll see renderings tend to be colorful. This is because the table includes groups of elements with particular characteristics. They’re distinguished using different background colors.
Distinctions are very important to understanding, and they’re elemental to information architecture. We must strive to establish meaningful distinctions that help people make sense of their world. That said, as with language, there is also a danger inherent in deploying distinctions.
People can latch on to differences, leading them to make broad assumptions that hinder their understanding. Our most pernicious -isms: sexism, racism, ageism, and so on, are the result of runaway distinctions. Because we are so attuned to distinctions, information architects must strive to see past them, to realize that reality is never black and white, but composed of a full spectrum of colors that coexist and blend with each other.
That brings us to the third element: relationships. Relationships are the flip-side of distinctions.
A big part of what information architects do is establish relationships between concepts. We rate them, rank them, nest them. We link to them. We use metaphors and analogies to bind concepts together. When we do these things, we make it possible for new meanings to emerge.
Let’s return to the navigation bar we saw earlier. By putting these labels next to each other, we create relationships between them. Scores, schedule, and stats now exist as part of the same ontological plane. They acquire new meanings from their neighbors, and as a set.
Richard Saul Wurman coined the acronym LATCH to describe the ways in which we relate concepts to each other: by location, alphabet, time, category, and hierarchy. Some of these are more malleable than others. We will not invent our own alphabetical ordering, but we may invent our own categorization scheme.
What is well within our remit is to deploy any — or several — of these organization schemes in service to better understanding. The more familiar, the better. This requires that we understand the concepts we’re organizing. But it also requires we understand the needs of the people our work will serve. What do they need to know? What do they know? How do they know?
The human nervous system is attuned to spotting patterns. We’ve evolved to see the forest and the trees. This can lead us astray, making us see relationships where there aren’t any. But it can also lead to new understanding.
Mendeleev and his colleagues noticed patterns — relationships — in the ways elements interact with each other. The periodic table is a manifestation of those patterns, which are inherent in nature. The elements in the table are organized according to groups — the columns on the table — and periods, which are the rows.
The relationship between groups and periods is very powerful. In the past, I’ve talked about information as anything that helps reduce uncertainty so you can make better predictions about outcomes. Well, the particular arrangement of elements in the periodic table fits this definition. The structure helps predict what will happen when elements interact with each other. Good information architectures make us smarter.
As I did with the other two elements, I will wrap the discussion of relationships with a warning: we must be mindful when establishing relationships between concepts. As I said before, such relationships create new meanings. These new meanings can overpower or erode the original meanings of the concepts we’re using.
For example, one of my bugbears is what’s happened to the word “news.” This is a critical concept for a democracy. News is supposed to be our feedback mechanism. But the word news has lost some of its meaning as we’ve started using it for new purposes.
“News feed” is a useful metaphor to help us understand a new concept. But I contend this new relationship has also eroded the original — and more important — meaning of news, even if just a little. By seeing “news” paired with “feed,” we now inhabit a world in which news isn’t necessarily news: it can also be an endless stream of mostly trivial observations curated by algorithms designed to keep us engaged.
George Orwell warned us about messing around with language in this way. Of all the design disciplines, ours is the one most likely to tread Orwellian terrain. We must proceed very carefully as we set about forging new relationships.
The Persian poet Rumi had a great saying that captures the power of relationships. He said,
You think because you understand “one” you must also understand “two,” because one and one make two. But you must also understand “and.”
Information architecture is about one and two — which are labels and distinctions — but it’s also about and.
The fourth, and final, element I will highlight is rules. Information architects care about the emergence, design, and evolution of order. We’re concerned with frameworks of language and distinctions and relationships, but also with the underlying principles that establish and govern them.
Ideally, the language structures we define — the relationships and distinctions they enable — continue to serve their purposes as they evolve and extend over time, sometimes long after the people who started them have moved on. This requires that we know how and why these configurations come into being, and what makes them an ongoing concern. It also requires that we get good at articulating these principles clearly.
The structure of the periodic table isn’t accidental. It emerged from patterns inherent in the atomic structure of the elements themselves. The relationships and distinctions between the elements on the table reflect rules that were unacknowledged until the table was codified. There’s a method to the madness.
That’s how Mendeleev could predict the characteristics of yet undiscovered elements. During the time he was working on it, chemists knew of around sixty elements. The table now has almost twice that many. There’s no question about where these new elements fit within the structure.
What is in question is how to name new ones. And here, too, there are governance processes. They’re not perfect, and it can take a while. But eventually, consensus is reached. In other words, neither locating nor naming new elements is a free-for-all.
In reality, free-for-all situations are rare. Most choices happen within structural constraints. The door determines where you enter the room. The road determines where you drive. The links on the page determine what you can click on.
Information architecture operates on the structural constraints that bind us to possible courses of action. There is immense power in this — and as the philosopher Stan Lee taught us, “With great power comes great responsibility.”
This means that information architects must have our values straight. We must also be politically savvy. Defining a taxonomy — a set of relationships and distinctions between terms — requires buy-in, sometimes from stakeholders with conflicting goals.
Since they define our choices, all taxonomies are political. Otto von Bismarck said, “politics is the art of the possible.” He could’ve been talking about IA. These taxonomies will need to change over time — that is, if they stick around at all. New needs will come up, and new concepts will be required to fill those needs. We’ll need new language to describe them.
If our structures are too rigid, they won’t adapt to changing needs and contextual conditions. If they’re too lax, over time they’ll lose their coherence and their ability to serve their original purposes. Thus, we most often aim for a middle path between the two.
The work ultimately calls for more than just designing the structures that support our experiences. We must also design the systems that produce and manage those structures in the long term. Often, we’re not designing walls, but the trellises on which other structures will grow.
Governance is the word used most often. Personally, I prefer the word stewardship. If information environments are the contexts where our key social interactions happen, the organizations who own those environments are their stewards. They’re responsible for enabling healthy interactions. They should approach that responsibility as we do the management of our physical environments: looking for long-term relevance and sustainability.
Because our work operates at such low levels, and lasts so long, information architects must be especially mindful to not enable systems that exploit our nervous system for extractive gains. Doing so requires that we fully embrace our role as definers of generative rules.
Ultimately, information architecture isn’t about nav bars and search engines and site maps. It is about order in service to understanding.
To effectively design order, we must look beneath the surface, to the elements that make IA distinct from other disciplines. These elements are language, distinctions, relationships, and rules. Information architects use them to create structures that help others understand. It’s elemental that we get good at this.
I will leave you with one final observation. The periodic table may be a work of information architecture. But Dmitri Mendeleev was not an information architect. He was a chemist. People have been creating structures that enable understanding for many centuries. We’ve only started self-identifying as information architects in the last fifty years or so.
I don’t expect a surge in jobs with the title “information architect.” I don’t know if that ever made sense. Information architecture isn’t fashionable, but it’s something that all of us should be good at. Especially now that so much in our lives happens in information environments.
What does make sense is for us to help information architecture be a thing in the world — by codifying the practices we’ve learned in the last fifty years so all the people struggling out there — our own Mendeleevs — can make sense of their messes more easily, and so that they don’t unwittingly make messes bigger.
Information architecture helps us make sense of the world. As such, it makes us smarter. It’s elemental that everyone get good at it.
The problem is exacerbated with multiple products in the mix. As successful organizations grow, they develop new offerings internally or expand through acquisition. In so doing, their portfolio becomes more complex. Suddenly, users must deal with common features and functionality, such as settings screens and login processes, that use different conventions and labels.
Taken as a whole, these terms constitute an ontology: a particular set of meanings specific to that product or portfolio. Users know that words mean different things in different contexts. By carefully choosing terms, designers can make even complex UIs feel natural and cohesive. But when terminology runs amok, trouble ensues.
Consider two complementary products. Product A refers to login details as an “account.” When interacting with that product, you see a label that says, “Login to your account.” This label communicates two conventions: the thing that holds your login details is called an “account,” and the act of accessing it is called “logging in.”
This example rests on familiarity: both “account” and “log in” are industry standards for this concept and action. But neither of these terms is inherent in the technology; like most of what we see when interacting with computers, they’re metaphors.
Obviously, your goal is consistency. You’ll confuse users if you refer to login details as an “account” in one part of the system and “user profile” in another. (It’s okay to use both phrases if they mean different things; “user profile” might refer to their profile picture while “account” refers to their username, password, and 2FA details.)
Aligning terminology within one product is challenging enough. But imagine what happens when a second product enters the picture. Product B is complementary to Product A but was developed by a competitor. Instead of “account,” Product B uses the word “profile” to refer to the user’s credentials in the system.
Because they’re complementary, some customers of Product A also use Product B. They cope with the cognitive dissonance because they know the two products are developed by different organizations. But what happens if Product A’s organization acquires Product B? Now, the two sit alongside each other in the same portfolio. Since they’re owned by the same organization, users wonder why they operate differently.
Remember, what matters is what’s going on in users’ minds. Under the hood, all these things might be stored in the same database table. But users will perceive them differently if they have different names. The problem isn’t how things work but how they’re presented.
In situations like these, the acquiring company will often modify Product B so it looks more like Product A. But the change is skin-deep: colors, layout, typography, logos, etc. This is an important step, but further work must be done to align the underlying semantic structures and make the experience coherent.
Unfortunately, this is expensive. Redesigning a product’s aesthetics is like Botox: something you can do as an outpatient. Redesigning the system’s semantic structures is like major plastic surgery: it requires general anesthesia and cutting into bone. As a result, organizations delay it. Over time, the system enters an uncanny state where UIs look cohesive — with similar layouts, colors, fonts, logos, etc. — but are rife with semantic misalignments.
These inconsistencies make products harder to use. Users must remember how things are done differently in different parts of the system, adding to their cognitive load. Onboarding and day-to-day usage suffer. Sales processes take longer, support and training costs rise, and customer satisfaction plummets. In short, synergies are squandered to reduce redesign costs.
If the organization waits too long to fix these issues, it accrues what I call ontological debt. An analog to this is technical debt: consistently choosing to add new features rather than fix issues that emerged in previous iterations. This approach leads to problems down the line: a tangled mess to untangle later (hopefully by somebody else.)
Technical debt is a demonstrable reality in complex codebases. Something similar happens to the system’s semantic environment. If you introduce idiosyncrasies (an inevitable side effect of growth), over time you’ll have a frustrating system that feels like what it is: a hodgepodge of parts made by different teams that haven’t communicated with each other.
Obviously, you want to avoid this. The first step is recognizing the problem. Design and research teams are often aware of the situation: site and search analytics can offer hints and user interviews provide firsthand reports. Support teams also get feedback from frustrated users.
But sometimes, internal teams aren’t aware of misalignments. I’ve been on projects where different teams use the same terms throughout and eventually realize they mean slightly different things. These realizations often come in redesign workshops where everyone’s in the same “room” for the first time. (This is one reason why information architecture is a useful MacGuffin.)
So, what can you do about ontological debt? If your organization is buried in it, you must pay it down. This means redesigning (at least) parts of the system. But you can’t start with screen-level design. Instead, you must start with a model — in this case, a conceptual model of the whole system, including functionality shared between products. The idea is to nail down the key concepts users will encounter as they interact with the system.
All products, whether intentionally designed or not, have such a model. It defines specific names for product features and the relationships between them, which allow users to accomplish tasks. It’s the “source of truth” for portfolio-wide concepts. What do we call a user account? Is it an “account”? If so, what happens if it’s a bank that provides users with bank accounts? Potential confusion arises.
You must address the model. But even though it’s a system-wide artifact, you shouldn’t attempt to redesign the entire portfolio in one go. A better approach is to start with one product while keeping the big picture in mind. This “T-shaped” approach means that you’ll deal with particulars while working on universals. It keeps the model grounded in reality. With a solid initial model, you can redesign other products and features to align them. You’ll tweak the model as you go.
As you may have guessed, this isn’t a project but an ongoing initiative. It’s a complement to design systems: both aim to bring consistency and coherence to the portfolio. The difference is that design systems focus on UI components, aesthetics, tone of voice, etc., whereas conceptual models focus on semantics.
Someone must take on the responsibility of evolving the model long-term. This role sits alongside others responsible for system-wide coherence. (E.g., the design system.) Again, the scope is portfolio-wide, even though particular UI interventions happen at the product or feature level.
Of course, this assumes that the organization is willing to fund such work. It’s a tall order: the benefits of system-wide coherence are harder to quantify than those of new features. UI rollout also happens in slower cycles; it might take a year or two before things start to improve.
In other words, paying down ontological debt is a leadership challenge. It mostly falls on product and design leaders, with support from the C-suite. But one way or another, someone must address this issue. Organizations grappling with ontological debt squander opportunities for alignment — not just in the UX, but internally as well.
Ironically, ontological debt is a good problem to have. Only growing organizations accrue ontological debt. But they must pay it down nonetheless. It’s a symptom of internal misalignment that causes users pain — and it becomes harder to address the longer you wait.
Recently, Joey became one of the many people in tech who’ve been laid off. That was the subject of our conversation.
This isn’t a common subject for my podcast, but I know many people who’ve lost their jobs and are looking for their next gig. They’d benefit from listening to Joey. He published a series of posts on Global Nerdy about this most recent layoff experience (his fifth) and his recommendations can help anyone navigating this situation.
There’s a stigma around layoffs; it’s often considered a taboo subject. Joey stressed the importance of discussing the situation openly. This not only lets people know you’re available, but can also help with the emotional fallout. It also helps de-stigmatize the situation:
by letting people know that you’re available, you’re just increasing the odds that you will be found by somebody who is looking for someone at a certain point. And I think the other thing is, I think it also surprises people into also talking about their experiences. There’s a kind of stigma, actually, where people are going, “Oh God, if I tell people, if I tell people I got laid off, I’m broadcasting that I have somehow failed.”
And at the same time, also there are people who have… I have received some emails and texts and messages from people who are going, “When I read your articles, I didn’t know whether to contact you or not, and I decided to do it because I didn’t want to bother you,” or it felt weird for some reason. And because it is a regular occupational hazard, probably not just in our industry but in the 21st-century working world, we probably should de-stigmatize this sort of thing and just normalize it: “Hey, you know what? My company let me go. Here’s my experience. I’m looking for the next thing.”
Joey maintained a professional demeanor throughout the process. This was most evident in the meeting where he was laid off. On the other side were an HR person and his manager’s manager, both of whom were dialing in from California, where it was 6:30 in the morning. Joey acknowledged that none of them liked being in the situation, which made things easier.
Joey also recommended stepping away from the desk to move the body and reconnect with the world. This tactic has served me well in all sorts of circumstances. But when dealing with a stressful situation, such as a layoff, taking care of your body is especially important. Going outside (weather permitting) helps.
When it comes to moving forward, Joey also had recommendations. He emphasized the importance of having an active network. Joey is an extrovert, which makes this easier. But networks are especially important in turbulent markets. It helps if these networks are composed of people from diverse backgrounds:
I like to say life is a team sport… Generally, one of the reasons is that a network exposes what I like to think of as your opportunity surface area. The people you know may be exposed to other things just by virtue of being different people that you might not be exposed to, might hear of things that you might not hear about, and vice versa.
You will know and hear about things that people in your network haven’t heard about, and if you work collectively, synergistically, you can all lift each other up. And that’s why we form communities.
Joey also suggested learning and sharing publicly, both for personal growth and to become more discoverable by potential employers:
if you like using social media, even if you have nothing to write about yourself, I would say just at least say, “Hey, look at this thing I found,” or “Here’s this article I found interesting.” Or “Here’s a diagram that I think is relevant.” Start posting that regularly. I would especially say that on LinkedIn because what you’re doing is you’re generating signal that recruiters and hiring people who are paying for the $10,000-a-month version of LinkedIn, with all the search tools, will find.
… if you decide to upskill or learn, learn in public, share what you’re learning. Once again, it helps other people. It actually helps you learn, and it also generates more of that very valuable signal that will help you get found and help you land either your next gig or your next customer.
Ultimately, it’s people that matter. You help them, they help you. Large companies are especially impersonal. Connecting at a human level is essential to any career, but seldom more so than when there are mass layoffs.
These are difficult times for many. If you’ve been laid off, I wish you success in finding your next opportunity. I hope this conversation helps.
When I booked the appointment for the demo, I had to answer questions about my eyesight. I wear prescription contact lenses; the app told me to bring my eyeglasses instead. That’s because the process at the store starts with an Apple employee (who was, as usual, courteous and helpful) scanning my eyeglasses for the prescription. Then, they scanned my face with an iPhone to determine the right fit.
I tried two different light shields and neither fit me exactly; I could see light entering the device from the bottom. This didn’t detract from the experience, but it surprised me given how the process makes a big deal about measuring my face. It also took a couple of tries before the Apple Store folks could get the device calibrated to my eyes. Eventually it worked, and I went through the onboarding process. It was quick, but didn’t include the creation of a digital persona. (Which is part of the normal onboarding process.)
The person who walked me through the demo was new at the experience and had to ask for help from their colleagues several times. As a result, I had more time to explore than would’ve been otherwise the case. But the overall message was: this isn’t something you just pick up and use; it requires hand-holding.
AVP is the most customized tech product I’ve ever experienced. You can’t just pick one off the shelf. I doubt you ever will. It’s like buying a suit: you buy it to fit your measurements. I can’t imagine the complex logistics behind the scenes: dozens of SKUs, myriad possible combinations of optical inserts, light shields, cushions, etc. Boggles the mind.
As with all Apple products, AVP is super high quality. Beautiful use of materials, astonishing fit and finish. It’s a delightful object from the future. It’s also larger and heavier than I expected. The official Apple travel case was sitting on the table in front of me, and it is huge. It would take half my carry-on bag. I can see why it’s so big and puffy: AVP seems fragile, something that must travel inside a pillow.
I could feel pressure on my forehead and the bridge of my nose. I suspect this is partly due to the choice of holding the device in place with a band that wraps around the back of the head, which places all the pressure on the front.
When I took the device off at the end of the demo, my face had red blotches all over, especially on my forehead. It looked as though I’d been punched in the face, and this was only after fifteen minutes or so of wear. I can’t imagine wearing this thing for an hour or longer, at least not with the default behind-the-head strap. (I’d love to try the over-the-skull strap, but that wasn’t part of the demo.)
As everyone says, the passthrough video feed is very impressive. It looks as though you’re seeing the world through medium-gray ski goggles with a narrow field of view. Even so, it’s impressive: if not for the digital things overlaid on the feed, you wouldn’t know you’re looking at video. UI elements are firmly pinned in place. If you move your head, they stay where they’re supposed to. The illusion that they’re things in the world is convincing.
Pointing to UI elements by looking at them takes practice. I couldn’t master it in the fifteen or so minutes I had with the device. Pinching to select is also not as intuitive as you’d expect. You must release the pinch for it to register; I had a tendency to keep my fingers clamped together.
I knew app windows could be positioned on the Z-axis (distance from you) in addition to the X- and Y-axes. What I didn’t know — and which surprised me — is that you can only position them within a relatively narrow band around you. Per my calculations, it comprises a ring from about a meter and a half to two meters around you. Meaning, you can’t position windows infinitely far away from you. Of course, you can also pin them in some part of the (physical) room and walk away, but I didn’t get the chance to try this. (The demo had me seated at a table in the Apple Store.)
Surprisingly, the most impressive part of the UI was sound. The side-mounted speakers are an astonishing accomplishment of audio engineering: I was utterly fooled into thinking I was hearing things in the real world. Also, my friend Alex was trying Vision Pro about two meters from where I was, and I couldn’t hear his audio feed. Very, very impressive.
I didn’t have a lot of time to play with AVP software. It’s mostly iPad apps that have been adapted to the new platform. They seemed to work well, but my demo only included photos and entertainment stuff. A pity, since my primary interest in AVP is for productivity. I didn’t get to try the virtual keyboard or what I most wanted to see: Mac screen sharing.
I did see a 3D clip from the recent Mario Bros. movie. It was good, but this isn’t a use case that interests me. Apple’s immersive experiences demo reel was more impressive: travel and nature stuff (a rhino you could almost touch) and a tightrope walker over a vertigo-inducing gorge. All impressive, but I’d seen similar things in VR before (if not as high quality.)
Spatial video was something else altogether. The 3D video of a family blowing out candles on a birthday cake made my jaw drop and almost brought tears to my eyes. An incredibly emotional experience. It wasn’t my family in the video, of course, but I could imagine what I’d feel if it was. Very impressive. That said, that video was shot using an AVP, which I couldn’t imagine doing IRL. The shots I saw taken with an iPhone (the more likely scenario) looked good, but were more like ViewMaster pictures.
The 3D video stuff is interesting, but not a reason to buy AVP. (At least not for me.) I’m still intrigued by the productivity use cases, considering that I work a lot on the iPad. I expect this would be similar, but with a much larger canvas to locate apps. It could be an excellent productivity environment given the right apps. The ability to dial reality out of the picture could be a boon for people who have trouble focusing on one task. I found the effect both convincing and relaxing. It changes the value of economy class airplane seats.
Is Apple Vision Pro the future of computing? I hope not. It’s the most personal computing experience I’ve ever had — as in, it’s meant only for me, an individual working and playing alone in my own virtual sandbox. But many of the most valuable things I do with computers involve other people.
The primary use case for AVP is entertainment: movie-watching and immersive experiences. But I watch movies and TV shows with my family almost exclusively. That’s because I want to have an experience with them. Shows are a MacGuffin for experiences with my family, not the other way around. AVP makes the experience the focus — and it’s a fundamentally individual experience.
WRT productivity, the primary use case is as a place for focus. David Sparks has written about using it as a virtual writer’s cabin, and that seems compelling. But a big part of what I do for work is meet with people, primarily over Zoom. I wouldn’t do this as a digital persona, at least not in its current form. For some use cases, it’s good to be alone in a completely digital environment. But not all. I’m concerned that this device will nudge people toward more isolation. It’s an impressive technological achievement but might be a step back socially.
Will I buy one? Perhaps. The digital writer’s cabin scenario is compelling to me. But the current version is very expensive for this sole use case. Of course, I design digital experiences for a living, so I should spend time exploring new interaction paradigms. AVP offers plenty, and that’s a compelling reason in itself. But I’d buy it knowing that it’s a device for isolation — at a time when we need human contact and collaboration more than ever.
Of course, I didn’t. IA has been around for decades. Alas, the discipline hasn’t achieved mainstream acceptance. Which is a pity: Information architecture is needed now more than ever.
Information environments are where we work, shop, bank, learn, play, and communicate. Human experience mediated through information structures: that’s the object of design after software ate the world. People need information systems that meet them on their own terms, using language they understand. That’s why information architecture matters.
So, IA needs to be more widespread. But how? Can any designer do it? Yes, but it requires mastering a particular set of skills. I group them into three broad areas: understanding people, understanding how information is structured, and prototyping how people will interact with information systems.
Let’s look at them more closely.
As I mentioned above, the object of design is human experience. The key outcomes of a design project are successful human interactions, such as finding and filling out the correct form, quickly locating the right product in a vast online store, and meeting a suitable mate.
Designers working on such systems must understand basic psychology: what motivates and repels people. But it doesn’t stop there. They must also understand how people think about the system domain — i.e., their mental models.
For example, I assume you and I have a similar mental model of “design” in the context of this essay. I expect you understand we’re focused on software-mediated experiences. But after my last post, a reader emailed me to complain about the state of physical design, including automobiles, industrial products, and packaging. I don’t write about these things, but I can’t assume readers know that. “Design” is a broad umbrella. My bad!
You can’t understand mental models by reading books. You must do research with real people. That said, Don Norman’s The Design of Everyday Things is a book I’d recommend to someone getting started with psychology in design.
Understanding people is only one half of the equation. Understanding how information is structured is the other half. I don’t mean this in the abstract: you must grok how things like databases and metadata work. You must also understand “information primitives”: basic ways to organize information so it can be useful and understandable.
Information can be organized relationally, hierarchically, or in ad-hoc networks. The ways of doing this aren’t infinite. It behooves you to understand which ways work best for which purposes. One way to gain this knowledge is by implementing a project that requires it — e.g., an online store or other information repository that requires tagging data with multiple facets.
First-hand knowledge will help you understand the possibilities and constraints of different information structures. Building architects must understand the structural capabilities of steel, wood, and brick. You must do the same with information. Don’t just read about it, do it. Get a feel for how metadata works, how databases store data, and how taxonomies are managed. (That said, books can help: check out Heather Hedden’s The Accidental Taxonomist for the latter.)
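If you want a concrete (if toy) starting point, here’s a sketch of faceted tagging using SQLite, which ships with macOS. The schema, names, and data are hypothetical, not a recommendation:

sqlite3 store.db <<'SQL'
CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE tags (id INTEGER PRIMARY KEY, facet TEXT, value TEXT);
CREATE TABLE product_tags (product_id INTEGER, tag_id INTEGER);
INSERT INTO products VALUES (1, 'Trail runner');
INSERT INTO tags VALUES (1, 'activity', 'running'), (2, 'terrain', 'trail');
INSERT INTO product_tags VALUES (1, 1), (1, 2);
-- Facets combine: find products tagged with both a given activity and terrain.
SELECT p.name FROM products p
  JOIN product_tags pa ON pa.product_id = p.id
  JOIN tags ta ON ta.id = pa.tag_id AND ta.facet = 'activity' AND ta.value = 'running'
  JOIN product_tags pb ON pb.product_id = p.id
  JOIN tags tb ON tb.id = pb.tag_id AND tb.facet = 'terrain' AND tb.value = 'trail';
SQL

Playing with even a toy like this makes the trade-offs of facets, hierarchies, and tags tangible in a way that reading about them doesn’t.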
The last area of focus is where design comes in. Design is a way of addressing challenges by abduction — that is, by making things and seeing how well (or not) they rise to the challenge. To oversimplify, design works by proposing hypotheses in the form of prototypes, testing them with the people they’re meant to serve, and refining them based on what you learn.
The key is that you can’t test hypotheses in the abstract: you must put a prototype in front of real people. What you learn from these interactions will lead you to refine the prototype. You repeat the cycle as long as it takes to either produce a good fit or abandon the hypothesis. Alas, information architecture operates primarily in the abstract; its main deliverables are boxes-and-arrows diagrams that represent information primitives.
Many people find it hard to envision a house when they see floor plans, and they also find it hard to imagine an app or website when they see a site map. But helping envision outcomes isn’t the only reason to make prototypes. An untested IA is a gamble: it may make sense to you, but you have more sophisticated mental models about the system than your users. You can’t validate other people’s understanding by showing them diagrams; you must build things they can interact with.
But even that’s not the only reason to make “real” things. As with information structures, first-hand knowledge of UI will give you a sense of what can and can’t be done practically. Again, architects must understand the characteristics of steel and wood if they are to know which are best suited for particular purposes. You must do the same for the ways people interact with information structures.
I don’t have a reading recommendation here; this really requires that you roll up your sleeves and get a feel for UI. It’s especially important now when there’s so much variety and change in interaction paradigms. (E.g., how do you prototype a spatial experience in Vision Pro? I bet 2-D wireframes won’t cut it.)
Effective information architecture requires that designers embrace a multifaceted approach: they must understand human psychology, information structures, and how people interact with information systems. It’s not enough to understand these things in theory. You must get out and do the research, build and manage databases, and prototype new user experiences.
And what’s more, all three keep shifting. Although psychology is remarkably stable, the means for understanding it are evolving. (Consider the degree to which remote interviews have changed research.) And, of course, how we organize and interact with information (i.e., back-end and front-end) are evolving very fast.
Mastering three areas may feel like a big ask. But it gets harder. I’ve left out other skills that are table stakes: systems thinking, effective communication, analytical skills, active listening, and project management. All of these are essential, but more broadly so. (They’re not specific to IA; you can’t succeed in organizational settings without them.)
Yes, it’s a lot. That may be why IA isn’t yet a more mainstream discipline. But it’s only by developing these core skills that you can design systems that address people on their terms while providing more capable products and services. Given the increasing pervasiveness of information-mediated experiences, it’s an essential discipline for the future.
The gist is that language is a powerful force in shaping and defining designed things. But this is especially true of digital products, which are more abstract than physical products such as chairs. ‘Language’ here doesn’t refer just to content, but also to the contexts created when we make and label distinctions between things.
Elizabeth raised a useful distinction between ‘everyday language’ and ‘system language’:
Those terms came from a colleague of mine at Shopify, Quentin. And I really loved it as well. I just lifted it wholesale. He had this idea of ‘everyday language’ and ‘system language.’ I really feel that system language is what I thought of as the conceptual model, which is the agreed-upon set of terms that we’re using to define a problem set, an area. And this would be maybe what you were saying earlier, the containers that the language sits within, that the experience sits within. These are the kind of container sets.
And then, everyday language can be somewhat fluid beyond that, and that inflects to meet the audience where they are. Because you do need, again, coming back to this idea that the digital reality is very ephemeral, you need a certain kind of grounding and consistency. If you’re gonna call an ‘article’ an ‘article’ in one part of the product, you’re not gonna call it a ‘page’ in another part of the product. You need sort of a system reality; otherwise, the whole thing starts to fall apart and feel quite different.
Modeling — particularly modeling distinctions, grounded as they are in language — is the central practice of designers working on complex digital systems. The goal: excellent user experience driven by conceptual clarity — one that can only be achieved through intent.
The Informed Life episode 133: Elizabeth McGuane on Design by Definition