The Strategic Value of Information Architecture

Ultimately, information architecture is about making distinctions: dividing things into categories. To do this effectively, designers need to take a comprehensive approach to understanding the problem space. This includes not just an organization’s content (including its products and services) and its customers, but also the context it’s operating in: the language people use to describe it, what its competitors are up to, market trends, and more.

These are strategic concerns. Developing a successful business strategy requires more than a deep commitment to the purpose of the enterprise and a firm belief in its ability to succeed; it also requires seeing clearly at the highest levels. Strategies that seem obvious in retrospect often emerge from ambiguous, entangled beginnings. Information architects are experts at disentangling the most complex of these messes, allowing organizations to see their current context more clearly.

Strategy also calls for envisioning possibilities. As with other design disciplines, IA makes the possible tangible. Specifically, IA makes tangible integrated sets of language structures and processes that influence how people perceive a particular part of the business. This can result in a navigation system for a complex website, but it can also result in a new structure for the company’s sales organization or a new approach to dealing with customer support. Information architecture operates at a more abstract level than other design disciplines, so its output is more broadly applicable.

Yes, IA is often in service to creating information environments that are easier to use, and to making information easier to find and understand. But there is more to it than this. The process of understanding the problem space — and of establishing the distinctions that will make the environment coherent — forces strategic product conversations that are often overlooked, especially in fast-moving business contexts. And the act of modeling the information environment is often a powerful catalyst for clarifying strategy at the level of products and for the organization as a whole.

What’s the Purpose of Design Artifacts?

At a high level, the purpose of design artifacts is always the same: to communicate intent. However, audiences for artifacts vary widely, and they all want different things out of them. Hence, we have many different approaches to documenting design, which vary in scope and degree of fidelity.

Audiences can include:

  • Stakeholders
  • The stakeholders’ bosses
  • Customers (who will be testing the system)
  • Developers
  • Other members of the design team
  • The designer herself

Purposes can include:

  • Understanding the general direction of the system
  • Exploring structural directions
  • Exploring possible interaction mechanisms
  • Exploring visual directions
  • Understanding decision-making
  • Providing construction guidance to developers
  • Testing with customers

Designers need to understand the needs of the audience(s) that will be using their design artifacts, and which artifacts work best for particular needs. Artifacts suited for communicating visual directions do little to communicate structural directions, and vice versa, while those that provide construction guidance are poorly suited to justifying decisions — at least if they’re any good. A stakeholder may have little use for construction documents other than to know they exist and can be used to build the system. On the other hand, this same stakeholder may need documents that justify the reasoning behind design directions, something that would be of little use to users of the system.

It’s easy for us to fall into the trap of believing that artifacts are the design. I’ve seen situations where stakeholders specify upfront the types and quantity of “deliverables” for a design project, with no regard for what they will be used for. Designers willingly comply because they, too, tend to measure their progress based on the wireframes, sketches, prototypes, or whatever else they’ve produced. This is a mistake. Artifacts are communication tools. They’re a sort of language we employ when communicating intent; a means to create a feedback loop between the design team and others in the world — which is to say, a means for bringing others into the design team. Using the wrong feedback loop with the wrong audience at the wrong time can do more harm than good.

Knowing which type of artifact is most appropriate to a particular audience for a specific purpose requires two-way agreement: both parties must negotiate the protocol. Ask people what they need, and know when you’re called to suggest alternatives. After you find out what works best for the people involved, you can communicate intent in ways that make it useful for the situation at hand.

The Instrument

One of my favorite pieces of music is Igor Stravinsky’s The Rite of Spring. Although meant as a ballet and scored for a classical orchestra, the Rite doesn’t sound anything like what you think of when you think of “classical” music. Instead of being genteel and melodic, it shifts from soft and sensuous to brutal, thundering, and atonal — and back again. It’s so different that it nearly caused a riot when it premiered in Paris in 1913.

One of my favorite recordings of The Rite of Spring is a four-handed piano version played by Fazil Say, a performance which recreates much of the color provided by a full orchestra with “just” a piano. (I say “just” in quotes because there’s lots of studio magic involved, starting with having Say accompany himself. He also grunts and hums throughout, and the piano has been “prepared,” à la John Cage. Still, the recording is astonishing.)

If you’ve ever had the opportunity to sit in front of a piano, you’ll know it’s relatively easy to make it produce sounds: all you have to do is press a key. On a piano, all the notes you need are easily accessible. However, there’s a wide gap between noodling around and playing something like the Rite, with its nuance, range, and percussive violence. To produce this performance, Say had to first master his instrument.

Musicians aren’t the only ones who use instruments; scientists have them too. Instead of using them to create art, scientists’ instruments allow them to see things the rest of us can’t. In the 17th century, Galileo Galilei built a telescope that allowed him to look at the heavens in a new way, ushering in a new understanding of the universe. Galileo, too, mastered his instrument.

A telescope is not the same type of instrument as a piano, but they do have some things in common. These are not mere tools. People like Galileo and Say spend a considerable part of their lives familiarizing themselves with their instruments. These instruments become extensions of themselves which they use to probe the universe — and put dents in it, too. They practice on their instruments, always trying to improve. They take great care of them, making sure they are in proper working order. They treat their instruments with great respect, perhaps even reverence.

As a designer, I think a lot about my instrument. You may be thinking I’m talking about a software tool like Adobe Illustrator, or maybe paper and pen — but I’m not. I consider my consciousness to be my primary instrument. My ability to be present — to bring my full awareness to a situation — is the one thing that is essential for me to do my job and to do it well.

I care for and respect this instrument. I avoid doing things with it that may damage it or make it “go out of tune.” I study its capabilities and nuances. I practice daily. (Mindfulness meditation, in case you’re wondering.) I think of it as a combination of a scientific instrument and a musical instrument: when functioning properly, it allows me both to perceive things more clearly and create new things. It also makes it possible for me to empathize and communicate better with other people.

I aim to master this instrument.

Discoverability in the Age of Touchscreens

When I was first getting started with computers, in the late 1970s, user interfaces looked like this:

Visicalc, the first spreadsheet program, required users to learn commands. Image: Wikipedia

Getting the computer to do anything required learning arcane incantations and typing them into a command line. While some — such as LOAD and LIST — were common English words, others weren’t. Moving to a new computer often meant learning new sets of incantations.

As with all software, operating systems and applications based on command-line interfaces implement conceptual models. However, these conceptual models are not obvious; the user must learn them either by trial and error — entering arbitrary (and potentially destructive) commands — or by reading the manual. (An activity so despised it’s acquired a vulgar acronym: RTFM.) Studying manuals is time-consuming, cognitively taxing, and somewhat scary. This held back the potential of computers for a long time.
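
The dynamic described above can be sketched as a toy command interpreter: the conceptual model (what exists, and what you can do to it) lives entirely in a lookup table the user never sees. The command names below are illustrative, loosely inspired by LOAD and LIST; they are not Visicalc’s actual command set.

```python
# A toy command loop in the spirit of early command-line programs.
# The conceptual model is implicit in this dict; nothing on screen
# reveals it, so users must memorize commands or read the manual.
memory: list[str] = []

def load(arg: str) -> str:
    memory.append(arg)
    return f"loaded {arg}"

def list_(arg: str) -> str:
    return ", ".join(memory) if memory else "(empty)"

COMMANDS = {"LOAD": load, "LIST": list_}

def run(line: str) -> str:
    verb, _, arg = line.partition(" ")
    handler = COMMANDS.get(verb.upper())
    if handler is None:
        # The interface gives no hint of what *would* have worked.
        return "?SYNTAX ERROR"
    return handler(arg)

print(run("LOAD budget"))   # loaded budget
print(run("LIST"))          # budget
print(run("SAVE budget"))   # ?SYNTAX ERROR
```

Everything the user can do is locked inside `COMMANDS`; a wrong guess yields only an error, never a clue. Graphical interfaces inverted this by putting the model’s options on screen.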


Selling Information Architecture

One of the most common concerns I hear from designers who are new to information architecture goes something like this:

I totally get how important semantic structures are to the success of my [ website | app | whatever ], but my [ boss | stakeholder | client | whatever ] just doesn’t get it. How can I convince [ him | her ] of the value of information architecture?

I start from the premise that you can’t convince anyone of anything. People are busy. They have incentives that are different from yours. They interpret persuasion as confrontation. Whatever the case, trying to convince people who are set in their views is often ineffective. What you can do is help them understand.

How does an information architect do this? As Richard Saul Wurman put it, we create the structure or map of information which allows others to find their personal paths to knowledge. In other words, consider your speech act and subsequent dialog an exercise in information architecture. Approach it like you would a design project: with respect and understanding for both the person and the subject matter. Aim to create bridges between the two. Use language they’re comfortable with. Set boundaries.

This requires that you understand the person’s mental model. How do they see the role of design? How do they engage others in conversation? How open are they to new language? If they’re not open to new language, bombarding them at the beginning with challenging terms such as “taxonomy,” “ontology,” and even “information architecture” itself will be counterproductive.

Ultimately, what you’re trying to do is help them see the value of thinking through the system’s conceptual model and how it’s articulated through semantic structures — before they explore user interface directions. Whether they call this “information architecture,” “UX design,” or “whiteboard magic” is inconsequential at this point. What you want is to do the work, regardless of what it’s called. Engage people on their own terms. When they start seeing results you can ease them into the technical vocabulary.

This is not to diminish the value or importance of labels. Generative conversations call for using the right terms; efficiency requires that we call things by their proper names. You’ll arrive there eventually — but only if your interlocutors have first understood for themselves the value of the work.

Start With a Conceptual Model

The design of an information environment should start with its conceptual model. Doing so gives the team a high-level understanding of the tasks people will do there, the elements and actions required for them to complete those tasks, the sequence they need to happen in, and — ultimately — the purpose of the environment. You can’t gain this understanding from sketching user interfaces; you must begin at a higher level.

I’ve often seen designers jump to UI design too early in the process before they have a clear grasp of the system’s conceptual model. This is understandable: models are abstractions, and abstractions are difficult to talk about. People love seeing and discussing UI; it feels like “real” progress in the project. (This is true both for designers and stakeholders.) However, starting with UI causes the team to miss the big picture, and often leads to incoherence down the line — especially as systems get more complex.

So how do you get designers and stakeholders to discuss the conceptual model without reverting to sketching UI? One approach that works for me is to ask the team to stop thinking about the thing we’re designing as software and start thinking about it as a place. How did people fulfill these tasks before digital? What kind of place should this be? What spaces does it need for people to accomplish their goals? What would people expect to find there? What do they expect to do with those things? Etc. For example, if you were designing a workforce training web application, you could start by imagining it as a physical environment (say, a school) rather than as a website. Needs and spaces could vary by user type (prospective students could have different needs than current students), by the degree of sociability required (individual study rooms versus classrooms), or by a variety of other factors.

The point is not to design a physical school; team members know the ultimate goal is to design an information environment and that they don’t need to take the analogy literally. The point is to get them to break out of thinking in terms of UI elements so they can:

  1. identify the tasks, objects, groupings, relationships, actions, etc. that will comprise the system,
  2. define the names and attributes of those things, and
  3. think through the choreography required for people to fulfill tasks in the environment.
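
To make this less abstract, here is a minimal sketch of how a team might capture a conceptual model as plain data before any UI work begins: objects with attributes, tasks as ordered steps, and a simple check for gaps. All the names (Course, Classroom, “enroll”, and so on) are hypothetical, drawn from the school analogy above.

```python
from dataclasses import dataclass, field

@dataclass
class Obj:
    name: str
    attributes: list[str] = field(default_factory=list)

@dataclass
class Task:
    name: str
    steps: list[tuple[str, str]]  # ordered (action, object-name) pairs

@dataclass
class ConceptualModel:
    objects: dict[str, Obj] = field(default_factory=dict)
    tasks: list[Task] = field(default_factory=list)

    def add_object(self, obj: Obj) -> None:
        self.objects[obj.name] = obj

    def undefined_objects(self) -> set[str]:
        # Surface gaps: tasks referencing objects nobody has defined yet.
        referenced = {o for t in self.tasks for _, o in t.steps}
        return referenced - self.objects.keys()

model = ConceptualModel()
model.add_object(Obj("Course", ["title", "level", "duration"]))
model.add_object(Obj("Classroom", ["capacity", "schedule"]))
model.tasks.append(Task("enroll", [("browse", "Course"), ("join", "Classroom")]))
print(model.undefined_objects())  # set(): every referenced object is defined

model.tasks.append(Task("get help", [("ask", "Instructor")]))
print(model.undefined_objects())  # {'Instructor'}: a gap the team must discuss
```

The value here is not the code itself but the conversation it forces: every undefined object or unnamed step is a question the team must answer before anyone sketches a screen.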

With a solid understanding of the conceptual model, the team will be able to have discussions about the UI and how it serves (or doesn’t serve) user goals. They will be able to test, iterate, and refine the UI and the conceptual model itself. But this is only feasible if they start from the conceptual model and then move to UI — it doesn’t work the other way around.

“Product” is the Wrong Framing

In Silicon Valley and many large enterprises, the default framing for thinking about customer-facing digital things is that they are “products.” I often meet peers who describe themselves as product designers. It’s not unusual to hear of teams working towards a minimum viable product. These things have product features that are defined by a product manager. When they launch, they’re said to be in production.

This framing of the object of our work as a product is not surprising. We have roots in industrial design and graphic design, two disciplines in which the central object of concern is most definitely a product. (If you’re designing a mass-produced chair, you can say you’re working on a product.) Products are what companies have traditionally produced.

“Product” is an appropriate framing for some classes of digital things — but not all. Android is not a product. iTunes is not a product. Facebook is not a product. Slack is not a product. Salesforce is not a product. Weibo is not a product. They are information environments that host ecosystems. They create contexts that alter the ways people understand the world, think, and act. They are platforms where first-, second-, and third-parties can build and host products of their own. The list of stakeholders is long and extends well beyond the confines of the organizations that “manage” these ecosystems.

The word “product” has connotations that are unhelpful in these cases. A product can be centrally controlled and managed. A product can be replicated. Calculating the ROI of a product is straightforward. Products are expected to change often and quickly lest they be overtaken in the market. The boundaries of products are clearly defined. None of these things are true of ecosystems.

Digital products aspire to become ecosystems. It may be more useful to think of the people who “manage” them not as managers but as stewards. “Stewardship” implies a bias towards resilience, sustainability, and holistic value generation that these systems should aspire to — especially as we move more of our social functions into them.

Aspire to Ever-Fatter Markers


A design exploration for the Dominican Motherhouse by the architect Louis Kahn. Kahn surrounded himself with people who could realize his ideas at greater levels of fidelity. Image: Arcade


A design career is a progression from thin markers to fat markers.

When you’re starting out, someone else gives you direction. You’re expected to fill in the details using very fine lines. To do so, you must understand the characteristics of the materials you’re representing on the paper, whether they be code, words, images, or bricks.

Once you’ve mastered the details, you can graduate to Sharpies. You can’t get too granular with Sharpies. This is good since it allows you to focus on the relationships between elements without getting lost in the details. You now understand how things can fit together locally. You can also identify, define, and convey patterns that allow designers with finer markers to work faster.

Eventually, you move up to whiteboard markers. With these blunt tools, you explore systemic issues: how elements relate to each other at the highest levels, how the outside world interacts with the system, how the system will evolve resiliently, who is responsible for what. You do this with collaborators in real time; this includes stakeholders with concerns that are very different from yours. You develop gravitas and political savvy. At the whiteboard, you have an audience, and the stakes are high.

This audience includes designers wielding Sharpies and fine markers. Now you’re the one giving direction. As the person wielding the fat marker, it’s your responsibility to nurture the people using markers finer than yours, so they move on to fatter markers. You must also bring in new people to take up the fine markers others have left behind.

And what if you’re a team of one? Then you must keep markers of varying widths at hand. You must know which work best in which conditions, and when you need to switch pens. (You must still work on the gravitas and political savvy, by the way.)

You can’t design exclusively using whiteboard markers any more than you can with only fine markers. You need a combination of both. Good design managers help their teams master their skills and broaden their perspectives, and keep a vibrant mix of line widths in play. As a leader, you don’t necessarily stop being a practitioner; you just move on to a fatter marker.

3 Placemaking Lessons From the Magic Kingdom

If you design software, you need to know about placemaking. Why? Because the websites and apps you design will create the contexts in which people shop, bank, learn, gossip with their friends, store their photos, etc. While people will experience these things primarily through screens on phones, tablets, and computers, they actually perceive them as places they go to do particular things.

Your users need to be able to make sense of these information environments so they can get around in them and find and do the things they came for, just as they do with physical environments such as towns and buildings. People need to form accurate mental models of these environments if they are to use them skillfully.

As a discipline, software user interface design has only been around for about sixty years. However, we’ve been designing places for much longer. There’s much we can learn from architecture and urban design to help us create more effective apps and websites. This article is a short case study in the design of a particular physical environment that has valuable lessons for those of us who design information environments: Disneyland.
