Recognizing the Need

As designers, we’re called to improve our clients’ conditions. This, in turn, requires that we improve their customers’ conditions. The “ask” usually comes as a request to move some variable relevant to the business: increase engagement, improve conversions, drive revenue, etc. But these are symptoms, not causes. You don’t design to drive revenue; revenue is the result of successfully meeting customer needs.

Charles Eames said that recognizing the need is the primary condition for design. While the project may be striving to drive revenue, that’s not a need. Engagement is not a need, nor is conversion. Those are business goals. What’s a need? More often than not, that’s up to the designer to uncover. You may speculate — but must research. You’re changing the state of the world. Who will benefit? How will they benefit? How does the change map to their current understanding — if at all?

Sometimes the need is obvious, but often it can be quite subtle. In some cases, you’ll be addressing a range of needs, some more relevant than others. Being clear on what they are — and having clear priorities — can be challenging. Different stakeholders may place more emphasis on some than others. Some needs will appear to conflict with others. Recognizing and clarifying the need is critical, and it won’t come with the “ask.” Framing the question correctly is essential if one is to produce a relevant answer — and that’s up to designers.

Falling In Love (Again) With Your Work

A career requires occasional renewal and refreshment. Re-engaging with the subject, perhaps at a different level; falling in love with it all over again. This is especially important if you aspire to have a long career, and doubly so if your chosen field involves technology.

Throughout my career, my primary professional identity has been as an information architect. The popularity of this term has waxed and waned over the years. In the late ’90s, there was a lot of energy around the term. Then there was a time (about a decade later) when friends and colleagues started moving away from it. Some continued doing the work without calling it “information architecture,” while others changed tracks entirely.

Although I’d like to say that external validation isn’t necessary, the truth is that a vibrant community matters. So these were challenging times. While I still loved what I did, I yearned for renewal. I revisited the book that introduced me to information architecture, Richard Saul Wurman’s Information Architects. The cover of the book defines terms:

In•for•ma•tion Ar•chi•tect [L infotectus] n. 1) the individual who organizes the patterns inherent in data, making the complex clear. 2) a person who creates the structure or map of information which allows others to find their personal paths to knowledge. 3) the emerging 21st century professional occupation addressing the needs of the age focused upon clarity, human understanding and the science of the organization of information. — In•for•ma•tion Ar•chi•tec•ture.

“There it is!” I thought. “Regardless of what my peers decide to do with their own careers, this area of practice matters. People will need to find personal paths to knowledge, whether or not anyone calls themselves an information architect. It’s important to people and important to the world — and it’s what I do.” I re-engaged and re-committed to my practice — wherever that may lead.

Revisiting the text that had introduced me to the field released a tremendous amount of energy. As an artifact, the book hadn’t changed. But I had changed, as had the context around me. I didn’t have as much to prove as I did earlier in my career. I could now see the book (and my work) with beginner’s eyes — but through the lens of experience. That’s a powerful combination!

I keep my copy of Information Architects close at hand. Every once in a while, I open it again. Some parts have aged more gracefully than others. Overall, it remains a powerful reminder of why I do the work I do; it inspires me to continue evolving. What about you? Do you have something that reminds you of why you do what you do, that helps you grow while staying on the course you originally set? How do you refresh your commitment to your work?

Book Notes: “Factfulness”

Factfulness: Ten Reasons We’re Wrong About the World — and Why Things Are Better Than You Think
By Hans Rosling, Ola Rosling, and Anna Rosling Rönnlund
Flatiron Books, 2018

Like many people, I first heard about Hans Rosling via his popular TED talk, in which he used animated bubble charts to show multiple ways in which data point to an improving world. Factfulness — which Bill Gates called one of the most important books he’s ever read — is like a paper-based version of that presentation: It does, indeed, use data to explain how things are getting better. But it does more than that: It also explains why we find that so hard to believe.

The book is divided into ten chapters, each corresponding to a bias or “instinct” that deludes us:

  1. The Gap Instinct: Our tendency towards polarizing what we see.
  2. The Negativity Instinct: Our tendency to spread bad news over good news. (I.e., “Good news is not news.”)
  3. The Straight Line Instinct: Our tendency to project future trends based on current trends.
  4. The Fear Instinct: Our (deeply hard-wired) tendency to pay more attention to frightening things.
  5. The Size Instinct: Our tendency to gravitate towards impressively large or small numbers, losing our sense of proportion.
  6. The Generalization Instinct: Our tendency to generalize and categorize data.
  7. The Destiny Instinct: Our tendency to not perceive change when it happens slowly and gradually.
  8. The Single Perspective Instinct: Our tendency to see things only from our angle.
  9. The Blame Instinct: Our tendency to find scapegoats to blame for the way things are.
  10. The Urgency Instinct: Our tendency to react to changing conditions by intervening immediately.

Looking at the data objectively, it’s hard not to see the enormous progress humanity has made. But you wouldn’t know it from the news or from interacting with others on social media. That’s partly because most people argue from a perspective distorted by these “instincts.” This book shows you how to overcome these biases so you can understand things more objectively. It’s not a dose of optimism, but a dose of possibilism, a word Rosling coined (and which I’ve written about before).

Hans Rosling died of pancreatic cancer before finishing Factfulness. His collaborators — his son Ola and daughter-in-law Anna — conclude the book with a note regarding the impact he hoped it would have: to help us work towards a fact-based worldview. Given the power and reach of our technologies, we need it more than ever. This book is an important contribution in that direction.

Buy it on Amazon.com

Not an Optimist

I’ve written earlier about the importance of being optimistic. I still think it’s important to have an optimistic outlook. However, I recently came across a description that better captures the position I aspire to. It’s in Hans Rosling’s great book Factfulness: Ten Reasons We’re Wrong About the World — and Why Things Are Better Than You Think:

“I’m not an optimist. That makes me sound naïve. I’m a very serious ‘possibilist.’ That’s something I made up. It means someone who neither hopes without reason, nor fears without reason, someone who constantly resists the overdramatic worldview. As a possibilist, I see all this progress, and it fills me with conviction and hope that further progress is possible. This is not optimistic. It is having a clear and reasonable idea about how things are. It is having a worldview that is constructive and useful.”

I like the distinction Rosling draws between optimism and “possibilism.” For many people, optimism implies keeping a sunny disposition in spite of (or in ignorance of) the facts. That’s not healthy. What we want is a “clear and reasonable idea of how things are.” Seeing clearly, free from distortions.

The progress Rosling is alluding to is the subject of the book: factual data that shows how, overall, things have been getting better for humanity and for the world over time. If this sounds counter-intuitive, it’s because of several cognitive biases that affect how we understand reality (and which Rosling skillfully dismantles).

Our effectiveness as designers (or citizens, or co-workers, or parents, or…) requires that we understand these biases and how they influence the way we perceive things (and, therefore, how we act). How can you propose any kind of intervention into a system or situation when you don’t yet have a “clear and reasonable” understanding of it?

That’s why research is so important to design. But research is not enough. If you’re trying to see something very small, very large, or very distant clearly, you must have the right instruments, and have them in proper working order. But much also depends on whether you know which instruments are called for to begin with, how to configure them, how to point them in the right direction, and what the “right” direction is. This requires that you be in proper working order — among other things, free from an “overdramatic worldview.”

Book Notes: “Ten Arguments for Deleting Your Social Media Accounts Right Now”

Ten Arguments for Deleting Your Social Media Accounts Right Now
By Jaron Lanier
Henry Holt and Company, 2018

Jaron Lanier is not just a VR pioneer. He’s also one of the earliest critics of the technological and economic conditions that have led to our current social media-instigated malaise. His book You Are Not a Gadget: A Manifesto (2010) was more than prescient: it diagnosed the broken fundamentals of the advertising business model years before most of us understood the pernicious effects of moving important social interactions to environments that are financed by attention-mongering.

Lanier’s latest book doesn’t pull punches. True to its title, it consists of ten short arguments for quitting social media cold turkey. Quoting the back cover, these arguments are:

  1. You are losing your free will.
  2. Quitting social media is the most finely targeted way to resist the insanity of our times.
  3. Social media is making you into an asshole.
  4. Social media is undermining truth.
  5. Social media is making what you say meaningless.
  6. Social media is destroying your capacity for empathy.
  7. Social media is making you unhappy.
  8. Social media doesn’t want you to have economic dignity.
  9. Social media is making politics impossible.
  10. Social media hates your soul.

Some of these are more effective than others. (For me, at least — Lanier offers several YMMV disclaimers.) While the overall impression is that we do have a problem with social media, I’m put off by the book’s overly confrontational approach. For example, early on, Lanier proposes an acronym to describe advertising-supported internet companies: BUMMER. From then on, he refers to companies such as Facebook and Google as BUMMER companies.

This is a short book and something of a rant, so I didn’t expect nuance. That said, the argument could’ve been stronger if it acknowledged more of the genuine value people get from some of these information environments. One of the book’s underlying premises is that we become addicted to these environments by design. That’s truer of some than others. For example, WhatsApp is where my family and I catch up with each other. I’m not compulsively drawn to that environment in the same way that I am to Twitter and Facebook. Yes, there are other reasons why WhatsApp may not be good for me, but I do find some value there. The book doesn’t delve enough into these types of distinctions.

The combined effect of the confrontational stance and ranty nature of the work detracts from the seriousness of its subject. That said, it’s a short read, and well worth your while: It will make you think about the way you approach your online interactions, and where.

Buy it on Amazon.com

The Meaning Vampires

Let’s look at a painting:

Mona Lisa

Surely you’re familiar with this one. It’s very famous! By the time I was a university student, I’d seen reproductions of this painting many hundreds (if not thousands) of times in a great variety of contexts: history books, art books, magazines, print ads, TV commercials, etc. In my last year of university, I visited The Louvre, where the “original” is on display. It was on a wall, behind thick glass.

My first thought upon seeing the “real” Mona Lisa was that it’s smaller than I expected, perhaps because I was very far away. There was a large crowd surrounding the painting. Immediately in front of me was a man who had propped a child up on his shoulders so the boy could see. “Son,” he said, “there it is! The most famous painting in the world!”

The most famous painting in the world. Not just a beautiful portrait of a particular individual, an excellent example of Renaissance painting or of the sfumato oil painting technique: This painting represents something else, something broader. It’s the uber-painting, the one that represents all other paintings. If you were to stop a random person on the street and ask them to name a painting, the odds are high this is the one they’ll mention.

Mona Lisa’s fame has resulted in (and been the result of) endless reproduction and re-contextualization. What was once a stand-in for a real person is now used to convey lots of other meanings: Renaissance, painting, art, genius, and so much more. With each new reproduction, each new act of association between the painting and an idea, a little bit of its original meaning erodes.

Painting, art, and genius are abstractions; they need signifiers to help us discuss them in concrete ways. Mona Lisa fills this need beautifully. We latch on to the new signification and repeat it. In so doing, the painting’s original meaning wanes. As familiar as you are with the image itself, I’m willing to bet you’re not as familiar with the name Lisa Gherardini, the ostensible subject of the painting. (This post, too, layers new meaning onto the Mona Lisa, nudging it ever so slightly further from its original role and meaning; the irony isn’t lost on me.)

Of course, this doesn’t just happen with paintings; all signifiers can be subject to this sort of meaning shift. For most of our history, it’s happened relatively slowly: Mona Lisa’s meaning changed over a long period of time. However, in digital information environments — and social media, in particular — this process can happen much faster.

Consider the appearance of the word “BREAKING” at the beginning of a news item. I still remember a time in my life when seeing this word in this context would cause my heart to skip a beat. “Oh no!” I’d wonder, “What terrible thing has happened now?” “BREAKING” used to mean something like, “an event has occurred that is important enough for us to immediately ask for your attention.” It’s useful to have a signifier that serves this purpose; sometimes we need to stop what we’re doing to pay attention. (“BREAKING: 100-foot waves spotted heading towards San Francisco as a result of a tsunami. Evacuate now!”) “BREAKING” is thus a sort of incantation that makes us perk up.

Or rather, it used to be. You see, this usage doesn’t come for free: it only works as intended if we use it infrequently and for really important things. When we overuse the word, its meaning starts to erode — and once gone, we’ve lost it. At some point, somebody recognized the power of “BREAKING” to get us to pay attention and used it for something slightly less than truly urgent or relevant. Other people picked up on the usage, and from then on it was a slippery slope to irrelevance.

The original meaning of the Mona Lisa has mostly eroded. But that’s OK; Lisa Gherardini and the people who loved her (and would’ve cared about the verisimilitude of her representation) are long gone. The painting is now serving a new role. This shift in meaning is useful to us, even if we can no longer appreciate the painting as such, only as an abstract concept. However, in diminishing the potency of “BREAKING” through repetition and re-contextualization, we’ve done ourselves a disservice: we’ve lost a powerful signifier that used to play a useful (and necessary) role.

Those of us who structure information environments are continually repeating and re-contextualizing signifiers, often for novel purposes. We must approach the task with great respect for the power of language, being mindful not to trivialize the words, phrases, and symbols we use. It’s frighteningly easy to drain signifiers of their meaning, and once that’s happened, their potency (and effectiveness) cannot be regained; the life is sucked out of them.

Note: I first learned about the erosion of meaning through reproduction—using Mona Lisa as an example—from one of my university professors. I wish I could remember her name so I could give her credit for it.

TAOI: Personalized Facebook Navigation Bar

The architecture of information:

A big change is coming to the Facebook mobile app over the next few weeks. Yesterday, the company announced that it’s redesigning the mobile app’s navigation bar. This is the bar you see at the bottom of the iOS app (and the top of the Android app) with shortcuts to the most important parts of the Facebook information environment:

The issue, of course, is that “most important” is relative: what’s most important to me may be completely irrelevant to you. For example, see that “shop” icon in the center of that screenshot above? That’s for the Facebook Marketplace, the company’s eBay competitor. I never use Marketplace — in fact, I sometimes annoyingly end up there because I mis-tap that center icon. I wish I could have something else instead of Marketplace in that nav bar, and that indeed seems to be what’s coming.

Soon, Facebook will start rolling out a version of the app that changes the primary shortcuts for each user. Initially, the choices will be based on usage, but eventually the app will allow you to choose which shortcuts you want on the bar. (Well, all except three. You won’t be able to change the shortcuts to the newsfeed, notifications, and main menu. That makes sense, as those are undoubtedly the most important parts of the Facebook environment.)

While we haven’t gotten a glimpse of how Facebook plans to implement this feature, I’m reminded of the customizable main navigation bar of my favorite Twitter client, Tweetbot:

See those up and down arrows in the two rightmost icons? That means you can long-press those to get more options:

This allows you to customize two of the five shortcuts in the navigation bar. It’s not something you use all the time, but it’s convenient to have; a detail that makes the app more useful and personal. Is this what Facebook is doing? I don’t know — but I hope it is. I have very little use for the Marketplace and Video links in the navigation bar, and wish I could change them to something more useful.
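
We haven’t seen how Facebook will build this, so purely as an illustration, here is a minimal sketch of the kind of model described above: a bar with a few fixed shortcuts plus customizable slots that default to the user’s most-used sections. The names, the five-slot width, and the fall-back-to-usage behavior are all my assumptions, not anything published by Facebook or the makers of Tweetbot.

```typescript
// Hypothetical sketch of a personalized navigation bar: a few fixed shortcuts
// plus customizable slots that default to the user's most-used sections.
type Shortcut =
  | "newsfeed"
  | "notifications"
  | "menu"
  | "marketplace"
  | "video"
  | "groups"
  | "events";

const FIXED: Shortcut[] = ["newsfeed", "notifications", "menu"]; // not user-changeable
const SLOT_COUNT = 5; // assumed bar width

function buildNavBar(usageRanking: Shortcut[], userPicks: Shortcut[] = []): Shortcut[] {
  const customizable = SLOT_COUNT - FIXED.length;
  // Explicit user choices win; otherwise fall back to the most-used non-fixed sections.
  const candidates = [...userPicks, ...usageRanking].filter((s) => !FIXED.includes(s));
  const chosen = Array.from(new Set(candidates)).slice(0, customizable);
  return [...FIXED, ...chosen];
}

// No explicit picks yet, so usage decides the two remaining slots:
console.log(buildNavBar(["groups", "marketplace", "video"]));
// -> ["newsfeed", "notifications", "menu", "groups", "marketplace"]

// Later, the user swaps in Events:
console.log(buildNavBar(["groups", "marketplace", "video"], ["events"]));
// -> ["newsfeed", "notifications", "menu", "events", "groups"]
```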

Via CNET

Ad Hoc

Systems emerge, sometimes unwittingly. Unplanned, unstructured, intended only to address current needs, they evolve in fits and starts; a bit of duct tape here, a script there. Before you know it, you’re dealing with critical — and often, unmanageable — infrastructure.

I start a new Sketch file, meaning to illustrate ideas for a website’s navigation system. “I should turn this button into a symbol,” I say to myself, “so I can re-use it.” And so it begins. How much flexibility should I build into the symbol? How many variations will I be dealing with? I don’t yet know. Start simple, with just enough complexity to get the current job done. Grow from there; adjust as required. The system evolves. Edge cases are accommodated; complexity creeps in. “I’ll make a new symbol for this.” Problem solved — for now. And so, exceptions accumulate.

That said, I know the tradeoffs involved in allowing such anomalies. The time I save now will come with a price later. So I’m a mindful steward of the system. Before long, I have a file with many dozens of Sketch symbols, some quite elaborate. I’m fussy about the structure and naming of layers, but even so, I’m amazed at the complexity that ensues. Still, I now have a modular system that can illustrate a variety of situations very quickly.
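
The same dynamic shows up when the “symbols” are code. As a rough, purely hypothetical analogy (the component and its options are invented for illustration, not taken from any real project), a reusable component starts simple and then accumulates options as edge cases are accommodated:

```typescript
// Hypothetical illustration: a reusable button's options accumulate as edge
// cases are accommodated, much like a Sketch symbol growing new variations.
type ButtonProps = {
  label: string;
  variant?: "primary" | "secondary" | "ghost"; // added for a marketing page
  icon?: string;                               // added for a nav redesign
  fullWidth?: boolean;                         // added for one mobile screen
  legacySpacing?: boolean;                     // "I'll make a new symbol for this"
};

// A stub renderer so the sketch is self-contained.
function renderButton({ label, variant = "primary", icon, fullWidth, legacySpacing }: ButtonProps): string {
  const flags = [icon && `icon:${icon}`, fullWidth && "full-width", legacySpacing && "legacy-spacing"]
    .filter(Boolean)
    .join(" ");
  return `[${variant} button "${label}"${flags ? " " + flags : ""}]`;
}

console.log(renderButton({ label: "Save" }));                                       // the simple original case
console.log(renderButton({ label: "Buy now", variant: "ghost", fullWidth: true })); // an accumulated exception
```

As with the Sketch file, each option is reasonable on its own; the cost only shows up in the aggregate, which is why that stewardship matters.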

In the best cases, these little ad hoc systems serve our immediate needs and then retire to a backup drive somewhere in our archives. But what if the system must continue serving future (still unspecified) needs? Then it must become more formal. Exceptions must be ironed out, or at least documented; patterns must be normalized; implicit structures concretized. (But not too much — the system must still be flexible.)

It all takes time, energy, and resources. At some point, managing the system becomes more onerous than working without one. So a careful balance must be struck and continuously re-adjusted. Constant vigilance is required: knowing how much to work on the work versus how much to work on the work that produces the work.