Working in a Second Language

Love this exchange between Daniel Kahneman and Tyler Cowen:

COWEN: Do you think that working outside of your native language in any ways influenced your ideas on psychology? It makes you more aware of thinking fast versus thinking slow? Or not?
KAHNEMAN: It’s something I used to think about in the context . . . I’m from Israel, and it was thinking whether there was something in common to Israeli intellectuals operating in a second language. And I thought that, in a way, it can be an advantage to operate in a second language, that there are certain things . . . that you can think about the thing itself, not through the words.
COWEN: It’s like lower sunk costs in a way.
KAHNEMAN: I don’t know exactly how to explain it, but I thought that this was not a loss for me, to do psychology in a second language.

This resonates with my experience as somebody who operates primarily using a second language. Working in English has made my work better, not worse. It’s been a forcing function that has made me more aware of the contingency of language; a significant effect given how central language is to information architecture.

Daniel Kahneman on Cutting Through the Noise (Ep. 56, Live at Mason)

The Role of Advertisers in Shaping Digital Ecosystems

Marc Pritchard, Procter & Gamble’s long-serving Chief Brand Officer, on how P&G helped create the modern digital media ecosystem:

when we first worked with them they were platforms for communication with people. They had no advertising business. We essentially worked with Facebook to figure out how to place media, how to do reach and frequency, within Facebook.

YouTube came along, and we thought this could be interesting. Not sure where it’s going to go, [but we] ended up monetizing it. What’s interesting about that is that [the founders of Facebook and YouTube] didn’t build these platforms for advertising. Some of the challenges that they’ve had recently, I think, have been because they were built for another purpose. Whereas other media companies, the TV and the radio, they started off and they built advertising in.

I’m not sure that the advertising business model is inherent to either TV or radio, but overall Mr. Pritchard is correct: digital information environments such as Facebook, Twitter, and YouTube weren’t designed primarily to persuade. The advertising-based business model they’ve latched on to (in part driven by demand from clients such as P&G) has created incentives that have harmed society as a whole.

That said, P&G recognizes that it has great power as one of the world’s leading advertisers, and has been taking steps to wield that power responsibly:

10 years ago [P&G] went down a purpose path. When I first started my job we had purpose-inspired, benefit-driven brands. What was interesting about that though, is that it was too disconnected from our business. Over the course of the last few years what we’ve done is we’ve gone back in at it, we’ve really become a citizenship platform. We’re building social responsibility into the business. It includes gender equality, community impact and environmental sustainability based on a foundation of ethics and responsibility.

Laudable! But what happens when tough choices are called for? If it came to it, would the company be willing to sacrifice growth and profit for sustainability?

The Biggest Voice In Advertising Finds Its Purpose

Cybernetics and Planned Economies

Great New Yorker story on Project Cybersyn, Salvador Allende’s failed effort to create a cybernetic system to centralize control of Chile’s economy. Stafford Beer—an important figure in the cybernetics world—consulted on the design of the system. He left Chile before Allende was overthrown in the coup that led to the Pinochet dictatorship. But Beer had become disillusioned with the project before then:

One of the participating engineers described the factory modelling process as “fairly technocratic” and “top down”—it did not involve “speaking to the guy who was actually working on the mill or the spinning machine.” Frustrated with the growing bureaucratization of Project Cybersyn, Beer considered resigning. “If we wanted a new system of government, then it seems that we are not going to get it,” he wrote to his Chilean colleagues that spring. “The team is falling apart, and descending to personal recrimination.” Confined to the language of cybernetics, Beer didn’t know what to do. “I can see no way of practical change that does not very quickly damage the Chilean bureaucracy beyond repair,” he wrote.

This doesn’t surprise me. While I can see the allure of using computer technology to gather real-time feedback on the state of an economy, the control aspect of the Cybersyn project eludes me—especially given the enormous practical and technical constraints the team was facing. What would lead intelligent people to think they could 1) build an accurate model of an entire country’s economy, 2) that would be flexible enough to adapt to the changes that invariably happen in such a complex system, 3) that they could control, 4) using early 1970s computing technology? Hubris? Naïveté? Wishful thinking? More to the point, what lessons does this effort hold for today’s world, which we are wiring up with feedback and control mechanisms Cybersyn’s designers could only dream of? (I’ve long had the book Cybernetic Revolutionaries, which deals with the project’s history, on my reading queue. This New Yorker story has nudged me to move it up a few places in the queue.)

The Planning Machine: Project Cybersyn and the origins of the Big Data nation

Designing for the Brilliant Cacophony

Mike Monteiro writing for the Adobe Blog:

When I was a little baby designer I was taught that good design meant simplifying. Keep it clean. Keep it simple. Make the system as efficient as possible. As few templates as possible. I’m sure the same goes for setting up style sheets, servers, and all that other shit we do. My city would run more efficiently if we simplified everything.

But I wouldn’t want to live there.

My city is a mess. My country is a mess. The internet is a mess. But in none of those cases is the answer to look for efficiencies, but rather to celebrate the differences. Celebrate the reasons the metro stops aren’t all the same. Celebrate the crooked streets. Celebrate the different voices. Celebrate the different food smells. Understand that other people like things you don’t. And you might like things they don’t. And it’s all cool! That’s what makes this city, and all cities, a blast. And when all these amazing people, some of them who we don’t understand at all, go online they are going to behave as inefficiently in there as they do out there. And that is awesome.

And your job, the glorious job you signed up for when you said you wanted to be a designer, is to support all of these people. Make sure none of these incredible voices get lost. And to fight against those who see that brilliant cacophony as a bug and not the greatest feature of all time.

You are our protection against monsters.

The call for diversity resonates with me. (It’s the subject of the keynote I’ll be delivering at World IA Day 2019.) Being aware of the distinctions we are creating (or perpetuating) is particularly important for designers who are working on the information architecture of these systems, since the structures we create tend to be longer-lived than other parts of the information environment.

That said, it’s impossible for the systems we create—and the structures that underlie them—to represent every point of view. Designers must make choices; we must take positions. How do we determine what voices to heed among the cacophony? In order to know, we must ask another set of questions: what is this information environment ultimately in service to? What am I in service to? Are the two aligned?

Who Do Designers Really Work For?

The Eponymous Laws of Tech

Dave Rupert has a great compendium of “Laws” we frequently encounter when working in tech. This includes well-known concepts like Moore’s Law, Godwin’s Law, and Dunbar’s Number alongside some I hadn’t heard before, such as Tesler’s Law:

"Every application must have an inherent amount of irreducible complexity. The only question is who will have to deal with it."
Tesler’s Law, or the “Law of Conservation of Complexity,” explains that not every piece of complexity can be hidden or removed. Complexity doesn’t always disappear; it’s often just passed around. Businesses need to fix these complex designs, or that complexity is passed on to the user. Complex things are hard for users. One minute wasted by one million users is a lot of time, whereas it would probably have taken only a fraction of those minutes to fix the complexity. I cringe thinking about the parts of my products that waste users’ time by either being brokenly complex or by having unclear interactions.

Good to know!
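To make the law concrete, here’s a small sketch of my own (not from Mr. Rupert’s post; the function names and formats are hypothetical). The complexity of parsing a date never disappears: it is either pushed onto every caller or absorbed once by the implementer.

```python
from datetime import datetime

# Option A: complexity passed to the user. Every caller must know
# and supply the exact format string for each date they parse.
def parse_date_strict(text: str, fmt: str) -> datetime:
    return datetime.strptime(text, fmt)

# Option B: complexity absorbed by the implementation. Callers pass
# a date in any of a few common formats; the function does the work
# of figuring out which one applies.
COMMON_FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%B %d, %Y"]

def parse_date_friendly(text: str) -> datetime:
    for fmt in COMMON_FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {text!r}")
```

The total complexity is the same in both cases; option B simply moves it from a million users to one implementer, which is the trade Tesler’s Law says we should usually make.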

The Eponymous Laws of Tech

Hugh Dubberly’s Approach to Understanding Problems

I’m a fan of Hugh Dubberly and the work of Dubberly Design Office. Not only have I learned much from Mr. Dubberly throughout my career, I’ve also had the honor of having him write the foreword for Living in Information and the privilege of teaching alongside him at CCA. A post on the Design Practices & Paradigms blog summarizes his career and approach to design, and includes this amazing list:

Dubberly’s approach to understanding problems is heavily influenced by Horst Rittel’s definition of simple and wicked problems. The key traits are listed here:

  • Simple problems (problems which are already defined) are easy to solve, because defining a problem inherently defines a solution.
  • The definition of a problem is subjective; it comes from a point of view. Thus, when defining problems, all stake-holders, experts, and designers are equally knowledgeable (or unknowledgeable).
  • Some problems cannot be solved, because stake-holders cannot agree on the definition. These problems are called wicked, but sometimes they can be tamed.
  • Solving simple problems may lead to improvement—but not innovation. For innovation, we need to re-frame wicked problems.
  • Because one person cannot possibly remember or keep track of all the variables (of both existing and desired states) in a wicked problem, taming wicked problems requires many people.
  • These people have to talk to each other; they have to deliberate; they have to argue.
  • To tame a wicked problem, they have to agree on goals and actions for reaching them. This requires knowledge about actions, not just facts.
  • Science is concerned with factual knowledge (what-is); design is concerned with instrumental knowledge (how what-is relates to what-ought-to-be), how actions can meet goals.
  • The process of argumentation is the key and perhaps the only method of taming wicked problems.
  • This process is political.
  • Design is political.

The whole post is worth your attention.

The “Magic” Behind the Curtain: Understanding the Design Process of Hugh Dubberly

Digital Nutrition

Digital information environments are still a relatively new thing for our species. We’re still looking for ways of understanding what these things are and how they affect us. Inevitably, we talk about them through the lenses of what’s come before—the things we’re familiar with. If (like me) your background is in architecture, you may think of them as types of places. But of course, that’s not the only way to think about them.

In an essay on Medium, Michael Philip Moskowitz and Hans Ringertz draw an analogy with food. They identify five pillars (diet, exercise, sleep hygiene, vocation, and interpersonal relationships) that are important for good mental health; poor mental health, they note, is the leading cause of disability worldwide. To this they add a possible sixth:

What we, the authors of this essay, have come to believe through our separate paths of enquiry over the past decade is that a possible sixth pillar of health has emerged in the era of smartphone addiction and ubiquitous computing. We call this element “digital nutrition,” and in our view, healthy digital habits urgently warrant adoption.

What is digital nutrition?

We define digital nutrition as two distinct but complementary behaviors. The first is the healthful consumption of digital assets, or any positive, purposeful content designed to alleviate emotional distress or maximize human potential, health, and happiness. The second behavior is smarter decision-making, aided by greater transparency around the composition and behavioral consequences of specific types of digital content.

The way I’m reading it: you are what you eat—and also what you read online. Your digital media diet may be making you sick.

It’s an interesting analogy, but not one that I’m fully on board with. Information (what the authors call “digital assets”) is very different from food. We can identify which foods nurture or harm us, but even with our advanced science this knowledge is still imperfect. (Think of the continually shifting opinions about the benefits of foods like red wine and coffee.) Still, it’s possible to envision nutritional science getting more accurate over time. How would the equivalent work for information, digital or otherwise? Information is too contextually dependent—too personal—for us to define universal guidelines of what constitutes a good digital diet at anything but the highest levels.

To the point: the authors anticipate the emergence of a “new labeling system for digital content — one that goes further than today’s film and TV ratings and more closely resembles nutritional information on food packaging.” This seems like dangerous ground. Who would do the labeling? To what ends?

It’s Time to Embrace Digital Nutrition

Kranzberg’s Laws of Technology

Michael Sacasas explains Kranzberg’s Six Laws of Technology, “a series of truisms deriving from a longtime immersion in the study of the development of technology and its interactions with sociocultural change”:

  1. Technology is neither good nor bad; nor is it neutral.
  2. Invention is the mother of necessity.
  3. Technology comes in packages, big and small.
  4. Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.
  5. All history is relevant, but the history of technology is the most relevant.
  6. Technology is a very human activity—and so is the history of technology.

A nuanced take on technology’s role in shaping our lives and societies.

Kranzberg’s Six Laws of Technology, a Metaphor, and a Story