The Urgent Design Questions of Our Time

George Dyson, from his 2019 EDGE New Year’s Essay:

There is now more code than ever, but it is increasingly difficult to find anyone who has their hands on the wheel. Individual agency is on the wane. Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around. The digital revolution has come full circle and the next revolution, an analog revolution, has begun.

For a long time, the central objects of concern for designers have been interfaces: the touchpoints where users interact with systems. This is changing. The central objects of concern now are systems’ underlying models. Increasingly, these models aren’t artifacts designed in the traditional sense. Instead, they emerge from systems that learn about themselves and the contexts they’re operating in, adapt to those contexts, and in so doing change them.

The urgent design questions of our time aren’t about the usability or fitness-to-purpose of forms; they’re about the ethics and control of systems:

  • Are the system’s adaptation cycles virtuous or vicious?
  • Who determines the incentives that drive them?
  • How do we effectively prototype emergent systems so we can avoid unintended consequences?
  • Where, when, and how do we intervene most effectively?
  • Who intervenes?

Childhood’s End: The digital revolution isn’t over but has turned into something else

A Bold Example of Semantic Pollution

Sometimes language changes slowly and inadvertently; the meaning of words drifts as language evolves. That's how many semantic environments become polluted: little by little. But sometimes change happens abruptly and purposefully. This past weekend, AT&T gave us an excellent example of how to pollute a semantic environment in one blow.

Today’s mobile phone networks work on what’s known as 4G technology. It’s a standard that’s widely adopted by the mobile communications industry. When your smartphone connects to a 4G network, you see a little icon on your phone’s screen that says either 4G or LTE. These 4G networks are plenty fast for most uses today.

However, the industry is working on the next generation network technology called — you guessed it — 5G. The first 5G devices are already appearing on the market. That said, widespread rollout won’t be immediate: the new technology requires new hardware on phones, changes to cell towers, and a host of other changes. It’ll likely be a couple of years before the new standard becomes mainstream.

Despite these technical hurdles, last weekend AT&T started issuing updates to some Android phones in their network that change the network label to 5G. Nothing else is different about these devices; their hardware is still the same and they still connect using the same network technology. So what’s the reason for the change? AT&T has decided to label some advanced current-generation technologies “5G E.” When the real 5G comes around, they’ll call that “5G+.”

This seems like an effort to make the AT&T network look more advanced than those of its competitors. The result, of course, is that this change confuses what 5G means. It erodes the usefulness of the term; afterward, it’ll be harder for nontechnical AT&T customers to know what technology they’re using. It’s a bold example of how to co-opt language at the expense of clarity and understanding.

AT&T decides 4G is now “5G,” starts issuing icon-changing software updates

Framing the Problem

Jon Kolko on problem framing:

The goal of research is widely claimed to be about empathy building and understanding so we can identify and solve problems, and that’s not wrong. But it ignores one of the most important parts of research as an input for design strategy. Research helps produce a problem frame.

A conundrum: The way we articulate design problems implies solutions. At the beginning of a project, we often don’t know enough to communicate the problem well. As a result, we could do an excellent job of solving the wrong thing.

Addressing complex design problems — “solving” them — requires that we define them; that we put a frame around the problem space. This frame emerges from a feedback loop: a round of research leads to some definition, which in turn focuses the next round of research activities, which leads to more definition, etc.

Framing the problem in the way described by Mr. Kolko — by using research to define boundaries and relevant context, and using the resulting insights to guide further research — is a practical way to focus ill-structured problems. It’s an often overlooked part of the design process, and — especially in complex problems — a critical one.

Problem Framing, Not Problem Finding

Four Types of Prototypes

Prototypes are central to the design process; they're the means by which designers establish the feedback loops that allow artifacts to evolve. But there's a pitfall when discussing them: prototypes come in many types and serve many uses, yet we have only one word ("prototype") to describe them all. This unacknowledged variety can cause confusion and false expectations. Talking about prototype resolution and fidelity is as far as many designers go, but often that's not far enough.

In a paper (PDF) published in the Handbook of Human-Computer Interaction (2nd Ed) (1997), Apple designers Stephanie Houde and Charles Hill resolve this issue by identifying four categories of prototypes, based on the dimension of the design project they’re meant to illuminate:

  • Role prototypes
  • Look and feel prototypes
  • Implementation prototypes
  • Integration prototypes

(The following quotes are from the paper.)

Role prototypes

Role prototypes are those which are built primarily to investigate questions of what an artifact could do for a user. They describe the functionality that a user might benefit from, with little attention to how the artifact would look and feel, or how it could be made to actually work.

In other words, this dimension is meant to cover the “jobs to be done” angle of the product or system. How will people use it? Would it be useful in its intended role? What other purposes could it serve for them?

Look and feel prototypes

Look and feel prototypes are built primarily to explore and demonstrate options for the concrete experience of an artifact. They simulate what it would be like to look at and interact with, without necessarily investigating the role it would play in the user’s life or how it would be made to work.

This dimension doesn’t require much explanation; these are prototypes that are meant to explore UI possibilities and refine possible interaction directions. (I sense that many people focus on this aspect of prototyping above the others; look and feel is perhaps the most natural target for critique.)

Implementation prototypes

Some prototypes are built primarily to answer technical questions about how a future artifact might actually be made to work. They are used to discover methods by which adequate specifications for the final artifact can be achieved — without having to define its look and feel or the role it will play for a user. (Some specifications may be unstated, and may include externally imposed constraints, such as the need to reuse existing components or production machinery.)

These prototypes often seek to answer questions about feasibility: Can we make this? What challenges will we face in producing such a thing? How will a particular technology affect its performance? Etc.

Integration prototypes

Integration prototypes are built to represent the complete user experience of an artifact. Such prototypes bring together the artifact’s intended design in terms of role, look and feel, and implementation.

As its name suggests, this final category includes prototypes that are meant to explore the other three dimensions. Their comprehensive nature makes them more useful in simulating “real” conditions for end users, but it also makes them more difficult and expensive to build.

The authors illustrate all four categories with extensive examples. (Mostly charming 1990s-era software projects, some of them prescient and resonant with today’s world.) A running theme throughout is the importance of being clear on who the audience is for the prototype and what purpose it’s meant to serve. A prototype meant to help the internal design team explore a new code library will have a very different form than one meant to excite the public-at-large about the possibilities afforded by a new technology.

The paper concludes with four suggestions for design teams that acknowledge this empathic angle:

  • Define “prototype” broadly.
  • Build multiple prototypes.
  • Know your audience.
  • Know your prototype; prepare your audience.

Twenty-plus years on, this paper remains a useful articulation of the importance of prototypes, and a call to use them more consciously to inform the design process.

What do Prototypes Prototype? (PDF)

How to Measure Network Effects

Li Jin and D’Arcy Coolican, writing for Andreessen Horowitz:

Network effects are one of the most important dynamics in software and marketplace businesses. But they’re often spoken of in a binary way: either you have them, or you don’t. In practice, most companies’ network effects are much more complex, falling along a spectrum of different types and strengths. They’re also dynamic and evolve as product, users, and competition changes.

They go on to outline sixteen ways in which network effects can be measured, grouped into five categories:

Acquisition metrics

  • Organic vs. paid users
  • Sources of traffic
  • Time series of paid customer acquisition cost

Competitive dynamics metrics

  • Prevalence of multi-tenanting
  • Switching or multi-homing costs

Engagement and retention metrics

  • User retention cohorts
  • Core action retention cohorts
  • Dollar retention & paid user retention cohorts
  • Retention by location/geography
  • Power user curves

Marketplace metrics

  • Match rate (aka utilization rate, success rate, etc.)
  • Market depth
  • Time to find a match (or inventory turnover, or days to turn)
  • Concentration or fragmentation of supply and demand

Economics metrics

  • Pricing power
  • Unit economics
I love it when somebody adds granularity and nuance to a concept I previously understood only in binary terms. This post does that for network-centric businesses.

16 Ways to Measure Network Effects

An Economy of Deception

Max Read, writing for New York magazine:

How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

And it’s not just traffic. The article highlights other aspects of online life that aren’t what they appear to be, from businesses and content to the people behind them. As participants in digital information environments, we must increasingly grapple with thorny philosophical questions. What is real? Who is a person? What’s trustworthy?

This situation isn’t inherent to digital information environments. It’s the result of bad incentive structures. Trafficking in advertising — the buying and selling of human attention — has had pernicious effects on the internet. It’s created an economy of deception in one of the most beautiful systems our species has created.

How Much of the Internet Is Fake?

A Community In Search of a Home

Earlier this year, Google announced plans to shutter Google+, its failed social network. While this decision won’t affect many of us, some folks consider Google+ home. A post on Medium by Steven T. Wright highlights the plight of these communities that will go away when Google+ shuts down in April of 2019. Can they find a new place to meet?

The story profiles developer John Lewis, who built a group on Google+ with the purpose of helping these folks find a replacement information environment. Some are finding that their trust in large companies as stewards of their information environments has eroded as a result of the way Google (mis)managed G+:

“Some are going to platforms similar to Facebook like MeWe, some are going to open-source sites like different Diaspora pods,” he says. “I think people are a bit wary of the big companies, after seeing what the rest of Google did to Google+. With their divided attention, Facebook was able to take all of their cool features and cannibalize them. I think we want something that will last for a while, that won’t be shut down by some exec.”

People invest real time and energy in these places. While on one level they are “products” to be “managed,” they’re also infrastructure where people build out important parts of their lives.

The Death of Google+ Is Tearing Its Die Hard Communities Apart

Working in a Second Language

Love this exchange between Daniel Kahneman and Tyler Cowen:

COWEN: Do you think that working outside of your native language in any ways influenced your ideas on psychology? It makes you more aware of thinking fast versus thinking slow? Or not?
KAHNEMAN: It’s something I used to think about in the context . . . I’m from Israel, and it was thinking whether there was something in common to Israeli intellectuals operating in a second language. And I thought that, in a way, it can be an advantage to operate in a second language, that there are certain things . . . that you can think about the thing itself, not through the words.
COWEN: It’s like lower sunk costs in a way.
KAHNEMAN: I don’t know exactly how to explain it, but I thought that this was not a loss for me, to do psychology in a second language.

This resonates with my experience as somebody who operates primarily using a second language. Working in English has made my work better, not worse. It’s been a forcing function that has made me more aware of the contingency of language; a significant effect given how central language is to information architecture.

Daniel Kahneman on Cutting Through the Noise (Ep. 56, Live at Mason)

The Role of Advertisers in Shaping Digital Ecosystems

Marc Pritchard, Procter & Gamble’s long-serving Chief Brand Officer, on how P&G helped create the modern digital media ecosystem:

when we first worked with them they were platforms for communication with people. They had no advertising business. We essentially worked with Facebook to figure out how to place media, how to do reach and frequency, within Facebook.

YouTube came along, and we thought this could be interesting. Not sure where it’s going to go, [but we] ended up monetizing it. What’s interesting about that is that [the founders of Facebook and YouTube] didn’t build these platforms for advertising. Some of the challenges that they’ve had recently, I think, have been because they were built for another purpose. Whereas other media companies, the TV and the radio, they started off and they built advertising in.

I’m not sure that the advertising business model is inherent to either TV or radio, but overall Mr. Pritchard is correct: digital information environments such as Facebook, Twitter, and YouTube weren’t designed primarily to persuade. The advertising-based business model they’ve latched on to (in part driven by demand from clients such as P&G) has created incentives that have harmed society as a whole.

That said, P&G recognizes that it has great power as one of the world’s leading advertisers, and has been taking steps to wield that power responsibly:

10 years ago [P&G] went down a purpose path. When I first started my job we had purpose-inspired, benefit-driven brands. What was interesting about that though, is that it was too disconnected from our business. Over the course of the last few years what we’ve done is we’ve gone back in at it, we’ve really become a citizenship platform. We’re building social responsibility into the business. It includes gender equality, community impact and environmental sustainability based on a foundation of ethics and responsibility.

Laudable! But what happens if/when tough choices are called for? If it came to it, would the company be willing to sacrifice growth and profit for sustainability?

The Biggest Voice In Advertising Finds Its Purpose