The Role of Structure in Digital Design

Andy Fitzgerald, in A List Apart:

design efforts that focus on creating visually effective pages are no longer sufficient to ensure the integrity or accuracy of content published on the web. Rather, by focusing on providing access to information in a structured, systematic way that is legible to both humans and machines, content publishers can ensure that their content is both accessible and accurate in these new contexts, whether or not they’re producing chatbots or tapping into AI directly.

Digital designers have long considered user interfaces to be the primary artifacts of their work. For many, the structures that inform these interfaces have been relegated to a secondary role — that is, if they’ve been considered at all.

Thanks to the revolution sparked by the iPhone, today we experience information environments through a variety of device form factors. Thus far, these interactions have mostly happened on screen-based devices, but that’s changing too. And to top things off, digital experiences are becoming ever more central to our social fabric.

Designing an information environment in 2019 without considering its underlying structures — and how they evolve — is a form of malpractice.
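
What might content that is “legible to both humans and machines” look like in practice? Here’s a minimal sketch using the schema.org vocabulary serialized as JSON-LD, a real convention that search engines and smart agents consume; the specific article and field values below are hypothetical.

```python
import json

# A minimal sketch of structured content: the schema.org vocabulary and
# JSON-LD serialization are real conventions; this particular article and
# its field values are made up. Expressed this way, the same content can
# be rendered for humans and parsed by machines (search engines, voice
# assistants, chatbots) independently of any one interface.
article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "The Role of Structure in Digital Design",
    "author": {"@type": "Person", "name": "Example Author"},
    "datePublished": "2019-01-07",
    "articleBody": "Structured content outlives any single interface.",
}

print(json.dumps(article, indent=2))
```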

Conversations with Robots: Voice, Smart Agents & the Case for Structured Content

TAOI: Adding More Context to Tweets

The architecture of information:

According to a report on The Verge, Twitter will soon start testing new ways of displaying tweets that should give them more context. Some features clarify messages’ positions in conversations using reply threads.

I’m more intrigued by two other features: availability indicators and context tags. The former are green bubbles next to a user’s name that indicate whether they’re online and using the app at any given time, much like other chat systems offer. The latter are tags that allow users to indicate what a tweet refers to. Having a bit more context on what a tweet is about should help avoid non-sequiturs. (I assume it would also make it easier to filter out things you don’t want to bother with.)


Features like these should drive engagement on Twitter and add clarity for users; a case of alignment between the company’s goals and those of its users.

Twitter is rolling out speech bubbles to select users in the coming weeks

The Urgent Design Questions of Our Time

George Dyson, from his 2019 EDGE New Year’s Essay:

There is now more code than ever, but it is increasingly difficult to find anyone who has their hands on the wheel. Individual agency is on the wane. Most of us, most of the time, are following instructions delivered to us by computers rather than the other way around. The digital revolution has come full circle and the next revolution, an analog revolution, has begun.

For a long time, the central objects of concern for designers have been interfaces: the touchpoints where users interact with systems. This is changing. The central objects of concern now are systems’ underlying models. Increasingly, these models aren’t artifacts designed in the traditional sense. Instead, they emerge from systems that learn about themselves and the contexts they’re operating in, adapt to those contexts, and in so doing change them.
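
A toy sketch may make this concrete. The “recommender” below (entirely hypothetical, with made-up numbers) adapts its weights to engagement, and in adapting shifts the very behavior it measures; whether that cycle turns virtuous or vicious depends on the incentives baked into it.

```python
import random

# A toy model (all numbers hypothetical) of a system that adapts to its
# context and changes it in the process: the recommender boosts whatever
# earns clicks, and users' tastes drift toward whatever they are shown.
random.seed(42)
weights = {"news": 1.0, "outrage": 1.0, "longform": 1.0}  # the system's model
tastes = dict(weights)                                    # the users' context

for cycle in range(5):
    shown = max(weights, key=weights.get)  # surface the top-weighted category
    if random.random() < tastes[shown] / sum(tastes.values()):
        weights[shown] += 0.5  # the system adapts to engagement...
        tastes[shown] += 0.3   # ...and in doing so shifts user behavior
    print(f"cycle {cycle}: showing {shown!r}, weights {weights}")
```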

The urgent design questions of our time aren’t about the usability or fitness-to-purpose of forms; they’re about the ethics and control of systems:

  • Are the system’s adaptation cycles virtuous or vicious?
  • Who determines the incentives that drive them?
  • How do we effectively prototype emergent systems so we can avoid unintended consequences?
  • Where, when, and how do we intervene most effectively?
  • Who intervenes?

Childhood’s End: The digital revolution isn’t over but has turned into something else

A Bold Example of Semantic Pollution

Language often changes slowly and inadvertently; the meaning of words shifts over time as the language evolves. That’s how many semantic environments become polluted: little by little. But sometimes change happens abruptly and purposefully. This past weekend, AT&T gave us an excellent example of how to pollute a semantic environment in a single blow.

Today’s mobile phone networks work on what’s known as 4G technology. It’s a standard that’s widely adopted by the mobile communications industry. When your smartphone connects to a 4G network, you see a little icon on your phone’s screen that says either 4G or LTE. These 4G networks are plenty fast for most uses today.

However, the industry is working on the next-generation network technology called — you guessed it — 5G. The first 5G devices are already appearing on the market. That said, widespread rollout won’t be immediate: the new technology requires new hardware in phones, changes to cell towers, and a host of other upgrades. It’ll likely be a couple of years before the new standard becomes mainstream.

Despite these technical hurdles, last weekend AT&T started issuing updates to some Android phones on its network that change the network indicator to read “5G E.” Nothing else is different about these devices; their hardware is still the same and they still connect using the same network technology. So what’s the reason for the change? AT&T has decided to label some advanced current-generation technologies “5G E.” When the real 5G comes around, they’ll call it “5G+.”

This seems like an effort to make the AT&T network look more advanced than those of its competitors. The result, of course, is that this change confuses what 5G means. It erodes the usefulness of the term; afterward, it’ll be harder for nontechnical AT&T customers to know what technology they’re using. It’s a bold example of how to co-opt language at the expense of clarity and understanding.

AT&T decides 4G is now “5G,” starts issuing icon-changing software updates

Framing the Problem

Jon Kolko on problem framing:

The goal of research is widely claimed to be about empathy building and understanding so we can identify and solve problems, and that’s not wrong. But it ignores one of the most important parts of research as an input for design strategy. Research helps produce a problem frame.

A conundrum: the way we articulate design problems implies solutions. At the beginning of a project, we often don’t know enough to communicate the problem well. As a result, we risk doing an excellent job of solving the wrong problem.

Addressing complex design problems — “solving” them — requires that we define them; that we put a frame around the problem space. This frame emerges from a feedback loop: a round of research leads to some definition, which in turn focuses the next round of research activities, which leads to more definition, etc.

Framing the problem in the way described by Mr. Kolko — using research to define boundaries and relevant context, and using the resulting insights to guide further research — is a practical way to bring focus to ill-structured problems. It’s an often overlooked part of the design process, and — especially for complex problems — a critical one.

Problem Framing, Not Problem Finding

Four Types of Prototypes

Prototypes are central to the design process; they’re the means by which designers establish the feedback loops that allow artifacts to evolve. But there’s a pitfall when discussing prototypes: they come in many types and serve many uses, yet we have only one word (“prototype”) to describe them all. This unacknowledged variety can cause confusion and false expectations. Talking about resolution and fidelity is as far as many designers go, but often that’s not far enough.

In a paper (PDF) published in the Handbook of Human-Computer Interaction (2nd ed., 1997), Apple designers Stephanie Houde and Charles Hill address this issue by identifying four categories of prototypes, based on the dimension of the design project they’re meant to illuminate:

  • Role prototypes
  • Look and feel prototypes
  • Implementation prototypes
  • Integration prototypes

(The following quotes are from the paper.)

Role prototypes

Role prototypes are those which are built primarily to investigate questions of what an artifact could do for a user. They describe the functionality that a user might benefit from, with little attention to how the artifact would look and feel, or how it could be made to actually work.

In other words, this dimension is meant to cover the “jobs to be done” angle of the product or system. How will people use it? Would it be useful in its intended role? What other purposes could it serve for them?

Look and feel prototypes

Look and feel prototypes are built primarily to explore and demonstrate options for the concrete experience of an artifact. They simulate what it would be like to look at and interact with, without necessarily investigating the role it would play in the user’s life or how it would be made to work.

This dimension doesn’t require much explanation; these are prototypes that are meant to explore UI possibilities and refine possible interaction directions. (I sense that many people focus on this aspect of prototyping above the others; look and feel is perhaps the most natural target for critique.)

Implementation prototypes

Some prototypes are built primarily to answer technical questions about how a future artifact might actually be made to work. They are used to discover methods by which adequate specifications for the final artifact can be achieved — without having to define its look and feel or the role it will play for a user. (Some specifications may be unstated, and may include externally imposed constraints, such as the need to reuse existing components or production machinery.)

These prototypes often seek to answer questions about feasibility: Can we make this? What challenges will we face in producing such a thing? How will a particular technology affect its performance? Etc.

Integration prototypes

Integration prototypes are built to represent the complete user experience of an artifact. Such prototypes bring together the artifact’s intended design in terms of role, look and feel, and implementation.

As its name suggests, this final category includes prototypes that are meant to explore the other three dimensions. Their comprehensive nature makes them more useful in simulating “real” conditions for end users, but it also makes them more difficult and expensive to build.

The authors illustrate all four categories with extensive examples. (Mostly charming 1990s-era software projects, some of them prescient and resonant with today’s world.) A running theme throughout is the importance of being clear on who the audience is for the prototype and what purpose it’s meant to serve. A prototype meant to help the internal design team explore a new code library will have a very different form than one meant to excite the public-at-large about the possibilities afforded by a new technology.

The paper concludes with four suggestions for design teams that acknowledge this empathic angle:

  • Define “prototype” broadly.
  • Build multiple prototypes.
  • Know your audience.
  • Know your prototype; prepare your audience.

Twenty-plus years on, this paper remains a useful articulation of the importance of prototypes, and a call to use them more consciously to inform the design process.

What do Prototypes Prototype? (PDF)

How to Measure Network Effects

Li Jin and D’Arcy Coolican, writing for Andreessen Horowitz:

Network effects are one of the most important dynamics in software and marketplace businesses. But they’re often spoken of in a binary way: either you have them, or you don’t. In practice, most companies’ network effects are much more complex, falling along a spectrum of different types and strengths. They’re also dynamic and evolve as product, users, and competition changes.

They go on to outline sixteen ways in which network effects can be measured, grouped into five categories:

Acquisition

  • Organic vs. paid users
  • Sources of traffic
  • Time series of paid customer acquisition cost

Competitors

  • Prevalence of multi-tenanting
  • Switching or multi-homing costs

Engagement

  • User retention cohorts (see the sketch after this list)
  • Core action retention cohorts
  • Dollar retention & paid user retention cohorts
  • Retention by location/geography
  • Power user curves

Marketplace metrics

  • Match rate (aka utilization rate, success rate, etc.)
  • Market depth
  • Time to find a match (or inventory turnover, or days to turn)
  • Concentration or fragmentation of supply and demand

Economics-related

  • Pricing power
  • Unit economics
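
To make one of these measures concrete, here’s a minimal sketch of the user retention cohorts flagged above, computed from hypothetical (user, month) activity events; the users and numbers are made up.

```python
from collections import defaultdict

# User retention cohorts from hypothetical (user, month) activity events.
# Users are grouped by the month they first appear; a cohort's retention
# at age N is the share of its users still active N months later. If
# network effects are strengthening, newer cohorts should generally
# retain better than older ones.
events = [
    ("ana", 0), ("ana", 1), ("ana", 2),
    ("bob", 0), ("bob", 1),
    ("cai", 1), ("cai", 2), ("cai", 3),
    ("dee", 1), ("dee", 2),
]

# Each user's cohort is the month of their earliest event.
first_month = {}
for user, month in sorted(events, key=lambda e: e[1]):
    first_month.setdefault(user, month)

# Group users by cohort and by age (months since their first event).
cohorts = defaultdict(lambda: defaultdict(set))
for user, month in events:
    cohorts[first_month[user]][month - first_month[user]].add(user)

for cohort, ages in sorted(cohorts.items()):
    size = len(ages[0])  # every cohort member is active at age 0
    retention = {age: len(users) / size for age, users in sorted(ages.items())}
    print(f"cohort starting month {cohort}: {retention}")
```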

I love it when somebody adds granularity and nuance to a concept I previously understood only in binary terms. This post does that for network-centric businesses.

16 Ways to Measure Network Effects

An Economy of Deception

Max Read, writing for New York magazine:

How much of the internet is fake? Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake. They called this hypothetical event “the Inversion.”

And it’s not just traffic. The article highlights other aspects of online life that aren’t what they appear to be, from businesses and content to the people behind them. As participants in digital information environments, we must increasingly grapple with thorny philosophical questions. What is real? Who is a person? What’s trustworthy?

This situation isn’t inherent to digital information environments. It’s the result of bad incentive structures. Trafficking in advertising — the buying and selling of human attention — has had pernicious effects on the internet. It’s created an economy of deception in one of the most beautiful systems our species has built.

How Much of the Internet Is Fake?

A Community In Search of a Home

Earlier this year, Google announced plans to shutter Google+, its failed social network. While this decision won’t affect many of us, some folks consider Google+ home. A post on Medium by Steven T. Wright highlights the plight of the communities that will be displaced when Google+ shuts down in April 2019. Can they find a new place to meet?

The story profiles developer John Lewis, who started a group on Google+ devoted to helping these folks find a replacement information environment. Some have found that their trust in large companies as stewards of their information environments has eroded as a result of the way Google (mis)managed G+:

“Some are going to platforms similar to Facebook like MeWe, some are going to open-source sites like different Diaspora pods,” he says. “I think people are a bit wary of the big companies, after seeing what the rest of Google did to Google+. With their divided attention, Facebook was able to take all of their cool features and cannibalize them. I think we want something that will last for a while, that won’t be shut down by some exec.”

People invest real time and energy in these places. While on one level they are “products” to be “managed,” they’re also infrastructure where people build out important parts of their lives.

The Death of Google+ Is Tearing Its Die Hard Communities Apart