Abstraction and Implementation

In his book Where the Action Is, Paul Dourish surfaces a key distinction in software: that of the user interface as an abstraction of the implementation details that underlie it:

The essence of abstraction in software is that it hides implementation. The implementation is in some ways the opposite of the abstraction; where the abstraction is the gloss that describes how something can be used and what it will do, the implementation is the part under the covers that describes how it will work. If the gas pedal and the steering wheel are the abstraction, then the engine, power train, and steering assembly are the implementation.

Designers often focus on this abstraction of the system — the stuff users deal with. As a result, we spend a lot of cycles understanding users. But for the interface to be any good, designers must also understand the implementation — the system’s key elements, how they interact with each other, its processes, regulation mechanisms, etc.
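Dourish’s car analogy maps directly onto everyday programming. Here’s a minimal sketch in Python (the `Car` class and its method names are my own invention for illustration, not from the book): the public methods are the abstraction, and the hidden state they manipulate is the implementation.

```python
class Car:
    """The abstraction: a small surface describing what the car can do."""

    def __init__(self):
        # Implementation details, hidden "under the covers":
        self._rpm = 0          # engine state
        self._wheel_angle = 0  # steering assembly state, in degrees

    def accelerate(self, pedal_pressure):
        # Implementation: how pressing the pedal changes engine state.
        self._rpm += pedal_pressure * 100

    def steer(self, degrees):
        # Implementation: how turning the wheel moves the steering assembly.
        self._wheel_angle += degrees


car = Car()
car.accelerate(5)  # The driver's view is pedal and wheel...
car.steer(-10)     # ...not rpm and wheel-angle state.
```

The driver (user) only ever touches `accelerate` and `steer`; yet for those two methods to be any good, someone had to understand, and design, the engine underneath.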

Sometimes, as with a new (and perhaps unprecedented) system, this implementation itself is in flux, evolving subject to the system’s goals and the needs of the people who will interact with the system. That is, it’s not all front-end: the implementation is part of the design remit; both the implementation and its abstraction are objects of design.

Causes of (and Remedies for) Bias in AI

James Manyika, Jake Silberg, and Brittany Presten writing for the Harvard Business Review:

AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.

The phrase “artificial intelligence” is leading us astray. For some folks, it’s become a type of magical incantation that promises to solve all sorts of problems. Much of what goes by AI today isn’t magic — or intelligence, really; it’s dynamic applied statistics. As such, “AI” is highly subject to the data being analyzed and the structure of that data. Garbage in, garbage out.
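“Garbage in, garbage out” is easy to see in miniature. In this toy sketch (the data and group names are invented), a “model” that simply learns historical approval rates will faithfully reproduce whatever skew the historical record contains:

```python
# Invented historical data: group_a was approved 3 of 4 times,
# group_b only 1 of 4 times.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def train(records):
    """Learn per-group approval rates from past decisions."""
    counts = {}
    for group, approved in records:
        n, k = counts.get(group, (0, 0))
        counts[group] = (n + 1, k + approved)
    return {group: k / n for group, (n, k) in counts.items()}

model = train(history)
print(model)  # {'group_a': 0.75, 'group_b': 0.25} -- the skew survives training
```

Nothing in the training step is malicious or even wrong, statistically; the bias lives in the data, and the model dutifully bakes it in. Real systems are vastly more complex, but the dynamic is the same.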

It’s important for business leaders to learn about how AI works. The HBR post offers a good summary of the issues and practical recommendations for leaders looking to make better decisions when implementing AI-informed systems — which we all should be:

Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results. Business and organizational leaders need to ensure that the AI systems they use improve on human decision-making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI.

What Do We Do About the Biases in AI?

Tending Your Industry’s Ecosystem

Organizations never exist on their own; they’re part of an ecosystem, a web of relationships that make it possible for things to get done. Your decisions affect the ecosystem, and the decisions of others affect you.

This has always been so, of course, but the internet has made ecosystems more visible and susceptible to disruption. Transacting has become easier and faster. Changes are often immediate, have more impact, and lead to greater network effects. The balance of power shifts: organizations can leverage connections to go straight to consumers. Alternatively, intermediaries can create new roles for themselves, becoming purveyors of information as much as goods.

There are great opportunities for organizations that can affect system dynamics. But there are also risks — to themselves and to the ecosystem. For example, in a recent interview with economist Tyler Cowen, music critic Ted Gioia talked about the impact internet streaming has had on the music industry:

Continue reading

Design for the Relationship

I’m currently reading Brad Stone’s The Everything Store, a history of Amazon.com. An early chapter covers the company’s beginnings, when it sold only books. In addition to showing information about products, founder Jeff Bezos wanted the site to include customer reviews of individual books.

Of course, some customer reviews were negative. Mr. Bezos received an angry letter from a book publishing executive, arguing that Amazon was in the business of selling books, not trashing them. But that was not the Amazon way. Per Mr. Bezos,

When I read that letter, I thought, we don’t make money when we sell things. We make money when we help customers make purchase decisions.

These two sentences struck me as a key insight: the particular sale isn’t the ultimate goal of the interaction; building the overall relationship with the customer is.

Long-term thinking is rare in business — especially in a fast-paced environment such as the early web. Nascent Amazon was under a great deal of pressure to prove itself, to grow. Driving more immediate sales would’ve seemed the more prudent approach. And yet, the team chose the long-term relationship. That’s values in action.

In your work, you may sometimes be called to choose between a feature that “drives the needle” in the short term versus one that builds an ongoing relationship. How do you choose? How do you measure the cost either way?

Photo by Steve Jurvetson via Wikimedia

Quantum Supremacy

Earlier this week, Google researchers announced a major computing breakthrough in the journal Nature:

Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.

Quantum supremacy heralds an era of not merely faster computing, but one in which computers can solve new types of problems. There was a time when I’d expect such breakthroughs from “information technology” companies such as IBM. But Google’s tech is ultimately in service to another business: advertising.

TAOI: Facebook Hiding Likes

The Architecture of Information:

Likes are one of the most important concepts of the Facebook experience. Giving users the ability to cast their approval (or disapproval) on a post or comment — and to see how others have “voted” — is one of the most engaging aspects of the system, both for users and content authors. Facebook even uses the Like icon as a symbol of the company as a whole:

The sign outside the main entrance to Facebook headquarters. (Photo: Facebook.)

However, according to a report in the NY Times, Facebook is experimenting with hiding post measurements:

On [September 26], the social network said it was starting a test in Australia, where people’s Likes, video view counts and other measurements of posts would become private to other users. It is the first time the company has announced plans to hide the numbers on its platform.

Why would they do this? Because seeing these metrics may have an impact on users’ self-esteem. According to a Facebook spokesperson quoted in the article, the company will be testing the change to see if it helps improve people’s experiences. A noble pursuit. But, I wonder: How would this impact user engagement? If it benefits users but hurts advertising revenue, will Facebook discontinue the experiment?

Facebook Tests Hiding ‘Likes’ on Social Media Posts

Collaborating by Default

Writing in his blog, Benedict Evans highlights the new wave of startups focused on personal productivity, “dozens of companies that remix some combination of lists, tables, charts, tasks, notes, light-weight databases, forms, and some kind of collaboration, chat or information-sharing.”

The cycle of bundling and unbundling functionality isn’t new:

There’s an old joke that every Unix function became an internet company – now every Craigslist section, or LinkedIn category, or Excel template, becomes a company as well. Depending on the problem, that might be a new collaboration canvas, or a new networked app, or a new network or marketplace, and you might switch from one form to the other. Github is a developer tool that also became a network – it became LinkedIn for developers.

What is new is the social nature of the experience. Old-school computing was solitary: users interacted with their computers alone. Even when a system included communications software, such as email, interactions with other people were confined to that software. Today, we expect web-based applications to be collaborative by default.

We experience software differently when we assume other people will be sharing the place with us. As I’ve written before, we may ultimately discover that the purpose of social media was to teach us how to collaborate with people in information environments.

New Productivity

Understanding Customer Mental Models

How well do you understand your customers? Do you know how they make decisions? How they see your business’s domain? What makes them tick?

Everyone understands things a bit differently. Nobody has a perfect, complete understanding of the whole of reality. A neurosurgeon may understand the human nervous system but be unable to successfully configure the security settings of her smartphone. Knowledge of one domain doesn’t necessarily translate to another.

You carry around in your mind internal representations of how things work. These representations are called mental models. Wikipedia has a “good enough” definition:

A mental model is an explanation of someone’s thought process about how something works in the real world. It is a representation of the surrounding world, the relationships between its various parts and a person’s intuitive perception about his or her own acts and their consequences.

The more accurately these representations mirror the way things really are, the more skillfully you can act. If you understand the distinctions between the components that define your phone’s security and how they relate to each other, you’ll be able to make good predictions about the consequences of your decisions.

Forming good mental models for complex domains isn’t easy. Modeling calls for thinking in abstract terms. You may be tempted to apply a model from one domain you understand well to another you don’t. (E.g., “I bet this works just like x.”) We aren’t formally trained to model the world. Instead, we form mental representations ad hoc, filling out the broader picture as we go along. Thus, we have imperfect models of much of reality.

Ideally, you want your customers to have good mental models of your business’s domain. This is easier to do in well-established domains than in new ones. For example, more people are likely to have good mental models of the process of renting a car than of securing their smartphone.

It’s important that you understand your customers’ mental models for your domain. This isn’t something you can ask them about in an interview. We don’t express our mental models overtly. Instead, they manifest indirectly in our actions. What to do?

One way to go about it is to observe customers interacting with a prototype, making note of how they interpret its major concepts and their relations to each other. Another is to engage customers in co-creation sessions to design solutions for the domain.

In this second approach, we don’t expect the solutions that emerge to lead directly to products or features. Instead, the artifact functions as a MacGuffin that allows us to map the customers’ mental models of the domain. This approach is especially useful in early stages of the design process, when we don’t yet have a prototype to test.

With a better understanding of how customers see the domain, we can design solutions that allow them to make more skillful decisions. This may call for producing means for them to adjust their mental models to more closely align to reality. Or it may require that we adjust the system we’re designing to better match the models users bring with them.

In either case, we’re not starting from a blank slate: we must meet people’s understanding of the domain. This requires that we understand their mental models.

From Monolithic to Distributed Architectures

Amazon CTO Werner Vogels on how the company transitioned from a monolithic application architecture to a distributed one:

We created a blueprint for change with our “Distributed Computing Manifesto.” This was an internal document describing a new architecture. With this manifesto, we began restructuring our application into smaller pieces called “services” that enabled us to scale Amazon dramatically.

But changing our application architecture was only half the story. Back in 1998, every Amazon development team worked on the same application, and every release of that application had to be coordinated across every team.

To support this new approach to architecture, we broke down our functional hierarchies and restructured our organization into small, autonomous teams, small enough that we could feed each team with only two pizzas. We focused each of these “two-pizza teams” on a specific product, service, or feature set, giving them more authority over a specific portion of the app. This turned our developers into product owners who could quickly make decisions that affected their individual products.

Breaking down our organization and application structures was a bold idea, but it worked. We were able to innovate for our customers at a much faster rate, and we’ve gone from deploying dozens of feature deployments each year to millions, as Amazon has grown. Perhaps more dramatically, our success in building highly scalable infrastructure ultimately led to the development of new core competencies and resulted in the founding of AWS in 2006.
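The shift Vogels describes can be sketched schematically (the function and class names below are invented for illustration, not Amazon’s actual code). In a monolith, every concern lives in one codebase and ships together; in a service decomposition, each team owns a narrow interface and can change its internals independently:

```python
# Monolith: payments and shipping logic tangled in one function.
# Any change to either concern means re-releasing the whole thing.
def place_order_monolith(order):
    charge = order["qty"] * order["price"]   # payments logic
    label = f"ship:{order['sku']}"           # shipping logic
    return {"charged": charge, "label": label}


# Services: each team owns an interface and deploys on its own schedule.
class PaymentsService:
    def charge(self, qty, price):
        return qty * price

class ShippingService:
    def label(self, sku):
        return f"ship:{sku}"

def place_order(order, payments, shipping):
    # The caller composes services through their interfaces,
    # never reaching into their internals.
    return {
        "charged": payments.charge(order["qty"], order["price"]),
        "label": shipping.label(order["sku"]),
    }
```

The code does the same work either way; what changes is who can modify what, and how often each piece can ship — which is exactly why the architectural change demanded the organizational one.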

Technological change requires new ways of working — especially when the change is happening at the structural level. Decentralizing the implementation at the technical level isn’t enough; decision-making must be decentralized as well. I read Amazon’s transition to two-pizza teams as a push towards bottom-up systemic interventions.

This strikes me as a more appropriate response to today’s complex challenges than the top-down hierarchies of the past. Alas, many designers and product managers are still operating within organizational structures that emerged during the industrial revolution, and which don’t easily accommodate bottom-up decision-making.

Modern applications at AWS – All Things Distributed