The Treachery of Deepfakes

Ninety years ago, René Magritte painted a pipe. I’m sure you’ve seen the work; it’s among his most famous. Written under the rendering of the object are the words Ceci n’est pas une pipe — “This is not a pipe.” Huh? Well, it isn’t; it’s a representation of a pipe. Clever stuff.

The Treachery of Images

The painting is called La Trahison des images — “The Treachery of Images.” Treachery is deception; a betrayal of trust. The painting tricks us by simulating a familiar object. Aided by the charming image, our mind conceives the pipe. We recall experiences with the real thing — its size, weight, texture, the smell of tobacco, etc. Suddenly we’re faced with a conundrum. Is this a pipe or not? At one level it is, but at another it isn’t.

The Treachery of Images requires that we make a conceptual distinction between the representation of an object and the object itself. While it’s not a nuanced distinction – as far as I know, nobody has tried to smoke Magritte’s painting – it’s important since it highlights the challenges inherent in using symbols to represent reality.

The closer these symbols are to the thing they’re representing, the more compelling the simulation. Compared to many of Magritte’s contemporaries, his style is relatively faithful to the “real world.” That said, it’s not what we call photo-realistic. (That is, an almost perfect two-dimensional representation of the real thing. Or rather, a perfectly rendered representation of a photograph of the real thing.)

Magritte’s pipe is close enough. I doubt the painting would be more effective if it featured a “perfect” representation; its “painting-ness” is an important part of what makes it effective. The work’s aim isn’t to trick us into thinking that we’re looking at a pipe, but to spark a conversation about the difference between an object and its symbolic representation.

The distance between us and the simulation is enforced by the medium in which we experience it. You’re unlikely to be truly misled while standing in a museum in front of the physical canvas. That changes, of course, if you’re experiencing the painting in an information environment such as the website where you’re reading these words. Here, everything collapses onto the same level.

There’s a photo of Magritte’s painting at the beginning of this post. Did you confuse it with the painting itself? I’m willing to bet that at one level you did. This little betrayal serves a noble purpose; I wanted you to be clear on which painting I was discussing. I also assumed that you’d know that that representation of the representation wasn’t the “real” one. (There was no World Wide Web ninety years ago.) No harm meant.

That said, as we move more of our activities to information environments, it becomes harder for us to make these distinctions. We get used to experiencing more things in these two-dimensional symbolic domains. Not just art, but also shopping, learning, politics, health, taxes, literature, mating, etc. Significant swaths of human experience collapsed to images and symbols.

Some, like my citing of The Treachery of Images, are relatively innocent. Others are actually and intentionally treacherous. As in: designed to deceive. The rise of these deceptions is inevitable; the medium makes them easy to accept and disseminate, and simulation technologies keep getting better. That’s why you hear increasing concern about deepfakes in the news.

Recently, someone commercialized an application that strips women of their clothes. Well, not really — it strips photographs of women of their clothes. That makes it only slightly less pernicious; such capabilities can do very real harm. The app has since been pulled from the market, but I’m confident that won’t be the last we see of this type of treachery.

It’s easy to point to that case as an obvious misuse of technology. Others will be harder. Consider “FaceTime Attention Correction,” a new capability coming in iOS 13. Per The Verge, this seemingly innocent feature corrects a long-standing issue with video calls:

Normally, video calls tend to make it look like both participants are peering off to one side or the other, since they’re looking at the person on their display, rather than directly into the front-facing camera. However, the new “FaceTime Attention Correction” feature appears to use some kind of image manipulation to correct this, and results in realistic-looking fake eye contact between the FaceTime users.

What this seems to be doing is re-rendering parts of your face on-the-fly while you’re on a video call so the person on the other side is tricked into thinking you’re looking directly at them.

While this sounds potentially useful, and the technology behind it is clever and cool, I’m torn. Eye contact is an essential cue in human communication. We get important information from our interlocutor’s eyes. (That’s why we say the eyes are the “windows to the soul.”) While meeting remotely using video is nowhere near as rich as meeting in person, we communicate better using video than when using voice only. Do we really want to mess around with something as essential as the representation of our gaze?

In some ways, “Attention Correction” strikes me as more problematic than other examples of deep fakery. We can easily point to stripping clothes off photographs, changing the cadence of politicians’ speeches in videos, or simulating an individual’s speech patterns and tone as either obviously wrong or (in the latter case) at least ethically suspect. Our repulsion makes them easier to regulate or shame off the market. It’s much harder to say that altering our gaze in real-time isn’t ethical. What’s the harm?

Well, for one, it messes around with one of our most fundamental communication channels, as I said above. It also normalizes the technologies of deception; it puts us on a slippery slope. First the gaze, then… What? A haircut? Clothing? Secondary sex characteristics? Given realistic avatars, perhaps eventually we can skip meetings altogether.

Some may relish the thought, but not me. I’d like more human interactions in information environments. Currently, when I look at the smiling face inside the small glass rectangle, I think I’m looking at a person. Of course, it’s not a person. But there’s no time (or desire) during the interaction to snap myself out of the illusion. That’s okay. I trust that there’s a person on the other end, and that I’m looking at a reasonably trustworthy representation. But for how much longer?

Design for Long-Term Relevance

Richard Saul Wurman in an interview for Interior Design magazine:

One of the reasons [my firm] went out of business was the ideal piece of architecture at that time was a Michael Graves building and he ruined architecture. I know he’s dead, but when he was alive he was smart and drew well and was a nice person, but he ruined architecture because all the critics made him the king architect doing these decorative buildings that won’t even be a footnote in 20 years. I’m putting this in context. Architects are as good as their clients and what they’re demanding. So, they are doing bling buildings. Look at what just got put up by thoughtful, bright architects—I’ve met every single one of them—in Hudson Yards. The idea of Hudson Yards is that it looks good from a helicopter and New Jersey. Walking around is the opposite of Piazza San Marco. It just isn’t interesting. It’s a fiction that all the architects during the Renaissance were great. What has held up is buildings that people want to occupy.

The Portland Building in August 1982. Photo by Steve Morgan, CC BY-SA 3.0, via Wikimedia.

I was in architecture school at a time when Graves’ architecture was still hot. I remember poring over his beautiful drawings and thinking how much better they looked than photographs of the ensuing buildings. That was then; now, both look stale. Not the effect you want when designing something meant to be as durable as a building.

Relatively few things stand the test of time. Those that do — buildings, books, household objects, technologies, etc. — are worth paying attention to. If they remain relevant after taste and popular opinion have moved on, it’s because at some level they address universal needs.

Aspiration: design for long-term relevance. Hard to do for creatures dazzled by an endless array of new capabilities and embedded in cultures that place a premium on innovation.

10 Questions With… Richard Saul Wurman (h/t Dan Klyn)

On “Content”

Must-read post by Om Malik:

“Content” is the black hole of the Internet. Incredibly well-produced videos, all sorts of songs, and articulate blog posts — they are all “content.” Are short stories “content”? I hope not, since that is one of the most soul-destroying of words, used to strip a creation of its creative effort.

The World Wide Web is the most powerful medium for learning, sharing, and understanding our species has created. Our descendants will judge us harshly on the first thing we tried to do with it: commoditize our attention by packaging our insights and humanity into transactional units.

(The optimist’s take: It’s still early days; we haven’t yet tapped the web’s full potential.)

The Problem With “Content” — On my Om

Information Metaphors

The ways we deal with information since the advent of the web are new. Although people have dealt with information in the past — through spoken language, print media, the environment, etc. — the web changed how we produce and use information. We don’t yet have precise language to describe the effects of this change upon us as individuals and societies.

Language reveals how we think about things. Given the newness of the experience, I’m curious about the metaphors we use to talk about how we use information online. I’ve noticed three come up often:

  • information as resource,
  • information as sustenance, and
  • information as an environment.

Let’s look at them in more detail.

Information as Resource

Under this metaphor, we see information as something to be bought, sold, mined, traded, shared, etc. We can own information, gain access to it, stream it. We must protect our information lest it fall into the wrong hands.

Examples:

“A new commodity spawns a lucrative, fast-growing industry, prompting antitrust regulators to step in to restrain those who control its flow. A century ago, the resource in question was oil. Now similar concerns are being raised by the giants that deal in data, the oil of the digital era.” — The Economist

“Think twice about sharing your social security number with anyone, unless it’s your bank, a credit bureau, a company that wants to do a background check on you or some other entity that has to report to the IRS. If someone gets their hands on it and has information such as your birth date and address they can steal your identity and take out credit cards and pile up other debt in your name.” — Christina DesMarais, TIME

Information as Sustenance

This metaphor posits that information is like food and drink; it changes us as we consume it. Information enters you and transforms you. You are what you eat; you are what you read online. As with food, you have the ability to say “no” to information, to change your consumption patterns. You could go on an “information diet” if you wished.

Examples:

“We monitor what we eat and drink, optimizing our diet for health and performance, not just enjoyment–and yet we can be heedless about what we read, watch, and listen to. Our information diet is often the result of accident or happenstance rather than thoughtful planning. Even when we do choose deliberately, the intent behind much of our media consumption is simply to soothe or distract ourselves, not to nourish or enrich. It’s like having french fries for every meal.” — Ed Batista

“We define digital nutrition as two distinct but complementary behaviors. The first is the healthful consumption of digital assets, or any positive, purposeful content designed to alleviate emotional distress or maximize human potential, health, and happiness. The second behavior is smarter decision-making, aided by greater transparency around the composition and behavioral consequences of specific types of digital content.” — Michael Phillips Moskowitz

Information as Environment

Another metaphor is that of information as something you inhabit: an environment. Under this metaphor, information defines the boundaries of spaces where we interact. We’ve been using this type of language from very early in the online revolution; we’ve been talking of “chat rooms” and “home pages” for a long time.

Examples:

“When all discussion takes place under the eye of software, in a for-profit medium working to shape the participants’ behavior, it may not be possible to create the consensus and shared sense of reality that is a prerequisite for self-government. If that is true, then the move away from ambient privacy will be an irreversible change, because it will remove our ability to function as a democracy.” — Maciej Cegłowski

“Dark forests like newsletters and podcasts are growing areas of activity. As are other dark forests, like Slack channels, private Instagrams, invite-only message boards, text groups, Snapchat, WeChat, and on and on. This is where Facebook is pivoting with Groups (and trying to redefine what the word ‘privacy’ means in the process).

These are all spaces where depressurized conversation is possible because of their non-indexed, non-optimized, and non-gamified environments. The cultures of those spaces have more in common with the physical world than the internet.” — Yancey Strickler

While all three metaphors are valid, you won’t be surprised to learn I favor the “environment” metaphor — as evidenced by the title of my book.

The “resource” metaphor brings with it the language of ownership and trade. The “sustenance” metaphor reduces our agency to which types of information we choose to let in. (After all, most of us don’t produce our own food.) While both are valid, they miss an important angle: the fact that our interactions with each other and our social institutions are increasingly mediated through information. The language of inhabitation nudges us to consider the pervasive influence of information on our actions and empowers us to reconfigure our information structures to affect outcomes. It gives us agency with regard to information while acknowledging the degree to which it influences our decisions.

Have you found other information metaphors? Please let me know.

The Optimism of Design

I’ve been accused of being optimistic. I say “accused” because the word is often uttered with disdain. It seems de rigueur for some folks to think of these as the worst of times. The environment is going to hell, political institutions and the rule of law are under attack, injustice and inequality seem to be on the rise, resources are dwindling, etc. How can one be optimistic under such circumstances?

It seems an unpopular and old-fashioned perspective, but I remain steadfast: things can get better — and designers have an important role to play in improving them.

Design is an inherently optimistic practice. It requires an open mind about the possibilities for creating a better future. I’ll say it again: design is about making the possible tangible. “Making the possible tangible” means testing alternate ways of being in the world. The point is making things better. If you don’t believe there’s room for improvement, why design? And if things can be improved, why despair?

This doesn’t mean designers must be naive about the state of the world. On the contrary: we can’t begin to design a better future if we don’t clearly understand the present. At least that’s what we’ve been telling clients; at this point, many have bought into the idea that a solid design process begins with understanding the problem domain through research.

What research informs your worldview? If your understanding comes primarily from sources incentivized to capture your attention (read: advertising-supported media), then be wary. Good news doesn’t sell; rage is an excellent way of keeping you tuned in. Misery loves company, and there are many lonely people out there looking for someone to friend. “A lie travels halfway around the world before truth puts on its boots” (often attributed to Churchill) — and with social media, we’ve built a teleporter.

The challenge our forebears faced in understanding the world was a lack of information. That’s not our problem; we have information to spare. Our challenges are deciding what is true and who to believe. We can be more selective today than ever before about the facts that inform our worldview; in seconds I can call up a counter-fact to every fact you can muster. As a result, our attitude towards the possibilities matters more than ever; it’s never been more important to cultivate a beginner’s mind.

Again, this doesn’t imply naiveté. It implies seeing reality for what it is and keeping an open mind towards the possibilities. I’m reminded of this exchange between Bill Moyers and Joseph Campbell:

CAMPBELL: There’s a wonderful formula that the Buddhists have for the Bodhisattva. The Bodhisattva, the one whose being — sattva — is illumination — bodhi — who realizes his identity with eternity, and at the same time his participation in time. And the attitude is not to withdraw from the world when you realize how horrible it is, but to realize that this horror is simply the foreground of a wonder, and come back and participate in it. “All life is sorrowful,” is the first Buddhist saying, and it is. It wouldn’t be life if there were not temporality involved, which is sorrow, loss, loss, loss.

MOYERS: That’s a pessimistic note.

CAMPBELL: Well, I mean, you got to say, “yes” to it and say, “it’s great this way.” I mean, this is the way God intended it.

MOYERS: You don’t really believe that?

CAMPBELL: Well, this is the way it is, and I don’t believe anybody intended it, but this is the way it is. And Joyce’s wonderful line, you know, “History is a nightmare from which I’m trying to awake.” And the way to awake from it is not to be afraid and to recognize, as I did in my conversation with that Hindu guru or teacher that I told you of, that all of this as it is, is as it has to be, and it is a manifestation of the eternal presence in the world. The end of things always is painful; pain is part of there being a world at all.

MOYERS: But if one accepted that, isn’t the ultimate conclusion to say, “well, I won’t try to reform any laws or fight any battles.”

CAMPBELL: I didn’t say that.

MOYERS: Isn’t that the logical conclusion one could draw, though, the philosophy of nihilism?

CAMPBELL: Well, that’s not the necessary thing to draw. You could say, “I will participate in this row, and I will join the army, and I will go to war.”

MOYERS: I’ll do the best I can on earth.

CAMPBELL: I will participate in the game. It’s a wonderful, wonderful opera, except that it hurts. And that wonderful Irish saying, you know, “Is this a private fight, or can anybody get into it?” This is the way life is, and the hero is the one who can participate in it decently, in the way of nature, not in the way of personal rancor, revenge or anything of the kind.

With our practice centered on making things better, designers are heroes in society. We can choose to be. Kvetching is unbecoming.

TAOI: YouTube Subscriber Counts

The architecture of information:

A headline on The Verge: YouTube is changing how subscriber counts are displayed, possibly shifting its culture.

One of the most famous aphorisms in management is the observation, often attributed to Peter Drucker, that “if you can’t measure it, you can’t improve it.” This phrase succinctly captures an important idea: when deciding the way forward, data is your friend. Rather than discussing directions in the abstract, this concept encourages us to break problems down into impartial facets we can track over time.

However, as useful as it is, there’s a flip side to this concept: with a compelling enough measure, we can lose sight of the ultimate “it” we’re trying to improve. The point of losing weight isn’t to read a lower number on a scale; it’s to get healthier. The number is a proxy for health — and an imperfect one at that. “Health” is a complex subject with lots of nuances. Articulating it as a single number can make it easier to understand, but oversimplifies a complex whole.

We compound the problem when we base incentives on these numbers. Let’s say you’re promised a $500 bonus if you lose a certain amount of weight by a particular date. At that point “health” is twice abstracted: your goal is now neither health nor weight but the money. The numbers start to become more important than the ultimate thing we want to achieve. The map is not the territory, but we’re being incentivized to navigate the map.
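
Economists have a name for this dynamic: Goodhart’s law, the observation that when a measure becomes a target, it ceases to be a good measure. Here’s a minimal sketch of the weight-loss bonus as a toy model in Python. All the numbers are invented; the only point is that a strategy can win on the proxy while losing on the actual goal.

```python
# A toy model of the weight-loss bonus. All coefficients are made up;
# what matters is that the proxy (the scale) and the goal (health)
# respond differently to the same behaviors.

def health(diet_quality: float, crash_dieting: float) -> float:
    """The outcome we actually care about."""
    return diet_quality - 2.0 * crash_dieting

def scale_reading(diet_quality: float, crash_dieting: float) -> float:
    """The proxy the $500 bonus is tied to: pounds lost by the deadline."""
    return 0.5 * diet_quality + 1.5 * crash_dieting

strategies = {
    "eat better": (1.0, 0.0),  # (diet_quality, crash_dieting)
    "crash diet": (0.2, 1.0),
}

for name, (quality, crash) in strategies.items():
    print(f"{name:10s}  proxy={scale_reading(quality, crash):+.2f}"
          f"  health={health(quality, crash):+.2f}")

# "crash diet" wins the bonus (proxy +1.60 vs +0.50) while health goes
# negative (-1.80 vs +1.00). The incentive optimizes the map, not the
# territory.
```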

We hope to get to the goal on the real ground the map represents. But sometimes we don’t. Sometimes the map is so compelling that it becomes the territory. This has happened with measures in social media such as follower counts on Twitter.

Back to The Verge article. High-level summary: After a recent kerfuffle between two “creators,” YouTube is changing how its system displays subscriber counts. Creators compete for subscribers, and their fortunes wax and wane accordingly. In this system, follower counts are a proxy for popularity. It’s an imperfect measure, but it’s clear and compelling, and so emerges as the locus of attention for an economy of influence. I didn’t realize it until reading about this issue, but there’s a secondary market on these stats: websites like Social Blade exist solely to track how these people are doing relative to each other. It’s a big deal.

But what’s the ultimate goal here? What social function is this system enabling? (What’s the equivalent of “health”?) Is it entertainment? Commerce? Both?

Proudshamed

Paul Ford, writing for WIRED:

Nerds, we did it. We have graduated, along with oil, real estate, insurance, and finance, to the big T. Trillions of dollars. Trillions! Get to that number any way you like: Sum up the market cap of the major tech companies, or just take Apple’s valuation on a good day. Measure the number of dollars pumped into the economy by digital productivity, whatever that is. Imagine the possible future earnings of Amazon.

The things we loved — the Commodore Amigas and AOL chat rooms, the Pac-Man machines and Tamagotchis, the Lisp machines and RFCs, the Ace paperback copies of Neuromancer in the pockets of our dusty jeans — these very specific things have come together into a postindustrial Voltron that keeps eating the world. We accelerated progress itself, at least the capitalist and dystopian parts. Sometimes I’m proud, although just as often I’m ashamed. I am proudshamed.

This piece captures a mood I’ve perceived among my cohort of techie designers: A radical swing from the unbridled optimism many of us felt in the 1990s — the sense that the internet was a transformational force comparable only to Gutenberg — to moroseness and guilt at the effects of these changes on society.

The transition from the Middle Ages to the modern era was anything but smooth. Gutenberg’s innovation wrought tremendous upheaval: long-standing mental models collapsed; social and political systems were replaced. The technological changes of the last five decades — the wiring up of the planet into a real-time nervous system that democratizes access to the world’s information — are in some ways more radical than those of the 15th and 16th centuries. We’ve not just changed the ways we interact with each other and the world, we’ve changed change itself — scaling and speeding it up in ways that lead to unpredictable outcomes.

The article frames (digital) technology as an industry alongside others such as energy and finance. That’s a common underestimation spurred by the pervasive mental model of our time: that of the market economy. Yes, tech is an industry in that sense. But tech is also a meta-industry: it changes the character of the other industries thoroughly. The call to more responsible design is urgent not because tech requires it, but because we are re-building society atop tech.

Why should we expect such radical changes to be easy or comfortable? People of my vintage (I’m squarely Gen X) and younger in the developed world have thus far led lives of relative peace and stability. Cold War notwithstanding, we came of age inside a certainty bubble. When dealing with (deep) disruption, we fail to account both for the fragility of social institutions and the resilience of individuals under such conditions.

Mr. Ford concludes:

I was exceptionally lucky to be born into this moment. I got to see what happened, to live as a child of acceleration. The mysteries of software caught my eye when I was a boy, and I still see it with the same wonder, even though I’m now an adult. Proudshamed, yes, but I still love it, the mess of it, the code and toolkits, down to the pixels and the processors, and up to the buses and bridges. I love the whole made world. But I can’t deny that the miracle is over, and that there is an unbelievable amount of work left for us to do.

I, too, feel lucky. Yes, there is lots of work to do. But the miracle is far from over; it’s ongoing. Responding skillfully to the changes it brings requires being present: seeing clearly so we can use our (real!) abilities to increase agency and compassion.

Why I (Still) Love Tech: In Defense of a Difficult Industry

Informing and Persuading

As more things become digital, those of us who design digital things — apps, websites, software — increasingly define how people understand and interact with the world. It’s not uncommon for digital designers to make difficult choices on behalf of others. This requires an ethical commitment to doing the right thing.

For information architects, the critical decisions involve structuring information in particular ways. Choices include:

  • What information should be present
  • How information should be presented (i.e., in what format or sequence)
  • How information should be categorized

The objective is to make information easier to find and understand.

At least in theory. Often, the objective is to make some information easier to find than others. For example, it recently came to light that tax filing software makers such as Intuit and H&R Block set out to steer customers away from their free offerings. Intuit even tweaked its site, apparently to keep public search engines from indexing the product. The goal in this case seems to be not to make information more findable, but less so — while still technically complying with a commitment to “inform.”
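
The mechanics here are mundane. A site can ask search engines not to index a page via a robots meta tag or an X-Robots-Tag response header. Here’s a rough sketch of how you might check a page for such directives; the URL is a placeholder, and the meta-tag test is a crude substring match rather than a real HTML parse.

```python
# Sketch: detect whether a page asks search engines not to index it.
# "noindex", the robots meta tag, and the X-Robots-Tag header are
# standard robots-exclusion mechanisms; the target URL is hypothetical.
import urllib.request

def indexing_directives(url: str) -> list[str]:
    """Return any de-indexing signals found in headers or markup."""
    found = []
    with urllib.request.urlopen(url) as resp:
        header = resp.headers.get("X-Robots-Tag")
        if header and "noindex" in header.lower():
            found.append(f"X-Robots-Tag: {header}")
        body = resp.read().decode("utf-8", errors="replace").lower()
    # Crude check for a robots meta tag; a real tool would parse the HTML.
    if 'name="robots"' in body and "noindex" in body:
        found.append("robots meta tag containing noindex")
    return found

print(indexing_directives("https://example.com/some-free-product-page/"))
```

A page excluded this way still exists and still “informs” anyone who lands on it; it has simply been made much harder to find.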

The same is true for understandability. A few years ago, when the Affordable Care Act was being debated in the U.S., a diagram was put forth that purported to explain the implications of the new law:

The “Understanding Obamacare” chart.

This is not a neutral artifact. Its primary design objective isn’t to make the ACA more understandable, but to highlight its complexity. (It succeeds.) This diagram intentionally confuses the viewer. As such, it’s ethically compromised.

IA challenges fall on a continuum. On one end of the spectrum, you’re aiming to inform the people who interact with your artifact about a particular domain. On the other end, you’re trying to persuade them.

Inform - Persuade

By “inform,” I mean giving people the information they need so they can make reasonable decisions within a conceptual domain, and presenting this information to them in ways they can understand given their level of expertise. By “persuade,” I mean giving people the information they need so they can behave how we want them to, and presenting it to them in ways that nudge them in that direction.

Informing and persuading are different objectives. In one, you’re setting out to increase the person’s knowledge so they can make their own decisions. In the other, you’re setting out to move them towards specific, predetermined outcomes. In both cases, you’re trying to alter behavior — but the motives are different. By informing, you make people smarter. By persuading, you make them acquiescent.

I’m not passing judgment by drawing this distinction. If someone is engaged in self-defeating or otherwise destructive courses of action (e.g., smoking, gambling, driving while intoxicated), setting out to change their behavior could be the compassionate, ethical thing to do. So persuasion isn’t bad per se. Also, few projects fall on either extreme of the continuum; most lie somewhere in the middle. (Is it ever possible to not persuade when structuring information? I.e., all taxonomies are political. Even this post is an exercise in persuasion.)

That said, if your goal is to make information more findable and understandable, you will sometimes be tested by the need to persuade. If the offering truly adds value to clients and to the world, and aligns with your own values, you’re unlikely to face a tough ethical call. Such offerings “sell themselves” — i.e., the more you know about them and their competitors, the more desirable they become. The problem comes when you’re asked to sell a lemon or to nudge people towards goals that are misaligned with their goals, your goals, or society’s goals. There’s no ethical way to bring balance to such situations; often the appropriate response is to take a “hard pass.” (I.e., not engage in the work at all.)

A Change of Mindset

An eye-opening story in Bloomberg offers a glimpse into the workings of YouTube and how its business model incentivizes the spread of misinformation:

The conundrum isn’t just that videos questioning the moon landing or the efficacy of vaccines are on YouTube. The massive “library,” generated by users with little editorial oversight, is bound to have untrue nonsense. Instead, YouTube’s problem is that it allows the nonsense to flourish. And, in some cases, through its powerful artificial intelligence system, it even provides the fuel that lets it spread.

Why does this happen?

The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

In 2012, YouTube set a new objective: reach one billion hours of viewing per day. This led to a new recommendation algorithm designed to increase engagement. The company achieved its billion-hour-per-day goal in 2016 — not coincidentally, the year when the degree to which such engagement-driven systems influence politics (and society as a whole) became apparent.
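
Bloomberg describes the mechanism in business terms, but the logic is easy to caricature in code. Below is a deliberately simplified sketch, under the assumption that ranking is driven by predicted watch time; the videos and their scores are invented.

```python
# Caricature of an engagement-ranked recommender. Videos and scores are
# invented. Note that the objective contains no term for truthfulness,
# so nothing in it penalizes misinformation.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # the engagement proxy being maximized
    accuracy: float                 # a value the objective never sees

candidates = [
    Video("Careful vaccine explainer", 4.2, accuracy=0.95),
    Video("Moon-landing 'expose'", 11.7, accuracy=0.05),
    Video("Middling vlog", 6.1, accuracy=0.70),
]

# Rank purely on predicted engagement, as the goal dictates.
ranked = sorted(candidates,
                key=lambda v: v.predicted_watch_minutes, reverse=True)
for v in ranked:
    print(f"{v.predicted_watch_minutes:5.1f} min  "
          f"acc={v.accuracy:.2f}  {v.title}")
# The conspiracy video tops the list; the system is working exactly as
# its goal specifies.
```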

Yesterday I was teaching my students about Donella Meadows’ fantastic essay, Places to Intervene in a System. In this work, Ms. Meadows offers a hierarchy of “leverage points” — things you can tweak to make systems work differently. They are, in order of least to most impactful:

  1. Numbers (e.g., taxes, standards)
  2. Material stocks and flows
  3. Regulating negative feedback loops
  4. Driving positive feedback loops
  5. Information flows
  6. The rules of the system (e.g., incentives, punishment, constraints)
  7. The power of self-organization
  8. The goals of the system
  9. The mindset or paradigm out of which the system arises

Note the prominent position of goals in this list. Few things are as influential in shaping a system as setting clear goals and incentivizing people to reach them. By setting engagement as a goal, YouTube’s leadership created the conditions that allowed misinformation to flourish in its system. We’re all paying the price for the damage caused by outrage-mongers on systems like YouTube. The erosion of our ability to hold civic discourse, political polarization, the spread of maladaptive memes, etc. are externalities unaccounted for in these companies’ bottom lines.

As Ms. Meadows points out, the only thing more powerful than goals is the paradigm out of which the goals emerge. YouTube emerged from a worldview that precedes the internet, big data, and “smart” algorithms. These things add up to something that isn’t a bigger/faster version of earlier communication systems — it’s a paradigm shift. We’re undergoing a transformation at least as significant as that wrought by the movable type printing press, which precipitated significant social and economic changes (especially in the West, but ultimately around the world).

We’re still too close to the beginning of our current transformation to know what socioeconomic structures will ultimately emerge. But one thing is sure: a shift in mindset is required. The scale of the potential social impact of systems like YouTube calls for revisiting things we’ve long taken for granted, such as the role of for-profit companies in society and the meaning of key concepts such as freedom of speech and the rights of the individual. The question isn’t whether we’ll have to change our mindset — rather, it’s how much turbulence and suffering we’ll experience as a result. We should do all we can to minimize both.