Table Stakes

Yesterday I was running an errand with my daughter. Our conversation drifted towards Mel Blanc. I explained how Mr. Blanc voiced most of the Looney Tunes characters and how I’d seen a hilarious interview years before in which he went through various voices. A “you had to be there” experience.

Then something amazing happened. Rather than (inevitably) mangle the retelling of Mr. Blanc’s remarkable abilities, we pulled out my iPhone. Within seconds she was looking at the interview, which is available — along with so much else — on YouTube. She chuckled along. Our conversation continued. When, she wondered, was Mel Blanc alive? I said I thought he’d died in the early 90s, but that we may as well check. I long-pressed the phone’s home button to invoke Siri. I said, “When did Mel Blanc die?” The reply came almost immediately: “Mel Blanc died July 10, 1989 at age 81 in Los Angeles.”

One of my favorite quotes is from Charles Eames:

Eventually everything connects — people, ideas, objects. The quality of the connections is the key to quality per se.

I’ve been using an iPhone for over a decade. Even so, I’m still astonished at the quality of connections I can make from this device I carry in my pocket. What’s more, having such a device isn’t a luxury afforded to only a small fraction of the population. Almost everybody has similar access.

Alas, the ubiquity of the experience has made it table stakes; we take it for granted. Of course you shot 4K video of the birthday party. Of course you cleared your inbox while waiting for public transport. Of course you know how to get there. (What with all the maps of the world and a GPS receiver in your pocket!) Everybody does.

How do we account for everyone having instant access to any piece of information anywhere at any time? Surely not with measures established in and for the world that existed before the small glass rectangles.

Striving for Simplicity

Over a decade and a half ago, I was at an office party. (This was during the brief part of my career when I was in-house at a large corporation.) Among other amenities, the party featured a cartoonist; the type of artist you see drawing quick, exaggerated portraits at fairs. The artist was hired to draw each team member, highlighting striking things about us: quirky hobbies, particular styles of dress or grooming, tics, etc. I don’t remember if it was at my suggestion or that of my co-workers, but my cartoon showed me surrounded by electronic gadgets: mobile phones, MP3 players (a new thing at the time), notebook computers, cameras, etc. That’s how I saw myself and how my colleagues thought of me; tech was a primary part of my identity.

I’ve long been representative of the “early adopter” demographic. Being alive (and privileged enough to have some discretionary income) during the Moore’s Law years has meant seeing (and benefiting from) tremendous advances in many areas of life. Consider the way we listen to music. In the span of a few years, my entire music collection went from heavy boxes filled with clunky cassette tapes to a few light(er) CD cases to a tiny device not much bigger than a single cassette tape. Portable MP3 players represented a tangible improvement to that part of my life. The same thing has happened with photography, movies, reading, etc. It’s been exciting for me to stay up-to-date with technology.

That said, as I’ve grown older, I’ve become more aware of the costs of new things. I’m not just talking about the money needed to acquire them; every new thing that comes into my life adds some cognitive cost. For example, there’s the question of what to do with the thing(s) it replaces. (I still have cases full of plastic discs in my attic. I’m unsure what to do with them, considering the amount of money I’ve already sunk into them.)


The Treachery of Deepfakes

Ninety years ago, René Magritte painted a pipe. I’m sure you’ve seen the work; it’s among his most famous. Written under the rendering of the object are the words Ceci n’est pas une pipe — “This is not a pipe.” Huh? Well, it isn’t; it’s a representation of a pipe. Clever stuff.

[Image: René Magritte, La Trahison des images (“The Treachery of Images”)]

The painting is called La Trahison des images — “The Treachery of Images.” Treachery is deception; a betrayal of our trust. The painting tricks us by simulating a familiar object. Aided by the charming image, our mind conceives the pipe. We recall experiences with the real thing: its size, weight, texture, the smell of tobacco. Suddenly we’re faced with a conundrum. Is this a pipe or not? At one level it is, but at another it isn’t.

The Treachery of Images requires that we make a conceptual distinction between the representation of an object and the object itself. While it’s not a subtle distinction (as far as I know, nobody has tried to smoke Magritte’s painting), it’s important because it highlights the challenges inherent in using symbols to represent reality.

The closer these symbols are to the thing they’re representing, the more compelling the simulation. Compared to many of Magritte’s contemporaries, his style is relatively faithful to the “real world.” That said, it’s not what we call photo-realistic. (That is, an almost perfect two-dimensional representation of the real thing. Or rather, a perfectly rendered representation of a photograph of the real thing.)

Magritte’s pipe is close enough. I doubt the painting would be more effective if it featured a “perfect” representation; its “painting-ness” is an important part of what makes it effective. The work’s aim isn’t to trick us into thinking that we’re looking at a pipe, but to spark a conversation about the difference between an object and its symbolic representation.

The distance between us and the simulation is enforced by the medium in which we experience it. You’re unlikely to be truly misled while standing in a museum in front of the physical canvas. That changes, of course, if you’re experiencing the painting in an information environment such as the website where you’re reading these words. Here, everything collapses onto the same level.

There’s a photo of Magritte’s painting at the beginning of this post. Did you confuse it with the painting itself? I’m willing to bet that at one level you did. This little betrayal serves a noble purpose; I wanted you to be clear on which painting I was discussing. I also assumed you’d know that this representation of the representation wasn’t the “real” one. (There was no World Wide Web ninety years ago.) No harm meant.

That said, as we move more of our activities to information environments, it becomes harder for us to make these distinctions. We get used to experiencing more things in these two-dimensional symbolic domains. Not just art, but also shopping, learning, politics, health, taxes, literature, mating, etc. Significant swaths of human experience collapsed to images and symbols.

Some, like my citing of The Treachery of Images, are relatively innocent. Others are actually and intentionally treacherous. As in: designed to deceive. The rise of these deceptions is inevitable; the medium makes them easy to accept and disseminate, and simulation technologies keep getting better. That’s why you hear increasing concern about deepfakes in the news.

Recently, someone commercialized an application that strips women of their clothes. Well, not really — it strips photographs of women of their clothes. That makes it only slightly less pernicious; such capabilities can do very real harm. The app has since been pulled from the market, but I’m confident that won’t be the last we see of this type of treachery.

It’s easy to point to that case as an obvious misuse of technology. Others will be harder. Consider “FaceTime Attention Correction,” a new capability coming in iOS 13. Per The Verge, this seemingly innocent feature corrects a long-standing issue with video calls:

Normally, video calls tend to make it look like both participants are peering off to one side or the other, since they’re looking at the person on their display, rather than directly into the front-facing camera. However, the new “FaceTime Attention Correction” feature appears to use some kind of image manipulation to correct this, and results in realistic-looking fake eye contact between the FaceTime users.

What this seems to be doing is re-rendering parts of your face on-the-fly while you’re on a video call so the person on the other side is tricked into thinking you’re looking directly at them.
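To make the idea concrete, here’s a toy sketch of this kind of gaze “correction.” To be clear: this is not Apple’s implementation, which is undocumented; it’s only a crude illustration of the general move of re-rendering eye regions in a video frame. It assumes Python with the opencv-python package, and the function name correct_gaze is my own invention:

    import cv2

    # Toy sketch only, NOT Apple's implementation. Crudely simulates "eye
    # contact" by pasting detected eye regions a few pixels higher, as if
    # the speaker were looking at the camera rather than at the screen.
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    def correct_gaze(frame, shift_px=4):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in eye_cascade.detectMultiScale(gray, 1.3, 5):
            eye = frame[y:y + h, x:x + w].copy()
            new_y = max(0, y - shift_px)
            frame[new_y:new_y + h, x:x + w] = eye  # overwrite, shifted up
        return frame

Run per frame over a webcam stream, something like this would produce a subtly (and clumsily) altered gaze; presumably Apple’s version does the re-rendering far more convincingly.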

While this sounds potentially useful, and the technology behind it is clever and cool, I’m torn. Eye contact is an essential cue in human communication. We get important information from our interlocutor’s eyes. (That’s why we say the eyes are the “windows to the soul.”) While meeting remotely using video is nowhere near as rich as meeting in person, we communicate better using video than when using voice only. Do we really want to mess around with something as essential as the representation of our gaze?

In some ways, “Attention Correction” strikes me as more problematic than other examples of deep fakery. We can easily point to stripping clothes off photographs, changing the cadence of politicians’ speeches in videos, or simulating an individual’s speech patterns and tone as either obviously wrong or (in the latter case) at least ethically suspect. Our repulsion makes them easier to regulate or shame off the market. It’s much harder to say that altering our gaze in real-time isn’t ethical. What’s the harm?

Well, for one, it messes around with one of our most fundamental communication channels, as I said above. It also normalizes the technologies of deception; it puts us on a slippery slope. First the gaze, then… What? A haircut? Clothing? Secondary sex characteristics? Given realistic avatars, perhaps eventually we can skip meetings altogether.

Some may relish the thought, but not me. I’d like more human interactions in information environments. Currently, when I look at the smiling face inside the small glass rectangle, I think I’m looking at a person. Of course, it’s not a person. But there’s no time (or desire) during the interaction to snap myself out of the illusion. That’s okay. I trust that there’s a person on the other end, and that I’m looking at a reasonably trustworthy representation. But for how much longer?

A Data Primer for Designers

My friend Tim Sheiner, writing for the Salesforce UX blog:

demand is high for designers who can create experiences that display data in useful and interesting ways. In my personal experience this became much, much easier to do once I’d learned to speak the crisp, precise and slightly odd language used by technical people for talking about data.

What follows is a phenomenal post that clearly explains much of what you need to know to understand and speak competently about data. A must-read for anybody involved in designing for digital information environments.

Designer’s Field Guide to Data

Uses for YouTube

YouTube has long been in the “guilty pleasure” category for me: a source of vacuous entertainment. There’s the hit of nostalgia upon discovering old episodes of a show you enjoyed as a child, vicarious consumption through unboxing videos, the mildly voyeuristic thrill of peeking down other people’s rabbit holes. While enjoyable, I’ve always felt somewhat guilty about these uses for YouTube; it’s been a (mostly) pleasant, if not entirely harmless, waste of time.

But something has changed recently: I’ve found myself getting real value from YouTube. Instead of (or rather, in addition to) turning to the platform for mindless distraction, I’m coming to it more for task-specific training. For example, yesterday I learned how to mend a pair of jeans that had a hole in them. I’ve also used YouTube to learn about the characteristics of different types of fountain pen inks, the proper form for a yoga pose I find particularly challenging, how to play one of my favorite songs (Rush’s Subdivisions) on the piano, and critical information that helped me with various work projects.

Which is to say, I’m increasingly using YouTube not just for entertainment, but also for education. Learning these things in video format has been much more efficient than doing so by other means. I can see what the other person is showing me, and I can rewind, pause, and replay to go at my own pace. There are often several options to choose from, with varying levels of skill. (Skill both at the activity I’m trying to learn and at teaching it.)

Most of these educational videos aren’t slickly produced by professional educators; they’re made by individuals sharing their passions. They often make up for the lack of polish and structure with charm and enthusiasm. In short, they’re educational and entertaining. But it’s a new type of entertainment, very different from the prime time TV programming of old.

YouTube offers an ad-free tier called YouTube Premium. I’ve long resisted paying for it given how many other streaming entertainment channels I’m already paying for. But thinking about how I’m using these things, I’ve decided to give it a go. If I had to choose between two paid streaming services, should I go with the one that only shows me slickly produced movies and TV shows, or should I go with the one where I’ll be learning useful life skills?

(One complaint I have about YouTube Premium right now is that it seems to aspire to become another “just entertainment” medium. Rather than foist second-tier movies on me, I wish it’d be better at helping me discover new things to learn.)

Wikipedia as Information Infrastructure

Wikipedia is more than a publication. As I point out in Living in Information, Wikipedia is also the place where this publication is created. At its scale, it couldn’t happen otherwise. But Wikipedia is more than that: increasingly, it’s also a key part of our society’s information infrastructure. Other systems rely on it for the “authoritative” versions of particular concepts.
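As a concrete illustration, here’s a minimal sketch of how a downstream system might pull Wikipedia’s description of a concept through the public Wikimedia REST API. This is an assumption about what such integrations look like in general, not how Siri or Google actually work, and it presumes Python with the requests package:

    import requests

    # Minimal sketch: fetch Wikipedia's summary of a topic via the public
    # Wikimedia REST API. A consuming system would treat the response as
    # the "authoritative" description of the concept.
    def wikipedia_summary(title: str) -> str:
        url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
        resp = requests.get(url, headers={"User-Agent": "summary-sketch/0.1"})
        resp.raise_for_status()
        return resp.json().get("extract", "")

    print(wikipedia_summary("Mel_Blanc"))

Whatever the endpoint returns is, to the consuming system, simply the answer; there’s no independent check on whether the underlying entry has been vandalized.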

This works well most of the time. But it’s not perfect, and can lead to weird, unexpected consequences. For example, a Wikipedia entry is part of the reason why Google says I’m dead. More recently, a Wikipedia hack led to Siri showing a photo of a penis whenever a user asked about Donald Trump. While the former example is probably due to bad algorithms on Google’s part, the latter seems to be a fault with Wikipedia’s security mechanisms.

The people who manage Wikipedia are in an interesting situation. Over time they’ve created a fantastic system that allows for the efficient, bottom-up creation of organized content at tremendous scale. They’ve been incredibly successful. Alas, with success comes visibility and influence. The more systems depend on Wikipedia’s content, the more of a target it becomes for malicious actors.

This will require that the people who run Wikipedia rethink some of the system’s openness and flexibility in favor of more top-down control. How will this scale? Who will have a say in content decisions? How will Wikipedia’s governance structures evolve? These discussions are playing out right now. Wikipedia is a harbinger of future large-scale generative information environments, so it behooves us all to follow along.

The Eponymous Laws of Tech

Dave Rupert has a great compendium of “Laws” we frequently encounter when working in tech. This includes well-known concepts like Moore’s Law, Godwin’s Law, and Dunbar’s Number alongside some I hadn’t heard before, such as Tessler’s Law:

"Every application must have an inherent amount of irreducible complexity. The only question is who will have to deal with it."
Tessler’s Law or the “Law of Conservation of Complexity” explains that not every piece of complexity can be hidden or removed. Complexity doesn’t always disappear, it’s often just passed around. Businesses need to fix these complex designs or that complexity is passed on to the user. Complex things are hard for users. 1 minute wasted by 1 million users is a lot of time where as it probably would have only taken a fraction of those minutes to fix the complexity. It cringe thinking about the parts of my products that waste users’ time by either being brokenly complex or by having unclear interactions.
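To put that arithmetic in perspective: 1,000,000 wasted minutes is roughly 16,667 hours, or about 694 days of cumulative human attention, set against a fix that might have cost the team a few hours.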

Good to know!

The Eponymous Laws of Tech

Kranzberg’s Laws of Technology

Michael Sacasas explains Kranzberg’s Six Laws of Technology, “a series of truisms deriving from a longtime immersion in the study of the development of technology and its interactions with sociocultural change”:

  1. Technology is neither good nor bad; nor is it neutral.
  2. Invention is the mother of necessity.
  3. Technology comes in packages, big and small.
  4. Although technology might be a prime element in many public issues, nontechnical factors take precedence in technology-policy decisions.
  5. All history is relevant, but the history of technology is the most relevant.
  6. Technology is a very human activity—and so is the history of technology.

A nuanced take on technology’s role in shaping our lives and societies.

Kranzberg’s Six Laws of Technology, a Metaphor, and a Story

The Allure of Novelty

It’s that time of year again: Tech companies are announcing new products in preparation for the holiday season. Over the past month, a slate of new phones, tablets, computers, and accessories has been announced. You may be considering buying one or more of these new devices. It’s worth thinking about whether you really need them.

As an Apple customer (and something of a gadget junkie), I’ve been intrigued by the new Apple Watch and the new iPad Pro. I already own earlier editions of both devices and was perfectly happy with them just a few months ago. But now I’m not. Now, when I look at my Apple Watch, I wonder: what if I could use it to play podcasts when I go running? What if its battery lasted the whole day? What if it was a little bit faster? What if… ? I know about the newer model, and can’t help but think about all the neat things it can do that mine can’t.

The iPad is a different story. While the new one looks quite nice, it’s not as clear to me how it would make my life better in ways the one I own can’t. Most of the new models’ features seem to be cosmetic: larger screens, smaller bezels, slightly different form factors, etc. Perhaps the new models are also a bit faster, but not in ways that would make much difference; my current iPad is plenty fast. The new Apple Pencil—the accessory I use most with the iPad—also looks much nicer than the old one, but seems functionally similar to the one I already own.

Would it be cool to have new devices for the holidays? Sure, it’d be fun. But it’s worth considering the tradeoffs that come with them. The most obvious, of course, is money. These things aren’t cheap! But there’s also the time they require: Time to research what to buy, time to set things up/migrate from older devices, time dealing with support if things go wrong. (I purchased a MacBook Pro earlier this year, and it’s already been back to Apple for service four times!) New tech can be quite a time sink.

How do you determine if the tradeoffs are worth it? For me, it comes down to figuring out whether I really need a new device or not. These questions help:

  • Does the new model enable me to do something I currently can’t?
  • Does the new model enable me to do something I can do now, but significantly faster or more efficiently?
  • Is there something I already own (or have access to) that could help me accomplish similar results, even if a little less conveniently?
  • Do I have the money/time to mess around with this stuff now? Or are there other things that require my money/attention with more urgency?
  • What do the new devices do worse than the old ones? (For example, a few things about the new iPads actually work better in the model I currently own!)
  • Am I using my current devices to the fullest of their capacity?

Novelty can be very alluring, especially during this time of year when advertising is in full force. But when I reflect upon these questions, I often realize that I may be better served by keeping my current devices longer and investing my time and money in other things.