The Golden Age of Learning Anything

Last weekend I did something I’d never done before: I reupholstered a chair. Here’s a photo of the final result:

Unfortunately, I don’t have a “before” photo to share. But take my word for it: my efforts improved this chair’s condition significantly. Before this weekend, it was unpresentable. Little fingers love to tug on tiny tears in vinyl until they become large, unsightly ones. Alas, it’s cheaper to buy new reproductions such as this one than to have them professionally reupholstered. But my conscience doesn’t let me throw away an otherwise good piece of furniture because of a fixable imperfection.

I’m sharing my weekend project here not to seek your approbation. Instead, I want to highlight that we live in a time when we can learn almost any skill on the internet. I learned to reupholster in a couple of hours through a combination of websites and YouTube videos. I researched and ordered the required materials on Amazon. It took some effort on my part, but it was worth it. I’m surprised at how well the chair worked out, given it was my first time.

As we head into a new year, I keep seeing pundits on Twitter claiming “big tech” is ruining everything. Of course, the real world isn’t as simple as these folks render it. Sure, there are negative aspects to our current large tech platforms — but there are positive ones too. The ability to pick up new knowledge and skills anytime, at our own pace, and very cheaply is among the positives.

That Syncing Feeling

There was a time, many years ago, when I used only one computer for my day-to-day work. It was a laptop, and it was with me most of the time, at least during the workday. I accessed my digital information exclusively on this device: email, files, etc. I kept my calendar on a (paper-based) Franklin Planner. For mobile communications, I used a beeper. I told you it was a long time ago — a simpler time.

Then a new device came on the market, the Palm Pilot:

Image: Wikimedia. (https://en.wikipedia.org/wiki/PalmPilot#/media/File:Palm-IMG_7025.jpg)

It was like the paper planner, only digital: it could store your calendar, address book, to-dos, and such. You’d write on it using a gesture alphabet called Graffiti, which you had to learn before you could use the device. But most importantly, you could also sync it with your computer’s calendar, address book, etc. You did this by setting it in a cradle that came with the device and pushing a button. You connected the cradle to the computer with a serial cable and installed an app on your computer to manage communications between the devices. It was crude and complex, and I loved it. The prospect of having my personal information in digital format with me anywhere was very compelling.
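The heart of that button press was reconciliation: merging two copies of the same records so neither device lost an edit. Here’s a minimal, hypothetical sketch in Python of a last-writer-wins merge between a handheld’s address book and the desktop copy. It’s a toy illustration of the general idea, not Palm’s actual HotSync protocol, and all the names are made up.

```python
from dataclasses import dataclass

@dataclass
class Record:
    id: str         # stable identifier shared by both copies
    name: str
    phone: str
    modified: int   # timestamp of the last edit (e.g., seconds since the epoch)

def sync(handheld: dict[str, Record], desktop: dict[str, Record]) -> dict[str, Record]:
    """Merge two copies of an address book, keeping the newest edit of each record."""
    merged = {}
    for rid in handheld.keys() | desktop.keys():
        a, b = handheld.get(rid), desktop.get(rid)
        if a is None:
            merged[rid] = b      # record exists only on the desktop
        elif b is None:
            merged[rid] = a      # record exists only on the handheld
        else:
            merged[rid] = a if a.modified >= b.modified else b  # last writer wins
    return merged
```

Real sync software also has to handle deletions and genuine conflicts (the same record edited in both places), which is part of what made those early desktop conduits feel so crude.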


Being Open to Unsettling Changes

My post about watching movies at double-speed elicited strong reactions. Some folks seem convinced that giving people the ability to watch movies faster will diminish the viewing experience, and not just for them — for everyone. Why? Because such changes inevitably influence the medium itself.

Consider the way sound changed movies. “Talking” pictures did away with title cards. That was a significant change to the medium, which was wrought by advances in technology. Once it was possible to synchronize sound and pictures, irreversible changes to the medium were inevitable.

Are movies with sound better or worse than what came before? That’s a judgment call; it depends on your point of view. You and I grew up in a world of talking pictures, so the silent ones, with their title cards, seem old and clunky. But they had their merits too: a certain literary quality, for one, and many featured live musical performances, which made them more of an event than pre-recorded movies. I can imagine somebody who grew up with silent movies becoming attached to the way they were.


Quantum Supremacy

Earlier this week, Google researchers announced a major computing breakthrough in the journal Nature:

Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times—our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
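To get a sense of the scale of that claim, here’s a quick back-of-the-envelope calculation using only the figures quoted above; the 200 seconds and the 10,000-year classical estimate are Google’s numbers, and the rest is simple arithmetic.

```python
# Rough comparison of the two runtimes quoted above.
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60          # about 31.6 million seconds

sycamore_seconds = 200                            # Sycamore's reported sampling time
classical_seconds = 10_000 * SECONDS_PER_YEAR     # Google's estimate for a classical supercomputer

speedup = classical_seconds / sycamore_seconds
print(f"Claimed speedup: about {speedup:.1e}x")   # roughly 1.6 billion times faster
```

That’s a gap of roughly nine orders of magnitude, at least for this one carefully chosen benchmark task.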

Quantum supremacy heralds an era not merely of faster computing, but one in which computers can solve new types of problems. There was a time when I’d have expected such breakthroughs from “information technology” companies such as IBM. But Google’s tech is ultimately in service to another business: advertising.

Perspective on “Digital”

Twenty-five years ago, I left my career in architecture. I’d been working in the building-design trade for about a year. Then something happened that led me to abandon the profession I’d trained for and which I’d pined for only a few years earlier: I saw Mosaic, the first widely available web browser.

I’d been aware of the Internet while I was a university student. I thought of it as a command-line system, one that was mostly useful for email. But the web was different. It was visual. It was easy to use, both for end-users and content authors. Anyone could publish at global scale, cheaply. It was clear to me that making the world’s information available through the web would change our reality in significant ways. I cast my lot with “digital” and haven’t looked back.

A quarter of a century later, I still love the web. I love what it’s become – even with its (many) flaws. But more importantly, I love what it can be — its latent potential to improve the human condition. There’s so much work to be done.

But there’s a lot of negativity centered on digital technology these days. Here’s a sample of headlines from major publications from the last few months:

These stories are representative of a melancholic tone that’s become all too common. Our pervasive digital technologies have wrought significant changes in old ways of being in the world. Some people miss the old ways; others are perplexed or alarmed. That’s understandable, but it couldn’t be otherwise: the internet and the constellation of services it enables are profoundly disruptive.

Social structures don’t remain unchanged after encountering such forces. The introduction of the printing press led to social, political, and scientific revolutions — including the Reformation. These weren’t small, incremental changes to the social fabric; they shattered the then-current reality and reconfigured it in new and surprising ways. The process wasn’t smooth or easy.

Digital is more radically transformative than the printing press. It’s folly to expect long-established social structures will stand as before. The question isn’t whether our societies will change, but whether the change will be for better or worse. That’s still an open question. I’m driven by the idea that those of us working in digital have the opportunity to help steer the outcome towards humane ends.

Table Stakes

Yesterday I was running an errand with my daughter. Our conversation drifted towards Mel Blanc. I explained how Mr. Blanc voiced most of the Looney Tunes characters and how I’d seen a hilarious interview years before in which he went through various voices. A “you had to be there” experience.

Then something amazing happened. Rather than (inevitably) mangle the retelling of Mr. Blanc’s amazing abilities, we pulled out my iPhone. Within seconds she was looking at the interview, which is available — along with so much else — on YouTube. She chuckled along. Our conversation continued. When, she wondered, was Mel Blanc alive? I said I thought he’d died in the early 90s, but that we may as well check. I long-pressed the phone’s home button to invoke Siri. I said, “When did Mel Blanc die?” The reply came almost immediately: “Mel Blanc died July 10, 1989 at age 81 in Los Angeles.”

One of my favorite quotes is from Charles Eames:

Eventually everything connects — people, ideas, objects. The quality of the connections is the key to quality per se.

I’ve been using an iPhone for over a decade. Even so, I’m still astonished at the quality of connections I can make from this device I carry in my pocket. What’s more, having such a device isn’t a luxury afforded to only a small fraction of the population. Almost everybody has similar access.

Alas, the ubiquity of the experience has made it table stakes; we take it for granted. Of course you shot 4K video of the birthday party. Of course you cleared your inbox while waiting for public transport. Of course you know how to get there. (What with all the maps of the world and a GPS receiver in your pocket!) Everybody does.

How do we account for everyone having instant access to any piece of information anywhere at any time? Surely not with measures established in and for the world that existed before the small glass rectangles.

Striving for Simplicity

Over a decade and a half ago, I was at an office party. (This was during the brief part of my career when I was in-house at a large corporation.) Among other amenities, the party featured a cartoonist, the kind of artist you see drawing quick, exaggerated portraits at fairs. The artist was hired to draw each team member, highlighting striking things about us: quirky hobbies, particular styles of dress or grooming, tics, etc. I don’t remember if it was at my suggestion or that of my co-workers, but my cartoon showed me surrounded by electronic gadgets: mobile phones, MP3 players (a new thing at the time), notebook computers, cameras, etc. That’s how I saw myself and how my colleagues thought of me: tech was a primary part of my identity.

I’ve long been representative of the “early adopter” demographic. Being alive (and privileged enough to have some discretionary income) during the Moore’s Law years has meant seeing (and benefiting from) tremendous advances in many areas of life. Consider the way we listen to music. In the span of a few years, my entire music collection went from heavy boxes filled with clunky cassette tapes to a few light(er) CD cases to a tiny device not much bigger than a single cassette tape. Portable MP3 players represented a tangible improvement to that part of my life. The same thing has happened with photography, movies, reading, etc. It’s been exciting for me to stay up-to-date with technology.

That said, as I’ve grown older, I’ve become more aware of the costs of new things. I’m not just talking about the money needed to acquire them; every new thing that comes into my life adds some cognitive cost. For example, there’s the question of what to do with the thing(s) it replaces. (I still have cases full of plastic discs in my attic. I’m unsure what to do with them, considering the amount of money I’ve already sunk into them.)


The Treachery of Deepfakes

Ninety years ago, René Magritte painted a pipe. I’m sure you’ve seen the work; it’s among his most famous. Written under the rendering of the object are the words Ceci n’est pas une pipe — “This is not a pipe.” Huh? Well, it isn’t; it’s a representation of a pipe. Clever stuff.

The Treachery of Images

The painting is called La Trahison des images — “The Treachery of Images.” To be treacherous is to deceive, to betray our trust. The painting tricks us by simulating a familiar object. Aided by the charming image, our mind conjures the pipe. We recall experiences with the real thing — its size, weight, texture, the smell of tobacco, etc. Suddenly we’re faced with a conundrum. Is this a pipe or not? At one level it is, but at another it isn’t.

The Treachery of Images requires that we make a conceptual distinction between the representation of an object and the object itself. While it’s not a nuanced distinction – as far as I know, nobody has tried to smoke Magritte’s painting – it’s important since it highlights the challenges inherent in using symbols to represent reality.

The closer these symbols are to the thing they’re representing, the more compelling the simulation. Compared to that of many of his contemporaries, Magritte’s style is relatively faithful to the “real world.” That said, it’s not what we call photo-realistic. (That is, an almost perfect two-dimensional representation of the real thing. Or rather, a perfectly rendered representation of a photograph of the real thing.)

Magritte’s pipe is close enough. I doubt the painting would be more effective if it featured a “perfect” representation; its “painting-ness” is an important part of what makes it effective. The work’s aim isn’t to trick us into thinking that we’re looking at a pipe, but to spark a conversation about the difference between an object and its symbolic representation.

The distance between us and the simulation is enforced by the medium in which we experience it. You’re unlikely to be truly misled while standing in a museum in front of the physical canvas. That changes, of course, if you’re experiencing the painting in an information environment such as the website where you’re reading these words. Here, everything collapses onto the same level.

There’s a photo of Magritte’s painting at the beginning of this post. Did you confuse it with the painting itself? I’m willing to bet that at one level you did. This little betrayal serves a noble purpose; I wanted you to be clear on which painting I was discussing. I also assumed that you’d know that that representation of the representation wasn’t the “real” one. (There was no World Wide Web ninety years ago.) No harm meant.

That said, as we move more of our activities to information environments, it becomes harder for us to make these distinctions. We get used to experiencing more things in these two-dimensional symbolic domains. Not just art, but also shopping, learning, politics, health, taxes, literature, mating, etc. Significant swaths of human experience collapsed to images and symbols.

Some, like my citing of The Treachery of Images, are relatively innocent. Others are actually and intentionally treacherous. As in: designed to deceive. The rise of these deceptions is inevitable; the medium makes them easy to accept and disseminate, and simulation technologies keep getting better. That’s why you hear increasing concern about deepfakes in the news.

Recently, someone commercialized an application that strips women of their clothes. Well, not really — it strips photographs of women of their clothes. That makes it only slightly less pernicious; such capabilities can do very real harm. The app has since been pulled from the market, but I’m confident that won’t be the last we see of this type of treachery.

It’s easy to point to that case as an obvious misuse of technology. Others will be harder. Consider “FaceTime Attention Correction,” a new capability coming in iOS 13. Per The Verge, this seemingly innocent feature corrects a long-standing issue with video calls:

Normally, video calls tend to make it look like both participants are peering off to one side or the other, since they’re looking at the person on their display, rather than directly into the front-facing camera. However, the new “FaceTime Attention Correction” feature appears to use some kind of image manipulation to correct this, and results in realistic-looking fake eye contact between the FaceTime users.

What this seems to be doing is re-rendering parts of your face on-the-fly while you’re on a video call so the person on the other side is tricked into thinking you’re looking directly at them.

While this sounds potentially useful, and the technology behind it is clever and cool, I’m torn. Eye contact is an essential cue in human communication. We get important information from our interlocutor’s eyes. (That’s why we say the eyes are the “windows to the soul.”) While meeting remotely using video is nowhere near as rich as meeting in person, we communicate better using video than when using voice only. Do we really want to mess around with something as essential as the representation of our gaze?

In some ways, “Attention Correction” strikes me as more problematic than other examples of deep fakery. We can easily point to stripping clothes off photographs, changing the cadence of politicians’ speeches in videos, or simulating an individual’s speech patterns and tone as either obviously wrong or (in the latter case) at least ethically suspect. Our repulsion makes them easier to regulate or shame off the market. It’s much harder to say that altering our gaze in real-time isn’t ethical. What’s the harm?

Well, for one, it messes around with one of our most fundamental communication channels, as I said above. It also normalizes the technologies of deception; it puts us on a slippery slope. First the gaze, then… What? A haircut? Clothing? Secondary sex characteristics? Given realistic avatars, perhaps eventually we can skip meetings altogether.

Some may relish the thought, but not me. I’d like more human interactions in information environments. Currently, when I look at the smiling face inside the small glass rectangle, I think I’m looking at a person. Of course, it’s not a person. But there’s no time (or desire) during the interaction to snap myself out of the illusion. That’s okay. I trust that there’s a person on the other end, and that I’m looking at a reasonably trustworthy representation. But for how much longer?

A Data Primer for Designers

My friend Tim Sheiner, writing for the Salesforce UX blog:

demand is high for designers who can create experiences that display data in useful and interesting ways. In my personal experience this became much, much easier to do once I’d learned to speak the crisp, precise and slightly odd language used by technical people for talking about data.

What follows is a phenomenal post that clearly explains much of what you need to know to understand and speak competently about data. A must-read for anybody involved in designing for digital information environments.

Designer’s Field Guide to Data