Via Kenny Chen’s newsletter, I learned about Tricycle, a set of tools “that help you design products powered by AI.” I remember seeing tweets last year from Jordan Singer (Tricycle’s creator) that highlighted some of this functionality. Now it looks like Singer is productizing a bundle of GPT-3-powered Figma automation tools.
Adobe’s Patrick Hebron, in an interview for Noema (from September 2020):
If you’re building a tool that gets used in exactly the ways that you wrote out on paper, you shot very low. You did something literal and obvious.
The relationship between top-down direction and bottom-up emergence is a central tension in the design of complex systems. Without some top-down direction, the system won’t fulfill its purposes. However, if it doesn’t allow for bottom-up adjustments, the system won’t adapt to conditions on the ground — i.e., it won’t be actualized as a real thing in the world. What’s needed is a healthy balance between bottom-up and top-down.
James Manyika, Jake Silberg, and Brittany Presten writing for the Harvard Business Review:
AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.
The phrase “artificial intelligence” is leading us astray. For some folks, it’s become a type of magical incantation that promises to solve all sorts of problems. Much of what goes by AI today isn’t magic — or intelligence, really; it’s dynamic applied statistics. As such, “AI” is highly subject to the data being analyzed and the structure of that data. Garbage in, garbage out.
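“Dynamic applied statistics” is easy to demonstrate. The sketch below is a deliberately trivial “model” (invented for illustration; no real ML library or system is implied): it does nothing but count label frequencies in its training data, so any skew in that data becomes its output. Garbage in, garbage out.

```python
from collections import Counter

def train(examples):
    """'Train' a trivially simple model: tally the labels seen for each
    feature value. The model is nothing but statistics over its input."""
    counts = {}
    for feature, label in examples:
        counts.setdefault(feature, Counter())[label] += 1
    return counts

def predict(model, feature):
    """Predict the label most frequently seen for this feature."""
    return model[feature].most_common(1)[0][0]

# Hypothetical historical hiring data in which group "A" was mostly
# approved and group "B" mostly rejected -- for reasons that may have
# had nothing to do with merit.
biased_data = [("A", "hire")] * 90 + [("A", "reject")] * 10 \
            + [("B", "hire")] * 10 + [("B", "reject")] * 90

model = train(biased_data)
print(predict(model, "A"))  # hire
print(predict(model, "B"))  # reject -- the bias, faithfully reproduced
```

Real models are vastly more sophisticated, but the principle holds: a system fit to data inherits the structure of that data, biases included.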
It’s important for business leaders to learn about how AI works. The HBR post offers a good summary of the issues and practical recommendations for leaders looking to make better decisions when implementing AI-informed systems — which we all should be:
Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results. Business and organizational leaders need to ensure that the AI systems they use improve on human decision-making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI.
From an insightful (and terrifying) article in The Atlantic by Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher about the potential impact of AI on our civilization:
The challenge of absorbing this new technology into the values and practices of the existing culture has no precedent. The most comparable event was the transition from the medieval to the modern period. In the medieval period, people interpreted the universe as a creation of the divine and all its manifestations as emanations of divine will. When the unity of the Christian Church was broken, the question of what unifying concept could replace it arose. The answer finally emerged in what we now call the Age of Enlightenment; great philosophers replaced divine inspiration with reason, experimentation, and a pragmatic approach. Other interpretations followed: philosophy of history; sociological interpretations of reality. But the phenomenon of a machine that assists—or possibly surpasses—humans in mental labor and helps to both predict and shape outcomes is unique in human history. The Enlightenment philosopher Immanuel Kant ascribed truth to the impact of the structure of the human mind on observed reality. AI’s truth is more contingent and ambiguous; it modifies itself as it acquires and analyzes data.
The passage above reminded me of this gem by E.O. Wilson:
We are drowning in information, while starving for wisdom. The world henceforth will be run by synthesizers, people able to put together the right information at the right time, think critically about it, and make important choices wisely.
True but for the “people” bit?
Me: Ooh, X looks interesting. I wonder if I can find a short video about X. [Finds a video on X and watches to the end.]
Recommendation algorithm: Oh, s/he watched X! I know what s/he likes. X! Like? Nay! X is the bread on his/her table, the air s/he breathes, his/her raison d’être. S/he has a visible X tattoo on his/her body. His/her firstborn will be/is named after X. X in continuous rotation, 24 x 7! More X! More X! MORE X!
Me: Whoa, whoa! [Looks around for a way to say “no more X.” Finds a link to hide video about X. Clicks it. The video disappears from the recommendations feed.]
Me: [Idly visits video site.]
Recommendation algorithm: New X video! Oh, and here are three others you may have missed. And these two are kinda like X.
Me: Hmmm. I thought I said no more X. How does this thing work? [Clicks on hide links for three other videos about X. Reloads page.]
Recommendation algorithm: New X video! Oh, and here are three others you may have missed. And these two are kinda like X. Oh, and here are some about Y and Z, just in case.
Me: Really?! [Clicks on hide link for another X video. Reloads page.]
Recommendation algorithm: New X video! Oh, and here are three others you may have missed. And these two are kinda like X. Oh, and here are some Ys and Zs, just in case.
Me: Sigh. [Clicks on video about Z. Watches to the end.]
Recommendation algorithm: Oh, s/he watched Z! I know what s/he likes. Z! Like? Nay! Z is the bread on his/her table, the air s/he breathes, his/her raison d’être. S/he has a visible Z tattoo on his/her body. His/her firstborn will be/is named after Z. Z in continuous rotation, 24 x 7. More Z! More Z! MORE Z!
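Why does one completed watch outweigh several explicit “no more X” clicks? A plausible (and entirely invented) explanation: engagement signals are weighted, and a watch-to-the-end is worth far more than a hide. No platform publishes its weights; the numbers below are made up purely to illustrate the asymmetry.

```python
# Hypothetical engagement weights -- invented for illustration;
# real recommendation systems don't publish theirs.
WEIGHTS = {"watched_to_end": 10.0, "hide": -1.0}

def topic_score(events):
    """Score a topic from a list of engagement events."""
    return sum(WEIGHTS[e] for e in events)

# One completed watch, followed by four explicit "no more X" clicks.
x_events = ["watched_to_end", "hide", "hide", "hide", "hide"]
print(topic_score(x_events))  # 6.0 -- X still dominates the feed
```

If the weights look anything like this, the user’s four deliberate rejections barely dent the score of a single idle viewing, which matches the experience above.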
Ninety years ago, René Magritte painted a pipe. I’m sure you’ve seen the work; it’s among his most famous. Written under the rendering of the object are the words Ceci n’est pas une pipe — “This is not a pipe.” Huh? Well, it isn’t; it’s a representation of a pipe. Clever stuff.
The painting is called La Trahison des images — “The Treachery of Images.” Treachery means to deceive; to betray our trust. The painting tricks us by simulating a familiar object. Aided by the charming image, our mind conceives the pipe. We recall experiences with the real thing — its size, weight, texture, the smell of tobacco, etc. Suddenly we’re faced with a conundrum. Is this a pipe or not? At one level it is, but at another it isn’t.
The Treachery of Images requires that we make a conceptual distinction between the representation of an object and the object itself. While it’s not a nuanced distinction – as far as I know, nobody has tried to smoke Magritte’s painting – it’s important since it highlights the challenges inherent in using symbols to represent reality.
The closer these symbols are to the thing they’re representing, the more compelling the simulation. Compared to many of Magritte’s contemporaries, his style is relatively faithful to the “real world.” That said, it’s not what we call photo-realistic. (That is, an almost perfect two-dimensional representation of the real thing. Or rather, a perfectly rendered representation of a photograph of the real thing.)
Magritte’s pipe is close enough. I doubt the painting would be more effective if it featured a “perfect” representation; its “painting-ness” is an important part of what makes it effective. The work’s aim isn’t to trick us into thinking that we’re looking at a pipe, but to spark a conversation about the difference between an object and its symbolic representation.
The distance between us and the simulation is enforced by the medium in which we experience it. You’re unlikely to be truly misled while standing in a museum in front of the physical canvas. That changes, of course, if you’re experiencing the painting in an information environment such as the website where you’re reading these words. Here, everything collapses onto the same level.
There’s a photo of Magritte’s painting at the beginning of this post. Did you confuse it with the painting itself? I’m willing to bet that at one level you did. This little betrayal serves a noble purpose; I wanted you to be clear on which painting I was discussing. I also assumed that you’d know that that representation of the representation wasn’t the “real” one. (There was no World Wide Web ninety years ago.) No harm meant.
That said, as we move more of our activities to information environments, it becomes harder for us to make these distinctions. We get used to experiencing more things in these two-dimensional symbolic domains. Not just art, but also shopping, learning, politics, health, taxes, literature, mating, etc. Significant swaths of human experience collapsed to images and symbols.
Some, like my citing of The Treachery of Images, are relatively innocent. Others are actually and intentionally treacherous. As in: designed to deceive. The rise of these deceptions is inevitable; the medium makes them easy to accept and disseminate, and simulation technologies keep getting better. That’s why you hear in the news about increasing concern for deepfakes.
Recently, someone commercialized an application that strips women of their clothes. Well, not really — it strips photographs of women of their clothes. That makes it only slightly less pernicious; such capabilities can do very real harm. The app has since been pulled from the market, but I’m confident that won’t be the last we see of this type of treachery.
It’s easy to point to that case as an obvious misuse of technology. Others will be harder. Consider “FaceTime Attention Correction,” a new capability coming in iOS 13. Per The Verge, this seemingly innocent feature corrects a long-standing issue with video calls:
Normally, video calls tend to make it look like both participants are peering off to one side or the other, since they’re looking at the person on their display, rather than directly into the front-facing camera. However, the new “FaceTime Attention Correction” feature appears to use some kind of image manipulation to correct this, and results in realistic-looking fake eye contact between the FaceTime users.
What this seems to be doing is re-rendering parts of your face on-the-fly while you’re on a video call so the person on the other side is tricked into thinking you’re looking directly at them.
While this sounds potentially useful, and the technology behind it is clever and cool, I’m torn. Eye contact is an essential cue in human communication. We get important information from our interlocutor’s eyes. (That’s why we say the eyes are the “windows to the soul.”) While meeting remotely using video is nowhere near as rich as meeting in person, we communicate better using video than when using voice only. Do we really want to mess around with something as essential as the representation of our gaze?
In some ways, “Attention Correction” strikes me as more problematic than other examples of deep fakery. We can easily point to stripping clothes off photographs, changing the cadence of politicians’ speeches in videos, or simulating an individual’s speech patterns and tone as either obviously wrong or (in the latter case) at least ethically suspect. Our repulsion makes them easier to regulate or shame off the market. It’s much harder to say that altering our gaze in real-time isn’t ethical. What’s the harm?
Well, for one, it messes around with one of our most fundamental communication channels, as I said above. It also normalizes the technologies of deception; it puts us on a slippery slope. First the gaze, then… What? A haircut? Clothing? Secondary sex characteristics? Given realistic avatars, perhaps eventually we can skip meetings altogether.
Some may relish the thought, but not me. I’d like more human interactions in information environments. Currently, when I look at the smiling face inside the small glass rectangle, I think I’m looking at a person. Of course, it’s not a person. But there’s no time (or desire) during the interaction to snap myself out of the illusion. That’s okay. I trust that there’s a person on the other end, and that I’m looking at a reasonably trustworthy representation. But for how much longer?
“The information machines were ranged side by side against the far wall, and [Alystra] chose one at random. As soon as the recognition signal lit up, she said: ‘I am looking for Alvin; he is somewhere in this building. Where can I find him?’
“‘He is with the Monitors,’ came the reply. It was not very helpful, since the name conveyed nothing to Alystra. No machine ever volunteered more information than it was asked for, and learning to frame questions properly was an art which often took a long time to acquire.”
— Arthur C. Clarke, The City and the Stars (1956)
In our age of pseudo-smart information machines, Alystra’s predicament sounds all too familiar. When we interact with a new system, we’re faced with a double challenge: its semantic environment is unknown to us and it can’t grok our context. As a result, our interactions are initially awkward and ineffective. As we gain experience (by trial and error), we become more conversant in the system’s technical vocabulary, its cadence, its rules, its internal model for understanding our roles in the interaction. (Is it assisting me? Enabling me? Teaching me? Am I teaching it? All of the above?)
“What’s the weather like today?” is very close to a question I’d ask a person. But I don’t venture beyond statements much more complicated than that; I’m likely to be disappointed, so I curtail my words. (Btw, I’d avoid “curtail” when talking with the information machines. I’d also avoid “btw.”) I also pace myself, because I’ve learned my interlocutor needs more structure than a person: it must know when I’ve started issuing a statement it should respond to and when I’ve stopped; it then needs to process the statement and formulate a coherent reply. All of this takes time. It’s awkward, but you get used to it.
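The machine’s rigidity can be sketched with a toy intent matcher in the spirit of early voice assistants (the patterns and intent names here are invented; no real assistant works exactly this way): utterances must fit one of a few canned templates, and anything conversational falls through.

```python
import re

# A toy intent matcher: utterances must fit rigid patterns.
# Patterns and intents are invented for illustration.
INTENTS = [
    (re.compile(r"what'?s the weather( like)?( today)?\??", re.I), "weather"),
    (re.compile(r"set a timer for (\d+) minutes?\??", re.I), "timer"),
]

def parse(utterance):
    """Return the matched intent, or None if the machine is stumped."""
    for pattern, intent in INTENTS:
        if pattern.fullmatch(utterance.strip()):
            return intent
    return None

print(parse("What's the weather like today?"))           # weather
print(parse("Any idea if I'll need an umbrella, btw?"))  # None
```

The first phrasing works; the second, perfectly natural to a person, returns nothing. Learning which phrasings land in the first bucket is precisely the art Clarke described.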
And that’s the key: you get used to it.
Clarke’s information machine is a) clever enough to understand the question, but b) not clever enough to know Alystra lacks the context to make sense of the correct answer. Our machines have gotten pretty good at a) but still suck at b); they cannot infer information from our body language, tone of voice, and so many other subtleties that make interpersonal interactions so rich. I look forward to the day when the semantic environment we share with these systems dips into the uncanny valley. For the time being, it’s up to us to adapt to theirs.