A version of this post first appeared in my newsletter. Subscribe to receive posts like this in your inbox every other Sunday.

Like many others in tech, I’m cautiously excited about AI. ChatGPT in particular feels like a milestone: the moment when these things went from intriguing curiosities to practical tools. If you haven’t played with it yet, give it a go.

Now, the critical question is: how will you use it?

Among other things, this question raises thorny issues about authorship and ethics. For example, many folks lament the end of essay-based school assignments. Some have noted that ChatGPT (and its ilk) are to text-based assignments as calculators are to math problems.

What does it mean for knowledge workers to have a tool that does for writing what calculators do for math problems? More pointedly, how do you use these things without losing your soul?

One way to go about it is to think of their role as being on a continuum ranging from “no impact whatsoever” to “total replacement.”

A line indicating a continuum between two extremes: ‘no impact’ on one end and ‘replacement’ on the other.

The “no impact” side is the world pre-AI: nothing changes. Obviously, this position is moot now. The other extreme (“total replacement”) is not yet feasible — or, in my opinion, desirable. So, you’re looking at a point between these two.

There might be several points worth exploring. I’ve identified three, which I’ll map to three roles previously played by humans:

  1. AI as editor
  2. AI as ghostwriter
  3. AI as amanuensis

Let’s examine them in more detail.

AI as editor

This means using AI to check your work. It might include looking for spelling and grammar errors, something many of us have done for a long time without qualms.

The previous continuum showing a mark near ‘no impact’ which reads ‘editor’

For example, I’ve used Grammarly to check these newsletters all along but hadn’t felt compelled to share that fact until now. (I hope you don’t feel cheated!)

A more advanced system could alert you to gaps in reasoning or logical fallacies. Such a tool (Thinkerly?) would offer suggestions and let you decide what to do about them. Over time, it would hone your reasoning.

Case in point: Grammarly has trained me to spot recurring flaws in my work, a more relevant and compelling context for learning grammatical rules than abstract exercises. I’m a better writer as a result.

I expect Thinkerly would do the same for our ability to argue cogently. The ethics are a bit trickier than those of a grammar/spelling checker, but not by much.

AI as ghostwriter

This position is why essay-based assignments are going away. Here, you might have an idea and hand it over to the AI (perhaps as a set of bullet points), which then composes the finished text for you.

The previous continuum showing a mark near ‘replacement’ which reads ‘ghostwriter’

Take a few minutes to ask ChatGPT to write a sonnet on a particular topic or compose a short essay on the merits of some position. It’s uncanny — and a bit terrifying, given the possibilities inherent in mass-producing convincing texts.

There are many ethical concerns with this position. I worry that unattributed AI-generated texts will soon flood our information environments. (One possible silver lining suggested by Ben Thompson: we’ll have to hone our BS detectors.)

I’m planning to avoid using the robots in this capacity.

AI as amanuensis

This position is perhaps the least clear from an ethical perspective — and the one that most excites me. I think of it as being somewhere in the middle.

The previous continuum showing a mark in the middle which reads ‘amanuensis’

During the early modern era, scholars hired personal secretaries into their households. These so-called amanuenses mostly took dictation but also summarized and cleaned up texts, translated sources, compiled indices, followed up on references, ran errands, and more, freeing scholars to focus on more important things.

Over the last few weeks, I’ve used ChatGPT and GPT-3 as amanuenses. I think of them as research assistants and sounding boards: I feed them questions and sometimes snippets of my writing and ask for summaries or suggestions for further reading. Their responses help clarify my thinking.

For example, I asked ChatGPT to summarize the first part of this essay. Here’s its response:

The author of this essay is cautiously excited about AI and ChatGPT in particular. They discuss the ethical implications of using AI to assist with text-based assignments, and propose three ways in which AI can be used in the writing process: as an editor, ghostwriter, or amanuensis. The author believes that while complete replacement of human writing is not yet feasible or desirable, AI can be a valuable tool in the writing process.

This interaction validates the clarity of my writing. And asking for similar summaries of others’ writings helps me determine whether they’re worth my time. (I’m testing the Readwise Reader beta, which uses GPT-3 to summarize articles in this manner.)

While I’m not planning to pass off AI-generated texts as my own (i.e., the ghostwriter position), the thinking behind my writing is now augmented by interactions with AI assistants. They suggest different avenues for exploration, research materials, and even words and ideas I may have missed. (E.g., the phrase “sounding boards” above was suggested by ChatGPT.)

I’ve also started including AI-generated clippings in my notes, much as I include quotes and citations from human authors. (To keep things clear, I prepend “Robot:” to any texts produced by AIs.)

This is to say, even though my meaty fingers typed these words, the thinking behind them is a hybrid concoction. Which I think is OK — and perhaps even necessary, given the information overload we’re facing. But who knows?

One way or another, we’re blurring authorship in weird and interesting ways. At any given moment, we might employ these tools in any of these roles, depending on our particular needs. Personally, I plan to avoid ghostwriting. But it’s a choice, and one we’ll all be called on to make soon.