The information machines were ranged side by side against the far wall, and [Alystra] chose one at random. As soon as the recognition signal lit up, she said: “I am looking for Alvin; he is somewhere in this building. Where can I find him?”
“He is with the Monitors,” came the reply. It was not very helpful, since the name conveyed nothing to Alystra. No machine ever volunteered more information than it was asked for, and learning to frame questions properly was an art which often took a long time to acquire.

— Arthur C. Clarke, The City and the Stars (1956)

In our age of pseudo-smart information machines, Alystra’s predicament sounds all too familiar. When we interact with a new system, we’re faced with a double challenge: its semantic environment is unknown to us, and it can’t grok our context. As a result, our interactions are initially awkward and ineffective. As we gain experience (by trial and error), we become more conversant in the system’s technical vocabulary, its cadence, its rules, its internal model for understanding our roles in the interaction. (Is it assisting me? Enabling me? Teaching me? Am I teaching it? All of the above?)

“What’s the weather like today?” is very close to a question I’d ask a person. But I don’t venture beyond statements much more complicated than that; I’m likely to be disappointed, so I curtail my words. (Btw, I’d avoid “curtail” when talking with the information machines. I’d also avoid “btw.”) I also pace myself, because I’ve learned my interlocutor needs more structure than a person does: it must know when I’ve started issuing a statement it should respond to and when I’ve stopped; it then needs to process the statement and formulate a coherent reply. All of this takes time. It’s awkward, but you get used to it.

And that’s the key: you get used to it.

Clarke’s information machine is a) clever enough to understand the question, but b) not clever enough to know that Alystra lacks the context to make sense of the correct answer. Our machines have gotten pretty good at a) but still suck at b); they cannot infer meaning from our body language, tone of voice, and the many other subtleties that make interpersonal interactions so rich. I look forward to the day when the semantic environment we share with these systems dips into the uncanny valley. For the time being, it’s up to us to adapt to theirs.