Automation doesn’t just do work for us: it changes how we experience work, sometimes with unexpected consequences. Technology can make us complacent, and complacency leads to mistakes.

In episode 34 of Traction Heroes, Harry told of a time when he drifted off course by following GPS navigation instructions. Instead of the airport, he ended up in a remote part of Virginia — and almost missed his flight. Things like this happen. Harry also cited a compelling story from Nicholas Carr’s The Glass Cage:

Automation complacency has been documented in many high-risk situations, from battlefields to industrial control rooms, to bridges of ships and submarines. One classic case involved a 1,500-passenger ocean liner named the Royal Majesty, which in the spring of 1995 was sailing from Bermuda to Boston on the last leg of a one-week cruise. The ship was outfitted with a state-of-the-art automated navigation system that used GPS signals to keep it on course. An hour into the voyage, the cable for the GPS antenna came loose, and the navigation system lost its bearings. It continued to give readings, but they were no longer accurate. For more than thirty hours, the ship slowly drifted off its appointed route. The captain and crew remained oblivious to the problem despite clear signs that the system had failed. At one point, a mate on watch was unable to spot an important location buoy that the ship was due to pass. He failed to report the fact. His trust in the navigation system was so complete that he assumed the buoy was there and he just didn’t see it. Nearly twenty miles off course, the ship finally ran aground.

Like many technologies, GPS works well most of the time, so we don’t question it. But things can go wrong. Our trust in the technology’s capabilities can lead to a misplaced sense of security. The results can range from inconvenient to catastrophic.

This is obviously highly relevant in the age of AI. Not only is the technology fallible, but it also presents its outputs with a high degree of confidence. So we must be especially vigilant. That said, always second-guessing results can cost valuable time and resources.

As I explained, we must use AI with a clear understanding of our own competence in the domain in question. We can be a bit less vigilant in domains where we have enough expertise to judge the quality of the output, but must be more skeptical when we don’t know what we don’t know.

Knowing where and when to apply critical thinking is key to avoiding setbacks. What’s required is literacy: understanding how the technology works under the hood. AI isn’t magical. If you understand how it can fail, you’ll be less likely to accept results uncritically — and better able to judge when it’s okay to trust them.

Traction Heroes episode 34: Automation Complacency