James Manyika, Jake Silberg, and Brittany Presten writing for the Harvard Business Review:

AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.

The phrase “artificial intelligence” is leading us astray. For some folks, it has become a kind of magical incantation that promises to solve all sorts of problems. Much of what goes by AI today isn’t magic, or really intelligence; it’s dynamic applied statistics. As such, “AI” is only as good as the data being analyzed and the structure of that data. Garbage in, garbage out.
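To make the garbage-in, garbage-out point concrete, here’s a minimal sketch in Python. The data is entirely made up, and the bias term is arbitrary; the point is only that a statistical model trained on biased historical decisions will faithfully reproduce that bias:

```python
# Hypothetical illustration: a model trained on biased historical
# decisions learns and reproduces the bias in that data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two made-up features: a genuine skill score, and a group label
# (0 or 1) that should be irrelevant to the decision.
skill = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Biased historical labels: past decision-makers approved members of
# group 1 at a lower rate for the same skill level (the -1.0 term).
logit = skill - 1.0 * group
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# The model learns the statistics of its training data as given,
# including the penalty that was applied to group 1.
model = LogisticRegression().fit(np.column_stack([skill, group]), approved)
print(model.coef_)  # the group coefficient comes out strongly negative
```

Note that simply dropping the group column wouldn’t necessarily fix this: if other features correlate with group membership, the model can learn the same pattern through those proxies.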

It’s important for business leaders to learn how AI works. The HBR post offers a good summary of the issues, along with practical recommendations for leaders looking to make better decisions when implementing AI-informed systems, something we all should be doing:

Bias is all of our responsibility. It hurts those discriminated against, of course, and it also hurts everyone by reducing people’s ability to participate in the economy and society. It reduces the potential of AI for business and society by encouraging mistrust and producing distorted results. Business and organizational leaders need to ensure that the AI systems they use improve on human decision-making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI.

What Do We Do About the Biases in AI?