Some of the smartest people in the world are working on artificial intelligence (AI), but they’re running into a big problem: bias.
According to the McKinsey Global Institute, AI technology is prone to the same biases that humans, either consciously or unconsciously, weave into their perceptions of reality. Black American defendants in criminal cases have been victims of machine bias in assessments of the risk they allegedly pose to society. One company that used an algorithm to streamline hiring found that it penalized applicants from women’s colleges.
McKinsey’s solution? First, we need to better understand how these biases form, in humans and in AI. Then we need to build better algorithms to remove them. Simple, right? Well, not really.
Removing those biases isn’t just going to be a hard path. If it were simply a difficult endeavor that led to a positive outcome, it would be well worth it. But this appears to be an impossible problem, with a moving target we will never reach.
Consider the fact that as a society we can agree about very little. I like classical music. My daughter likes hip hop. There are Bernie Bros and Trump fans and everything in between. Strict rationalists and anti-vaxxer naturopaths.
So, if we build AI, whose biases will win out? Worse, whichever biases we encode could create clear moral hazards when humans rely on AI to make decisions.
Let me illustrate with a simple thought experiment …
Should our fate (or someone else’s) rest in the hands of AI?
One of the most talked-about AI applications of the future is the self-driving car. Ride-hailing companies like Uber and Lyft, tech giants like Google and Apple, and innovative car companies like Tesla are spending billions of dollars building this technology.
I want to make it clear: I’ve got nothing against trying to develop this technology. This is amazing, cutting-edge innovation. But the challenges here are extraordinary. Regardless of how well a system in an autonomous vehicle works, it still may ultimately have to decide someone’s fate in a crash.
Let’s imagine something goes wrong. A truck trailer hitch breaks on the highway and its cargo spills on the road. In the car just behind the accident, sensors alert the onboard AI, and it takes split-second countermeasures.
Here’s where the bias comes in. The car now has several options, none of which are ideal. It can swerve off the road, likely killing the driver but preventing the vehicle from causing a much worse multicar pileup.
“Oh, can’t you just adjust the algorithm so it never does something to hurt the driver?” I can hear you saying in response. OK, so in that case, the AI is potentially making a decision to cause multiple fatalities on the road.
What about a bias in favor of children? If the sensors detect a “baby on board” sticker on one of the cars behind or ahead, does that counter the bias in favor of protecting the riders in its vehicle?
Assuming there will one day be sensors sophisticated enough to instantly process age based on facial recognition technology, does the car swerve to hit a senior citizen in order to spare a teenager?
And on a slightly more conspiratorial note (as in the original RoboCop film), if an employee of Tesla is in the area, will a bit of hidden code in Tesla’s algorithm ensure that their safety gets prioritized over someone else’s?
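To make the thought experiment concrete, here is a minimal sketch of the kind of scoring function such a system would need. Every name, category and weight below is hypothetical, invented purely for illustration; real autonomous-vehicle software is vastly more complex. But the point survives the simplification: someone has to choose the numbers, and whoever does decides whose safety counts most.

```python
# Hypothetical sketch: how a crash-avoidance AI might score its options.
# All categories and weights are invented for illustration only.

def expected_harm(option, weights):
    """Sum the weighted risk of death for everyone an option endangers."""
    return sum(weights[person["type"]] * person["risk_of_death"]
               for person in option["people_at_risk"])

options = [
    {"name": "swerve_off_road",
     "people_at_risk": [{"type": "driver", "risk_of_death": 0.9}]},
    {"name": "brake_in_lane",
     "people_at_risk": [{"type": "other_adult", "risk_of_death": 0.4},
                        {"type": "child", "risk_of_death": 0.4}]},
]

# With every life weighted equally, the car protects its driver:
equal = {"driver": 1.0, "other_adult": 1.0, "child": 1.0}
best = min(options, key=lambda o: expected_harm(o, equal))
print(best["name"])  # -> brake_in_lane (harm 0.8 beats 0.9)

# Bias the weights toward children and away from the driver, and the
# same car in the same crash now chooses to sacrifice its driver:
biased = {"driver": 0.5, "other_adult": 1.0, "child": 3.0}
best = min(options, key=lambda o: expected_harm(o, biased))
print(best["name"])  # -> swerve_off_road (harm 0.45 beats 1.6)
```

Nothing in the code is malicious or even careless; the bias lives entirely in a handful of numbers that some human had to pick.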
The truth is that we don’t want a machine to make these kinds of decisions
Tweaking an algorithm based on human input is unlikely to improve the situation. These are hard questions that even the smartest humans will have trouble answering without relying on their own intuitions. And the same smart people who are trying to build AI for autonomous cars are building AI for companies, with the same results.
The AI Powered Equity ETF, powered by IBM’s Watson, badly underperformed the S&P 500 Index in 2018. But that’s not the worst part: No human being at the investment firm was able to explain why the AI made the investment decisions it did.
That brings us around to my original contention: Biased humans can’t create unbiased algorithms. We’re as likely to make things worse through unintended consequences as we are to make them better by removing one or more types of human bias.
Augmented intelligence (for humans)
Let’s stop trying to make machines that make decisions for us, or at least put that aside for the moment. Instead, let’s keep building technology that helps humans make better decisions, supported by access to better information.
Here’s the new term I believe we should use: augmented intelligence, meaning human intelligence supported by artificial intelligence. The objective is to help every person reach their maximum potential: to help them learn, be creative and be free to make decisions, not to obey commands for their own good.
And that’s what we want. If humans ever develop a true artificial intelligence, maybe it will be a good thing. But for now, let’s use technology to empower humans to be better.
First published on Forbes.com on Jul 12, 2019