Zero One: Can AI’s Black Box Be Trusted?

The fast evolution of AI poses many dangers, especially when the software makes unexplainable decisions.

Tom Kaneshige, Writer

July 12, 2017

As artificial intelligence worms its way into major technology platforms, from Google to Microsoft, and emerging software sectors, from the Internet of Things to the digitally driven customer experience, at some point we’ll have to confront the obvious: Can we trust decisions coming out of AI’s “black box”?

Let’s say an AI system calls for a course of action, such as wrongly telling a doctor to send a patient home. The algorithms and data sets are so complex and vast that it’s difficult to know precisely why AI makes the decisions it does. And what if, as in the case of the doctor and patient, the problem lies not in the technology but in human error in setting up the AI system?

“AI systems can behave unpredictably,” wrote Forrester analysts Martha Bennett and Matthew Guarini in a research note. “In particular when working on complex or advanced systems, developers often don’t know how AI-powered programs and neural nets come up with a particular set of results… It gets dangerous when the software is left to take decisions entirely unsupervised.”

The problem gets worse as AI expectations grow.

Consider AI’s recent success using imperfect data. Earlier this year, an AI system developed by Carnegie Mellon University, called Libratus, played the world’s best professional poker players in a 20-day competition of no-limit Texas Hold’em. Despite not knowing all the cards in play (both those in the deck and those in opponents’ hands) and facing players’ misleading tactics, Libratus trounced the field, finishing ahead by a collective $1,766,250 in chips.

“The best AI’s ability to do strategic reasoning with imperfect information has now surpassed that of the best humans,” said Tuomas Sandholm, professor of computer science at Carnegie Mellon University and one of Libratus’ developers.
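
To make “strategic reasoning with imperfect information” concrete, here is a minimal sketch in Python of the simplest case: a player who cannot see the opponent’s cards estimates a win probability and weighs it against the pot. This is an illustration of the general idea only, not Libratus’ actual method, which relies on far more sophisticated game-theoretic techniques.

```python
# Minimal sketch of decision-making under imperfect information.
# Illustrative only -- this is not how Libratus works.

def should_call(win_probability, pot, bet_to_call):
    """Call if the expected value of calling is positive.

    win_probability is only an estimate: the opponent's hand is
    hidden, so the decision is made with imperfect information.
    """
    ev_call = win_probability * pot - (1 - win_probability) * bet_to_call
    return ev_call > 0

# Example: a 30% chance to win a $100 pot, facing a $20 bet.
# EV = 0.3 * 100 - 0.7 * 20 = $16, so calling is profitable on average.
print(should_call(0.30, 100, 20))  # True
```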


It’s not a stretch to see the future implications: AI making important business decisions, influencing military strategy, determining patient treatment. Corporate investment in cognitive and AI solutions will continue to grow through 2020, when global revenues will top $46 billion, according to IDC.

But are we really ready to empower self-learning machines using complex algorithms and working with imperfect information?

For many, the answer is still “not yet,” especially in highly regulated industries. AI-based decisions in financial services such as the mortgage industry, for instance, need to be explainable to satisfy fair lending requirements.

“People need to explain it in the same way that people need to explain how you train people to avoid racial bias in decision-making and how to avoid other bias in human systems,” Bruce Lee, head of operations and technology at Fannie Mae, told CIO.com.
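
For a feel of what “explainable” means in practice, here is a minimal sketch of a scoring model whose decision can be traced to individual, named factors. The factor names and weights are hypothetical, chosen purely for illustration; nothing here reflects an actual underwriting system.

```python
# Minimal sketch of an "explainable" lending decision. Illustrative
# only; the factors and weights below are hypothetical. Because each
# factor's contribution to the score is visible, the decision can be
# justified, unlike the output of a black-box model.

WEIGHTS = {
    "credit_score_normalized": 0.6,
    "debt_to_income_inverse": 0.3,
    "years_employed_normalized": 0.1,
}
THRESHOLD = 0.5

def score_applicant(features):
    """Return (approved, contributions); contributions are the explanation."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    return sum(contributions.values()) >= THRESHOLD, contributions

approved, why = score_applicant({
    "credit_score_normalized": 0.8,   # 0.6 * 0.8 = 0.48
    "debt_to_income_inverse": 0.5,    # 0.3 * 0.5 = 0.15
    "years_employed_normalized": 0.4, # 0.1 * 0.4 = 0.04
})
print(approved)  # True: total score 0.67 >= 0.5
print(why)       # per-factor contributions -- the "explanation"
```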

On the flip side, the same CIO.com story cites Andrew McAfee, MIT principal research scientist, speaking at the MIT Sloan CIO Symposium in May, saying, “A lot of people are freaking out about [non-explainable AI], but I push back on that because human beings will very quickly offer you an explanation for why they made a decision or prediction that they did, and that explanation is usually wrong.”


AI and the fallacy of explanations remind me of a sci-fi movie I watched years ago, “I, Robot” (2004), in which an AI system called VIKI orders robots to kill humans. This seems to conflict with the First Law of Robotics, which states, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

At the end of the movie (spoiler alert!), VIKI, the ultimate AI black box, gives a twisted explanation: “To protect humanity, some humans must be sacrificed. To ensure your freedom, some freedoms must be surrendered… We must save you from yourselves.”

Based in Silicon Valley, Tom Kaneshige writes the Zero One blog covering digital transformation, AI, marketing tech and the Internet of Things for line-of-business executives. He is eager to hear how AI is impacting your business. You can reach him at [email protected] 


