Most artificial intelligence claims are bogus today, but this doesn’t make AI less real tomorrow.

Tom Kaneshige, Writer

January 3, 2018


In the world of artificial intelligence, what is real and what is not?

Everyone from start-ups to giants says they have AI in their products doing amazing things, but the dirty little secret is that most do not. Their versions of AI are often analytics engines hard-wired to process data a certain way. No machine learning, no true intelligence, no great disruption.
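To make the distinction concrete, here is a minimal sketch in Python of the two approaches, using a made-up fraud-flagging example (the rule, the features, and the toy data are illustrative assumptions, not any vendor's product): the “hard-wired” engine applies a rule a programmer chose in advance, while the machine learning model infers its rule from labeled examples.

```python
# Hypothetical fraud-flagging example contrasting "hard-wired analytics"
# with machine learning. The rule, features, and toy data are illustrative.
from sklearn.linear_model import LogisticRegression

# Hard-wired version: a programmer chose this rule, and it never changes.
def hardwired_flag(amount, hour):
    return amount > 1000 and (hour < 6 or hour > 22)

# Machine learning version: the rule is inferred from labeled examples.
# Toy training set: [amount, hour of day] pairs labeled fraud (1) or not (0).
X = [[50, 14], [1200, 3], [80, 20], [2500, 1], [30, 9], [1800, 23]]
y = [0, 1, 0, 1, 0, 1]
model = LogisticRegression().fit(X, y)

# Both produce a yes/no flag, but only the model's decision came from data,
# and it shifts as the data shifts, with no reprogramming needed.
print(hardwired_flag(1500, 2))        # True, because the rule says so
print(model.predict([[1500, 2]])[0])  # 1, because the examples say so
```

Only the second version learns, and that difference is the whole ballgame.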

“AI has suffered the fate of many hot technologies, which is that more people claim to do it than actually do it,” says Chris Nicholson, CEO of Skymind, developer of Deeplearning4j, a deep learning tool for Java that helps companies build AI solutions such as image recognition, fraud detection, and recommender systems.

But AI marketing spin can’t be easily dismissed, because it’s not all hype.

That is, people who really understand AI know that this is the most awesome technology of our generation, with the potential not only to disrupt labor markets but also to shake societies. Many also worry that such a powerful tool might fall into the wrong hands or spiral out of control.

Related: Zero One: Playing the AI Game

How good is your AI reality check?

Imagine the following scenario: A drone learning to fly is equipped with high-powered weaponry and a camera running face recognition software. The drone’s AI system has orders to kill people who enter a designated area. Through trial and error, the autonomous drone gets better at optimizing its objective function.

This sounds like something straight out of the sci-fi “Terminator” movies, but it’s possible today. As long as the system is built around reinforcement learning, a machine can get smarter and more efficient at accomplishing its goal without additional human programming or intervention.
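For readers who want to see the mechanism, here is a minimal sketch of that kind of reinforcement learning in Python, with a toy five-cell corridor standing in for the drone’s world (the environment, reward, and hyperparameters are illustrative assumptions, not anyone’s real system). The program specifies only a goal and an update rule; the behavior that reaches the goal emerges from trial and error.

```python
# Minimal tabular Q-learning sketch. A toy five-cell corridor stands in for
# the drone scenario; the environment, reward, and hyperparameters here are
# illustrative assumptions.
import random

N_STATES = 5         # cells 0..4; reaching cell 4 ends an episode
ACTIONS = [-1, 1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

# Value table, all zeros: the agent starts out knowing nothing.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit what has been learned, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # The whole "training, not programming" trick is this one update:
        # nudge the value estimate toward what trial and error just revealed.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s_next

# After training, the learned policy steps toward the goal from every cell.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)])
# Expected output: [1, 1, 1, 1]
```

No line of that code tells the agent to walk right; it discovers that policy by accumulating reward, which is exactly the property that makes the drone scenario plausible.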

Danny Lange, a machine learning expert, described this drone scenario in an interview with Fast Company. “I think AI is a complete and total disruption,” he says. “It’s about being able to train systems rather than program systems.”

Lange has been training systems to learn on their own for decades. Before he became vice president of AI and machine learning at Unity Technologies, Lange was head of machine learning at Uber, general manager of machine learning at Amazon, principal development manager of big data analytics with Hadoop at Microsoft, and, in the early 1990s, a scientist at IBM working on intelligent agents.

“Trust me, we have come a very, very long way,” Lange says.

Related: Zero One: Salesforce Einstein AI Getting Smarter

Nicholson, whose company Skymind employs 25 deep learning and systems engineers, says this is just the beginning: behind the scenes, researchers are trying to build next-generation AI that can do anything better than a human can, not just solve narrow problems.

Sound familiar? Think “Blade Runner 2049”: an AI machine capable of navigating life.

Everyone wants to own this next-generation AI, although not for the same reasons. “There’s a cyber arms race going on between major corporate research labs, some non-profit think tanks like OpenAI, and military research organizations of the world like DARPA,” Nicholson says.

For instance, OpenAI, co-founded by Greg Brockman, Ilya Sutskever, Elon Musk, and Sam Altman, is working to develop artificial general intelligence, or AGI: the intelligence of a machine that can successfully perform any intellectual task a human can. OpenAI’s website says, “AGI will be the most significant technology ever created by humans.”

While OpenAI publishes research at top machine learning conferences, Nicholson says it also goes into quiet periods. “They’ve said it’s possible there are some discoveries that they simply won’t share” for safety reasons, he adds.

In a sign of what’s at stake, OpenAI’s whole purpose is to build safe AGI.

“All they hope is that they can get there first, that the algorithm lives in a controlled environment, and they can figure out how to make it safe,” Nicholson says. “How do I implant a kill switch in the code? What limits can I impose on the way that algorithm works… so that it doesn’t kill us?”

Tom Kaneshige writes the Zero One blog covering digital transformation, AI, marketing tech and the Internet of Things for line-of-business executives. He is based in Silicon Valley. You can reach him at [email protected].
