IoT Redefined by Machine Learning Advances, Edge Computing
From IoT World Today
Smart, connected products are changing the face of competition. That was the thesis of a formative 2014 article in Harvard Business Review that highlighted the transformative potential of information technology integrated into an array of products.
In the past five years, however, the seemingly straightforward terms “smart” and “connected” have become more enigmatic and, arguably, more loaded, and their meaning continues to evolve. Five to 10 years ago, a “smart” product was one with embedded sensors, processors and software. These days, to qualify as “smart,” a device needs to take advantage of at least some form of basic machine learning.
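For a sense of scale, “basic machine learning” on a constrained device can be as modest as an online statistical model. The sketch below is purely illustrative (the class, names and readings are this article’s assumptions, not a vendor API): it learns the running mean and variance of a sensor stream using Welford’s algorithm and flags readings that drift too far from what it has seen.

```python
class OnlineAnomalyDetector:
    """Tracks a running mean/variance (Welford's algorithm) and flags outliers.

    Tiny enough to run on a microcontroller-class device: constant memory,
    one update per sensor reading, no training data stored.
    """

    def __init__(self, threshold=3.0):
        self.n = 0          # number of readings seen so far
        self.mean = 0.0     # running mean
        self.m2 = 0.0       # running sum of squared deviations
        self.threshold = threshold  # flag readings beyond this many std devs

    def update(self, x):
        """Learn from one reading; return True if it looks anomalous."""
        if self.n >= 2:
            std = (self.m2 / (self.n - 1)) ** 0.5
            anomalous = std > 0 and abs(x - self.mean) > self.threshold * std
        else:
            anomalous = False  # too little data to judge yet
        # Welford's incremental update of mean and variance.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        return anomalous

detector = OnlineAnomalyDetector()
readings = [20.1, 20.3, 19.9, 20.2, 20.0, 35.0]  # last reading is a spike
flags = [detector.update(r) for r in readings]
print(flags)  # only the final spike is flagged
```

Even this trivial model “learns” from data rather than following fixed hand-set limits, which is the qualitative line the new definition of “smart” draws.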
While most assessments conclude that IoT adoption has been steady over the past decade, advances in neural networks and machine learning have been swift.
“It’s blossomed at a much faster rate than people thought, even three, four years ago,” said Steve Roddy, vice president, products in Arm’s machine learning group.
Neural Network and Machine Learning Advances Redefine ‘Smart’
One factor driving the progress is the advance of convolutional neural networks. A pivotal moment came when Alex Krizhevsky, then a grad student at the University of Toronto, entered the ImageNet competition along with colleague Ilya Sutskever. A visual database that began life more than a decade ago, ImageNet became a stockpile of images across thousands of categories. That volume of data helped support the launch of a contest in 2010, the ImageNet Large Scale Visual Recognition Challenge. Two years later, Krizhevsky entered and ultimately won, defeating the then-state-of-the-art, human-written image recognition code.
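The contrast at the heart of that story, hand-written recognition code versus learned filters, rests on the convolution operation. The sketch below is illustrative only (the image, kernel and function are assumptions for this article): it applies a single hand-coded vertical-edge kernel, the kind of feature engineers once wrote by hand, whereas a convolutional network learns many such filters from data instead.

```python
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: the core operation of a conv layer."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            # Dot product of the kernel with the image patch under it.
            s = sum(image[y + i][x + j] * kernel[i][j]
                    for i in range(kh) for j in range(kw))
            row.append(s)
        out.append(row)
    return out

# A 6x6 "image": dark left half, bright right half.
image = [[0, 0, 0, 9, 9, 9] for _ in range(6)]

# A hand-written vertical-edge filter; a trained CNN discovers filters
# like this (and far subtler ones) automatically from labeled data.
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

response = conv2d(image, edge_kernel)
print(response[0])  # [0, 27, 27, 0]: strong response where the edge sits
```

AlexNet stacked many layers of such learned filters, which is why it could outperform decades of manual feature engineering once enough labeled images existed to train it.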
“There were people who would spend decades of their life writing image recognition software by hand,” Roddy said.
Then, suddenly, a grad student from the University of Toronto created a neural net dubbed “AlexNet” that beat researchers who had spent their careers on the problem.
“Oops,” Roddy joked. “Within two years of that, there’s an explosion of interest in neural nets from researchers.”
Big tech companies also threw their hats into the ring. In 2015, neural nets from Microsoft and Google defeated humans at image recognition. That was the aha moment.
“Neural nets [were] better than what we previously had been able to code by hand with tens of thousands of lines of C code,” Roddy said. In addition, because researchers can train with large enough data sets, “neural [nets] have higher recognition rates on images flashed on the screen than humans,” he added.
Partly as a result of the impressive breakthroughs in image recognition, commercial machine learning and neural network applications are …