IBM & Nvidia Add New Chip to a Suddenly Crowded Market
As recently as early 2017, artificial intelligence (AI) and machine learning were by and large considered emerging technologies without applicable channel use cases for another couple of years. After all, it took years for technologies like prescriptive analytics and Internet of Things (IoT) applications to really gain traction in the IT world. Why would AI be any different?
We’ve seen some simple examples of cognitive computing leading to advanced business platforms, especially in areas like cybersecurity or customer support where an application’s constant interaction with data points improves its performance and automates tasks that not long ago were pure manual processes. But we’ve just begun to explore the benefits of AI, and tech’s big boys are working fast and furious to establish secure positions at the top of the machine learning food chain.
As 2017 winds to a close and we move into the new year, momentum around AI and machine learning technology is building quickly. This tech is essential to the underlying processes that will serve as the foundation for the "constant connectivity" that stems from widespread use of the IoT and big data (many experts agree that critical mass will arrive around 2020).
OEMs such as IBM, Intel, Microsoft, and Google are releasing chips and applications in rapid succession, both under their own brands and in concert with other manufacturers. If your customers aren’t already tossing around the idea of integrating cognitive computing processes into their workflows, more than likely they will be in 2018.
For example, this week saw a May-December partnership as venerable IBM released a new series of microprocessors designed to integrate with graphics processing chips from upstart Nvidia. Big Blue claims that the resulting technology moves data at nearly 10x the speed of Intel's processors, and Google has signed on to use the new chips in some of its data centers.
Speaking of Google, the cloud provider had a productive summer in its efforts to develop its own next-gen hardware and tech, releasing a chip it will rent to customers as-a-service. Parent company Alphabet announced an open source project centered on development of quantum computing software to run on the quantum hardware it’s rumored to be hard at work developing.
Speaking of data centers, Intel released a new generation of 58 processors specifically designed for servers, demonstrating a commitment to diversifying its portfolio beyond its near-ubiquitous PC presence. While the company still gets most of its revenue from PC sales, its data-centric business is clearly where it's betting its future; Intel's server business, which encompasses its data center and IoT groups, among others, is growing by leaps and bounds. Intel aims to combine those server chips with technology from two of its acquisitions, Altera and Nervana, to make its own AI play. (Coincidentally, Intel's AI initiatives also count Google as a partner; Google seems to be placing bets on every player as it waits to see who comes out on top.)
Speaking of Altera, Microsoft unveiled a new AI initiative this year that represented a first for the company. The software giant will be producing its own line of chips, handling everything from R&D to design to customized programming; the only step it outsources is manufacturing, which it sources from, you guessed it, Altera.
Are you seeing a theme here?
It's a game of musical chairs as the big cloud providers, SaaS companies, and semiconductor manufacturers jostle to find just the right partnership that will yield a clear AI advantage. The applications of fast, secure, intelligent, connected, and affordable machine learning technology are nearly endless. Marketing, manufacturing, transportation, supply chain, help desk: almost every vertical will be clamoring for an AI solution that gives it an edge over the competition.
“While the technologies IBM and NVIDIA release are very different, they have one critical commonality — they can ingest and compute far more data than previously possible,” says Matt Burr, GM of FlashBlade. “Customers need purpose-built, massively parallel infrastructure to keep up with the evolution of AI. Each iteration of innovation means more data, which needs to move to applications and processors faster. Advances in chip technology enable IT solutions providers to build faster, more parallel data platforms that allow end users to take full advantage of AI.”
The days when partners could take their time adopting and integrating emerging technology into their offerings are far behind us. The emergence of born-in-the-cloud partners with edge-computing expertise means that if you aren't looking for ways to incorporate these technologies into your portfolio, you're probably already behind.