Looking Over the High Performance Computing Horizon

Michael Vizard

August 17, 2015

In the grand scheme of IT, major advances tend to occur in high performance computing (HPC) environments and then trickle down through the rest of the enterprise. So when President Barack Obama takes the unusual step of issuing an executive order intended to marshal the collective resources of the Federal Government to advance HPC research and development, the implications of that order go far beyond a few federally funded research labs.

Specifically, the National Strategic Computing Initiative (NSCI) calls for building supercomputers capable of processing one exaflop (10^18 floating-point operations per second). Given the limitations of processors and Moore's Law, that's not likely to occur within the context of a single machine. Instead, future HPC systems are going to be much more distributed than their predecessors.
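To put that figure in perspective, here is a back-of-the-envelope sketch of why an exaflop implies aggregating many machines rather than building one; the per-node throughput below is an illustrative assumption, not a number from the NSCI or this article.

# Rough arithmetic: how many nodes must be aggregated to reach one exaflop.
# The 10-teraflop-per-node figure is an assumed, illustrative value.
EXAFLOP = 1e18            # floating-point operations per second
PER_NODE_FLOPS = 10e12    # assumed throughput of a single node (10 TFLOPS)

nodes_needed = EXAFLOP / PER_NODE_FLOPS
print(f"Nodes needed at 10 TFLOPS each: {nodes_needed:,.0f}")   # 100,000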

According to Dave Turek, vice president of Technical Computing at IBM, the future of HPC will be defined by bringing compute engines to where the data already resides rather than relying solely on building ever-bigger machines. There will continue to be massive servers running in data centers, but how IT organizations use those machines is evolving. For example, rather than passing every piece of raw data back to a supercomputer running in a data center, a lot more processing will take place within storage nodes and the network itself, Turek said.
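The idea of moving compute to the data can be sketched in a few lines; the snippet below is a conceptual illustration only, with hypothetical function names, and does not represent IBM's or OpenPOWER's actual software.

# "Move the compute to the data": each storage node reduces its own shard
# to a small summary, and only those summaries cross the network to the
# central system. All names here are hypothetical.

def summarize_locally(records):
    """Runs on a storage node; reduces raw records to a compact summary."""
    return {"count": len(records), "sum": sum(records)}

def merge_centrally(summaries):
    """Runs on the central system; combines the per-node summaries."""
    count = sum(s["count"] for s in summaries)
    total = sum(s["sum"] for s in summaries)
    return total / count if count else 0.0

# Each node processes its own shard; only the small summaries are transferred.
node_shards = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
summaries = [summarize_locally(shard) for shard in node_shards]
print(merge_centrally(summaries))   # 3.5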

Forcing that issue, said Turek, is that the sheer volume of data, better known as big data, is quickly outstripping the ability to efficiently transfer it to a centralized set of compute engines residing in a data center. IBM, said Turek, is already at work addressing those technology challenges as part of an OpenPOWER project being funded by the Department of Energy.

Turek said solution providers and their IT customers should take note of these developments because they will affect not only how big data applications are developed and deployed, but also just about anything to do with the Internet of Things. Instead of thinking in terms of servers, storage and networking, Turek said the future of IT will be defined by sets of "building blocks" that organizations can either distribute to the edge of the network or construct in a way that provides the exaflop of processing President Obama is calling for with the NSCI executive order. Those building blocks, said Turek, will then have to be invoked using more parallel approaches to building applications.
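What those more parallel approaches might look like can be hinted at with standard tooling; the sketch below simply fans work out across local processes using Python's concurrent.futures, with a placeholder worker standing in for whatever a real building block would actually run.

# Minimal sketch of invoking independent "building blocks" in parallel.
# process_block is a hypothetical stand-in for real distributed work.
from concurrent.futures import ProcessPoolExecutor

def process_block(block_id):
    """Placeholder workload dispatched to one compute or storage block."""
    return block_id, sum(i * i for i in range(1_000_000))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        for block_id, result in pool.map(process_block, range(4)):
            print(f"block {block_id}: {result}")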

Put it all together and 2015 very well may mark the beginning of a new era in IT that will play out over the next decade. The thing solution providers would do well to remember is that during that time many of the assumptions they currently make about how best to build a solution will fall by the wayside.

About the Author

Michael Vizard

Michael Vizard is a seasoned IT journalist, with nearly 30 years of experience writing and editing about enterprise IT issues. He is a contributor to publications including Programmableweb, IT Business Edge, CIOinsight and UBM Tech. He formerly was editorial director for Ziff-Davis Enterprise, where he launched the company’s custom content division, and has also served as editor in chief for CRN and InfoWorld. He also has held editorial positions at PC Week, Computerworld and Digital Review.
