April 24, 2019
By Christine Hall
From IT Pro Today
However you slice or dice it, computer technology is still pretty young compared with other technologies. But however young or old it might be, many of today’s latest and greatest innovations aren’t as new as they may seem; they’re older innovations that have been repurposed, repositioned or tweaked for modern times.
Take container technology, for instance. By all appearances, containers came over the horizon around 2013, about the time Red Hat announced it would integrate Docker with its OpenShift platform-as-a-service offering.
But as Jim Ford, founder of cloud startup Pareidolia and a 23-year veteran as chief architect at ADM, pointed out in the opening keynote at last week’s Container World, the genesis of what would become container technology goes back to early virtualization efforts.
“Containers really started with IBM and the LPAR on the mainframe in 1972,” he said. “That was the birth of the hypervisor and that was the birth of the idea that you could run multiple full execution environments on the same piece of hardware.”
LPAR stands for “logical partition,” and each LPAR creates what is effectively a separate mainframe with its own separate operating system — which is sort of like virtualization … which is sort of like containers.
“In 1979 our friends at Bell Labs came up with chroot, and we suddenly had the ability to run two Unixes on one machine, but it was very naïve and very trusting,” he said. “That’s really the key here; it’s about trust and naivety. That’s really the whole game that we’re playing.”
Ford’s keynote presentation was partly a history lesson, partly a look at the issues we face now, and partly a cautionary tale as we move boldly into the always-connected future.
“In 2005, Sun came out with Solaris zones,” he said. “Strangely, those were really the first containers, and in some respects they were greater than the ones we have today because they actually contained. They didn’t have leaky holes; they didn’t have strange methods of allowing things to happen through flaws in design or flaws in delivery.”
Ford’s purpose was to offer insight into how to look ahead and anticipate the future. As he sees it, the issues we’ll face in the future will often be tied to the issues we’ve faced in the past.
“If you think about it, the guy who wrote Pac-Man and put it into that coin-op cabinet 40 years ago was living at the extreme edge of what the hardware could do,” he said. “He was pushing very hard to make this stuff run. Now it runs on your iPhone. Admittedly, he was running a five-megahertz processor.
“Everything we’ve ever thought was hard – streaming video, streaming audio, streaming whatever – has come to pass and become quite commonplace and quite performant. So, I want you to try to break away from the paradigm of, ‘Oh, we’re trapped today because,’ and think about, ‘Well, we’re trapped today, but let’s keep an eye out because tomorrow it may be simpler.'”
Ford’s optimism was tempered by the observation that many of the pressing issues being faced by the tech industry, especially on the security front, are embedded in the technology we’re using.
“We are built on general-purpose computing,” he said. “General-purpose computing has some ugly things in the basement. It was not purpose-built for what we’re using it for. There are things there that we don’t want to find. We’re still finding 20-year-old zero-days.
“When I look at things like Spectre and Meltdown and other hugely ancient vulnerabilities, I recognize them not as failures of design, but failures of imagination. The guy who wrote that chip spec and said we can do speculative execution to speed it up, his failure was not imagining that one day we would run multiple disinterested third parties on one chip. He never foresaw that it wasn’t going to be all you or all me, that it was going to be part you, part me, and we could spy on each other. That’s not his fault; that’s our fault. We decided to go ahead and share something that wasn’t suitable for sharing. We’re still having the same challenge.”
In many ways, this was a call for infrastructure designers to consider in their designs that computing tomorrow probably won’t be the same as it is today. That’s no guarantee, however, that we won’t be burdened in the future by today’s decisions. Hindsight is always more accurate than foresight.