How does network latency impact the performance of cloud applications, and what can service providers do to address the latency issue? Laz Vekiarides of ClearSky Data explains.

Christopher Tozzi, Contributing Editor

February 15, 2018


Is solving network latency issues the key to continued cloud growth? That’s what Laz Vekiarides of ClearSky Data thinks.

In computer networking, latency refers to the delay between the time data is sent by one party and when it is received by another. Although we tend to think of the internet as the great enabler of instantaneous communication, in reality, even the fastest networks suffer from latency. The delays might only be a few milliseconds, but they still exist.

The delays grow longer as the geographic distance between data’s origin point and endpoint increases: latency rises by about one millisecond for every 60 miles traveled. If you’re in Boston and trying to open websites hosted on servers in China, the response times will be noticeably longer than they would be for sites hosted in the United States.
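To put that rule of thumb into numbers, here is a minimal Python sketch. It assumes the article’s rough figure of one millisecond per 60 miles; the routes and distances are approximate and purely illustrative.

```python
# Back-of-the-envelope latency estimate using the article's rule of
# thumb of roughly one millisecond of delay per 60 miles traveled.

def estimated_latency_ms(distance_miles: float) -> float:
    """Estimate one-way network latency from geographic distance."""
    return distance_miles / 60.0

# Illustrative distances (approximate, for the sake of the example):
for route, miles in [("Boston -> New York", 215),
                     ("Boston -> San Francisco", 2700),
                     ("Boston -> Shanghai", 7300)]:
    print(f"{route}: ~{estimated_latency_ms(miles):.0f} ms")
```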

Latency should not be confused with throughput, the amount of data that can travel across a network connection in a given amount of time. Latency is unaffected by the amount of data being transferred: even if your network can move gigabytes of data per second, there will still be some delay, because each piece of data takes time to travel across the network.
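The distinction is easy to see in a simple model where total transfer time is latency plus payload size divided by throughput. The sketch below is illustrative only; the 50-millisecond latency and 1 Gbps link speed are assumed values, not measurements.

```python
# Total transfer time = latency (fixed per trip) + size / throughput.
# Even on a very fast link, the latency term never goes away.

def transfer_time_s(payload_bytes: float,
                    throughput_bps: float,
                    latency_s: float) -> float:
    """Time to deliver a payload over a link with the given
    throughput (bits per second) and latency (seconds)."""
    return latency_s + (payload_bytes * 8) / throughput_bps

GIGABIT = 1e9  # assumed 1 Gbps link
for size in (1_000, 1_000_000, 1_000_000_000):  # 1 KB, 1 MB, 1 GB
    t = transfer_time_s(size, GIGABIT, latency_s=0.05)  # assumed 50 ms
    print(f"{size:>13,} bytes: {t:.3f} s")
```

For small payloads the latency term dominates; for large ones, throughput does. That is why a faster pipe cannot fix a slow round trip.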

Laz Vekiarides, co-founder and CTO at ClearSky Data, believes latency issues must be solved if the cloud is to continue to grow, especially for applications such as big data and the Internet of Things (IoT).

Laz Vekiarides

Channel Futures: What does ClearSky Data do?

Laz Vekiarides: We deal with cloud latency by supplying caching infrastructure at the edge to make the cloud appear as if it were nearby. Our vision was always that latencies and the general physics of the universe would cause a certain set of applications and workloads to favor having compute locally rather than in the cloud. Databases are one example where we see this. IoT is another one that is rapidly growing. Our goal is to provide all the benefits of cloud storage, as well as the characteristics of enterprise-class local storage.

CF: What are your key use cases?

LV: Disaster recovery was our initial value prop. We allowed organizations to store data in the cloud, and thus offsite, without the performance drawbacks. Data could be recovered quickly. Going forward, though, we expect IoT to become a big deal. IoT is another example of a type of application where there’s a high amount of interactivity with data, and latency creates a problem for accessing it fast enough.

The typical use case that I’ve been seeing as of late is having a landing path for data, or a database that is reasonably local, that devices can interact with at a high rate.

CF: Don’t content-delivery networks (CDNs) already do this?

LV: We’re not a CDN. We’re like a backward CDN. CDNs are optimized for downloading data from an origin server. They’re always focused on the downward direction. What we do is the opposite. We’re bidirectional. It’s not just the reads from the cloud that are optimized, but also the writes to the cloud.
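To make the “backward CDN” idea concrete, here is a deliberately simplified, hypothetical sketch of a bidirectional edge cache: reads fill the cache from the cloud, while writes are acknowledged locally and flushed to the cloud off the hot path. Nothing here reflects ClearSky Data’s actual implementation.

```python
# A toy, in-memory sketch of a bidirectional edge cache. CloudStore is
# a stand-in for any remote, high-latency object store.

class CloudStore:
    """Stand-in for a remote object store."""
    def __init__(self):
        self._objects = {}

    def get(self, key):
        return self._objects.get(key)   # imagine ~100 ms of latency here

    def put(self, key, value):
        self._objects[key] = value      # and here

class EdgeCache:
    """Reads fill the cache; writes are acknowledged locally
    and flushed to the cloud later."""
    def __init__(self, cloud: CloudStore):
        self._cloud = cloud
        self._cache = {}
        self._dirty = set()             # keys written but not yet flushed

    def read(self, key):
        if key not in self._cache:      # cache miss: one slow cloud read
            self._cache[key] = self._cloud.get(key)
        return self._cache[key]

    def write(self, key, value):
        self._cache[key] = value        # fast local acknowledgment
        self._dirty.add(key)

    def flush(self):
        for key in self._dirty:         # slow cloud writes, off the hot path
            self._cloud.put(key, self._cache[key])
        self._dirty.clear()
```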

CF: Do you compete with cloud providers such as AWS?

LV: No, what we do is actually complementary. The big cloud providers operationally aren’t interested in creating local presence with the granularity that we are. We’re essentially feeding data into their infrastructure, and after the fact they can come in and offer services. After all, when cloud vendors choose where to place a data center, it’s not proximity to consumers, but real estate costs and power costs, that determines where they build. And when they build far away from consumers, consumers see latency.

A lot of the cloud-scale economics that we enjoy today are the result of cheap real estate and cheap power, and that’s a good thing. But delivering an excellent user experience requires addressing the latency issues that arise from these choices.

CF: Will new technology eventually make today’s latency issues irrelevant?

LV: No, because the speed at which data can travel over the network is limited by the speed of light. So latency, in this sense, is the result of a congenital problem with the universe. Unless someone figures out a different approach for transmitting data – an approach that is faster than the speed of light – we’re stuck with the latency. The best we can realistically do is control it by placing data at the edge of the network.
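The physics is easy to quantify. Light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, about 200,000 kilometers per second, which puts a hard floor under round-trip times. A quick calculation (the distance is approximate):

```python
# The physical floor on latency: even a perfect fiber link cannot beat
# the speed of light in glass (~200,000 km/s).

C_FIBER_KM_PER_S = 200_000  # approximate signal speed in optical fiber

def minimum_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time over fiber, ignoring all other overhead."""
    return 2 * distance_km / C_FIBER_KM_PER_S * 1000

print(f"Boston -> Shanghai (~11,700 km): "
      f"{minimum_rtt_ms(11_700):.0f} ms RTT minimum")
```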


About the Author(s)

Christopher Tozzi

Contributing Editor

Christopher Tozzi started covering the channel for The VAR Guy on a freelance basis in 2008, with an emphasis on open source, Linux, virtualization, SDN, containers, data storage and related topics. He also teaches history at a major university in Washington, D.C. He occasionally combines these interests by writing about the history of software. His book on this topic, “For Fun and Profit: A History of the Free and Open Source Software Revolution,” is forthcoming with MIT Press.
