Brought to you by Mary Branscombe at Data Center Knowledge
As hybrid cloud becomes more useful, many organizations are looking for architectures that bridge the gap between the cloud and their own data centers. But at the scale of modern applications and their associated data, cloud connectivity requires a lot of bandwidth and some network planning on your side.
As Todd Traver, VP, IT Optimization and Strategy, at the Uptime Institute told us, “Hybrid cloud computing has become the industry norm, with systems-of-record computing taking place in the enterprise data centers, [disaster recovery] taking place in colocation data centers, platform or software-as-a-service being provided by the cloud, and customer-facing applications such as video streaming or virtual reality being provided by edge compute.”
That’s where technologies like Alphabet subsidiary Google’s new Dedicated Interconnect service come into play, linking your data center directly to Google Cloud. Arriving somewhat belatedly, following in the footsteps of Azure ExpressRoute and AWS Direct Connect, the service is currently in beta and supports most common hybrid cloud scenarios.
Dedicated Interconnect is only one of a set of cloud connectivity options from Google; it’s designed for workloads at scale, with high-bandwidth traffic of more than 2Gbps. You can also use it to link your corporate network directly to private IP addresses in GCP’s Virtual Private Cloud. Taking your cloud traffic off the public internet and onto your own network range gives you more options for taking advantage of cloud services. “Networking technologies are enabling applications and data to be located in their best execution venue for that workload,” Traver noted.
Like its competitors, Google Cloud Platform requires you to connect at one of several global peering locations. That means that in addition to Google’s charges (which vary with the bandwidth you need, sold in 10Gbps increments with up to eight circuits, and with whether you set up the recommended second interconnect for redundancy), you’re also going to need to pay your network service provider to reach Google’s peering points.
The most common technology for direct cloud connectivity from your data center is MPLS, or Multi-Protocol Label Switching. Instead of setting up complex routing through backbone internet providers, with the resulting bandwidth limitations and latency, MPLS creates a direct connection between your data center switches and the network in a hyper-scale data center; you can also use it to link your data centers to a disaster recovery site or to increase the scale of your own data center, connecting new on-premises or colocation facilities. Service providers are starting to offer alternatives, but MPLS can be integrated directly into your data center core switches, simplifying connectivity and reducing latency.
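The core idea behind MPLS — forwarding on short, pre-assigned labels along an established path rather than doing a hop-by-hop IP routing lookup — can be sketched in a toy model (purely illustrative; the router names and label numbers here are invented for the example):

```python
# Toy model of MPLS label switching: each hop forwards on a short label via
# an exact-match table lookup instead of a longest-prefix IP routing lookup.

# Per-router label forwarding tables: incoming label -> (outgoing label, next hop).
lfib = {
    "edge-a": {101: (201, "core-1")},
    "core-1": {201: (301, "core-2")},
    "core-2": {301: (None, "edge-b")},   # None: pop the label at the egress hop
}

def forward(router, label, path=None):
    """Follow the label-switched path until the label is popped at egress."""
    path = path or [router]
    out_label, next_hop = lfib[router][label]
    path.append(next_hop)
    if out_label is None:
        return path
    return forward(next_hop, out_label, path)

print(forward("edge-a", 101))   # ['edge-a', 'core-1', 'core-2', 'edge-b']
```

Because the path is fixed by the label tables rather than recomputed at each hop, latency is predictable — which is exactly why service providers use it for direct data center-to-cloud links.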
If you already have a significant wide area network, you might be able to reduce those costs by using direct peering, but most businesses will need to work through a carrier partner to meet Google’s requirements for this – so while Google won’t charge for peering, your carrier might.
Google charges for both interconnects and VLAN attachments, as well as for data egress. Getting data into GCP via Dedicated Interconnect is free, giving you the opportunity to quickly load up, say, the data sets you’re using for machine learning applications running on Google’s platform. Getting data out is reasonably priced; depending on your location and which services you’re using, prices run between two and six cents per GB. Oddly, Google charges more to transfer data out of platform services in GCP than to get data out of VMs running in private address space on Google’s cloud.
Consider how you’re going to use the network when you look at how much this will cost you; the pricing model is a good fit for running a hybrid cloud with processing and storage on GCP, and data collection on-premises.
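As a back-of-the-envelope exercise, the cost drivers above — per-circuit port fees, VLAN attachments, and metered egress — can be combined into a simple estimator. This is a sketch only: the fee and rate figures below are placeholders, not Google’s actual price list, so substitute the current published rates for your region.

```python
# Rough monthly cost model for a dedicated interconnect (illustrative only;
# all dollar figures below are placeholders -- check the current price list).

def monthly_cost(circuits, per_circuit_fee, vlan_attachments, per_vlan_fee,
                 egress_gb, egress_rate_per_gb):
    """Ingress is free on Dedicated Interconnect, so only egress is metered."""
    port_fees = circuits * per_circuit_fee
    vlan_fees = vlan_attachments * per_vlan_fee
    egress_fees = egress_gb * egress_rate_per_gb
    return port_fees + vlan_fees + egress_fees

# Example: two redundant 10Gbps circuits, two VLAN attachments, and 50TB of
# monthly egress at a hypothetical $0.04/GB.
cost = monthly_cost(circuits=2, per_circuit_fee=1700.0,
                    vlan_attachments=2, per_vlan_fee=100.0,
                    egress_gb=50_000, egress_rate_per_gb=0.04)
print(cost)
```

Note that the egress term dominates as soon as you pull significant data back out — which is why the pricing model favors keeping processing and storage on GCP and only collecting data on-premises.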
Common Use Cases
Hybrid cloud, video production, and IoT sensor data processing are the three main areas that Google says customers are applying its direct cloud connectivity service today, a spokesperson told us. “The most common use is for hybrid cloud, where customers are taking their traditional data center applications and moving them to cloud. In addition, some customers are using high-bandwidth connections to upload data for post-processing in cloud. We are seeing this in particular from media companies doing post-production editing in the cloud, as well as from companies that are generating large amounts of telemetry data that require cloud processing.”
But Google expects it to become relevant to a wider range of organizations in time. “Over the coming years we see the market changing from data center access as the predominate use to traditional enterprise connections to cloud. This means that the corporate data center will no longer exist in its current form, and all enterprise apps will transition to cloud. Following this trend will be the emergence of SD-WAN as the most common access method to cloud workloads. Enterprises working through the traditional telco providers will have dedicated cloud data center connections from their enterprise locations, moving away from MPLS private WANs to shared network connections with dedicated cloud connections running as an overlay on these networks. With these changes there will also be a move in the SaaS ecosystem, with SaaS providers moving their solutions to public clouds and using SD-WAN connections for private dedicated access to these SaaS solutions. This will expand the cloud access from large enterprise using custom applications in cloud to medium and small enterprises using public cloud-resident SaaS solutions.”
You can order Dedicated Interconnect in the GCP portal, but you still need to create VLAN attachments and establish BGP sessions. That’s a little more complex than the equivalent options on Azure and AWS; as a beta service, the offering isn’t as mature as the competition.
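Those extra steps look roughly like this, sketched with the `gcloud` CLI. All resource names, the region, and the ASNs below are placeholders, and since the service is in beta the exact commands and flags may differ from what ships — treat this as an outline of the flow, not a copy-paste recipe.

```shell
# 1. Create a Cloud Router that will speak BGP to your on-premises router
#    (names, region, and ASNs are placeholders).
gcloud compute routers create onprem-router \
    --network my-vpc --region us-central1 --asn 65010

# 2. Create a VLAN attachment on the provisioned interconnect and
#    associate it with the Cloud Router.
gcloud compute interconnects attachments dedicated create my-attachment \
    --interconnect my-interconnect --router onprem-router --region us-central1

# 3. Add a router interface bound to the attachment, then a BGP peer for
#    your on-premises router.
gcloud compute routers add-interface onprem-router \
    --interface-name attach-if --interconnect-attachment my-attachment \
    --region us-central1
gcloud compute routers add-bgp-peer onprem-router \
    --peer-name onprem-peer --interface attach-if \
    --peer-ip-address 169.254.10.2 --peer-asn 65001 --region us-central1
```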
AWS Direct Connect uses a standard 802.1q VLAN to map your address space into Amazon’s, with the ability to partition a single connection into several virtual interfaces. This approach lets you mix access to different AWS resources while keeping connections to public and private networks separate. Base connections come in 1Gbps and 10Gbps, with the option of bundling connections for higher throughput. While the service is easy to provision, pricing is more complex, mixing data transfer charges with per-hour port connection costs.
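Conceptually, the 802.1q tag is what steers each frame on the shared physical link to the right virtual interface — something like this toy classifier (illustrative only; the interface names and tag numbers are hypothetical, not AWS’s actual API or defaults):

```python
# Toy model of partitioning one physical link into virtual interfaces by
# 802.1q VLAN tag (hypothetical names and tags, for illustration only).

virtual_interfaces = {
    100: {"name": "private-vif", "scope": "VPC private address space"},
    200: {"name": "public-vif",  "scope": "AWS public endpoints, e.g. S3"},
}

def classify(vlan_tag):
    """Map a frame's VLAN tag to the virtual interface it belongs to."""
    vif = virtual_interfaces.get(vlan_tag)
    if vif is None:
        raise ValueError(f"no virtual interface configured for VLAN {vlan_tag}")
    return vif["name"]

print(classify(100))   # private-vif
print(classify(200))   # public-vif
```

The point of the separation is policy: traffic to public AWS endpoints and traffic into your private VPC address space never share a logical interface, even though they share the wire.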
The VPN approach used by AWS hides much of the complexity of the network, with control of connections, ports, and VPNs handled through the AWS console. You’ll still need to consider some routing issues and will need to run BGP to handle connections – or at least work with a networking service that has its own AS number.
ExpressRoute takes a very similar approach, but unlike Direct Connect and Dedicated Interconnect, which each include only a single dedicated connection, ExpressRoute includes a secondary connection for redundancy. Most organizations will want a redundant connection, so remember to include that extra cost when comparing the services.
With all of these direct connections it’s important to also make sure you have a redundant network architecture on your own side, Clive Longbottom, founder of the IT market research firm Quocirca, told Data Center Knowledge. “We still see far too much use of a single line -- a massive weak link in the chain should anything go wrong with it -- or attempted redundancy through using two connections from the same [network] provider (not much better).” He suggests treating data center-to-cloud connectivity in much the same way you’d manage connections to your own sites. “Our advice is to use two hot links (used with load balancing) from two different suppliers going out from different parts of the data center to avoid the ‘back hoe’ problem of something digging through both connections at the same time.”
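Longbottom’s advice — two hot links from two different suppliers, load-balanced, with either able to carry the traffic alone — can be sketched as a simple link selector. The names here are hypothetical, and in a real deployment this logic lives in your edge routers (via BGP or ECMP), not in application code:

```python
import random

# Sketch of "two hot links" load balancing with failover (illustrative;
# in practice this is handled by edge routers via BGP/ECMP, not app code).

class Link:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight   # share of traffic when healthy
        self.healthy = True    # flipped by an out-of-band health check

def pick_link(links, rng=random.random):
    """Weighted choice across healthy links; fail over if one carrier dies."""
    healthy = [l for l in links if l.healthy]
    if not healthy:
        raise RuntimeError("no path to cloud: all carriers are down")
    total = sum(l.weight for l in healthy)
    r = rng() * total
    for link in healthy:
        r -= link.weight
        if r <= 0:
            return link
    return healthy[-1]

carriers = [Link("carrier-a", 0.5), Link("carrier-b", 0.5)]
carriers[0].healthy = False           # a backhoe takes out carrier A...
print(pick_link(carriers).name)       # ...traffic fails over to carrier B
```

Running both links hot, rather than keeping one as a cold standby, means a failure changes capacity rather than availability — and you find out immediately if the second link was never actually working.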
With GCP finally catching up and offering direct connections into your infrastructure, high-speed cloud connectivity from on-premises data centers is now as standard a public cloud service as container orchestration via Kubernetes or a similar DCOS. But you do need to plan carefully how the cloud services you use fit into your own network architecture.