A guide to the lingo: C is for container, K is for Kubernetes.

November 14, 2019

By Andrew Gracey

If you're looking at moving some of your workloads to containers, the landscape can feel a bit overwhelming: the market seems to move faster than any one person can keep up with. Let's look at a few of the terms you will likely see discussed. This should help you wade through the ocean of information and formulate a plan that best suits your needs.

Containers. A container is really just an application running in a sandboxed environment. What sets it apart from a virtual machine is that it doesn't come with its own kernel or device drivers; those are shared with the host system, but isolated so that the application can't see any other processes.

This distinction is important because it means that compute resources (CPU, memory, GPU, networking, etc.) can be shared much more effectively as it reduces many of the hard partitions that keep overhead high. Less overhead means less throwing money away in a data center that’s generating heat. Most containers are shipped as a set of layers so you can even share your build pipeline work across applications to further optimize resource usage — not to mention developers’ sanity from not needing to repeat themselves.

The technology and methods to do this aren't new, but Docker came along a few years ago and popularized the idea by making the process much easier: its open-source tooling gives you a lightweight, isolated environment for building and running applications. At this point, there are several container runtimes offering different ways to run your application.
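
If you have Docker installed, you can see the isolation for yourself. Here's a minimal sketch using the Docker SDK for Python (the docker-py package; my choice of tooling, not something this article prescribes): the container shares the host's kernel, yet ps inside it sees only its own processes.

```python
import docker  # pip install docker

client = docker.from_env()  # connects to the local Docker daemon

# Run a throwaway Alpine container. It shares the host's kernel and
# drivers, but its process table is isolated: `ps` sees only itself.
output = client.containers.run("alpine:3.10", "ps aux", remove=True)
print(output.decode())
```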

Kubernetes. Containers are lightweight, which opens an avenue to shorter cycle times and more components. That, in turn, creates human overhead: someone has to manage the horde of containers continually being upgraded.

Kubernetes came out of Google's experience running containers internally (on a system called Borg) and gives a declarative way to describe the state all of your containers should be in. It has grown hugely popular due to its power and flexibility. That flexibility comes from its many extension points where customers and vendors can add custom logic (such as CNI for networking and CSI for storage). This allows you to start looking at the entire data center in a more holistic way.
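
To make the declarative idea concrete, here's a sketch of asking for three replicas of a web server through the official Kubernetes Python client (one of several ways to talk to the API; the name "hello-web" and the nginx image are placeholders):

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # uses your local ~/.kube/config

# Describe the desired state: three replicas of an nginx pod.
# Kubernetes then works continually to make reality match.
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.17")]
            ),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

Kill a pod and Kubernetes starts a replacement; closing that gap between "what I asked for" and "what exists" is the whole model.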

This also has the downside of multiplying the decisions involved in building out your k8s cluster, as you now have to pick all of the components to use. A number of prepackaged distributions have risen up to ease this pain; that way, the work of maintaining internal compatibility is already done (think of it like a Linux distribution).

Helm. Some applications are built out of multiple discrete components. Because of this, it's important to be able to package dependencies together in a way that lets configuration be shared across components.

Helm came out of this need: it provisions container applications through easy installation, upgrade and removal, and it handles the abstraction and packaging through templated packages called "charts."

Note: At the time of writing, Helm 2 needs an in-cluster component called Tiller to install Kubernetes resources. This poses some security concerns, but Helm 3 will fix this and is in an alpha release.
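
Installing a chart is then a one-liner against the cluster. A minimal sketch, shelling out to the Helm 2 CLI from Python (the release name "my-cache" and the chart "stable/redis" are placeholders, and the --name flag goes away in Helm 3):

```python
import subprocess

# Install a chart as a named release (Helm 2 syntax).
subprocess.run(
    ["helm", "install", "--name", "my-cache", "stable/redis"],
    check=True,
)

# Upgrades and removal stay uniform across charts:
#   helm upgrade my-cache stable/redis
#   helm delete my-cache
```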

Service Mesh. One of the ideas in vogue right now is the service mesh: a way to abstract away common plumbing that most (if not all) containerized applications need. It takes care of things such as service discovery, load balancing, retry logic and monitoring.

Typically, this is done using what's known as a sidecar: every pod gets an extra container that exposes hooks to the real workload, as if it were the operating system. The workload doesn't have to care where things such as networking and storage come from, and neither does the developer.
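
To make the pattern concrete, here's a hand-written sketch of a pod carrying the workload plus a proxy sidecar, built with the Kubernetes Python client (meshes normally inject this container for you; the image names are placeholders):

```python
from kubernetes import client

# Two containers in one pod: the real workload, and a proxy that can
# handle service discovery, retries and telemetry on its behalf.
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="hello-with-sidecar"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="app", image="nginx:1.17"),
            client.V1Container(name="proxy", image="envoyproxy/envoy:v1.12.0"),
        ]
    ),
)
```

Note that a real mesh also rewires the pod's networking so the app's traffic actually flows through the proxy; co-locating the two containers is only half the trick.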

Functions as a Service (aka Serverless). Obviously, there is a server somewhere. We don’t have that magic quite yet.

Ignoring the naming, the main idea of functions as a service (FaaS) is to attach smaller bits of code to events and expose those events through a variety of triggers (for example, HTTP or publish-subscribe messaging, also called PubSub). This allows you to scale different pieces independently, as well as scale to zero between events to save on compute costs. Done right, it can be a very effective way to process data.

As with a service mesh, a FaaS or serverless framework abstracts a lot of repeated component pieces into a single layer.
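
The developer-facing surface usually amounts to a single function that takes an event. A generic sketch (deliberately not any particular provider's signature, since every platform differs slightly):

```python
# A FaaS-style handler: the platform owns the trigger (HTTP, PubSub, a
# queue) and calls this once per event, scaling instances up and down
# (including to zero) based on event volume.
def handle(event):
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}
```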

OCI, CRI and Container Runtimes. Fairly early on, the Kubernetes team realized it was a bad idea to bake implementation details about how containers are run into Kubernetes itself. That meant creating specifications for how to communicate with potential runtimes. Out of this, we got three specs:

  • Container Runtime Interface (CRI), allowing a runtime to be controlled by Kubernetes-compatible orchestration.

  • Open Container Initiative (OCI) Runtime Spec, governing how a runtime configures and executes a container.

  • Open Container Initiative Image Spec, giving details on how to build, store and transfer container images.

Networking and Storage. Similar to the runtime, networking and storage are handled through their own interfaces: the Container Network Interface (CNI) and the Container Storage Interface (CSI), each exposing its respective services.

Each of these has API objects that allow common administration across providers, giving your users easy configuration. That said, some providers allow extra configuration through Custom Resource Definitions (CRDs).
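
For example, a storage request goes through the common PersistentVolumeClaim object, and whichever CSI driver backs the named storage class does the actual provisioning. A sketch with the Python client (the "standard" class name is an assumption about your cluster):

```python
from kubernetes import client, config

config.load_kube_config()

# Ask for 1Gi of storage through the common PVC API object; the CSI
# driver behind the "standard" storage class provisions the volume.
pvc = client.V1PersistentVolumeClaim(
    api_version="v1",
    kind="PersistentVolumeClaim",
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",  # assumption: your cluster defines this class
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```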

Next Steps

Now that you better understand these terms that get thrown about, where do you start?

The best place to start might be to spin up a small cluster and see how easy it can be to install software into it. There are a number of good distributions out there, fitting a variety of needs, that will ease the installation and configuration pain.

Once you have a cluster, see if there's a Helm chart for a piece of your existing infrastructure. Spin it up, see how it works and maybe even pull some traffic onto it once you are comfortable.

Andrew Gracey is a technical marketing manager at SUSE with 10-plus years of experience in a variety of software engineering positions. He is interested in the intersection of business, technology and human interactions. Follow Andrew on LinkedIn or on Twitter @gracey_andrew or @SUSE.
