Taking the Mystery Out of Container Terminology, from Kubernetes to FaaS

Andrew Gracey
If you’re looking at moving some of your workload to containers, the prospect can feel a bit overwhelming, as the market seems to move faster than any one person can keep up with. Let’s look at a few of the terms you’re likely to see discussed. This should help you wade through the ocean of information and formulate a plan that best suits your needs.
Containers. A container is really just an application running in a sandboxed environment. What sets it apart from a virtual machine is that it doesn’t come with its own kernel or device drivers; those are provided by the host system, with the sandbox isolated so that the application can’t see any other processes.
This distinction is important because it means compute resources (CPU, memory, GPU, networking and so on) can be shared much more effectively, since it removes many of the hard partitions that keep overhead high. Less overhead means less money thrown away on a data center that’s generating heat. Most containers are also shipped as a set of layers, so you can share build-pipeline work across applications to optimize resource usage further (not to mention preserving developers’ sanity by not making them repeat themselves).
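To see how thin that sandbox really is, here is a bare-bones sketch in Go that launches a shell inside new Linux namespaces. It isn’t Docker, and it skips everything a real runtime adds, but it shows the core trick: the "contained" process shares the host kernel while getting its own view of hostnames, process IDs and mounts.

    package main

    import (
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Launch a shell in new UTS, PID and mount namespaces (Linux only,
        // and it needs root). Inside, the shell becomes PID 1 of its own
        // process tree, yet it is still an ordinary process on the host kernel.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }

Real runtimes build on exactly this kind of isolation, adding image layers, control groups for resource limits and a dedicated root filesystem.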
The technology and methods to do this aren’t new, but Docker came along a few years ago and popularized the idea by making the process much easier. That open-source, lightweight containerization technology offers an isolated environment for managing applications, and at this point, there are several container runtimes, each delivering a different way to run your application.
Kubernetes. Containers are lightweight, which opens an avenue to shorter cycle times with more components. That, in turn, creates human overhead: someone has to manage the horde of containers that are continually being upgraded.
Kubernetes came out of Google’s experience running an internal cluster manager (called Borg) and gives a declarative way to describe the state all of your containers should be in. It has grown hugely popular due to its power and flexibility. That flexibility comes from its many extension points where users and vendors can add custom logic (such as CNI for networking and CSI for storage), which lets you start looking at the entire data center in a more holistic way.
The flexibility also has a downside: there are more decisions to make when building out your Kubernetes cluster, because you now have to pick every component yourself. Prepackaged distributions have risen to ease this pain, so the work of keeping the pieces internally compatible is already done (think of it like a Linux distribution).
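To make the "declarative" part concrete, here is a minimal sketch using client-go, the official Go client for Kubernetes. It describes a desired state (three replicas of a web container) and hands that description to the cluster, which then works to make reality match. It assumes client-go v0.18 or later and a kubeconfig in the default location; the deployment name and image are placeholders.

    package main

    import (
        "context"
        "fmt"
        "path/filepath"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/homedir"
    )

    func main() {
        // Connect using the kubeconfig in the default location.
        config, err := clientcmd.BuildConfigFromFlags("",
            filepath.Join(homedir.HomeDir(), ".kube", "config"))
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        replicas := int32(3)
        labels := map[string]string{"app": "demo"}

        // Describe the desired state: three replicas of a single web container.
        deployment := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "demo"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{
                            {Name: "web", Image: "nginx:1.17"},
                        },
                    },
                },
            },
        }

        // Hand the description to the cluster; Kubernetes keeps working to make
        // the running state match it, restarting or rescheduling pods as needed.
        _, err = clientset.AppsV1().Deployments("default").
            Create(context.TODO(), deployment, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("deployment created")
    }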
Helm. Some applications are built out of multiple discrete components, so it’s important to be able to package those dependencies together in a way that lets configuration be shared across the components.
Helm came out of this need to provision container applications through easy installation, upgrade and removal. It handles that abstraction and packaging through templated packages called "charts."
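Under the hood, Helm charts are built on Go’s template engine. The stripped-down sketch below isn’t Helm itself, but it shows the same idea: a manifest written once as a template and rendered with whatever values a particular installation supplies. The service name and port here are invented for illustration.

    package main

    import (
        "os"
        "text/template"
    )

    // A Helm-chart-style template, reduced to its essence. The placeholders
    // are filled in from values supplied at install time.
    const manifest = `apiVersion: v1
    kind: Service
    metadata:
      name: {{ .Name }}
    spec:
      ports:
        - port: {{ .Port }}
    `

    func main() {
        tmpl := template.Must(template.New("service").Parse(manifest))
        values := map[string]interface{}{"Name": "my-app", "Port": 8080}
        if err := tmpl.Execute(os.Stdout, values); err != nil {
            panic(err)
        }
    }

Helm layers chart metadata, default values and release tracking on top of this rendering step, so installing, upgrading or removing the whole bundle becomes a single command.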
Note: At the time of writing, Helm 2 needs a server-side component called Tiller to install resources into Kubernetes. Tiller poses some security concerns, but Helm 3, which removes it, is in an alpha release.
Service Mesh. One of the ideas in vogue right now is the service mesh. A service mesh abstracts away some of the common plumbing that most (if not all) containerized applications need, taking care of things such as service discovery, load balancing, retry logic and monitoring.
Typically, this is done using what’s known as a sidecar: every pod is given an extra container that exposes these capabilities to the real workload as if they came from the operating system. The workload doesn’t have to care where things such as networking and storage come from, and neither does the developer.
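In practice, the mesh usually injects the sidecar for you, but conceptually the result is just a pod with one extra container. The sketch below builds such a pod spec with the Kubernetes Go API types and prints it; the image names are placeholders rather than any particular mesh’s proxy.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // One pod, two containers: the real workload plus a proxy sidecar
        // that handles service discovery, retries and metrics on its behalf.
        pod := corev1.Pod{
            TypeMeta:   metav1.TypeMeta{Kind: "Pod", APIVersion: "v1"},
            ObjectMeta: metav1.ObjectMeta{Name: "demo", Labels: map[string]string{"app": "demo"}},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "app", Image: "example/my-app:1.0"},
                    {Name: "mesh-proxy", Image: "example/sidecar-proxy:1.0"},
                },
            },
        }

        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }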
Functions as a Service (aka Serverless). Obviously, there is a server somewhere. We don’t have that magic quite yet.
Ignoring the naming, the main idea of functions as a service (FaaS) is to attach …