Understanding container-based automation and building a proof of concept are within the capabilities of even small security teams.

June 14, 2019


By Curtis Franklin Jr.

From Dark Reading

Containers are a very big deal in enterprise computing right now. If your organization isn’t already using them, then trends indicate it probably will soon. This application virtualization technology has had profound implications for some companies that have embraced DevOps, and there is plenty of potential for it to have a similar impact on security operations.

To understand why, it’s good to start with an understanding of just what containers are. A modern application is a collection of pieces of code: the main application itself, configuration files, and custom files on which it depends. These tend to be unique to the application as it’s configured to be deployed on a given server.

A container bundles all of these things up into an image that can be saved and deployed quickly, consistently, and automatically across multiple servers. If differences exist in the operating system details between the development and production servers, the container insulates the application from them, making application movement between development and operations very fast and straightforward.
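
To make the bundling concrete, here is a minimal sketch of a compose file (the service and image names are invented for illustration, not taken from the Interop demo). The same image, with its code and dependencies baked in, runs unchanged on a developer laptop or a production server; only the mounted configuration differs:

```yaml
# docker-compose.yml -- illustrative sketch, assumed names
services:
  webapp:
    image: example/webapp:1.4.2   # hypothetical application image
    ports:
      - "8080:80"                 # host:container port mapping
    volumes:
      - ./config:/etc/webapp:ro   # per-server configuration, mounted read-only
    restart: unless-stopped
```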

So what are the implications for security in all of this? One is that containers can allow vulnerabilities to quickly propagate if developers trust that all code in an image has been properly reviewed and updated. Conversely, a more positive implication is that specific network and application configurations can be tested, saved as images, and then automatically deployed when an attack, malware, or other problem takes the network or application delivery system down.

Because container technology is still relatively new, many IT managers are reluctant to depend on it. They also worry about complexity. But fear not: At Interop19, the Network Orchestration Hands-on Showcase decided to do a proof of concept that showed just how simple a container deployment can be — and why containers can be important to even a smaller organization. The demonstration involved primary and secondary network links with monitoring and network control applications deployed on the simplest of servers — Raspberry Pis.

Here, we’ll look at the individual components used in the showcase and how each could be used by a security team to replicate the work done at Interop. Most are software, a couple are languages (or language-like), and one is hardware. (To help those who are interested in replicating its experiment, the Interop Demonstration Lab team has placed all of its containers and support code on GitHub.)

Ansible

The demonstration network team had a number of criteria for its work — criteria that made for creative tension, if not outright conflict, between goals. According to network architect and team leader Glenn Evans, the demonstration had to be practical, linked to a group of sessions at the conference (in this case, the network automation crash course), and something attendees could understand and learn from quickly. The demonstration they hit on – to automate the response to a network disruption – fulfilled all three, Evans says, while the architecture they chose allowed attendees to readily replicate and build on the demonstration after they returned home.

The team chose Ansible as its principal automation platform.

Red Hat, Ansible’s owner, defines it as “a simple automation language that can perfectly describe an IT application infrastructure.” In practice, this means developers can create automated processes that control any number of things in an enterprise network or application infrastructure.

“We have Ansible running inside of a container. We’ve got that container running on a Raspberry Pi, the idea being that you can take this container down from our GitHub and run it on your computer,” says Robert Davis, a network and AWS IT consultant, as well as one of the volunteers who built the demonstration network.

The Ansible instructions, gathered in “playbooks,” are written in YAML (YAML Ain’t Markup Language) and can be readily read and modified by developers or administrators. More about YAML later, but for security teams looking for ways to automate network responses, it’s important to know that individual actions, known as “plays,” can be put together into long sequences of complex actions. These sequences can be saved on their own or as part of an Ansible image that is made available for deployment in containers around the enterprise.
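
As a sketch of what such a playbook looks like (the inventory group and file paths here are assumptions for illustration, not taken from the demo team’s GitHub repository), a play that responds to a failed link might push a saved configuration to a group of switches:

```yaml
# failover-response.yml -- illustrative playbook, assumed names and paths
# A playbook is a YAML list of plays; each play maps a host group
# to an ordered sequence of tasks.
- name: Respond to a failed primary link
  hosts: edge_switches        # hypothetical inventory group
  gather_facts: false
  tasks:
    - name: Back up the running configuration first
      ansible.netcommon.cli_config:
        backup: true
    - name: Apply the secondary-link configuration
      ansible.netcommon.cli_config:
        config: "{{ lookup('file', 'configs/secondary-link.cfg') }}"
```

Because Ansible runs tasks across all hosts in the group, the same play handles one switch or a hundred.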

Kubernetes

If Ansible is the solution for building automated processes that can be deployed as containers, Kubernetes is the tool for taking those containers and automating their deployment based on criteria established by security and network analysts. Originally developed by Google, Kubernetes is now maintained by the Cloud Native Computing Foundation as an open source project. According to the foundation, Kubernetes is a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts.”
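
As a minimal illustration of what Kubernetes consumes (the names and image are invented, not from the Interop demo), a Deployment manifest declares a desired state — here, two replicas of a monitoring container — and Kubernetes keeps that many copies running across the cluster:

```yaml
# deployment.yaml -- minimal illustrative manifest, assumed names
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitor
spec:
  replicas: 2                 # Kubernetes maintains two running copies
  selector:
    matchLabels:
      app: monitor
  template:
    metadata:
      labels:
        app: monitor
    spec:
      containers:
        - name: monitor
          image: example/monitor:1.0   # hypothetical container image
          ports:
            - containerPort: 8080
```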

Consultant Davis pinpoints the sort of situation that could make Kubernetes important for a group moving into security automation. “With network orchestration and automation, you can make a proactive process where, for example, you have a DDoS attack come in, something detects that — whether it’s AI or a managed solution — and then via your orchestration system you’re able to mitigate that issue,” he explains.

William Jensen, a network engineer at the University of Wisconsin, in Madison, and a demonstration network volunteer, says process automation with Kubernetes can help minimize errors that can come with rapid human response to attacks.

“If you offload some work that you have to currently do manually and maybe improve some reports you have, from a security perspective you would have a little better assurance that everything is configured the way it’s supposed to be,” he says.

ELK Stack

Even when processes are automated, it’s important for security and network teams to know what systems are doing. For the demonstration network, the team decided to use the ELK Stack, deployed in a container, as part of the system to monitor and report on conditions and activities. ELK stands for Elasticsearch, Logstash, and Kibana — three open source applications that, together, allow users to gather and transform data from multiple sources with Logstash, store and index it in Elasticsearch, and visualize the results with Kibana.
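
A containerized deployment of the three components can be sketched with a compose file like the following (the version tag and port choices are illustrative; the demo team’s actual containers are on its GitHub site):

```yaml
# elk-compose.yml -- illustrative sketch of a containerized ELK Stack
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.1
    environment:
      - discovery.type=single-node   # single-node mode, fine for a demo
    ports:
      - "9200:9200"                  # Elasticsearch REST API
  logstash:
    image: docker.elastic.co/logstash/logstash:7.1.1
    depends_on:
      - elasticsearch
    ports:
      - "5044:5044"                  # e.g., a Beats input
  kibana:
    image: docker.elastic.co/kibana/kibana:7.1.1
    depends_on:
      - elasticsearch
    ports:
      - "5601:5601"                  # Kibana web UI
```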

As with many enterprise systems, the Interop19 Demo Network had multiple indicators of conditions in the environment. One indicator was a set of lights, triggered by network activity and processed through containers on the Raspberry Pi platforms before being passed to the cloud-based building-automation system. Mark Sullivan, director of network operations at Informatik Group and a Demo Network volunteer, integrated the IoT portion of the demonstration and ensured security was part of the application.

“We’re doing SSH so you can use secure protocols to operate this,” Sullivan explains. “You can use preshared keys, and that’s always a good idea. A nice thing is, you don’t need to have a whole stack of preshared keys with the containers.”

As with the other parts of the infrastructure, the ELK Stack was deployed in containers. “Through the ELK Stack running in the Kubernetes, we can show the log of the network change occurring,” Davis says. “We’re trying to show the soup to nuts of being able to orchestrate something, but then show that orchestration occurred in a containerized fashion. That way anyone else can take these containers and run the applications themselves.”

Oxidized

While containers pull together an application with its dependencies and configuration files, Oxidized is a tool for backing up and restoring the configuration for a network switch itself. In the demonstration created for the Interop19 network, the volunteers used Oxidized deployed in a container to store a switch configuration that could be restored when sensor data indicated network conditions had changed.
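
An Oxidized configuration is itself a small YAML file. The fragment below is a sketch under assumed paths and a hypothetical device fleet (not the demo’s actual settings): it polls each device on an interval, fetches the running configuration, and versions it in a Git repository from which a known-good state can later be restored:

```yaml
# ~/.config/oxidized/config -- illustrative fragment, assumed paths
interval: 3600            # poll every device once an hour
model: ios                # default device model (hypothetical fleet)
output:
  default: git
  git:
    repo: /var/lib/oxidized/configs.git   # versioned config store
source:
  default: csv
  csv:
    file: /var/lib/oxidized/router.db     # device inventory
    delimiter: !ruby/regexp /:/
    map:
      name: 0
      model: 1
```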

Davis explains the advantages of restoring the switch configuration – or a range of switch configurations – through an automated process using Oxidized.

“We can eliminate human error,” he says. “And because Ansible is doing these processes in sync and in parallel, we can do it very quickly among 100 devices.”

YAML

YAML (YAML Ain’t Markup Language) is the language used for creating Ansible plays and for many of the other services used in container automation. Described by its developers and maintainers as a “data serialization language,” YAML can be used in conjunction with many other programming languages to script actions and control the flow of data in and out of processes.

YAML is not new; a look at the official website shows that major development steps were completed by 2011. Because it is mature, YAML is well-understood and stable, making it a solid tool for developing security applications and processes.
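
A small sample shows why YAML is easy to read and modify: indentation expresses nesting, lists are marked with dashes, and simple values need no quoting or brackets. (The field names below are invented for illustration.)

```yaml
# Illustrative YAML: a mapping containing a list and a nested mapping
link_monitor:
  primary_interface: eth0
  check_interval_seconds: 5
  alert_recipients:          # a YAML sequence (list)
    - noc@example.com
    - security@example.com
  failover:                  # a nested mapping
    enabled: true
    secondary_interface: eth1
```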

JSON

While YAML is the language used to create the sequences of actions recorded as plays in the Ansible playbooks, JSON (JavaScript Object Notation) is the language used to describe and communicate the data used to detect the presence of an error or fault condition on the switch, and to pass remediation commands back to the software controlling the switch in response.

JSON has become the de facto data description language in networking and security control applications, taking the role many believed XML would assume two decades ago. JSON is a very simple data description language that has been integrated into every major programming and scripting language in current use. It was a natural choice for the demo network team; no other data description language is supported by so many hardware and software components in security and networking.
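
For illustration, a fault report of the kind described above might be expressed in JSON as follows. The field names are invented for this sketch, not taken from the demo’s actual payloads:

```json
{
  "device": "switch-01",
  "event": "link_down",
  "interface": "eth0",
  "timestamp": "2019-05-21T14:32:00Z",
  "remediation": {
    "action": "failover",
    "target_interface": "eth1"
  }
}
```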

Raspberry Pi

Few computers have had the impact of the Raspberry Pi when it comes to encouraging experimentation and creativity in application design. Small, low-cost (OK, cheap), and with low-power requirements, the Raspberry Pi puts a full Linux server into a package smaller than the average deck of playing cards.

In order to showcase the simplicity and low cost of container-based networking and security automation, the demonstration network team decided to use a combination of donated laptop computers and Raspberry Pis. Most of the containers could be deployed on the small Linux computers using images downloaded from the demo team’s GitHub site.

The point of the demo was to give attendees something to inspire their own work when they returned to their organizations, the University of Wisconsin’s Jensen says.

“We’ve had people come in, and we encourage them to download the code themselves,” he says. “If they don’t have Docker on their machine, they can use the directions [on our GitHub] to go get Docker, download our material on GitHub to their machine, then build the container and run it themselves so they can actually do exactly what we’re doing.”

Security automation and network automation are each appearing in a growing number of networks. The simple demonstration network at Interop19 showed that understanding container-based automation and creating a proof of concept are within the capabilities of even small security teams.

Curtis Franklin Jr. is senior editor at Dark Reading. In this role he focuses on product and technology coverage for the publication. In addition, he works on audio and video programming for Dark Reading and contributes to activities at Interop ITX, Black Hat, INsecurity and other conferences. Previously, he was editor of Light Reading’s Security Now and executive editor, technology, at InformationWeek, where he was also executive producer of InformationWeek’s online radio and podcast episodes.
