Forget magical thinking. Use DevSecOps teams to bake in security and follow strict access policies.

January 29, 2020


By David Christian


The biggest cloud security threat to containerized applications is security teams believing that simply issuing decrees to over-stressed developers (who rarely understand the intent or logic of those policies) will be effective. This approach hasn’t worked for bare-metal servers or virtual machines, and it is magical thinking to believe it will work for containers. Compounding the issue, many security team members have never been active participants on agile development teams, so the cultural and experiential gap is wide. Application teams will pay lip service to these decrees and resolve a few items of the low-hanging-fruit variety, but the edicts will largely go unheeded.

As a general rule for any piece of IT infrastructure, the default security configurations simply aren’t enough. While I’m certain the people who designed these defaults intended them as a starting point, many engineers treat them as the end game. The internet-to-cloud network, the integration of services and instances into the cloud network, the host OS-to-container boundary, and the software components and credentials baked into the container layers are all part of the security equation, and no single vendor or project controls them all. Yet they must be made rational, understandable and secure at every step, and continually refined as the computing landscape inevitably changes.

The connection between the host OS and the guest OS is the most helpful of the “default” configurations, and it’s also a great example of what a moving target these defaults can be. When the container engine Docker was first released, the container user had to be root. That was obviously not ideal, but it was the only way one could make containers work and still gain their security advantage. The Docker folks corrected this by allowing containers to be managed by a nonroot user, but what if you still have a Docker configuration from those days? The Docker user must be limited in what it can access at the OS level: just enough to do the job. This is difficult enough in a public cloud, and in private clouds, with their lack of APIs, it can raise concerns to a whole other level.
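To make that concrete, here is a minimal sketch, using the Docker SDK for Python, of launching a container as an unprivileged user with most privileges stripped away. The image name and UID/GID are placeholders, not a prescription for your environment.

```python
# Minimal sketch (Python + Docker SDK): launch a container as a non-root
# user with reduced privileges instead of the legacy root-only default.
# The image name and UID/GID below are placeholders for illustration.
import docker

client = docker.from_env()

container = client.containers.run(
    "registry.example.com/myapp:1.4.2",   # hypothetical image
    detach=True,
    user="10001:10001",                   # run as an unprivileged UID:GID
    read_only=True,                       # immutable root filesystem
    cap_drop=["ALL"],                     # drop all Linux capabilities
    security_opt=["no-new-privileges"],   # block privilege escalation
)

print(container.id, container.status)
```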

And there are plenty of other security issues to consider. For example:

  • Controlling your network via software, so when the inevitable concerns come to light, the rules can be rapidly refactored (see the security-group sketch after this list).

  • Management of identity and access (IAM). When you are breached, can you show that the damage was limited to the scope of your bad code or misconfiguration, or was literally everything at risk?

  • Maintaining a solid game plan for securing cloud machine images. Do you know when a long-running instance has fallen out of policy? When you discover a problem instance, what do you do? Is the instance sufficiently automated to repair the issue? Is your environment sufficiently automated to do a simple replacement? (See the instance-audit sketch after this list.)

  • Container monitoring and updating mechanisms. Containers can become fossilized very quickly, especially if there is no continuous integration and continuous delivery (CI/CD) chain to pull from GitHub or Docker Hub to keep the base container(s) up to date, along with multiple scanning tools to find defects.

  • Flexible and routinely updated policies (and enforcement!). Overly stringent decrees intended to protect your environment mean, in reality, that imported software grows old and is rarely updated.
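On the first point, here is a minimal sketch of controlling the network via software, using boto3: the security group and its single ingress rule live in code, so they can be reviewed, versioned and refactored when concerns surface. The VPC ID, port and CIDR range are hypothetical.

```python
# Minimal sketch (Python + boto3): define network access in code so it can
# be versioned, reviewed and refactored. The VPC ID, port and CIDR are
# placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.create_security_group(
    GroupName="app-tier-restricted",
    Description="Only the load balancer subnet may reach the app port",
    VpcId="vpc-0123456789abcdef0",        # hypothetical VPC
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8443,
        "ToPort": 8443,
        "IpRanges": [{"CidrIp": "10.0.10.0/24",
                      "Description": "load balancer subnet"}],
    }],
)
```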
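On the machine-image point, the sketch below flags running instances that have drifted out of policy, either by age or by using an unapproved image, so your automation can replace them. The approved AMI ID and the 30-day threshold are assumptions for illustration.

```python
# Minimal sketch (Python + boto3): flag running instances that have drifted
# out of policy, either by age or by using an unapproved machine image.
# The approved AMI ID and the 30-day threshold are assumptions.
from datetime import datetime, timedelta, timezone
import boto3

APPROVED_AMI = "ami-0abc1234def567890"          # hypothetical golden image
MAX_AGE = timedelta(days=30)

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)
for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            too_old = now - instance["LaunchTime"] > MAX_AGE
            wrong_ami = instance["ImageId"] != APPROVED_AMI
            if too_old or wrong_ami:
                # Hand off to replacement automation rather than
                # patching the instance by hand.
                print("Out of policy:", instance["InstanceId"],
                      "age" if too_old else "", "ami" if wrong_ami else "")
```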

The Solutions

The following steps can help address these concerns:

DevSecOps: It is critical that security teams contribute code or, at the very least, engage in true discourse with development teams. Some organizations have figured this out, which is why there is a DevSecOps movement. The reality is that security is just another set of features, and if the security team can’t or won’t contribute, only those features which directly impact usability, plus a few others, will be implemented. DevSecOps is a model to which organizations should aspire.

Robust IAM policies: Strong identity and access management (IAM) policies for controlling access to containerized applications in the cloud are critical. If an attacker manages to break out of the container, you have a problem. How far that attacker can penetrate beyond the instance (or host OS, if you prefer) with the breached credentials is largely determined by the security groups that allow logical connections to the other resources in your environment. It is IAM that determines what the authorized or unauthorized user may do to a service once the connection is made. Strict IAM policies help ensure that the attacker, once free of the container, cannot simply access everything else in the environment by default.
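As a rough illustration of what “strict” can mean, the sketch below creates a customer-managed policy that scopes a workload to a single bucket prefix rather than to S3 at large. The bucket, prefix and policy names are placeholders.

```python
# Minimal sketch (Python + boto3): create a customer-managed IAM policy that
# scopes a containerized workload to one bucket and prefix, instead of
# attaching a broad managed policy. Bucket and policy names are placeholders.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AppDataOnly",
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::example-app-data/uploads/*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="example-app-s3-least-privilege",
    PolicyDocument=json.dumps(policy_document),
)
```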

From what we know of the Capital One breach, a compromised machine was trusted as if it were a human with administrative privileges. That allowed an attacker access to all S3 buckets enterprisewide. Any human who requested that level of access would certainly have been denied. But because it was more convenient to let the machine have it by default, 100 million people had their personal data exposed. It also appears that Capital One used an AWS-managed IAM policy without modification. These should never be used as-is, because you have little visibility into, and no control over, what or how AWS will change that policy, including broadening access. It’s far more secure to have small features unavailable for a short while than to leave systems and clients unnecessarily exposed.

No human access, just automation: No individual on any team should be given access to containerized applications in the cloud. This goes especially for the production team. Human beings should not be logging directly into containers. Logging, network connections, diagnostic tools and analytics tools must all be configured in advance of rolling into production. Should a gap be identified, the gap should be fixed and new containers should be rolled out to replace the ones being retired. In addition, if you have not completely automated your pipeline, you have no business as a responsible IT administrator deploying containers, or their cousin Lambda (another foundational service), anywhere, let alone in production.
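As a sketch of “fix and replace” rather than “log in and patch,” the step below pulls a rebuilt image and swaps out the running container. The image and container names are placeholders, and in practice this would be invoked by the pipeline, not by a person at a keyboard.

```python
# Minimal sketch (Python + Docker SDK): replace a running container with a
# freshly built image instead of logging in to patch it. Image and container
# names are placeholders; in practice this runs from the CI/CD pipeline.
import docker

IMAGE = "registry.example.com/myapp:2024-01-29"   # hypothetical rebuilt image
NAME = "myapp"

client = docker.from_env()
client.images.pull(IMAGE)

# Retire the old container rather than repairing it in place.
for old in client.containers.list(filters={"name": NAME}):
    old.stop()
    old.remove()

client.containers.run(IMAGE, name=NAME, detach=True, user="10001:10001")
```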

Security isn’t a list of things to be done; it’s a battle with a remorseless enemy. Executive leadership must be made aware that there is no single formula for success in securing the cloud. The ongoing nature of security requires hiring the best DevSecOps people possible. Mistakes happen; we are human. Oversight, with both automated tools and motivated humans, is required to make the fixes.

David Christian is a cloud architect at Anexinet, with more than three decades of technology experience across many industry verticals, from start-ups to the largest enterprises. He has managed products, developers, operations, support desks and has worked on quality assurance. For the last decade, Dave has specialized in cloud operations, where he helps clients set management standards, including their cloud security, implements governance based on those standards and has written code for SaaS automating ERP systems into the cloud. He also has devised and implemented multiregion and containerization strategies and has helped firms reduce costs by migrating from cloud to cloud-native infrastructures. Follow him @anexinet on Twitter.
