Tim Mackey, a technical evangelist for Black Duck Software, engages with technical communities to help them solve application security problems. At Open Source Summit in Los Angeles, Mackey will be delivering a talk titled “A Question of Trust – When Good Containers Go Bad.”
Mackey says that as container adoption increases, the pace of information flow from Ops back to Dev needs to increase, too. If malicious actors have greater access to information or greater resources to create exploits than, for example, a multi-national financial services company, those malicious actors are in the driver’s seat when it comes to security.
In his talk, Mackey will deconstruct some significant vulnerabilities, examine how information flows, and explain when various “organizations” have an information advantage. “I also look at a few issues from the past and how they’re impacting the modern world. The end goal is to increase awareness of the types of issues we face and how to better protect ourselves and build better products moving forward,” said Mackey.
We talked with Mackey to learn more about his talk.
Linux.com: What’s the inspiration behind your talk? What are the areas you will be touching on?
Tim Mackey: Over the past several years we’ve seen major vulnerability after vulnerability disclosed against a variety of open source components. These disclosures have led some to question the role of open source technologies in modern application development. Rather than have a religious debate, I’ve chosen to focus on the attributes which make open source different from closed source commercial products and how information flow is a key challenge for us — particularly when it comes to security.
As part of that effort, I decompose multiple vulnerabilities to show how information flow is biased towards malicious actors. With such a bias, defenders are often at a disadvantage, both in their awareness of issues and in the point-in-time decisions they make while performing triage. Minimally, this can lead to delays in mitigation, but at the extreme it can lead to a belief that a given vulnerability doesn’t represent a viable attack.
Linux.com: Have there been cases of containers gone bad? Especially when most are hosted on trusted platforms like DockerHub?
Mackey: Containers go bad every day, and often without warning. This is probably best illustrated by example. Let’s assume we’re working for a very security-conscious organization with governance rules dictating that all applications must pass static code analysis and have any exposed interfaces fuzzed. We can also assume that our public-facing systems are subject to penetration testing and have sophisticated perimeter defenses. In this environment, we create a container image that passes all tests and is then deployed on this trusted platform and scaled out.
Now that our application has been deployed, let’s add to the mix a CVE, which is disclosed, say, within hours of the release of our application. All containers deployed using this image are now at an increased level of risk of compromise. Quantifying that risk is a challenge for most organizations, but the bigger challenge comes when you need to identify which container images are impacted by the CVE and trigger remediation plans. While there is a desire to trust perimeter defenses, they often need to be reconfigured to block newly malicious traffic and may themselves be vulnerable to the new CVE.
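The identification problem Mackey describes can be reduced to a toy example. This sketch assumes, purely for illustration, that each image’s component inventory is available in a simple CSV form; finding the images exposed to a newly disclosed CVE then becomes a lookup. The file name, image names, and vulnerable version below are all hypothetical:

```shell
# Hypothetical per-image component inventory: image,component,version.
# A real inventory would come from an SBOM or image-scanning tool.
cat > inventory.csv <<'EOF'
web-frontend:1.4,openssl,1.0.2k
web-frontend:1.4,zlib,1.2.11
batch-worker:2.0,openssl,1.1.0f
EOF

# Component and version named in the (hypothetical) new CVE.
AFFECTED_COMPONENT=openssl
AFFECTED_VERSION=1.0.2k

# List every image that ships the affected component at the affected version.
awk -F, -v c="$AFFECTED_COMPONENT" -v v="$AFFECTED_VERSION" \
    '$2 == c && $3 == v { print $1 }' inventory.csv
```

Here only `web-frontend:1.4` would be flagged; the value of the exercise is that the hard part is maintaining an accurate inventory, not the query itself.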
Linux.com: Security as it’s well understood is a process and not a product, so what advice can you give to the DevOps teams to add that process in their workflow?
Mackey: Identification of risk is a crucial component of security, and risk is a function of the composition of a container image. Once you know precisely what the composition of the image is, it becomes possible to identify any potential risks. Most organizations start with traditional application security models focusing on code they create. The goal is to ensure the risk of what’s created by the organization is minimized, but continual code scans are resource-intensive from a tooling and process perspective. This leaves a large gap in containerized environments stemming from the base image and any associated dependencies. Some key questions operations teams need to answer in order to minimize risk include:
What security risks might be present in that base image, and how often is it updated?
If a patch is issued for that base image, what is the risk associated with consuming the patch?
How many versions behind tip can a project or component be before it becomes too risky to consume?
Given my tooling, how quickly will I be informed of component updates for dependencies which directly impact my containers?
Given the structure of a component or project, do malicious actors have an easy way to gain an advantage when it comes to issues raised against the component?
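The “versions behind tip” question above lends itself to a small worked example. This sketch assumes a known, ordered release list for a component and a policy threshold of two releases behind tip; all version numbers and the threshold are illustrative:

```shell
# Hypothetical check: how many releases behind tip is our pinned component?
# A real check would query the project's release feed rather than a literal list.
pinned="1.2.8"
releases="1.2.8 1.2.9 1.2.10 1.2.11"   # oldest to newest, assumed known

behind=0
seen=0
for r in $releases; do
  # Count every release that appears after the pinned version.
  [ "$seen" -eq 1 ] && behind=$((behind + 1))
  [ "$r" = "$pinned" ] && seen=1
done

echo "component is $behind release(s) behind tip"
if [ "$behind" -gt 2 ]; then
  echo "WARN: exceeds policy threshold of 2 releases behind tip"
fi
```

With the list above, the pinned component is three releases behind tip and trips the warning; the interesting policy decision is choosing the threshold, not the arithmetic.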
Linux.com: Most platforms come with quite a lot of security features, scanning, and mitigation. Do you feel that’s not enough?
Mackey: Fundamental security measures like SELinux or AppArmor, reduction in system capabilities, strict enforcement of image admission, and restrictive network profiles are vital, but not sufficient. DevOps teams are tasked with responding to changing business priorities while balancing risk and operational efficiency. Tools performing container runtime scanning both impose a performance hit on cluster nodes and potentially expose data to scans, which can limit their utility.
Mitigation measures are valuable, but without a clear understanding of the complete application environment, their effectiveness is limited. In the end, operations teams recognize they’re under constant attack, and that malicious actors are both persistent and creative. Part of that creativity is an understanding that an attack vector which wasn’t viable a couple of years ago might become viable through changes in application design or deployment systems. A perfect example of this is “Dirty Cow,” which, like many race conditions, only becomes more exploitable over time due to increased concurrency in modern processors.
Linux.com: Have you seen any lack of practices that make containers / microservices more vulnerable, and can you explain?
Mackey: There are a few items which I see far more often than I’d like, and many fall into the “point in time decision” camp. By way of example, consider a Dockerfile which specifies a version for the base image. Pinning the base image likely solved a problem a developer had, but it is also unlikely to be revisited as new versions of the container are created. Over time, security debt builds, and eventually that version is so old that APIs have changed and updating the image becomes a serious problem. Flipping the scenario around, the base image could be “latest,” which has its own set of problems. There the version in use could be radically different with each image and have an uncertain number of vulnerabilities. Related to this, the desire to update to the “latest” patch is also problematic when you recognize that a given patch may fix some issues and introduce others.
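The two base-image pitfalls can be illustrated with a minimal lint sketch. It extracts the tag from a Dockerfile’s FROM line and warns on a floating “latest” tag, while reminding reviewers that a pinned tag accrues security debt; the Dockerfile content and file name are assumptions for illustration:

```shell
# Hypothetical Dockerfile under review; the content is illustrative only.
cat > Dockerfile.example <<'EOF'
FROM ubuntu:latest
RUN apt-get update && apt-get install -y nginx
EOF

# Pull the tag (the part after ':') out of the first FROM line.
tag=$(awk '/^FROM/ { split($2, a, ":"); print a[2]; exit }' Dockerfile.example)

case "$tag" in
  latest|"")
    echo "WARN: floating base image tag; image contents can change between builds" ;;
  *)
    echo "NOTE: pinned to tag '$tag'; schedule periodic review to avoid security debt" ;;
esac
```

Neither choice is free of risk, which is Mackey’s point: pinning and floating both demand an ongoing review process rather than a one-time decision.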
Another interesting problem can be seen when a container image is “shipped.” Development teams are charged with creating applications and packaging them up. Take a CI process, for example. It should always be configured to fail a build when an application contains known security issues. This ensures vulnerable applications cannot accidentally be deployed, but it also imposes a set of trust boundaries requiring us to shift both right and left. Developers on the left need to ensure they’re making correct decisions about the composition of their applications “as deployed.” Operations teams on the right need to ensure they’re both deploying what has been vetted during development, but are also actively monitoring for issues related to what “was deployed.” Only then can the two teams actively close the loop to ensure security issues are attended to as quickly as possible.
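A CI gate of the kind described might look, in rough outline, like the following. The scan-report format and the severity policy are assumptions here; a real pipeline would consume its actual scanner’s output:

```shell
# Hypothetical scan report: component,severity. Format is illustrative.
cat > scan-report.csv <<'EOF'
component,severity
openssl,critical
zlib,low
EOF

# Gate function: returns non-zero when the report contains any
# high- or critical-severity finding, so CI can fail the build.
vuln_gate() {
  if awk -F, 'NR > 1 && ($2 == "critical" || $2 == "high")' "$1" | grep -q .; then
    return 1
  fi
  return 0
}

if vuln_gate scan-report.csv; then
  echo "build passed"
else
  echo "build failed: known high/critical vulnerabilities present"
fi
```

In a real pipeline the failing branch would exit non-zero so the build stops; the report above would fail the gate because of the critical openssl finding.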