With the rise of new technology, a plethora of engineering jobs has become available. This has opened up a number of unique and diverse career paths for software testers as well, such as security testing and management. In this article, you’ll find information about why focusing on security is so important to the advancement and sustained usage of containers, giving you a glimpse into the world of security testing.
Ensuring Container Image Security: A Necessary Step in Application Testing
Containers have introduced a new level of efficiency and power to distributed computing. Yet, the advantages that containers provide can be offset easily by the security risks they incur unless an enterprise practices constant vigilance.
Allow me to elaborate.
There’s a fundamental problem with container security that is very similar to the one associated with any sort of binary executable: you really don’t know what’s on the inside. Once something is compiled into a binary format, its internals become opaque. The file that you think is nothing more than a video of somebody’s cat dancing the rhumba might actually be hosting hidden code that performs malicious actions against your file system.
The same is true of containers. You might think that you’ve downloaded a container image that does nothing more than run a website that converts .csv files to .json format. Yet behind the scenes, that container also has code that’s probing your network in search of passwords, credit card information and encryption keys.
These days container security is on the minds of many, particularly in light of the recent news that RunC, the container runtime for Docker, Kubernetes and other container orchestration technologies, has a potentially catastrophic security vulnerability (CVE-2019-5736) that allows a bad actor container to overwrite RunC in order to get root-level access to the host. Once a container has root-level access, it’s effectively in control of the system.
As much as containers are a boon to the modern enterprise, they are also a potential hazard. Protective measures need to be taken. Thus, it’s critical that companies have processes in place that safeguard against bad containers getting into the enterprise. Part of that safeguard is to have testing practices that ensure that the containers being deployed anywhere in the enterprise, from development to production, are secure.

Ensuring container security is an arduous undertaking. It encompasses how containers are made, how they are deployed and, once deployed, how they are monitored. Each step in the process is worthy of an article of considerable length. But, we need to start somewhere. So for this article, let’s take a look at the first step in the process: creating secure container images.
Creating Secure Container Images
A container image is the template from which a container is created and run. You can think of a container as an instance of a container image, very much in the same way that an object is an instance of a class in object-oriented programming. A container image is created by running the Docker command:
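A typical invocation looks like this (the image tag my-node-app is illustrative; the trailing dot tells Docker to use the current directory as the build context):

```shell
# Build an image from the Dockerfile in the current directory (the trailing dot);
# the tag "my-node-app" is an example name, not one prescribed by Docker
docker build -t my-node-app .
```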
When docker build is invoked, Docker looks for a file named Dockerfile, which is typically located in the directory from which the build command is invoked, as indicated by the dot in the build example shown above. docker build uses the information in the Dockerfile to construct the container image.
Creating a secure container can be a challenge due to the nature of container construction itself. Take a look at Listing 1 below. It’s the Dockerfile for a simple NodeJS web application I made.
Listing 1: A Dockerfile for creating a simple NodeJS web application
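Based on the steps it performs, the Dockerfile is roughly as follows (a minimal sketch consistent with the description; the exact original may differ in detail):

```dockerfile
# Start from the official NodeJS base image on DockerHub
FROM node:8.15-alpine

# Make the web application reachable on port 3000
EXPOSE 3000

# Copy the application code into the image
COPY server.js .

# Run the application under Node
CMD ["node", "server.js"]
```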
The Dockerfile tells Docker to do four things to build the container image:

1. Download a base container image, node:8.15-alpine, from the DockerHub repository. This base image has the executable and operating system libraries necessary to run NodeJS.
2. Open port 3000 on the container in order to allow users to access the web application.
3. Copy the file server.js from my local filesystem into the container image. server.js is the file that contains the application behavior that Node will run.
4. Invoke NodeJS to run the file.
That’s all there is to it. Now, in terms of doing a security audit, things are pretty straightforward. All of the application logic resides in server.js, which is a text file. It’s just a matter of running some software that does a security check against my local file system. No biggie, right? Wrong!
The issue is not the file server.js. The issue is the base container image, node:8.15-alpine. Let me explain.
The way Docker works is that one container image can use another container image as a base. Then, once the base image is defined, the container image under construction builds upon that base.
This architecture is very efficient in that it allows developers to leverage existing work. For example, when I want to create a Node application as a Docker image, all I need to do is declare a pre-existing NodeJS container image as the base and add my application code, as you can see in the Dockerfile shown above in Listing 1. I don’t have to gather all the dependencies that NodeJS requires. The base NodeJS container image takes care of all that.
However, building an application into a container image that uses a pre-existing base image creates a security problem. Unless certain precautions are taken, we have no way of knowing what’s in that base container image. Remember, the base container image comes from DockerHub, which is a third-party repository external to the enterprise. We can hope it’s a good actor, but how do we know for sure?
One thing we can do is to go out to DockerHub and take a look at the actual repository for node:8.15-alpine. The documentation for the image is comprehensive. In fact, the documentation even has a link to the Dockerfile for the image out on GitHub. So, we should be safe, right? Well…maybe. Take a look at Listing 2 below, which is the first line of the Dockerfile for node:8.15-alpine.
Listing 2: A snippet of the Dockerfile for node:8.15-alpine.
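The snippet in question is a FROM instruction declaring yet another base image, along these lines (the exact Alpine tag varies by release of the node image):

```dockerfile
# node:8.15-alpine is itself built on top of an Alpine Linux base image
FROM alpine:3.9
```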
Notice anything interesting? Hopefully, you said, “Hey wait, we have one container image using another container image, which in turn uses yet another container image. Where does it end?”
The reality is that any Docker image is but the last link in a chain of other Docker images. That chain might be very, very long. So the question is, given that any container image might be, and most probably is, made up of many other Docker images, how do we ensure the security of that image?
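One way to see this chain for yourself is Docker’s built-in history command, which lists the layers that make up an image and the commands that created them. A sketch, assuming Docker is installed and the image can be pulled from DockerHub:

```shell
# Fetch the image, then list its layers from newest to oldest;
# each row corresponds to an instruction in some Dockerfile in the chain
docker pull node:8.15-alpine
docker history node:8.15-alpine
```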
And that, my friends, is the question.
The first thing to do at the enterprise level is to establish a central authority that builds container images, and to store all container images in a common, secure repository. This means that while developers can and should create container images for their local work, they should never be the authority that deploys an image. Rather, developers should submit only their Dockerfiles and let qualified personnel run security tests against each Dockerfile, as well as against the container images and containers it produces.
Qualified security personnel will not only test and build container images and continuously verify that the containers that are running in production are not malicious, but they will also deploy well-tested container images to repositories that are properly secured. Such repositories might be hosted by a third party that provides security certification such as DockerHub or Google Container Registry in conjunction with Google Container Analysis, or the container images might be hosted privately on premises and subject to inspection using well-respected analysis tools.
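As one illustration of such an analysis step (the choice of tool here is mine, not one named above), an open-source scanner such as Trivy can check an image against known vulnerability databases before it is admitted to the enterprise repository:

```shell
# Scan the image for known CVEs; a non-zero exit code on HIGH or CRITICAL
# findings makes this easy to wire into a CI gate
trivy image --exit-code 1 --severity HIGH,CRITICAL node:8.15-alpine
```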
This brings us to container build policy. Controlling container build events and applying testing and tools to ensure container security are good and useful practices. But the implementation of such practices needs to be part of an overall container build and testing security policy. The depth and breadth of such a policy will vary according to the company and its inherent risks. Some companies might allow the use of base images that are deemed safe and “official” by well-known repository hosts such as DockerHub. Other companies might be more stringent and require that all container images be built from scratch and stored in private repositories on-premises, under tight access control. It’s a matter of risk and impact. But, regardless of the degree of scrutiny exerted, the most important thing to understand is that ensuring security at the container image level matters, that an adequate container image security testing policy must be published, and that procedures to ensure compliance with that policy must be enforced.
Ignore the Risks at Your Own Peril
Back in May of 2018, the security publication CSO reported that, according to Doug Cahill at Enterprise Strategy Group, only 34% of those questioned said that they “need to verify that container images stored in container registries meet their organization’s security and compliance requirements.” That’s right, only 34%! Yet, in that same report, 74% said that they use or plan to use containers for new and some pre-existing applications. In other words, containers are coming online far faster than the attention being paid to securing them. The risks are apparent and they are significant.
As containers continue to proliferate across the IT landscape, adoption will take two paths -- there will be companies that have the wisdom to put container security at the forefront of adoption, and those that won’t. Those that won’t will be doing so at great peril. Those on the wise path will do well to make security testing of container images an important activity in the pursuit of comprehensive container security.