Clean Dockerfiles
Containerized applications are becoming more and more common in production systems of all sorts. They are significantly easier to deploy, provide better repeatability, and can offer some security benefits over other deployment options. One thing I’ve noticed from working with other developers is that many people don’t know how to create good container images. The biggest hurdle is that you need a basic understanding of what happens when an image is built and what the goals of containerizing are. Thankfully it’s not hard to get started and learn the important bits of creating an image.
As usual, I want to take a moment to talk about what we’re going to do and note any special considerations. First, I’m going to be using Docker as it’s the de facto containerization technology right now, and I’m going to assume you have it set up and know the basics of it. Secondly, I’m going to be working with a Go web application, but the principles apply to any language you might be working with, and Go knowledge is not required to follow along. The final note is that the application I’m making should not be taken seriously; it’s not meant to be anything other than a toy.
OK, with that out of the way, let’s take a look at the application we’re going to be containerizing. It’s a simple HTTP service with two endpoints. The first is a healthcheck endpoint that always returns 200 OK. The second endpoint accepts POST requests with a message and returns the same message back in the response (so it’s an echo service). The service uses dep for dependency management. So our project looks like this:
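Based on the files we’ll reference later (dep’s manifest and lock files, plus the vendor directory it manages), the layout looks roughly like this; treat it as a sketch rather than the exact tree:

```
echo-server/
├── Gopkg.lock
├── Gopkg.toml
├── main.go
└── vendor/
```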
And the main.go file looks like this:
package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
    "github.com/gin-gonic/gin/binding"
)

type EchoRequest struct {
    Message string `form:"message" binding:"required"`
}

func main() {
    router := gin.Default()

    router.GET("/healthcheck", func(c *gin.Context) {
        c.String(http.StatusOK, "OK")
    })

    router.POST("/echo", func(c *gin.Context) {
        var request EchoRequest
        binding := binding.Default(c.Request.Method, c.ContentType())
        if err := c.MustBindWith(&request, binding); err != nil {
            c.String(http.StatusBadRequest, "Missing message")
            return // stop here so we don't also write a 200 below
        }
        c.String(http.StatusOK, request.Message)
    })

    router.Run()
}
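Once the service is running (gin’s router.Run() listens on port 8080 by default), exercising both endpoints looks something like this sample session, assuming the server is up locally:

```
$ curl http://localhost:8080/healthcheck
OK
$ curl --data "message=hello" http://localhost:8080/echo
hello
```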
OK, simple enough. Now let’s say that we followed the basic Dockerfile reference documentation to put together a simple Dockerfile. That might look something like the following.
FROM golang:1.10.2-alpine
WORKDIR /go/src/bitbucket.org/jdurand/echo-server
COPY . .
RUN apk update
RUN apk add --no-cache ca-certificates git wget
RUN update-ca-certificates
RUN wget --quiet --output-document /usr/bin/dep https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64
RUN chmod 755 /usr/bin/dep
RUN rm -r /var/cache/apk/*
RUN dep ensure
RUN go build -o echo-server main.go
EXPOSE 8080
CMD [ "./echo-server" ]
So after a bit of waiting for the Docker image to build, we now have a working container! Well, that was easy, right? Wrong. This works, but there’s some low-hanging fruit we can pick to improve it.
One thing you might have noticed is that if you had downloaded the Go dependencies before building the image, it would have been stuck on Sending build context for a while, whatever that means, before actually running any of the commands in our Dockerfile. What’s that about? Well, if you look at the first line of the output from building our Docker image, you’ll see something like this:
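It will look something like the following (reconstructed from the size quoted below, so treat the exact formatting as approximate):

```
Sending build context to Docker daemon  7.7MB
```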
You’ll see that we sent 7.7 MB of content to the Docker daemon as context. This context is essentially the set of files the daemon will have access to when building the image; anything that is sent is accessible. As our project grows and we add more files and dependencies, this context is going to grow bigger and bigger. That might not seem like a huge deal, but if your Docker daemon is on a different computer, or you’re using Node as your application language (insert joke about node_modules being more massive than a black hole here), you’ll quickly see this slow you down. So what can we do about this?
Docker provides a really nice file called .dockerignore that you can use to exclude content from the build context. We want to make this as restrictive as we possibly can and only include what is absolutely needed to build our image. This is what I ended up putting together for my .dockerignore file.
/.editorconfig
/.env
/.git
/.gitignore
/.go-version
/*.md
/echo-server
/vendor
As you can see, I’m removing all the developer environment stuff like editor configs, environment settings, documentation, binaries, downloaded dependencies (vendor), and even the .git tree. After doing all that, we managed to bring our 7.7 MB build context down to 8.192 KB! That’s a huge saving, and we can’t realistically get it smaller than that. I’ve had projects that went from almost 1 GB to a few MB after doing this, so it is certainly worth doing.
OK, so surely we’re done now, right? Nope. If you take a look at the resulting image, you’ll notice that it takes up a whopping 476 MB for a super simple application. What gives? That’s overkill…
So one thing we can do is remove layers (no, not the buttery kind) from our image. What are layers, you ask? Each layer in an image represents a snapshot of the filesystem and environment at some point in time. Each command in a Dockerfile produces a new layer in the image, which adds to the overall size. We do want some layers, since a layer can be reused between images, but too many add a lot of unnecessary bloat that we want to avoid. Layers also act as caches during a build: Docker will try to reuse as many unchanged layers as it can. So we want to have layers, but no more than we really need.
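To see where the size is going, you can list an image’s layers and the size each one contributes with docker history (the echo-server name here is just whatever you tagged your build with):

```
$ docker history echo-server
```

Each row corresponds to a command in the Dockerfile, which makes it easy to spot the steps contributing the most bloat.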
We also want to make sure that we only trigger rebuilds of layers when changes relevant to that layer happen, to maximize caching. None of this having to redownload dependencies just because we changed an error message nonsense. Let’s try making one small change to our Dockerfile to increase layer caching: we’re going to move the COPY . . line to just before we actually use some of our code.
FROM golang:1.10.2-alpine
WORKDIR /go/src/bitbucket.org/jdurand/echo-server
RUN apk update
RUN apk add --no-cache ca-certificates git wget
RUN update-ca-certificates
RUN wget --quiet --output-document /usr/bin/dep https://github.com/golang/dep/releases/download/v0.4.1/dep-linux-amd64
RUN chmod 755 /usr/bin/dep
RUN rm -r /var/cache/apk/*
COPY . .
RUN dep ensure
RUN go build -o echo-server main.go
EXPOSE 8080
CMD [ "./echo-server" ]
There we go, now we won’t have to install any of our system dependencies every time some code changes. This is great since these dependencies rarely (if ever) change.
Another thing we can do is we can combine a bunch of those run commands into a single command to put a huge dent in the overall number of layers we make. Let’s give this a try by combining our dependency installation and code building steps each into a single layer.
FROM golang:1.10.2-alpine
WORKDIR /go/src/bitbucket.org/jdurand/echo-server
ARG DEP_VERSION=0.4.1
RUN set -ex \
    && apk update \
    && apk add --no-cache ca-certificates git wget \
    && update-ca-certificates \
    && wget --quiet --output-document /usr/bin/dep https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 \
    && chmod 755 /usr/bin/dep \
    && rm -r /var/cache/apk/*
COPY . .
RUN set -ex \
    && dep ensure \
    && go build -o echo-server main.go
EXPOSE 8080
CMD [ "./echo-server" ]
So most of that should make sense. We’re running commands in a single layer, and if any of them fail, the build fails. That all makes sense. But what’s that ARG DEP_VERSION=0.4.1 thing that appeared?
ARG is a special Dockerfile command that creates a build-time environment variable. This is great because we can try out a different version of dep without having to change the Dockerfile, and when we’re ready to bump the version, we just change the default value! So we use this to set which version of dep we want, and when we run the container, this value disappears and doesn’t get carried into the image like the ENV command would (this is a slight lie, but we’ll get into that later).
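Overriding the default at build time looks something like this (the version number is only an illustration; check dep’s releases page for the one you actually want):

```
$ docker build --build-arg DEP_VERSION=0.5.0 --tag echo-server .
```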
OK, so now we only fetch our system dependencies when they change, which shouldn’t happen often, and we’re down to just a few layers in our image. One thing that should be nagging at you now is the code dependencies: we need to redownload all our dependencies with dep any time our code changes. This is fine for this small application, but as it grows this step will take a lot longer.
We can fix this by splitting the build into two steps and only copying the files each one requires. The first step runs dep ensure and the second does the actual build. Simple enough.
FROM golang:1.10.2-alpine
WORKDIR /go/src/bitbucket.org/jdurand/echo-server
ARG DEP_VERSION=0.4.1
RUN set -ex \
    && apk update \
    && apk add --no-cache ca-certificates git wget \
    && update-ca-certificates \
    && wget --quiet --output-document /usr/bin/dep https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 \
    && chmod 755 /usr/bin/dep \
    && rm -r /var/cache/apk/*
COPY Gopkg.lock Gopkg.toml ./
RUN dep ensure -vendor-only
COPY . .
RUN go build -o echo-server main.go
EXPOSE 8080
CMD [ "./echo-server" ]
Great! So we only have to install dependencies when they actually change, that should save a ton of time!
Unfortunately this only saved a few megabytes on our overall image size and there’s no other obvious layers we can remove so what do we do now? This is where multi-stage Docker builds come to the rescue.
Docker added a new feature in 17.05 that lets you essentially combine multiple Dockerfiles into a single file, with the output of the last stage in the file being what you push. This means we can drop all the build tools required to build a Go binary from our image! Since the build chain is the biggest part of our image, that means huge savings. Let’s see what this would look like.
FROM golang:1.10.2-alpine
WORKDIR /go/src/bitbucket.org/jdurand/echo-server
ARG DEP_VERSION=0.4.1
RUN set -ex \
    && apk update \
    && apk add --no-cache ca-certificates git wget \
    && update-ca-certificates \
    && wget --quiet --output-document /usr/bin/dep https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 \
    && chmod 755 /usr/bin/dep \
    && rm -r /var/cache/apk/*
COPY Gopkg.lock Gopkg.toml ./
RUN dep ensure -vendor-only
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -installsuffix cgo -o echo-server main.go
FROM scratch
COPY --from=0 /go/src/bitbucket.org/jdurand/echo-server/echo-server .
EXPOSE 8080
ENTRYPOINT [ "./echo-server" ]
So the first part of that should look very familiar. We are still using the golang:1.10.2-alpine base image, and we’re still doing all the installation and build steps we were doing before. Things start getting interesting right after that, though. There’s now a second FROM line, which you may have thought wasn’t possible. When Docker sees a FROM command it starts a new image, which makes sense, but with multi-stage builds we can then access content from previous images. We see that happen on this line:
COPY --from=0 /go/src/bitbucket.org/jdurand/echo-server/echo-server .
The COPY command can take a --from=&lt;stage&gt; argument and copy from that image stage instead of from the build context. So in this case we’re copying the binary that we built in the first image into the current image. Now that is pretty cool.
The other thing worth noting about our second stage is that we’re basing the image on the scratch image. This is a very special image that all other images ultimately build on. What makes it special is that it contains no files and has a size of 0 bytes. Yep: nothing, nada, zilch. Our binary is the only file we add to the filesystem, so the final size of the image is going to be basically the size of the binary.
The thing I still don’t like about our image is that we reference stages by index. That seems like a code smell, since the ordering could change, and 0 doesn’t carry any context about what the stage was or did. Thankfully, we can name stages using the FROM &lt;image&gt;:&lt;tag&gt; AS &lt;name&gt; syntax. This comes in handy when we suddenly need to add a React frontend to our little service.
FROM golang:1.10.2-alpine AS go-builder
WORKDIR /go/src/bitbucket.org/jdurand/echo-server
ARG DEP_VERSION=0.4.1
RUN set -ex \
    && apk update \
    && apk add --no-cache ca-certificates git wget \
    && update-ca-certificates \
    && wget --quiet --output-document /usr/bin/dep https://github.com/golang/dep/releases/download/v${DEP_VERSION}/dep-linux-amd64 \
    && chmod 755 /usr/bin/dep \
    && rm -r /var/cache/apk/*
COPY Gopkg.lock Gopkg.toml ./
RUN dep ensure -vendor-only
COPY main.go .
RUN CGO_ENABLED=0 GOOS=linux go build -installsuffix cgo -o echo-server main.go
FROM node:9.11.1-alpine AS node-builder
WORKDIR /code
RUN set -ex \
    && apk update \
    && apk add --no-cache util-linux \
    && rm -r /var/cache/apk/*
COPY ./frontend/package.json ./frontend/yarn.lock ./
RUN yarn
COPY ./frontend/.babelrc ./frontend/index.html ./frontend/index.jsx ./
RUN yarn build
FROM scratch
COPY --from=go-builder /go/src/bitbucket.org/jdurand/echo-server/echo-server .
COPY --from=node-builder /code/dist ./static/
EXPOSE 8080
ENTRYPOINT [ "./echo-server" ]
And the beauty of multi-stage builds shows up again: we only saw a small bump in image size even though we used Node at one point in the build.
So that’s all pretty cool and it really helped with increasing the speed of rebuilds and resulted in a smaller image, but what else can we do?
One common problem in an enterprise setting is having a private package artifact repository; to access one of your private artifacts you need credentials of some kind. This is fine for local development, but how do we make it work with Docker? As we noted earlier, for every operation performed during the build, Docker takes a snapshot of the filesystem and saves it, and when we pull the image we also pull all the layers that make it up. So if we use credentials at any point in the image, they are kept around forever. That’s not something we want, since it means that anyone who gets access to one of our images gets access to our artifact repository. Unfortunately, we can’t use build arguments like we did previously, since even those are fingerprinted into the final image, which means anyone can read what arguments were given when it was created.
This is where a combination of build arguments and multi-stage builds comes into play. We can use build arguments to pass credentials into our image for build time only and hide them in a stage somewhere before the end of the Dockerfile. When we push the result of a multi-stage Docker image, we are only pushing the final stage of the image. So any arguments that were used in a previous stage wouldn’t be persisted in the final image. Let’s take a look at this in action with a Python application.
FROM python:3.6 as package_downloader
WORKDIR /packages
ARG PIP_CONF
RUN set -ex \
    && mkdir /root/.pip \
    && echo "${PIP_CONF}" > /root/.pip/pip.conf
COPY ./requirements.txt .
RUN pip download --requirement requirements.txt
FROM python:3.6
ENV DUMB_INIT_VERSION=1.2.0
WORKDIR /code
# dumb-init is used to give proper signal handling to the app inside Docker
RUN set -ex \
    && wget -nv -O /usr/local/bin/dumb-init https://github.com/Yelp/dumb-init/releases/download/v${DUMB_INIT_VERSION}/dumb-init_${DUMB_INIT_VERSION}_amd64 \
    && chmod +x /usr/local/bin/dumb-init
COPY --from=package_downloader /packages /python_packages
COPY ./requirements.txt .
RUN set -ex \
    && pip install --requirement requirements.txt --find-links /python_packages \
    && rm --recursive /python_packages
COPY . /code
EXPOSE 8080
CMD ["/usr/local/bin/dumb-init", "./docker-entrypoint.sh"]
Notice that in the first stage we copy in our requirements file and then download all the wheels, eggs, and tars to a directory. This is how we acquire any artifact that might require special credentials to download. We then copy that directory full of artifacts into the final image, and when we run pip install there, we tell pip to use our directory instead of PyPI. Cool, we can install private artifacts now!
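Getting the credentials in at build time then looks something like this (assuming your pip config with the private index URL and credentials lives at ~/.pip/pip.conf; the image tag is just an example):

```
$ docker build --build-arg PIP_CONF="$(cat ~/.pip/pip.conf)" --tag my-app .
```

Since PIP_CONF is only referenced in the package_downloader stage, the value is never fingerprinted into the final stage that actually gets pushed.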
This is just the basics of building a good Docker image, but the best way to learn the rest is by doing, so get out there and write some Dockerfiles!