Running Jenkins in Docker on AWS or a Mac

I’ve been working on a Jenkins image that will run in AWS. At the risk of going full Inception in the opening paragraph, I want the Jenkins container running in Docker to be able to run Docker commands. Running Docker inside Docker is a possibility, but the author of the feature recommends alternatives for this use case. So I want the running Jenkins container to be able to access Docker running on the host machine. That in itself isn’t much of a problem, but it should also be testable on a MacBook (or another platform) without a lot of headache.

Turns out this isn’t quite as straightforward as mounting /var/run/docker.sock as a volume for the container to use (note: it might actually be that straightforward on a Linux machine). Because permissions are a thing, and Docker for Mac is ‘Just Different’ when it comes to the permissions model. This is a good time to remember that the primary target is AWS.

I experimented with a bunch of different ways to accomplish this. Using different user and group structures to try to align the Jenkins container with the host didn’t work because there is so much variability across platforms in group names and identifiers. Unfortunately, every resource I could find recommended one of two solutions. The first was giving the jenkins user sudo access; I’ve seen too many badly implemented Jenkins jobs to feel like that’s a good idea. The other suggestion was to create a docker group on the Mac and change the ownership of /var/run/docker.sock to that group so that it matches the container. Better than sudo, but not a solution that will persist across restarts.
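For reference, that second workaround looks something like this (a sketch only; dseditgroup is the macOS way to create a group, and the change is undone whenever Docker for Mac recreates the socket):

```shell
# Create a 'docker' group on the Mac so the socket's group matches
# the one inside the container, then hand the socket to that group.
sudo dseditgroup -o create docker
sudo chgrp docker /var/run/docker.sock
sudo chmod g+rw /var/run/docker.sock
# Works until Docker for Mac restarts and recreates the socket.
```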

At this point I decided that since AWS was the target platform, the baseline image needed to run there; running it elsewhere would just mean a rebuild with different build arguments. So the Dockerfile has defaults for version info (AWS usually doesn’t run the latest version of Docker) and for the docker group ID (to match the host).

FROM jenkins/jenkins:alpine

USER root

# Used for the docker group ID. Default is 497 (used by AWS Linux ECS Instances)
ARG DOCKER_GID=497

# Used for Docker and Docker Compose versions that are compatible with AWS Linux ECS.
ARG DOCKER_ENGINE=17.03.2-ce
ARG DOCKER_COMPOSE=1.17.0
ARG DOCKER_URL=https://download.docker.com/linux/static/stable/x86_64

RUN addgroup -g ${DOCKER_GID:-497} -S docker

RUN apk update && \
    apk --no-cache add sudo \
        git \
        curl \
        libffi-dev \
        linux-headers \
        python  \
        python-dev \
        py-setuptools \
        gcc \
        make \
        musl-dev \
        openssl-dev \
        py-pip && \
    curl -L -o /tmp/docker-${DOCKER_ENGINE:-17.03.2-ce}.tgz \
        ${DOCKER_URL}/docker-${DOCKER_ENGINE:-17.03.2-ce}.tgz && \
    tar -xz -C /tmp -f /tmp/docker-${DOCKER_ENGINE:-17.03.2-ce}.tgz && \
    mv /tmp/docker/docker /usr/bin && \
    pip install --upgrade pip && \
    pip install docker-compose==${DOCKER_COMPOSE:-1.17.0} && \
    pip install virtualenv ansible awscli boto boto3 && \
    addgroup jenkins docker && \
    addgroup jenkins users && \
    rm -rf /var/cache/apk/* \
        /usr/share/man \
        /tmp/*

USER jenkins

ENV JENKINS_USER admin
ENV JENKINS_PASS admin
ENV JAVA_OPTS -Djenkins.install.runSetupWizard=false

COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt

COPY default-user.groovy /usr/share/jenkins/ref/init.groovy.d/

To solve the permissions issues I was having on my Mac, I decided to sidestep the problem and have the Jenkins container reach the Docker daemon on the laptop over TCP instead. This is accomplished with a docker-compose file that runs the Jenkins container alongside socat in a separate container. The socat process reaches /var/run/docker.sock on the host by way of a mounted volume and exposes it to Jenkins as a bidirectional TCP relay. All without giving Jenkins more permissions than it needs.

version: '2'

volumes:
    jenkins_home:
        external: true

# The build-args have default values that will be used if unspecified. 
services:
    jenkins:
        build:
            context: .
            args:
                DOCKER_GID: ${DOCKER_GID}
                DOCKER_ENGINE: ${DOCKER_ENGINE}
                DOCKER_COMPOSE: ${DOCKER_COMPOSE}
        volumes:
            - jenkins_home:/var/jenkins_home
        environment:
            - DOCKER_HOST=tcp://socat:2375
        links:
            - socat
        ports:
            - "8080:8080"

    socat:
        image: bpack/socat
        command: TCP4-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock
        volumes:
            - /var/run/docker.sock:/var/run/docker.sock
        expose:
            - "2375"

Now the container can run on AWS and access the EC2 instance’s Docker daemon by way of mounting /var/run/docker.sock, and it works as intended because the permissions align. I can test the same container on my Mac, without building a separate image, by setting the DOCKER_HOST environment variable in the docker-compose file, since that ultimately gets passed through to Jenkins. Having a working Jenkins instance locally is now a matter of running:

docker-compose up -d jenkins

And it can build and run Docker images through the host’s Docker daemon, which was the goal all along. The code for all of this is available at the links below.