- I am using Docker agents that mount /var/run/docker.sock (i.e. Docker outside of Docker), and the docker commands inside those agents fail with connection issues. Example of a failed build:
```
$ docker build -t test-image:latest .
Sending build context to Docker daemon  2.048kB
[...]
Step 4/5 : RUN apk --update --no-cache add openjdk8-jre=$JRE_VERSION curl
 ---> Running in a394ce75098c
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz: temporary error (try again later)
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
WARNING: Ignoring http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz: temporary error (try again later)
ERROR: unsatisfiable constraints:
  curl (missing):
    required by: world[curl]
The command '/bin/sh -c apk --update --no-cache add openjdk8-jre=$JRE_VERSION curl' returned a non-zero code: 7
```
The docker bridge network is disabled by default in the AWS EKS AMI since release v20190211. If you do not specify any network when creating a Docker container, the container has no network interface other than loopback:
```
$ ifconfig
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
```
For that reason, containers started using Docker outside of Docker - that is to say, mounting /var/run/docker.sock from a Jenkins Kubernetes agent, for example - will not be able to communicate with the outside world.
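A quick way to confirm the symptom on an affected node is to compare a container on the default network with one attached to the host network. This is a diagnostic sketch, not part of the fix; the alpine:3.8 image and the 8.8.8.8 address are arbitrary choices - any image and any external address will do:

```shell
# On an affected node, a container on the default network has only loopback:
docker run --rm alpine:3.8 ifconfig            # shows only "lo"
docker run --rm alpine:3.8 ping -c 1 8.8.8.8   # fails: network unreachable

# The same container on the host network can reach the outside world,
# which confirms that only the docker bridge (docker0) is missing:
docker run --rm --network host alpine:3.8 ping -c 1 8.8.8.8
```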
There are different solutions to that problem. Either of the following would work:
- Use Docker in Docker (DinD) instead of Docker outside of Docker (i.e. run a Docker daemon inside the agent container instead of mounting /var/run/docker.sock)
- Enable the docker bridge in EKS nodes
A solution is to use Docker in Docker (DinD) instead of Docker outside of Docker (DooD). Since Docker in Docker runs its own Docker daemon inside the container, it is not impacted by this issue.
This approach is also best suited for a Kubernetes environment. Kubernetes does not know anything about the containers started with Docker outside of Docker, which has consequences (see A Case for Docker-in-Docker on Kubernetes - Part 2):
- they may open ports on the host
- they may use names / configurations that conflict with Kubernetes pod containers
- graph storage and logs are not automatically cleaned by Kubernetes
To understand how to configure a “DinD” agent, have a look at How to build my own docker images in CloudBees Core on Modern Cloud Platforms.
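As a rough sketch, a Jenkins agent pod template for DinD could look like the following. The image tags, the sidecar layout, and the disabled TLS are illustrative assumptions for brevity, not a definitive setup; note that the DinD container must run privileged:

```yaml
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: dind
    image: docker:dind           # runs its own dockerd, so the missing
    securityContext:             # docker bridge on the host is irrelevant
      privileged: true           # DinD requires a privileged container
    env:
    - name: DOCKER_TLS_CERTDIR   # disable TLS for simplicity (sketch only)
      value: ""
  - name: jnlp
    image: jenkins/inbound-agent
    env:
    - name: DOCKER_HOST          # point docker CLI calls at the dind sidecar
      value: tcp://localhost:2375
```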
(This will require restarting the Kubernetes worker nodes)
The --enable-docker-bridge flag has been available in the bootstrap script of the EKS AMI since version v20190220. The solution is to add the argument
--enable-docker-bridge true to the EC2 instance User Data, on the line that executes the bootstrap script.
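In practice the User Data change amounts to appending the flag to the existing bootstrap call; a sketch, where the cluster name is a placeholder and any other flags already present should be kept:

```shell
#!/bin/bash
set -o xtrace
# Append --enable-docker-bridge true to the existing bootstrap line
# (requires EKS AMI v20190220 or later; the node must be restarted):
/etc/eks/bootstrap.sh --enable-docker-bridge true my-eks-cluster
```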
- For CloudFormation, there is a BootstrapArguments parameter in the worker node stack template
- For Terraform, see https://github.com/terraform-aws-modules/terraform-aws-eks/blob/master/docs/enable-docker-bridge-network.md
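For CloudFormation, the flag goes into the BootstrapArguments parameter of the worker node stack; a minimal fragment, assuming the standard amazon-eks-nodegroup template layout with all other parameters omitted:

```yaml
Parameters:
  BootstrapArguments:
    Type: String
    Default: "--enable-docker-bridge true"
    Description: Arguments passed to the EKS bootstrap script
```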
Note: You may do this manually in AWS by stopping a worker EC2 instance, editing its User Data (Instance Settings > View/Change User Data) and starting it again. This can be used to test that the flag solves the problem.