Docker outside of Docker no longer works in EKS


  • I am using Docker agents that mount /var/run/docker.sock (i.e. Docker outside of Docker), and the docker commands inside those agents fail with network connection issues. Example of a failed docker build:
+ docker build -t test-image:latest .
Sending build context to Docker daemon  2.048kB
Step 4/5 : RUN apk --update --no-cache add   openjdk8-jre=$JRE_VERSION   curl
 ---> Running in a394ce75098c
WARNING: Ignoring temporary error (try again later)
WARNING: Ignoring temporary error (try again later)
ERROR: unsatisfiable constraints:
  curl (missing):
    required by: world[curl]
The command '/bin/sh -c apk --update --no-cache add   openjdk8-jre=$JRE_VERSION   curl' returned a non-zero code: 7


Related Issue


The docker bridge network is disabled by default in the AWS EKS AMI since release v20190211. If you do not specify any network when creating a Docker container, the container has no network interface other than loopback:

$ ifconfig
lo        Link encap:Local Loopback  
          inet addr:  Mask:
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

For that reason, containers started using Docker outside of Docker (that is, by mounting /var/run/docker.sock from a Jenkins Kubernetes agent, for example) will not be able to communicate with the outside world.
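The symptom can be reproduced with a quick check from the node or an agent with access to the Docker socket (a minimal sketch; the alpine tag is only an example):

```shell
# Start a throwaway container on the default (disabled) bridge network
# and list its interfaces:
docker run --rm alpine:3 ip addr

# Only the loopback interface ("lo") is listed, so any outbound traffic,
# such as apk fetching packages during a build, fails inside the container.
```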


There are different solutions to this problem. Either of the following works:

  • Use Docker in Docker instead of Docker outside of Docker (i.e. mounting /var/run/docker.sock)
  • Enable the docker bridge in EKS nodes

Use Docker in Docker instead of Docker outside of Docker

A solution is to use Docker in Docker (DinD) instead of Docker outside of Docker (DooD). Since Docker in Docker runs its own Docker daemon inside the container, it is not impacted by this issue.

This approach is also best suited for a Kubernetes environment. Kubernetes does not know anything about the containers started with Docker outside of Docker and that has consequences (see A Case for Docker-in-Docker on Kubernetes - Part 2):

  • they may open ports on the host
  • they may use names or configurations that conflict with Kubernetes pod containers
  • their graph storage and logs are not automatically cleaned up by Kubernetes

To understand how to configure a "DinD" agent, have a look at How to build my own docker images in CloudBees Core on Modern Cloud Platforms.
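As a minimal illustration of the DinD approach (the linked article covers the full agent configuration; image names and the port below are the defaults of the official docker images, and disabling TLS is done here only to keep the example short):

```shell
# Start a Docker daemon inside a container. DinD requires --privileged.
# Setting DOCKER_TLS_CERTDIR to empty disables TLS so the daemon
# listens on plain TCP port 2375 (do not do this in production).
docker run -d --name dind --privileged \
  -e DOCKER_TLS_CERTDIR= \
  docker:dind

# Run the build from a client container that talks to that inner daemon
# via DOCKER_HOST instead of the host's /var/run/docker.sock:
docker run --rm --link dind:docker \
  -e DOCKER_HOST=tcp://docker:2375 \
  -v "$PWD":/workspace -w /workspace \
  docker:latest docker build -t test-image:latest .
```

In a Kubernetes pod template, the same idea translates to a privileged docker:dind sidecar container, with the build container pointing DOCKER_HOST at it.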

Enable the docker bridge in EKS nodes

(This requires restarting the Kubernetes worker nodes)

The flag --enable-docker-bridge was added to the bootstrap script of the EKS AMI in version v20190220. The solution is to add the argument --enable-docker-bridge true to the EC2 instance User Data, on the line that executes the bootstrap script.
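A User Data fragment with the flag added could look like the following (the cluster name is a placeholder, and real User Data generated by eksctl or CloudFormation typically passes additional arguments that must be kept as-is):

```shell
#!/bin/bash
# EKS worker node User Data: run the AMI's bootstrap script with the
# docker bridge re-enabled. "my-eks-cluster" is a placeholder.
/etc/eks/bootstrap.sh --enable-docker-bridge true my-eks-cluster
```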

Note: You can do this manually in AWS by stopping a worker EC2 instance, editing its User Data (Instance Settings > View/Change User Data), and starting it again. This is a quick way to test that the change solves the problem.
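Once the node is back, the fix can be verified with the same check that demonstrated the symptom (a sketch, assuming access to the Docker socket on the node):

```shell
# With the docker bridge enabled, a container on the default network
# should now have an eth0 interface in addition to loopback:
docker run --rm alpine:3 ip addr
```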

