Maybe I’m doing it wrong but I needed to get the following scenario working: run some Groovy tests in a Docker container, where the tests themselves use docker-compose to spin up another container to run integration tests against.
Why? Because docker-all-the-things.
Docker within docker
The first part I knew how to do: install the docker and docker-compose binaries into the first/outer container (the one running the tests), then mount the docker socket from the host machine so that docker commands run inside the container actually talk to the docker daemon running on the host:
    -v /var/run/docker.sock:/var/run/docker.sock
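For reference, a minimal sketch of both halves. The base image, the binary versions, and the download URLs are my assumptions here; pin whatever matches the docker daemon on your host. First, baking the client binaries into the test image:

    FROM openjdk:8-jdk
    # Static docker client binary; only the CLI is needed, the daemon stays on the host.
    RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-17.09.0-ce.tgz \
        | tar xzf - --strip-components=1 -C /usr/local/bin docker/docker
    # docker-compose ships as a single binary too.
    RUN curl -fsSL https://github.com/docker/compose/releases/download/1.17.1/docker-compose-Linux-x86_64 \
          -o /usr/local/bin/docker-compose \
     && chmod +x /usr/local/bin/docker-compose

Then running the tests with the host’s socket mounted (the image name and test command are placeholders):

    docker run --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        my-test-image ./gradlew integrationTest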
So far so good.
The problems arose when the tests in the first/outer container tried to run health checks against the docker-compose containers.
Docker networking 101
Turns out that the default networking config isn’t sufficient: the default bridge network has no DNS-based service discovery, so containers on it can only reach each other by IP address. The solution is to create a user-defined docker network and then use it both when running the test container and in the docker-compose config.
    docker network create new-network
    docker run ... --network new-network
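Putting the pieces together, the outer test container gets both the socket mount and the shared network (my-test-image is a placeholder again):

    docker run --rm \
        --network new-network \
        -v /var/run/docker.sock:/var/run/docker.sock \
        my-test-image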
NB – the docker-compose config must define the network but mark it as externally managed.
    version: '3'

    services:
      elastic:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.0.0
        networks:
          - new-network
        ports:
          - "9200:9200"

    networks:
      new-network:
        external: true
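Because the network is marked external, docker-compose won’t create it (and will refuse to start if it’s missing), and it leaves the network alone on teardown. So the ordering ends up as:

    docker network create new-network   # must already exist when compose starts
    docker-compose up -d                # starts elastic on new-network
    # ... run the tests ...
    docker-compose down                 # removes the containers, keeps the external network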
Then both the test container and the containers created by docker-compose are sitting on the same network. They don’t need to publish ports unless the host itself needs to speak to them; however, they do need to stop using 127.0.0.1 or localhost to talk to each other and use the compose service names instead (or, I think, you can assign explicit IP addresses).
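With everything on the same network, a health check from the test container addresses Elasticsearch by its service name rather than localhost. A minimal shell sketch of the idea (the retry budget is arbitrary, and in my case the real checks lived in the Groovy tests):

    # Poll the 'elastic' service by its compose service name, not 127.0.0.1.
    for i in $(seq 1 30); do
        curl -fs http://elastic:9200/_cluster/health && break
        sleep 2
    done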