sebastiandaschner blog


Unorthodox docker connection without links

wednesday, december 24, 2014

When linking two docker containers via --link, the containers are more or less bound to each other for their lifetime: whenever the linked container is re-built, both have to be restarted.
This is not a perfect solution when creating a web proxy or load balancer container, as the proxy would have to be restarted all the time.

An easy solution is not to --link the containers but to connect them via their IP addresses.
But a container's address can change each time it is run…​

A better solution is to bind all needed ports to the default docker0 network bridge and to access all other containers via this IP address and the specific port.
The default address docker chooses for this interface is 172.17.42.1. If you are unsure about the address, check it via ip addr show docker0.
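If you want to pick up that address in a script rather than hard-code it, you can parse it out of the ip output. A minimal sketch (the sample line below mimics what ip addr show docker0 prints; the actual address and prefix length may differ on your host):

```shell
# One line of typical `ip addr show docker0` output (sample, may vary per host)
sample='inet 172.17.42.1/16 scope global docker0'

# Extract just the IPv4 address by stripping the /prefix from the second field
echo "$sample" | awk '{sub(/\/.*/, "", $2); print $2}'
# prints 172.17.42.1
```

On a real host you would pipe ip addr show docker0 itself through the awk filter (matching only the inet line) instead of the sample string.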

So you can start a container which exposes ports like docker run --rm --name server -p 172.17.42.1:8000:8080 [image].
Another container then accesses the data via 172.17.42.1:8000.
This docker0 interface is accessible from all containers and from localhost.

Here is an example for a docker landscape with a proxy, a web application and a database container:

$> docker run --rm --name db -p 172.17.42.1:4000:3306 [db-image]
$> docker run --rm --name webapp -p 172.17.42.1:8000:8080 [webapp-image]
$> docker run --rm --name proxy -p 80:80 -p 443:443 [proxy-image]

webapp connects to db via 172.17.42.1:4000
proxy connects to webapp via 172.17.42.1:8000
proxy is accessible from outside: 0.0.0.0:80 or :443
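Instead of resolving a link alias, the webapp is simply configured with the bridge address and port. A minimal sketch, assuming a MySQL-backed Java webapp (the database name appdb is hypothetical and not from the setup above):

```shell
# Bridge address and published db port from the landscape above
BRIDGE_IP=172.17.42.1
DB_PORT=4000

# Hypothetical JDBC URL the webapp would be configured with
# instead of a --link alias like "db"
echo "jdbc:mysql://${BRIDGE_IP}:${DB_PORT}/appdb"
# prints jdbc:mysql://172.17.42.1:4000/appdb
```

The proxy is configured analogously, pointing its upstream at 172.17.42.1:8000 rather than at a linked container name.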

Now all containers can be re-built and replaced dynamically without restarting the other containers. This solution can be helpful for a "dynamic" environment that runs on only one host, where containers are renewed frequently and a productive, simple solution is needed (without introducing further software like Consul or etcd).
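Replacing a single container then looks like this; the proxy keeps running throughout, since it targets the stable bridge address instead of a link. A sketch of the workflow (image and container names taken from the landscape above):

```shell
# Rebuild the webapp image from its Dockerfile
docker build -t webapp-image .

# Stop the old container (--rm removes it automatically on stop)
docker stop webapp

# Start the replacement on the same bridge address and port;
# the proxy still reaches it at 172.17.42.1:8000, no restart needed
docker run --rm -d --name webapp -p 172.17.42.1:8000:8080 webapp-image
```

There is a short window between stop and run in which the webapp is unreachable; the proxy should be configured to retry or return an error page during that gap.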
