Docker Compose For Beginners: Working With Multiple Containers
Some time ago, I wrote this article: “Docker For Beginners”. The logical next step, after you’ve taken your first steps with Docker and created your first containers, is to learn Docker Compose, a very powerful tool to use with Docker to make multiple containers work together.
What is Docker Compose
Let’s start with the basics: what is Docker Compose?
To create and run containers, you use the docker run command. You can provide many configuration options, like --volume, --name, --publish, and more.
Basically, Docker Compose is a tool to run docker commands from configuration files. Let’s consider the following command:
docker run -v ./data:/data -p 5000:5000 --name my-container my-image
This command creates a container with a bind mount, a published port, and a name. You can achieve the same thing with Docker Compose using the following configuration file:
version: "3.0"
services:
  my-app:
    image: my-image
    container_name: my-container
    volumes:
      - ./data:/data
    ports:
      - 5000:5000
This configuration file is usually named docker-compose.yml (recent versions of Compose also recognize compose.yaml).
Now, from the directory where the docker-compose.yml file is located, you can create and start your container with the following command:
docker compose up
You’ve accomplished exactly the same thing as with the previous docker run command.
The advantage this time is that your configuration, which is normally passed as command-line arguments, is stored in a file, so you don’t have to retype it completely every time you want to create your container.
Installing Docker Compose
Normally, if your Docker installation is recent, you already have Docker Compose: it now ships as a plugin of the Docker CLI. Previously, Compose was a separate tool (the docker-compose binary) that you had to install in addition to Docker.
Try running docker compose version; if it works, you already have Docker Compose.
[estebanthilliez@arch-laptop ~]$ docker compose version
Docker Compose version 2.23.3
If it doesn’t work, you probably have an old Docker version, so I recommend upgrading Docker.
You’ll find everything you need to know to upgrade Docker in the Docker docs; for example, here is the documentation for Ubuntu: https://docs.docker.com/engine/install/ubuntu/#uninstall-old-versions
Your First Stack
Let’s start by creating a Dockerfile for a simple Python application.
FROM python:3.11
WORKDIR /app
COPY main.py .
CMD ["python", "main.py"]
As a reminder, here’s what this Dockerfile does:
- Start from a basic Python image
- Set /app as the working directory in the container
- Copy our script from the host machine to the container
- Supply the command to be executed by the container on startup
Of course, you need to create the main.py script before building your image. For example, here’s a script that doesn’t do much, but will allow us to play around with Docker Compose:
import os

if __name__ == '__main__':
    print('Hello from Docker!')

    my_env_var = os.environ.get('EXAMPLE_VAR')
    if my_env_var:
        print('The value of EXAMPLE_VAR is: ' + my_env_var)

    data_dir = '/data'
    if not os.path.exists(data_dir):
        os.makedirs(data_dir)

    with open(os.path.join(data_dir, 'example.txt'), 'w') as f:
        f.write('Hello from Docker!\n')
        if my_env_var:
            f.write('The value of EXAMPLE_VAR is: ' + my_env_var + '\n')
We can build our image now:
docker build . -t my-app
And now, let’s see how to create our docker-compose.yml.
version: "3.8"
services:
  my-app:
    image: my-app
    container_name: my-app
Very simple: we specify our image and, optionally, a name for our container.
A few words about the first line: it specifies the version of the Compose file schema. In practice, recent versions of Docker Compose ignore this field, so you can omit it altogether; if you do specify it, 3.8 is the most recent of the 3.x schemas, and you only need an older one, like 2.4, for legacy setups.
Let’s try our container:
(.venv) [estebanthilliez@arch-laptop medium]$ docker compose up
[+] Running 2/2
✔ Network medium_default Created 0.0s
✔ Container my-app Created 0.1s
Attaching to my-app
my-app | Hello from Docker!
my-app exited with code 0
It works! Now we can add configuration options just as we would on the command line. Let’s start with an environment variable.
version: "3.8"
services:
  my-app:
    image: my-app:latest
    container_name: my-app
    environment:
      EXAMPLE_VAR: "example"
(.venv) [estebanthilliez@arch-laptop medium]$ docker compose up
[+] Running 1/0
✔ Container my-app Recreated 0.1s
Attaching to my-app
my-app | Hello from Docker!
my-app | The value of EXAMPLE_VAR is: example
my-app exited with code 0
It also works. We could also have stored the variable in a .env file and passed that file to our configuration:
...
services:
  my-app:
    ...
    env_file:
      - .env
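For reference, a .env file is just a plain list of KEY=VALUE pairs, one per line. For this example, it might contain just:

```
EXAMPLE_VAR=example
```

Compose reads this file and injects each variable into the container’s environment, exactly as the environment key did.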
Now, let’s continue with volumes:
version: "3.8"
services:
  my-app:
    image: my-app:latest
    container_name: my-app
    environment:
      EXAMPLE_VAR: "example"
    volumes:
      - ./data:/data
(.venv) [estebanthilliez@arch-laptop medium]$ docker compose up
[+] Running 1/1
✔ Container my-app Recreated 0.1s
Attaching to my-app
my-app | Hello from Docker!
my-app | The value of EXAMPLE_VAR is: example
my-app exited with code 0
You can check that a data directory has been created in your current directory, containing a file written by the Python app:
(.venv) [estebanthilliez@arch-laptop medium]$ cat data/example.txt
Hello from Docker!
The value of EXAMPLE_VAR is: example
Make Several Containers Work With Each Other
Docker Compose is very practical on its own, sparing you from retyping docker run and all its arguments every time. But its real strength lies in how easily it can manage multiple containers and make them communicate with each other.
Let’s say you’re developing a web application that requires a database. If you want to containerize your application, you need a way for it to reach the database from inside its container. To do this, you can use a shared volume or a shared network, and the easiest way to get both is to configure the two services in the same docker-compose.yml. Let’s try with an example.
Here’s a simple Flask Python application with two endpoints: /ping, which returns pong, and /count, which reads the contents of a file and returns it.
# api.py
import os

from flask import Flask

app = Flask(__name__)

@app.route('/ping')
def hello():
    return 'pong'

@app.route('/count')
def count():
    with open(os.getenv('COUNT_FILE', 'count.txt'), 'r') as f:
        count = f.read()
    return count

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
Here’s another application, which sends requests to the above application every second:
# client.py
import os
import time

import requests

if __name__ == "__main__":
    api_url = os.getenv("API_URL", "http://localhost:5000")
    ping_endpoint = api_url + "/ping"
    count_endpoint = api_url + "/count"

    successfull_requests = 0
    while True:
        print("Sending request to " + ping_endpoint)
        try:
            response = requests.get(ping_endpoint)
            print(response.text)
            with open(os.getenv("COUNT_FILE", "count.txt"), "w") as f:
                f.write(str(successfull_requests))
            successfull_requests += 1
            response_successfull = requests.get(count_endpoint)
            print(f"Successfull requests: {response_successfull.text}")
        except Exception as e:
            print(e)
        time.sleep(1)
We can now containerize our two applications with two Dockerfiles:
# api.Dockerfile
FROM python:3.11
WORKDIR /app
RUN pip install flask
COPY api.py .
EXPOSE 5000
CMD ["python", "api.py"]
# client.Dockerfile
FROM python:3.11
WORKDIR /app
RUN pip install requests
COPY client.py .
CMD ["python", "-u", "client.py"]
Now let’s create an .env file for our environment variables:
API_URL=http://api:5000
COUNT_FILE=/data/count.txt
And finally, our docker-compose.yml:
version: '3'
services:
  api:
    build:
      context: .
      dockerfile: api.Dockerfile
    env_file:
      - .env
    volumes:
      - data:/data
  client:
    build:
      context: .
      dockerfile: client.Dockerfile
    depends_on:
      - api
    env_file:
      - .env
    volumes:
      - data:/data
volumes:
  data:
There are several things to note. First of all, it’s possible to put our two applications in the same Compose file: all you have to do is add an entry under the services key. You can define as many services as you like in the same file.
Next, we specify directly in the Compose file how to build our images. This lets us build all of them at once with the docker compose build command (you can also pass --build to docker compose up to rebuild images before starting).
(.venv) [estebanthilliez@arch-laptop medium]$ docker compose build
[+] Building 0.3s (15/15) FINISHED docker:default
=> [api internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [api internal] load build definition from api.Dockerfile 0.0s
=> => transferring dockerfile: 149B 0.0s
=> [client internal] load metadata for docker.io/library/python:3.11 0.0s
=> [client 1/4] FROM docker.io/library/python:3.11 0.0s
=> [api internal] load build context 0.0s
=> => transferring context: 353B 0.0s
=> CACHED [client 2/4] WORKDIR /app 0.0s
=> CACHED [api 3/4] RUN pip install flask 0.0s
=> CACHED [api 4/4] COPY api.py . 0.0s
=> [api] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:1585adb8c6952b48bec3f9f5f8c9db0bc69a87609c5e2092fcf8f4c367d9c729 0.0s
=> => naming to docker.io/library/medium-api 0.0s
=> [client internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [client internal] load build definition from client.Dockerfile 0.0s
=> => transferring dockerfile: 154B 0.0s
=> [client internal] load build context 0.0s
=> => transferring context: 810B 0.0s
=> CACHED [client 3/4] RUN pip install requests 0.0s
=> CACHED [client 4/4] COPY client.py . 0.0s
=> [client] exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:16833e4c831de4aa72b0cc79dcf64f9a7ee1b5f146d7183630ebf10bb01283b7 0.0s
=> => naming to docker.io/library/medium-client
Then we notice that we’re using the same volume for both our containers, and that we declare it at the end of the Compose file:
...
  api:
    volumes:
      - data:/data
...
  client:
    volumes:
      - data:/data
volumes:
  data:
This allows us to share the data volume between our containers. We could have done the same thing with a bind mount.
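As a sketch, the bind-mount variant would map the same host directory into both containers; the top-level volumes: key is then no longer needed:

```yaml
services:
  api:
    volumes:
      - ./data:/data
  client:
    volumes:
      - ./data:/data
```

The named volume is managed by Docker and survives independently of your project directory, whereas the bind mount keeps the files directly visible on the host, as in the earlier example.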
Finally, there’s the depends_on key: it lets you specify dependencies between services so that they start in the right order. Here, our client application starts once the api container has started. Note that depends_on only waits for the container to start, not for the application inside it to be ready to accept requests.
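If you need the client to wait until the api actually responds, Compose also supports a long form of depends_on combined with a healthcheck. Here’s a sketch; the healthcheck command assumes curl is available in the api image (which should be the case for the Debian-based python:3.11 image):

```yaml
services:
  api:
    build:
      context: .
      dockerfile: api.Dockerfile
    healthcheck:
      # Mark the service healthy once /ping answers successfully
      test: ["CMD", "curl", "-f", "http://localhost:5000/ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  client:
    depends_on:
      api:
        condition: service_healthy
```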
You can then run the docker compose up command.
(.venv) [estebanthilliez@arch-laptop medium]$ docker compose up
[+] Running 3/2
✔ Network medium_default Created 0.0s
✔ Container medium-api-1 Created 0.1s
✔ Container medium-client-1 Created 0.1s
Attaching to api-1, client-1
api-1 | * Serving Flask app 'api'
api-1 | * Debug mode: off
api-1 | WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
api-1 | * Running on all addresses (0.0.0.0)
api-1 | * Running on http://127.0.0.1:5000
api-1 | * Running on http://172.28.0.2:5000
api-1 | Press CTRL+C to quit
client-1 | Sending request to http://api:5000/ping
api-1 | 172.28.0.3 - - [15/Jan/2024 20:00:01] "GET /ping HTTP/1.1" 200 -
client-1 | pong
api-1 | 172.28.0.3 - - [15/Jan/2024 20:00:01] "GET /count HTTP/1.1" 200 -
client-1 | Successfull requests: 0
client-1 | Sending request to http://api:5000/ping
api-1 | 172.28.0.3 - - [15/Jan/2024 20:00:02] "GET /ping HTTP/1.1" 200 -
client-1 | pong
api-1 | 172.28.0.3 - - [15/Jan/2024 20:00:02] "GET /count HTTP/1.1" 200 -
client-1 | Successfull requests: 1
client-1 | Sending request to http://api:5000/ping
api-1 | 172.28.0.3 - - [15/Jan/2024 20:00:03] "GET /ping HTTP/1.1" 200 -
As you can see, the client is able to communicate with the api. These logs show two things:
- By default, Docker Compose creates a common network for all containers in the stack, allowing them to communicate with each other.
- The data volume is the same for both containers
If, on the other hand, I try to access http://api:5000/ping or http://localhost:5000/ping from my computer, it won’t work: even though the containers communicate with each other, they remain in an isolated environment and aren’t reachable from the outside unless you explicitly publish ports.
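To make the api reachable from the host, you would publish its port in the Compose file, just as the very first example did:

```yaml
services:
  api:
    ports:
      - 5000:5000
```

With this, http://localhost:5000/ping works from the host machine, while http://api:5000/ping still only resolves inside the Compose network.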
Bonus
Finally, a few handy bonus commands:
- docker compose up -d : starts containers in detached mode, which is very handy for letting them run in the background
- docker compose logs : displays logs for the entire container stack
- docker compose config : handy for checking that your configuration is correctly interpreted by Docker Compose
- docker compose down : stops the stack and removes its containers and network
Final Note
I hope you now know how to use Docker Compose. It’s a very practical tool which I hope you’ll find useful. Next, we’ll be looking at how to use Portainer and how to use registries, which are the next steps in learning Docker.
Thanks for reading!
Here are some links that may interest you:
- 💻 All my tech stories
- ❓ Know more about me and my articles!
- 🔔 Become an email subscriber!
- 🤝 Support me by subscribing with my referral link: