
May 9, 2025 - 21:44
Dockerize multiple containers with Compose

Use case - what it solves

If you're familiar with Docker containers so far (if not, read my previous two articles on this), you'll know that when there's more than one container and they have to interact, there are some things you have to repeat for each one, like:

  • pulling the image & creating a container
  • allocating volumes, passing ENV args, etc. to each container
  • having all containers run in the same network

Now if there are 3, 4, 5, or more, it gets more hectic. See where this is going? Docker Compose solves that by "composing" up your containers' config in a file format called YAML (it's similar to JSON: a data format, usually used for configs).

You still have things like a Dockerfile, and the concepts of containers are the same; it's just the process that's simpler.
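To make the "YAML is similar to JSON" point concrete, here's a tiny made-up fragment of a compose-style config with its JSON equivalent shown in a comment (just an illustration, not a complete file):

```yaml
# This YAML maps 1:1 to the JSON:
#   {"services": {"api": {"ports": ["3000:3000"]}}}
services:
  api:
    ports:
      - "3000:3000"
```

Indentation in YAML plays the role that braces and brackets play in JSON.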

Let's compose it up

A quick example of 3 containers spun up using Docker Compose.

Here's what we want to set up:

docker-compose system

  • A Node.js server that writes random key-value pairs to a Redis DB
  • Then we check the changes from Redis Insight.

Let's start by writing the code for the Node.js server:

Initialize an npm package:

npm init -y
npm install express redis

Code for node.js server:

const express = require("express");
const redis = require("redis");

const app = express();

// REDIS_URL is injected by docker-compose (e.g. redis://redis:6379)
const client = redis.createClient({ url: process.env.REDIS_URL });
client.connect().catch(console.error);

app.get("/", async (req, res) => {
  // generate a random key-value pair and store it in Redis
  const randomKey = `key_${Math.floor(Math.random() * 1000)}`;
  const randomValue = `value_${Math.floor(Math.random() * 1000)}`;

  await client.set(randomKey, randomValue);

  res.send(`stored random K-V pair: ${randomKey} = ${randomValue}`);
});

app.listen(3000, () => console.log("Server running on port 3000"));
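The key-value generation can be pulled out into a small helper, which makes the format easy to sanity-check outside the server. A quick sketch (`makeRandomPair` is a name I'm introducing here, it's not part of the server code above):

```javascript
// Hypothetical helper mirroring the server's key/value generation:
// keys look like "key_<0-999>", values like "value_<0-999>".
function makeRandomPair() {
  const key = `key_${Math.floor(Math.random() * 1000)}`;
  const value = `value_${Math.floor(Math.random() * 1000)}`;
  return { key, value };
}

const { key, value } = makeRandomPair();
console.log(key, value); // e.g. "key_42 value_871"
```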

Then write a Dockerfile to turn this into a container:

FROM node:18
WORKDIR /app
# copy the package manifests first, so the npm install layer
# is cached and only re-runs when dependencies change
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "server.js"]
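Since `COPY . .` pulls in the whole build context, a `.dockerignore` file keeps your local `node_modules` (and other clutter) out of the image. A minimal sketch:

```
# .dockerignore
node_modules
npm-debug.log
.git
```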

Now finally, make a "docker-compose.yaml" file:

version: "3.8"  # optional; newer Compose versions ignore this field

services:
  api:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - redis
    environment:
      REDIS_URL: redis://redis:6379

  redis:
    image: redis:latest
    ports:
      - "6379:6379"

  redisinsight:
    image: redis/redisinsight:latest
    ports:
      - "5540:5540"
    restart: unless-stopped

Read through this and you'll notice that you still have the same concepts, such as a Dockerfile, ports, and environment.

And what about the network between these containers? We had to set that up manually before, but docker-compose puts all of these services on the same default network, so you don't have to deal with it.
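Under the hood, that default is roughly equivalent to declaring a shared network yourself. A sketch of what the explicit version would look like (the network name `mynet` is made up):

```yaml
services:
  api:
    networks:
      - mynet
  redis:
    networks:
      - mynet

networks:
  mynet:
    driver: bridge
```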

The way one container refers to another is still the same: the service name acts as the hostname. For example, Redis Insight will refer to the redis container as redis:6379, and the api container uses redis://redis:6379.

Then, run this with the command:

docker compose up -d  # -d runs in the background (detached mode)

Now let's try testing this. First, make a call to http://localhost:3000 (the Node.js server):

curl http://localhost:3000       # via curl
wget -qO- http://localhost:3000  # via wget (print the response to stdout)

Or just use your browser.

Then check in Redis Insight:

Make a connection to the Redis instance (host redis, port 6379):

redis-insight add redis db

See the random key-value pairs being added:

redis insight view

For more reference on the docker-compose YAML config, check the Compose file reference in the Docker docs.

Try making more containers with Docker Compose to get some hands-on experience. You can try building:

  • A CRUD app: a Node.js + React app with some DB (Postgres, MySQL, Redis)
  • A DB monitor: a Postgres DB with postgres_exporter for Prometheus, plus Prometheus & Grafana. So PG -> exporter -> prometheus -> grafana (and some server to make writes to PG)
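As a starting point for the first exercise, a rough compose skeleton might look like this. The service names, ports, image tag, and password are all placeholders, so adjust them to your project layout:

```yaml
services:
  frontend:
    build: ./frontend          # React app
    ports:
      - "5173:5173"

  backend:
    build: ./backend           # Node.js API
    ports:
      - "3000:3000"
    depends_on:
      - db
    environment:
      DATABASE_URL: postgres://postgres:example@db:5432/app

  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```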

Anyways, that's about it.
Maybe I'll cover more things with docker, with some examples, or talk about docker swarm in the next one!