Docker, a tool for uniting developers

Widyanto H Nugroho
Feb 28, 2022

There is a story of a software developer building software with his team. When he runs it locally, it runs smoothly. But when his teammates try to run the app, or when it is deployed to dev, it is broken.

That is a common pain when developing software with multiple people. There are several possible reasons, such as dependencies that don't match a teammate's environment because of a different OS, or missing configuration on the development server. How do we make sure the machine running locally is the same as the one running on the server?

Docker

Docker is one of the tools built on the idea of isolated resources: a set of tools that lets an application be packaged with all of its dependencies installed and run wherever you want. Docker defines containers as follows:

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another.

I can already hear thoughts like "this is the same thing as virtual machines", but there are some differences:

  • Docker containers share the host's system resources; they don't have separate, dedicated hardware-level resources that would make them behave like completely independent machines.
  • They don’t need to have a full-blown OS inside.
  • They allow running multiple workloads on the same OS, which allows efficient use of resources.
  • Since they mostly include only application-level dependencies, they are pretty lightweight and efficient. On a machine where you can run 2 VMs, you can run tens of Docker containers without any trouble, which means fewer resources = less cost = less maintenance = happy people.

You can download the Docker Engine for your OS here.

Dockerfile and Docker Image

Docker images are the building blocks of Docker: a Docker image is a read-only template with instructions for creating a Docker container. Images are the artifact you build most often in the Docker life cycle.

Docker images are defined in special text files called Dockerfiles, and you need to define all the build steps explicitly inside the Dockerfile. Here is an example from one of my images, based on python:3.8-slim with some Python requirements installed; this is an actual image that I use for a development configuration. You can get the whole repository here.

FROM python:3.8-slim

RUN apt-get update
RUN apt-get install -y libpq-dev gcc

# Install requirements
COPY ./requirements.txt .
RUN pip install -U -r requirements.txt

RUN mkdir -p /app
COPY . /app
WORKDIR /app
RUN chmod +x run.sh

ENTRYPOINT ["bash", "/app/run.sh"]

FROM sets the base image you will build on; in the example above, I use python:3.8-slim as my base image because I am building a Python application. The RUN instruction runs any command you specify, and there are other predefined instructions such as COPY and ENV. The final instruction is ENTRYPOINT, the command Docker executes when it runs a container from the image. Some people prefer to use CMD while others use ENTRYPOINT; you can find out the differences in this StackOverflow thread.
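As a rough illustration of that difference (my-image is a hypothetical image name, not part of this project): any arguments you pass after docker run replace CMD entirely, while with ENTRYPOINT they are appended to the entrypoint command.

# If the image defines CMD ["python", "app.py"], the extra arguments
# below replace that command completely:
docker run my-image echo hello

# If the image defines ENTRYPOINT ["bash", "/app/run.sh"], the same
# arguments are appended after the entrypoint instead of replacing it:
docker run my-image --some-flag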

After you have written the Dockerfile, you can build the Docker image using the docker build command. Here is how I build the image for my own project.
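As a minimal sketch of that command (the paytungan-backend tag matches the image name that shows up later in docker images; your tag will differ):

# Build an image from the Dockerfile in the current directory and tag it
docker build -t paytungan-backend .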

As you can see, Docker executes the steps in the order we declared them in the Dockerfile. What you may not know is that if you change the Dockerfile, Docker will not execute every step again when you rebuild the image. It reuses the layers of the latest build and starts from the line where the Dockerfile changed.

So, I will change my Dockerfile and add RUN python manage.py collectstatic --noinput before RUN chmod +x run.sh, so it looks like this:

FROM python:3.8-slim

RUN apt-get update
RUN apt-get install -y libpq-dev gcc

# Install requirements
COPY ./requirements.txt .
RUN pip install -U -r requirements.txt

RUN mkdir -p /app
COPY . /app
WORKDIR /app
RUN python manage.py collectstatic --noinput
RUN chmod +x run.sh

ENTRYPOINT ["bash", "/app/run.sh"]

If we run docker build again, we can see that Docker uses the cache for the unchanged steps and only rebuilds from the line we modified.

Docker will use cache if the steps aren’t changed.
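Rebuilding uses the same command as before; if you ever want Docker to ignore the cache and re-run every step, docker build has a flag for that (shown here with my tag as an example):

# Rebuild; unchanged steps are taken from the layer cache
docker build -t paytungan-backend .

# Force a full rebuild, ignoring all cached layers
docker build --no-cache -t paytungan-backend .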

After the build succeeds, you can check your images using the docker images command. As you can see in the image below, I tagged the image paytungan-backend, and it shows up as a repository with that name.
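If you only want to see this particular image, docker images also accepts a repository name as a filter (again assuming the paytungan-backend tag):

# List every local image
docker images

# Show only the paytungan-backend repository
docker images paytungan-backend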

In this Dockerfile, you can put whatever commands you need to build your application. There are lots of Docker images already built; the one I introduced here is the Python image, version 3.8-slim. If you want to build a Go or Node application, you can use a base image for that framework instead.

Container

After you build your image, you can run it as a container using the docker run command: docker run <image name> creates and starts a container from the image.

Then you can check whether your app is running using docker ps -a, the command that lists all containers.
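Here is a minimal sketch of those two commands, assuming the paytungan-backend image built earlier and the 8888 → 8080 port mapping used later in the Compose file:

# Create and start a container in the background, publishing
# container port 8080 on host port 8888
docker run -d --name paytungan-backend -p 8888:8080 paytungan-backend

# List all containers, including stopped ones
docker ps -a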

What is the difference between an image and a container? In short, an image is just a collection of instructions that the Docker daemon uses to create a container, while a container is a running instance of that image.

Docker Compose

After learning about the Dockerfile, Docker images, and Docker containers, you must be thinking:

What if I would like to run my application with a database? Should I install and run it inside the Dockerfile, or should I run a separate database container with Docker?

Well, you can do both. But it will not be clean if you run more than one container using just a Dockerfile and several docker commands: you would need to build and run everything with multiple commands executed by hand.

What if I told you there is an amazing tool that can make your life as a software developer simpler? This tool is called docker-compose. While you can run everything as individual Docker containers, it quickly becomes cumbersome to manage the containers along with the application itself, and Docker has a very good solution for this: Docker Compose.

According to Docker:

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. Then, with a single command, you create and start all the services from your configuration.

As explained, Docker Compose allows you to define your full application including its dependencies with a single file, which makes the development process incredibly easy. By using Compose, you get:

  • Single command to start your entire application: docker-compose up.
  • Simple DNS resolution using container names: meaning you can call the service-a container from the service-b container by using http://service-a.
  • Mount volumes from your local filesystem to the container with a single line in the YAML definition.
  • Only restart the containers that have changes, which means faster boot times in case of restarts.

Here is an example docker-compose.yml file that runs a Postgres database alongside a Python application:

version: "3.9"

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_PASSWORD=changeme
      - DB_HOST=postgres
    ports:
      - "8888:8080"
    volumes:
      - .:/app
    networks:
      - db
      - api
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - ./postgres:/data/postgres
    ports:
      - "5432:5432"
    restart: unless-stopped
    networks:
      - db

networks:
  db:
    driver: bridge
  api:
    driver: bridge

Let’s walk through this file line by line as well:

  • It starts with defining the Docker Compose version, which defines which features can be used.
  • Then it goes into defining the services one by one.
  • The web block defines the first container of the application.
  • The build step tells Compose to build the current path with docker build, which implies there is a Dockerfile in the current path that should be used for the image.
  • environment allows you to set the environment variables for your container. In this example, it sets four environment variables: DB_NAME , DB_USER , DB_PASSWORD , and DB_HOST .
  • ports define the mapping between the host and the container ports in the <HOST_PORT>:<CONTAINER_PORT> fashion. In this example, localhost:8888 will be mapped to port 8080 of the container.
  • volumes define the filesystem mapping between the host and the container. In this example, the current path is going to be mounted to the /app path of the container, which allows you to replace the code in the image with your local filesystem, including your live changes.
  • Then the postgres service is defined; it uses the official postgres image and is configured in the same way as the web service.
  • restart: unless-stopped means the container will always be restarted (for example after the host reboots) unless it has been stopped manually.
  • The last part is networks. Networks are like isolated connections that a container is attached to. As you can see, my postgres service has a dedicated network called db, which can be reached by any container attached to that network, and my web service is also on the db network, so it can reach the postgres container. If you think about it, this is like a security group in AWS or a subnetwork in GCP; a quick way to verify the connectivity is sketched right after this list.
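As a quick, optional check of that name-based resolution (assuming the services above are running; getent ships with the Debian-based python:3.8-slim image, and the Python one-liner works as a fallback):

# Resolve the "postgres" service name from inside the running "web" container
docker-compose exec web getent hosts postgres

# The same check with Python, which the web image certainly has
docker-compose exec web python -c "import socket; print(socket.gethostbyname('postgres'))"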

As you can tell, the service definition here is pretty intuitive and it allows managing all the dependencies easily through this single file. Thanks to Docker Compose, local development with Docker containers becomes incredibly easy, and once you have included this file in your project root, then all your teammates can just run docker-compose up to have the whole application up and running, without needing to install anything other than Docker itself.
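A few Compose commands we end up running day to day; these are standard docker-compose options, nothing specific to this project:

# Build (if needed) and start every service in the foreground
docker-compose up

# Start in the background and force a rebuild of changed images
docker-compose up -d --build

# Follow the logs of a single service
docker-compose logs -f web

# Stop and remove the containers and the default networks
docker-compose down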

Deployment and Testing

For deployment and testing, we use Docker in our staging environment. My project is hosted on GitLab, so we use GitLab CI to run our CI/CD pipeline via a .gitlab-ci.yml file. The example below is one of the stages in my GitLab CI configuration that deploys to the staging server on GCP.

stages:
  - sonarqube
  - deploy

sonar-scanner:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  stage: sonarqube
  dependencies:
    - test
  script:
    - cat tests/coverage.xml
    - sonar-scanner
      -Dsonar.host.url=https://pmpl.cs.ui.ac.id/sonarqube
      -Dsonar.branch.name=$CI_COMMIT_REF_NAME
      -Dsonar.login=$SONARQUBE_TOKEN
  allow_failure: true

deploy-staging:
  stage: deploy
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service account
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
  only:
    - staging

As you can see, before I deploy to the staging server, there is a pipeline job that runs sonar-scanner. If you have already read my previous article about Git, there is a section there where I talked about the Git workflow.

In my example .gitlab-ci.yml above, you can see that I use Sonar to check the quality of the code changes before they can be deployed to staging. You can read more about Sonar here.

If you look closely at my deploy-staging job, I use the docker:dind service, and in the cloudbuild.yaml used to deploy to GCP there is a step to build the Docker image first:

steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/paytungan-backend', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/paytungan-backend']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'paytungan-backend', '--image', 'gcr.io/$PROJECT_ID/paytungan-backend', '--region', 'asia-southeast1']

options:
  logging: CLOUD_LOGGING_ONLY

As you can see, there is a command to build the Docker image; since my project has a Dockerfile, Cloud Build uses it to build the image.
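If you want to trigger the same build by hand instead of through the pipeline, the commands from the deploy-staging job work locally too (assuming gcloud is installed and authenticated for the project):

# Point gcloud at the right project
gcloud config set project $GCP_PROJECT_ID

# Submit the build described in cloudbuild.yaml to Cloud Build
gcloud builds submit . --config=cloudbuild.yaml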

Git workflow.

If the jobs are successful, the pipeline in Gitlab will look like the image above.

Benefits of Using Docker in Group Projects

Docker's convenience really shows when moving an application from local development to the development servers.

Easy Application Packaging

I'm currently developing an application for my Software Engineering course. This app has staging and production environments. If we didn't use Docker, it would be hard to make sure the app runs as smoothly in production as it does in staging.

Guarantee Application Quality

During development, we test intensively on localhost. When the program is deployed to the server, environmental differences worry us, so sometimes we have to retest on the server. This can be minimized with unit tests, but note that unit tests only run in the GitLab pipeline, not on the deployment server, which uses a different environment.

With Docker, the quality of our app is the same everywhere: on localhost, in the GitLab pipeline, and on the deployment server. That lets us spend our time writing better software instead of worrying about server deployments.

Finale

Developing software is not an easy task when it comes to deployment. Software engineers need to ensure that their application runs smoothly on every engineer's machine and on the deployment servers. Docker comes to the rescue by letting you:

  • Develop and run the application inside an isolated environment (container) that matches your final deployment environment.
  • Put your application inside a single file (image) along with all its dependencies and necessary deployment configurations.
  • And share that image through a central server (a registry) that is accessible to anyone with proper authorization.
