Docker, a tool for uniting developers


  • Docker containers share the host’s system resources; they don’t have separate, dedicated hardware-level resources that would let them behave like completely independent machines.
  • They don’t need to have a full-blown OS inside.
  • They allow running multiple workloads on the same OS, which allows efficient use of resources.
  • Since they mostly include only application-level dependencies, they are pretty lightweight and efficient. On a machine where you can run 2 VMs, you can run tens of Docker containers without any trouble, which means fewer resources = less cost = less maintenance = happy people.

Dockerfile and Docker Image

FROM python:3.8-slim
RUN apt-get update
RUN apt-get install -y libpq-dev gcc

# Install requirements
COPY ./requirements.txt .
RUN pip install -U -r requirements.txt

RUN mkdir -p /app
COPY . /app
RUN python manage.py collectstatic --noinput
RUN chmod +x
ENTRYPOINT ["bash","/app/"]
Docker will reuse its layer cache for any steps that haven’t changed, so rebuilds are fast.
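You can see the layer cache in action by building the image twice in a row; a minimal sketch, assuming the Dockerfile above sits in the current directory (the tag myapp is just a placeholder name):

```shell
# First build: every step runs from scratch
docker build -t myapp .

# Second build with no files changed: each unchanged step
# is served from the layer cache instead of re-running
docker build -t myapp .

# Run the container, mapping a host port to the container port
docker run -p 8888:8080 myapp
```

Editing requirements.txt invalidates the cache from the COPY step onward, which is why copying requirements and installing them before copying the rest of the code keeps rebuilds cheap.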


Docker Compose

  • Single command to start your entire application: docker-compose up.
  • Simple DNS resolution using container names: meaning you can call the service-a container from service-b container by using http://service-a.
  • Mount volumes from your local filesystem to the container with a single line in the YAML definition.
  • Only restart the containers that have changed, which means faster boot times in case of restarts.
version: "3.9"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_PASSWORD=changeme
      - DB_HOST=postgres
    ports:
      - "8888:8080"
    volumes:
      - .:/app
    networks:
      - db
      - api
  postgres:
    image: postgres
    environment:
      PGDATA: /data/postgres
    volumes:
      - ./postgres:/data/postgres
    ports:
      - "5432:5432"
    restart: unless-stopped
    networks:
      - db
networks:
  db:
    driver: bridge
  api:
    driver: bridge
  • It starts with defining the Docker Compose version, which determines which features can be used.
  • Then it goes into defining the services one by one.
  • The web line defines the first container of the application.
  • The build step tells Compose to build the image from the current path with docker build, which implies that there is a Dockerfile in the current path and that it should be used for the image.
  • environment allows you to set the environment variables for your container. In this example, it sets four environment variables: DB_NAME, DB_USER, DB_PASSWORD, and DB_HOST.
  • ports defines the mapping between the host and the container ports in the <HOST_PORT>:<CONTAINER_PORT> fashion. In this example, localhost:8888 will be mapped to port 8080 of this container.
  • volumes define the filesystem mapping between the host and the container. In this example, the current path is going to be mounted to the /app path of the container, which allows you to replace the code in the image with your local filesystem, including your live changes.
  • The next block defines the postgres service, which uses the postgres image; its configuration follows the same pattern as the web service.
  • restart: unless-stopped means the service will always be restarted (for example, after a server reboot) until it is manually stopped.
  • The last part is networks. Networks are like isolated connections that a container is open to. The postgres service has a dedicated network called db that can be accessed by any container attached to that network. Since the web service is also on the db network, it can reach the postgres connection. If you think about it, it is like a security group in AWS or a subnetwork in GCP.
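The environment variables injected by Compose are typically read back inside the application. A minimal sketch of Django-style database settings consuming them — the settings shape is an assumption about the app, and the defaults simply mirror the YAML above:

```python
import os

# Database settings assembled from the variables that docker-compose
# injects into the web container. The defaults mirror the Compose file;
# the Django-style layout here is an illustrative assumption.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DB_NAME", "postgres"),
        "USER": os.environ.get("DB_USER", "postgres"),
        "PASSWORD": os.environ.get("DB_PASSWORD", "changeme"),
        # "postgres" resolves via Compose DNS to the postgres container
        "HOST": os.environ.get("DB_HOST", "postgres"),
        "PORT": os.environ.get("DB_PORT", "5432"),
    }
}
```

Because the hostname is the service name, the same settings work unchanged whether the database runs in Compose locally or behind a real DNS name in production.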

Deployment and Testing

stages:
  - sonarqube
  - deploy

sonarqube:
  stage: sonarqube
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  dependencies:
    - test
  script:
    - cat tests/coverage.xml
    - sonar-scanner $CI_COMMIT_REF_NAME
  allow_failure: true

deploy:
  stage: deploy
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service accounts
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
  only:
    - staging
steps:
  # build the container image
  - name: ''
    args: ['build', '-t', '$PROJECT_ID/paytungan-backend', '.']
  # push the container image
  - name: ''
    args: ['push', '$PROJECT_ID/paytungan-backend']
  # deploy to Cloud Run
  - name: ''
    args: ['run', 'deploy', 'paytungan-backend', '--image', '$PROJECT_ID/paytungan-backend', '--region', 'asia-southeast1']

Git workflow.

Benefits of Using Docker in Group Projects

Easy for Application Packaging

Guarantee Application Quality


  • Develop and run the application inside an isolated environment (container) that matches your final deployment environment.
  • Put your application inside a single file (image) along with all its dependencies and necessary deployment configurations.
  • Share that image through a central server (a registry) that is accessible to anyone with proper authorization.
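Sharing an image through a registry boils down to tagging and pushing it; a sketch assuming Docker Hub, with myuser and myapp as placeholder names:

```shell
# Tag the locally built image for a registry account
docker tag myapp myuser/myapp:1.0

# Authenticate and push; teammates with access can now pull it
docker login
docker push myuser/myapp:1.0

# On any other machine with access:
docker pull myuser/myapp:1.0
docker run myuser/myapp:1.0
```

Pinning a version in the tag (here 1.0) means the whole team runs the exact same artifact rather than whatever latest happens to be.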



