Docker, a tool for uniting developers

Docker

  • Docker containers share the host's system resources; they don't have separate, dedicated hardware-level resources that would let them behave like completely independent machines.
  • They don't need a full-blown OS inside, because they share the host's kernel (see the quick check after this list).
  • They allow running multiple workloads on the same OS, which makes for efficient use of resources.
  • Since they mostly include application-level dependencies, they are pretty lightweight and efficient. On a machine that can run 2 VMs, you can run tens of Docker containers without any trouble, which means fewer resources = less cost = less maintenance = happy people.
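A quick way to see the kernel sharing mentioned above: containers report the host's kernel version, because they never boot a kernel of their own (the alpine and ubuntu images here are just convenient examples):

uname -r                          # kernel release on the host
docker run --rm alpine uname -r   # same kernel, different userland
docker run --rm ubuntu uname -r   # same kernel again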

Dockerfile and Docker Image

FROM python:3.8-slim
RUN apt-get update
RUN apt-get install -y libpq-dev gcc
# Install requirements
COPY ./requirements.txt .
RUN pip install -U -r requirements.txt
RUN mkdir -p /app
COPY . /app
WORKDIR /app
RUN chmod +x run.sh
ENTRYPOINT ["bash", "/app/run.sh"]
Here is the same Dockerfile with one extra step, collectstatic, added near the end:

FROM python:3.8-slim
RUN apt-get update
RUN apt-get install -y libpq-dev gcc
# Install requirements
COPY ./requirements.txt .
RUN pip install -U -r requirements.txt
RUN mkdir -p /app
COPY . /app
WORKDIR /app
RUN python manage.py collectstatic --noinput
RUN chmod +x run.sh
ENTRYPOINT ["bash", "/app/run.sh"]
Docker will reuse cached layers for the steps that haven't changed, so only the new collectstatic step and the steps after it are rebuilt.
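You can watch the cache at work from the command line; the image tag below is illustrative:

docker build -t myapp:latest .   # first build: every layer is executed
# Edit only application code, then rebuild: the apt-get and pip install
# layers come from cache, and only COPY . /app and later steps rerun.
docker build -t myapp:latest .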

Container
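A container is a running instance of an image. As a minimal sketch (the image tag and port mapping are assumptions, chosen to match the Compose file below, not taken from the original setup), you can start one with:

docker run --rm -p 8888:8080 myapp:latest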

Docker Compose

  • Single command to start your entire application: docker-compose up.
  • Simple DNS resolution using container names: you can call the service-a container from the service-b container by using http://service-a.
  • Mount volumes from your local filesystem to the container with a single line in the YAML definition.
  • Only restart the containers that have changes, which means faster boot times in case of restarts.
version: "3.9"
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - DB_NAME=postgres
      - DB_USER=postgres
      - DB_PASSWORD=changeme
      - DB_HOST=postgres
    ports:
      - "8888:8080"
    volumes:
      - .:/app
    networks:
      - db
      - api
  postgres:
    image: postgres
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-changeme}
      PGDATA: /data/postgres
    volumes:
      - ./postgres:/data/postgres
    ports:
      - "5432:5432"
    restart: unless-stopped
    networks:
      - db
networks:
  db:
    driver: bridge
  api:
    driver: bridge
  • It starts by defining the Docker Compose version, which determines which features can be used.
  • Then it goes into defining the services one by one.
  • The web line defines the first container of the application.
  • The build step tells Compose to build the current path with docker build, which implies that there is a Dockerfile in the current path that should be used for the image.
  • environment allows you to set the environment variables for your container. In this example, it sets four environment variables: DB_NAME, DB_USER, DB_PASSWORD, and DB_HOST.
  • ports defines the mapping between the host and the container ports in <HOST_PORT>:<CONTAINER_PORT> fashion. In this example, localhost:8888 will be mapped to port 8080 of this container.
  • volumes defines the filesystem mapping between the host and the container. In this example, the current path is going to be mounted to the /app path of the container, which lets you replace the code in the image with your local filesystem, including your live changes.
  • The next block defines the postgres service, which uses the postgres image and is configured in the same way as the web service.
  • restart is set to unless-stopped, which means the service will always be restarted (for example after a server reboot) until it is manually stopped.
  • The last key is networks. Networks are like isolated connections that a container is attached to. Here postgres has a dedicated network called db that can be reached by any container attached to that network. Since the web service is also attached to db, it is able to reach postgres (a quick check is sketched after this list). If you think about it, it is like a security group in AWS or a subnetwork in GCP.
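To see the name-based resolution and the shared db network in practice, here is a hypothetical one-off check from inside the web container (the stack must be running first):

docker-compose up -d
# The postgres service resolves by its Compose service name:
docker-compose exec web python -c "import socket; print(socket.gethostbyname('postgres'))"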

Deployment and Testing

The GitLab CI pipeline (.gitlab-ci.yml) runs a SonarQube scan and then deploys to staging:

stages:
  - sonarqube
  - deploy

sonar-scanner:
  image:
    name: sonarsource/sonar-scanner-cli:latest
    entrypoint: [""]
  stage: sonarqube
  dependencies:
    - test
  script:
    - cat tests/coverage.xml
    - sonar-scanner
      -Dsonar.host.url=https://pmpl.cs.ui.ac.id/sonarqube
      -Dsonar.branch.name=$CI_COMMIT_REF_NAME
      -Dsonar.login=$SONARQUBE_TOKEN
  allow_failure: true

deploy-staging:
  stage: deploy
  image: google/cloud-sdk
  services:
    - docker:dind
  script:
    - echo $GCP_SERVICE_KEY > gcloud-service-key.json # Google Cloud service account
    - gcloud auth activate-service-account --key-file gcloud-service-key.json
    - gcloud config set project $GCP_PROJECT_ID
    - gcloud builds submit . --config=cloudbuild.yaml
  only:
    - staging
The cloudbuild.yaml referenced by the deploy job builds the image, pushes it, and deploys it to Cloud Run:

steps:
  # build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/paytungan-backend', '.']
  # push the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/paytungan-backend']
  # deploy to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    args: ['run', 'deploy', 'paytungan-backend', '--image', 'gcr.io/$PROJECT_ID/paytungan-backend', '--region', 'asia-southeast1']

options:
  logging: CLOUD_LOGGING_ONLY
Git workflow.
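Since the deploy-staging job is restricted to the staging branch (only: - staging), a push to that branch is what kicks off a deployment. A typical flow might look like this (the feature branch name and merge sequence are illustrative):

git checkout staging
git merge feature/my-change   # bring the reviewed work into staging
git push origin staging       # this push triggers the deploy-staging job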

Benefits of Using Docker in Group Projects

Easy Application Packaging

Guaranteed Application Quality

Finale

  • Develop and run the application inside an isolated environment (container) that matches your final deployment environment.
  • Put your application inside a single file (image) along with all its dependencies and necessary deployment configurations.
  • And share that image through a central server (registry) that is accessible by anyone with proper authorization, as the closing sketch below shows.
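As a closing sketch (the registry host and image name are hypothetical), the sharing step boils down to a build, a push, and a pull on any other authorized machine:

docker build -t registry.example.com/team/app:1.0 .
docker push registry.example.com/team/app:1.0
docker pull registry.example.com/team/app:1.0   # on a teammate's machine or a server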
