Docker For Security Researchers

Part x: A Comprehensive Guide to Docker 🐳

Zyad Elsayed
11 min read · Jul 12, 2024

In the name of Allah, the Most Gracious, the Most Merciful

Table Of Contents

· About Docker
· Docker Capabilities
· Some Key Concepts
· VMs vs Containers
  ∘ How an OS is made up
  ∘ Virtual machine architecture
  ∘ Container architecture
· Docker Architecture
  ∘ Client-Server Architecture
· Docker CLI commands
  ∘ Image Management
  ∘ Container Management
  ∘ Volume Management
  ∘ Network Management
  ∘ System Management
  ∘ Docker Compose
  ∘ Docker Init
· Docker Hub
· Resources

About Docker

Docker is an open-source platform that automates the deployment, scaling, and management of applications using containerization. Containers package an application and its dependencies into a standardized unit for software development, ensuring consistency across multiple environments and isolation from the host and from other applications.

Docker Capabilities

Docker is essential in modern software development due to its key benefits. It ensures portability by allowing applications in containers to run consistently across various environments, resolving the “it works on my machine” issue. This consistency enhances collaboration between development and operations teams.

Additionally, Docker improves scalability by enabling easy distribution of containers across clusters, allowing rapid application scaling.

A critical benefit of Docker is isolation. Containers encapsulate applications and their dependencies, providing process and filesystem isolation. This allows different applications to run various versions of the same library without conflicts, which is particularly useful for languages like Python and Node where multiple projects may need different library versions.

Moreover, Docker’s cross-platform compatibility is a significant advantage. A Docker image can run on any operating system that supports Docker, including Windows, macOS, and various Linux distributions. This flexibility ensures that applications and testing environments are consistent and portable across different development and production systems.

Furthermore, Docker containers are highly efficient as they share the host system’s kernel and resources, making them lighter and more resource-efficient than traditional virtual machines (VMs). This efficiency translates to faster startup times and reduced overhead, allowing more applications to run on the same hardware.

Some Key Concepts to Know Before Starting

Dockerfile: A text file with instructions on how to build a Docker image. It contains a series of commands that specify the environment inside the container.

Images: Read-only templates used to create Docker containers. Images contain the application, runtime environment, third-party libraries, dependencies, and all the environmental variables needed to run the application. They are built from Dockerfiles and stored in image registries, such as Docker Hub, for sharing.

Docker images serve as executable artifacts, ensuring consistency and reliability across different environments.

Containers: Lightweight, standalone, executable packages that encapsulate everything needed to run a piece of software: the code, runtime, system tools, libraries, and settings. Containers are runtime instances of Docker images and share the host OS kernel.
Multiple containers can be instantiated from a single Docker image.

In a Dockerfile, we define instructions for running an application. These instructions are used to build an image, which is easily distributable. The image, once built, can be run to create a Docker container.
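As a minimal sketch, assuming a hypothetical Node.js app with a package.json and a server.js listening on port 3000, a Dockerfile might look like this:

FROM node:20-alpine          # base image providing the Node.js runtime
WORKDIR /app                 # working directory inside the image
COPY package*.json ./        # copy dependency manifests first for better layer caching
RUN npm install              # install dependencies into the image
COPY . .                     # copy the application source code
EXPOSE 3000                  # document the port the app listens on
CMD ["node", "server.js"]    # default command when a container starts

Building this file with docker build -t node_app . produces an image, and docker run -p 3000:3000 node_app starts a container from it.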

  • Docker Compose: A tool for defining and running multi-container Docker applications using a YAML file (docker-compose.yml).
  • Docker Hub: A cloud-based repository where Docker users can create, test, store, and distribute container images, much as GitHub does for source code.

VMs vs Containers

Virtual Machines and containers are both technologies used to isolate and run applications, but they have significant differences in their architecture, performance, and use cases.

First, how an OS is made up

An operating system is generally composed of three layers: the hardware layer, which includes the processor, memory, and devices; the application layer, which hosts user applications such as Chrome and Microsoft tools; and the kernel layer, which serves as the interface between the hardware and the applications. The kernel manages resource allocation among applications.

The core difference between containers and virtual machines lies in their architectural layers. Virtual machines are composed of a complete operating system stack, including the kernel and applications. In contrast, containers share the host operating system’s kernel and only isolate the application and its dependencies.

Virtual machine architecture

Virtual Machines are an abstraction of physical hardware. They allow multiple OS instances to run on a single physical machine by using a hypervisor, which creates and manages these VMs. Each VM runs a complete operating system, including its own kernel.

A hypervisor is software used to run multiple virtual machines on a single physical machine. It allocates underlying physical computing resources, such as CPU and memory, to individual virtual machines as needed. Examples of hypervisors include VMware, VirtualBox, and Hyper-V.

Container architecture

A container is an isolated, lightweight silo for running an application on the host operating system. Containers build on top of the host operating system’s kernel, and contain only apps and some lightweight operating system APIs and services that run in user mode.

Unlike VMs, containers share the host OS kernel and isolate processes at the user level.
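You can see this kernel sharing directly: the kernel version reported from inside a container matches the host's. A quick check on a Linux host, assuming the alpine image is available (or pullable):

uname -r                            # kernel version on the host
docker run --rm alpine uname -r     # the same kernel version, reported from inside a container

(On Docker Desktop for Windows or macOS, the container instead reports the kernel of the lightweight Linux VM described below.)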

Initially, Docker was designed for Linux and couldn’t be run directly on Windows. The Windows kernel does not support Linux-specific features, necessitating a compatibility layer or a virtual machine to emulate the Linux environment.

Since most popular services and databases are Linux-based, this limited compatibility for Windows users. To address this, Docker introduced Docker Desktop for Windows and macOS, enabling Linux containers to run on these operating systems using a hypervisor layer with a lightweight Linux distribution.

Docker Desktop for Windows initially used VirtualBox and Boot2Docker but later transitioned to leveraging Hyper-V, Microsoft’s native hypervisor, to create a small, optimized Linux VM that acts as a host for Docker containers.

Similarly, Docker Desktop for Mac uses Apple’s Hypervisor.framework to run a Linux VM, ensuring compatibility with Linux-based Docker images.

Docker Architecture

Docker’s architecture is based on a client-server model. It consists of a client component, the Docker CLI, that communicates with a server component, the Docker Engine, using a RESTful API. This client-server interaction allows for the management and deployment of containerized applications.

Client-Server Architecture

  • Docker Client “CLI”: The Docker client is the primary interface for interacting with Docker. Users issue commands such as docker build and docker run through the client, which then communicates with the Docker daemon.
  • Docker Engine: The Docker Engine, also known as the Docker daemon (dockerd), is responsible for building, running, and managing Docker containers. It processes the commands from the Docker client and interacts with the host operating system to perform the necessary actions.

Docker clients interact with the Docker daemon (dockerd) via a socket/API to manage containers. dockerd is a high-level container runtime that handles tasks like networking and orchestration. It uses containerd as the container manager, responsible for the container lifecycle and image management.
containerd launches a containerd-shim per container to abstract low-level runtime operations and to keep the container's state independent of dockerd, so containers can keep running even if the daemon restarts. The actual container runtime is runc, a command-line tool that implements the OCI runtime specification to create and run containers.
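A rough way to observe this stack on a Linux Docker host (process names vary by Docker version; containerd-shim-runc-v2 is typical on recent releases):

docker run -d --rm --name stack-demo alpine sleep 300        # start a throwaway container
ps -e -o pid,ppid,comm | grep -E 'dockerd|containerd|shim'    # dockerd, containerd, and the per-container shim
docker rm -f stack-demo                                       # clean up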

Docker CLI commands

Image Management

  • docker build [OPTIONS] PATH | URL | - Build an image from a Dockerfile.
    ex. docker build -t image_name /path/to/build_context
    ex. docker build -t node_app .
  • docker images or docker image ls List images.
  • docker pull [OPTIONS] NAME[:TAG|@DIGEST] Pull an image or a repository from a registry.
    docker pull image_name for an official image
    docker pull namespace/image_name for a user or organization image
    ex. docker pull ubuntu
    ex. docker pull amazon/cloudwatch-agent
  • docker push [OPTIONS] NAME[:TAG] Push an image or a repository to a registry.
  • docker rmi [OPTIONS] IMAGE [IMAGE…] Remove one or more images (any containers using them must be removed first).
  • docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG] Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE.
  • docker save [OPTIONS] IMAGE [IMAGE…] Save one or more images to a tar archive (streamed to STDOUT by default).
  • docker load [OPTIONS] Load an image from a tar archive or STDIN.
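Tying these together, a typical build-and-ship workflow might look like the following (the image name and registry address are placeholders):

docker build -t node_app .                              # build an image from the Dockerfile in the current directory
docker tag node_app registry.example.com/node_app:1.0   # tag it for a registry
docker push registry.example.com/node_app:1.0           # push it to the registry (after docker login)
docker save -o node_app.tar node_app                    # or export it as a tar archive...
docker load -i node_app.tar                             # ...and load it on another machine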

Container Management

  • docker run [OPTIONS] IMAGE [COMMAND] [ARG…] Create and start a container from an image.
    ex. docker run -it --name ubuntu ubuntu
    ex. docker run -it --name ubuntu ubuntu /bin/bash
    ex. docker run --name ubuntu ubuntu ls -la /
    Common Options
    -it Run container with interactive terminal.
    -d Run container in detached mode (in the background).
    --name Assign a name to the container (can be used later in place of the container ID).
    -p {host_port}:{container_port} Map a host port to a container port.
    -e Set environment variables inside the container.
    -h Set the container hostname (shorthand for --hostname); use docker run --help for the help menu.
    --net Specify the network the container connects to.
    --rm Automatically remove the container when it exits.
    -v Bind mount a volume
  • docker ps List running containers.
    -a List all containers, including stopped ones.
  • docker start [OPTIONS] CONTAINER [CONTAINER...] Start one or more stopped containers.
  • docker attach [OPTIONS] CONTAINER Attach to a running container.
    ex. docker attach <container_id_or_name>
  • docker stop [OPTIONS] CONTAINER [CONTAINER...] Stop one or more running containers.
  • docker restart [OPTIONS] CONTAINER [CONTAINER...] Restart one or more containers.
  • docker kill [OPTIONS] CONTAINER [CONTAINER...] Kill one or more running containers.
  • docker rm [OPTIONS] CONTAINER [CONTAINER...] Remove one or more containers.
  • docker pause CONTAINER [CONTAINER...] Pause all processes within one or more containers.
  • docker unpause CONTAINER [CONTAINER...] Unpause all processes within one or more containers.
  • docker exec <options> <container_id_or_name> <command> Execute a command inside a running container (without attaching to its main process).
    ex. docker exec -it ubuntu ls -la
  • docker inspect [OPTIONS] NAME|ID [NAME|ID...] Display detailed information on one or more containers.
  • docker container start my_container This is the newer syntax introduced in Docker 1.13 and is recommended for clarity. It explicitly indicates that you are operating on a container.
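Putting a few of these together, a typical lifecycle for a short-lived container might look like this (the nginx image and the names are placeholders):

docker run -d --name web -p 8080:80 nginx    # start an nginx container in the background, mapping host port 8080 to container port 80
docker ps                                    # confirm it is running
docker exec -it web /bin/sh                  # open a shell inside it (type exit to leave)
docker stop web                              # stop it
docker rm web                                # remove it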

Note that: Docker containers are not inherently persistent by default. When a Docker container is destroyed, any changes made within the container’s filesystem or environment are typically lost. To persist data, use volumes or bind mounts.

Volume Management

  • docker volume create [OPTIONS] [VOLUME] Create a new volume.
    ex. docker volume create mongo_data
  • docker volume ls [OPTIONS] List volumes.
  • docker volume prune [OPTIONS] Remove all volumes not used by at least one container.
  • docker volume inspect [OPTIONS] VOLUME [VOLUME...] Display detailed information on one or more volumes.
  • docker volume rm my_volume Remove a volume.

To connect a volume to a container when running it, use the -v or --mount option followed by the volume name and the mount point inside the container:

docker run -d --name my_container -v my_volume:/path/in/container my_image
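As a quick persistence check (the volume name and the alpine image are placeholders), data written to the volume survives the container that wrote it:

docker volume create demo_data
docker run --rm -v demo_data:/data alpine sh -c 'echo hello > /data/hello.txt'   # write into the volume; the container is removed afterwards
docker run --rm -v demo_data:/data alpine cat /data/hello.txt                    # a new container still sees the file
docker volume rm demo_data                                                       # clean up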

Network Management

  • docker network create [OPTIONS] NETWORK Create a new network.
    docker network create mongo
  • docker network connect [OPTIONS] NETWORK CONTAINER Connect a container to a network.
  • docker network disconnect [OPTIONS] NETWORK CONTAINER Disconnect a container from a network.
  • docker network ls [OPTIONS] List networks.
  • docker network rm NETWORK [NETWORK...] Remove one or more networks.
  • docker network inspect [OPTIONS] NETWORK [NETWORK...] Display detailed information on one or more networks.
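For example, two containers on the same user-defined network can reach each other by container name through Docker's embedded DNS (the network name, container names, and the alpine image are placeholders):

docker network create app_net
docker run -d --name db --net app_net alpine sleep 600    # first container, joined to the network
docker run --rm --net app_net alpine ping -c 1 db         # a second container resolves "db" by name
docker rm -f db && docker network rm app_net              # clean up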

System Management

  • docker info Display system-wide information.
  • docker system prune -a Remove unused data.
  • docker version Show the Docker version information.
  • docker system df [OPTIONS] Show Docker disk usage.

Docker Compose

docker-compose is a tool provided by Docker that allows you to define and manage multi-container Docker applications. It uses YAML files to configure the services, networks, and volumes required for your application to run as a set of interconnected containers.
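A minimal docker-compose.yml sketch for a hypothetical web app backed by MongoDB (the image tags, ports, and variable values are placeholders):

services:
  web:
    build: .                       # build the image from the local Dockerfile
    ports:
      - "3000:3000"                # host:container port mapping
    environment:
      - MONGO_URL=mongodb://db:27017/app
    depends_on:
      - db
  db:
    image: mongo:7                 # official MongoDB image
    volumes:
      - mongo_data:/data/db        # named volume so the database persists

volumes:
  mongo_data: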

  • docker-compose up -d Start services in the background.
  • docker-compose down Stop and remove services.
  • docker-compose build Build or rebuild services.
  • docker-compose ps List services.
  • docker-compose logs View output from services.
  • docker-compose exec Execute a command in a running container.

Docker Init

Initialize a project with the files necessary to run the project in a container.

Run docker init in your project directory to be walked through the creation of the following files with sensible defaults for your project:

  • .dockerignore
  • Dockerfile
  • compose.yaml
  • README.Docker.md
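A typical run looks something like the following (the project directory is hypothetical, and the prompts and generated defaults vary by project type and Docker version):

cd my-node-app              # hypothetical project directory
docker init                 # answer the interactive prompts (application platform, version, port, ...)
docker compose up --build   # build and run using the generated Dockerfile and compose.yaml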

Docker Hub

It serves as a central registry for Docker images, allowing users to store, manage, and distribute Docker container images.

However, for companies and organizations that require stricter control over their Docker images, especially for security and compliance reasons, private registries are essential.

Cloud providers like AWS, Google Cloud, and Azure typically offer private container registries as part of their services. These private registries allow companies to securely store and manage their Docker images within their own infrastructure or a cloud environment, ensuring that sensitive images are not publicly accessible.

Nexus and Docker Hub also provide options for private registries that can be deployed on-premises or in a cloud setup, offering additional flexibility and control over image management.

Note that nearly every Docker image is built on top of another existing image, referred to as the base image. The base image serves as the starting point for your image and provides the foundational operating system environment and utilities that your application or service needs.
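As a quick illustration (the ubuntu base image and the my_tool name are placeholders), the base image is whatever the FROM instruction names, and docker history lists the layers stacked on top of it:

FROM ubuntu:22.04                                # base image: the operating system environment
RUN apt-get update && apt-get install -y curl    # a layer added on top of the base

docker build -t my_tool .    # build the image from the Dockerfile above
docker history my_tool       # list its layers; the bottom ones come from the base image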

Stay tuned for the coming parts 🐳
