How to Build a Docker Image: A Complete Guide
Docker images are the foundation of containerized applications. Whether you're packaging a simple web server or a complex microservice, understanding how to build a Docker image gives you a portable, reproducible way to run software anywhere Docker is installed. Here's how the process actually works — and what shapes the outcome for different setups.
What Is a Docker Image?
A Docker image is a read-only template that contains everything needed to run an application: the operating system layer, runtime, dependencies, configuration files, and application code. When you run an image, Docker creates a container — a live, isolated instance based on that template.
Images are built in layers. Each instruction in a build file adds a new layer on top of the previous one. This layered structure is what makes Docker efficient: unchanged layers are cached and reused, so rebuilds are fast when only part of the image changes.
The Dockerfile: Your Build Blueprint
Every Docker image starts with a Dockerfile — a plain text file containing step-by-step instructions. Docker reads this file top to bottom and executes each instruction to assemble the image.
A minimal Dockerfile looks something like this:
```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

The key instructions you'll use most often:
| Instruction | What It Does |
|---|---|
| `FROM` | Sets the base image to build on top of |
| `RUN` | Executes a command during the build process |
| `COPY` | Copies files from your local machine into the image |
| `ADD` | Like `COPY`, but can also fetch remote URLs and auto-extract local compressed archives |
| `ENV` | Sets environment variables inside the image |
| `EXPOSE` | Documents which port the container listens on |
| `CMD` | Defines the default command to run when a container starts |
| `ENTRYPOINT` | Sets a fixed executable that always runs at container start |
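`CMD` and `ENTRYPOINT` also work together: when both are present, `CMD` supplies default arguments that `docker run` can override, while the `ENTRYPOINT` executable stays fixed. A minimal sketch (the script name `greet.sh` is illustrative):

```dockerfile
FROM ubuntu:22.04
COPY greet.sh /usr/local/bin/greet.sh
RUN chmod +x /usr/local/bin/greet.sh
# ENTRYPOINT fixes the executable that always runs...
ENTRYPOINT ["/usr/local/bin/greet.sh"]
# ...while CMD provides default arguments to it
CMD ["world"]
```

With this setup, `docker run my-app` executes `greet.sh world`, while `docker run my-app docker` swaps only the argument, not the executable.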
How to Build an Image from a Dockerfile
Once your Dockerfile is ready, building the image is a single command:
```shell
docker build -t my-app:1.0 .
```

Breaking that down:

- `docker build` — triggers the image build process
- `-t my-app:1.0` — tags the image with a name (`my-app`) and version (`1.0`)
- `.` — tells Docker to use the current directory as the build context and to look for the Dockerfile there
Docker will pull the base image if it's not already cached locally, execute each instruction in sequence, and output a final image ID when the build completes.
Choosing the Right Base Image 🐳
Your FROM instruction is one of the most consequential decisions in the build. Base images vary significantly in size, security surface, and available tooling.
- Full OS images (e.g., `ubuntu`, `debian`) — large, familiar, good for development or complex dependency trees
- Slim variants (e.g., `python:3.11-slim`) — stripped-down versions of official images, smaller and faster to pull
- Alpine-based images (e.g., `node:18-alpine`) — built on a base image of only about 5 MB, but they use a different C library (`musl` vs `glibc`), which can cause compatibility issues with some packages
- Distroless images — contain only the application and its runtime, no shell or package manager; favored in security-conscious production environments
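To make the size trade-off concrete, here is the earlier Ubuntu-based example rewritten against a slim variant, a sketch assuming a pure-Python `app.py` with no native dependencies:

```dockerfile
# python:3.11-slim already ships the interpreter, so the
# apt-get install step from the Ubuntu version disappears
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python3", "app.py"]
```

The resulting image is a fraction of the size of the full Ubuntu version, at the cost of fewer debugging tools inside the container.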
The right base image depends on your application's language runtime, how much you care about image size, and whether you'll need to debug inside a running container.
Build Context and the .dockerignore File
When you run docker build, Docker sends your project directory — the build context — to the Docker daemon. Large build contexts slow down builds significantly.
A .dockerignore file works like .gitignore: it tells Docker which files to exclude from the build context. Common entries include:
```text
node_modules/
.git/
*.log
.env
```

Keeping the build context lean reduces build times and avoids accidentally copying sensitive files (like credentials or local configs) into the image.
Layer Caching and Build Order
Docker caches each layer after it's built. If nothing in that layer has changed, Docker reuses the cached version. Layer order matters enormously for build performance.
A common pattern is to copy dependency files first, install dependencies, then copy application code:
```dockerfile
COPY package.json package-lock.json ./
RUN npm install
COPY . .
```

This way, the expensive `npm install` step only reruns when dependencies actually change — not every time you edit your source code.
Multi-Stage Builds: Keeping Images Small 🔧
Multi-stage builds let you use one image to compile or build your application, then copy only the finished artifact into a smaller final image. This is especially useful for compiled languages like Go, Rust, or Java.
```dockerfile
# Stage 1: Build
FROM golang:1.21 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp

# Stage 2: Run
FROM gcr.io/distroless/base
COPY --from=builder /app/myapp /myapp
CMD ["/myapp"]
```

The final image contains only the compiled binary — not the entire Go toolchain.
Variables That Affect Your Build Experience
Even with the same Dockerfile, outcomes vary based on:
- Host OS and architecture — building on an ARM Mac (M1/M2) produces ARM images by default; `docker buildx` enables cross-platform builds for `amd64`, `arm64`, and others
- Docker version — newer versions of Docker include BuildKit by default, which offers parallel layer execution and better caching
- Network conditions — base image pulls and package installations depend on network speed and registry availability
- Dependency versions — unpinned package versions can cause non-deterministic builds over time
- Build arguments — `ARG` instructions let you pass variables at build time, making the same Dockerfile produce different images depending on inputs
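As a sketch of how build arguments feed into a build (the variable name `PY_TAG` is illustrative), note that an `ARG` declared before `FROM` must be redeclared inside the stage to remain visible:

```dockerfile
# Override at build time with:
#   docker build --build-arg PY_TAG=3.12-slim -t my-app .
ARG PY_TAG=3.11-slim
FROM python:${PY_TAG}
# Redeclare to bring the ARG into this build stage
ARG PY_TAG
# Optionally bake the value in as a runtime environment variable
ENV BUILT_FROM_TAG=${PY_TAG}
```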
Different Workflows, Different Approaches
A developer building a local testing environment will likely prioritize fast iteration — accepting larger images in exchange for familiar tooling and easy debugging access.
A team building for production CI/CD will prioritize reproducibility and minimal attack surface — pinning base image digests, using multi-stage builds, scanning images for vulnerabilities, and pushing to a private registry like Amazon ECR, Google Artifact Registry, or Docker Hub.
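Digest pinning from that production checklist looks like this in a Dockerfile; the digest shown is a placeholder, not a real value, so substitute the one reported by `docker images --digests` or your registry:

```dockerfile
# A digest refers to immutable image content, unlike a tag,
# which the publisher can move. <digest> is a placeholder.
FROM python:3.11-slim@sha256:<digest>
```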
Someone working on an edge device or Raspberry Pi will navigate architecture constraints that most cloud-focused tutorials skip entirely.
The mechanics of docker build stay consistent. What varies — sometimes dramatically — is which base image, which layer strategy, and which tooling choices actually fit your environment and goals.