How to Create a Docker Container: A Practical Guide
Docker has become one of the most widely used tools in modern software development and deployment. Whether you're running a web app, testing code in isolation, or managing services across machines, containers make the process repeatable and portable. But the steps to create one — and how you configure it — vary considerably depending on your environment and goals.
What Is a Docker Container, Exactly?
Before creating one, it helps to understand what you're actually building. A Docker container is a lightweight, isolated runtime environment that packages an application along with everything it needs to run: code, libraries, dependencies, and configuration. Unlike a virtual machine, a container shares the host operating system's kernel, which makes it faster to start and more resource-efficient.
Containers are created from images — read-only templates that define what's inside the container. Think of an image as a recipe and the container as the dish you make from it.
What You Need Before You Start
To create Docker containers, you'll need:
- Docker Engine installed on your machine (available for Linux, macOS, and Windows)
- Basic familiarity with the command line
- An image to work from — either pulled from Docker Hub or built from a custom Dockerfile
On Windows and macOS, Docker Desktop provides a GUI layer on top of the engine. On Linux, Docker Engine runs natively without the desktop wrapper.
Two Main Ways to Create a Docker Container
1. Running a Container Directly from an Image
The fastest path is pulling a pre-built image and running it immediately:
```shell
docker run hello-world
```

This command pulls the hello-world image from Docker Hub (if not already cached locally) and starts a container from it. For something more practical:
```shell
docker run -d -p 8080:80 nginx
```

This runs an Nginx web server in detached mode (`-d`), mapping port 80 inside the container to port 8080 on your host machine.
Key flags to know:
| Flag | What It Does |
|---|---|
| `-d` | Run container in background (detached mode) |
| `-p` | Map host port to container port |
| `-v` | Mount a volume (local folder into container) |
| `--name` | Assign a custom name to the container |
| `-e` | Set environment variables |
| `--rm` | Automatically remove container when it stops |
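Several of these flags are usually combined in a single command. Here is a sketch that extends the earlier Nginx example; the container name `web` and the local `site` folder are illustrative choices, not requirements:

```shell
# Run Nginx in the background, named "web", auto-removed when stopped,
# with a local folder mounted read-only as the site root.
docker run -d --rm \
  --name web \
  -p 8080:80 \
  -v "$(pwd)/site:/usr/share/nginx/html:ro" \
  nginx
```

With this running, http://localhost:8080 serves whatever is in your local `site` directory, and stopping the container also removes it thanks to `--rm`.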
2. Building a Custom Image with a Dockerfile
When you need control over what's inside the container, you write a Dockerfile — a plain text file with step-by-step instructions:
```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y python3
COPY app.py /app/app.py
WORKDIR /app
CMD ["python3", "app.py"]
```

Breaking this down:
- `FROM` sets the base image your container starts from
- `RUN` executes commands during the image build process
- `COPY` moves files from your local machine into the image
- `WORKDIR` sets the working directory inside the container
- `CMD` defines the default command that runs when the container starts
To build the image from this Dockerfile:
```shell
docker build -t my-python-app .
```

Then create and run a container from it:
```shell
docker run my-python-app
```

Managing Containers After Creation 🐳
Creating a container is just the start. A few commands you'll use regularly:
- `docker ps` — list running containers
- `docker ps -a` — list all containers, including stopped ones
- `docker stop [name or ID]` — stop a running container
- `docker start [name or ID]` — restart a stopped container
- `docker exec -it [name] bash` — open an interactive terminal inside a running container
- `docker rm [name or ID]` — delete a stopped container
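A typical lifecycle using these commands might look like the following sketch, assuming a container was started earlier with `--name web`:

```shell
docker ps                          # confirm "web" is running
docker exec -it web bash           # open a shell inside the container
docker stop web                    # stop it
docker ps -a                       # it still exists, just in a stopped state
docker start web                   # bring it back up with its previous settings
docker stop web && docker rm web   # stop and delete it for good
```

Note that `docker rm` only works on stopped containers unless you force it with `-f`.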
Containers are ephemeral by default — any data written inside them disappears when the container is removed unless you've mounted a volume with `-v`.
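For stateful workloads, a named volume is the usual answer. As a sketch, here is a Postgres container whose data survives removal and re-creation; the volume name `pgdata` and the password are placeholders:

```shell
# Create a named volume managed by Docker
docker volume create pgdata

# Mount it at the directory where Postgres stores its data files
docker run -d --name db \
  -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16
```

If this container is later removed, a new one started with the same `-v pgdata:...` mount picks up the existing database files.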
Variables That Shape How You Build Containers
The "right" way to create and configure a container shifts significantly based on a few factors:
Base image choice matters more than many beginners expect. Alpine Linux images are small (often under 10MB) and fast, but may lack packages your app needs. Ubuntu or Debian-based images are larger but more compatible. Distroless images reduce attack surface but require more expertise to debug.
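One way to see the size difference concretely is to pull two variants of the same image and compare; the tags below are examples and may change over time:

```shell
docker pull python:3.12-slim     # Debian-based, broad package compatibility
docker pull python:3.12-alpine   # Alpine-based, much smaller, uses musl libc
docker images python             # compare the SIZE column for both tags
```

The Alpine variant can trip up applications that expect glibc or prebuilt binaries, which is part of the tradeoff the paragraph above describes.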
Networking mode affects how containers talk to each other and to the outside world. Bridge mode (the default) works well for simple setups. Host mode is faster but reduces isolation. Custom networks are typically needed once you're running multiple containers that need to communicate.
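A minimal sketch of a custom network, with placeholder names (`appnet`, `my-api-image`): containers on a user-defined bridge network can reach each other by container name via Docker's built-in DNS.

```shell
# Create a user-defined bridge network
docker network create appnet

# Start a database and an app on the same network
docker run -d --name db --network appnet \
  -e POSTGRES_PASSWORD=example postgres:16
docker run -d --name api --network appnet -p 3000:3000 my-api-image

# Inside the "api" container, the database is reachable at hostname "db"
```

This name-based discovery is the main practical advantage of custom networks over the default bridge.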
Volume strategy determines data persistence. For databases or any stateful workload, not mounting a volume means losing your data when the container stops. For stateless apps, this usually doesn't matter.
Resource constraints — CPU and memory limits — become relevant when running containers in production or on shared hardware. What's fine on a developer's laptop may behave differently under load on a server.
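Docker exposes these limits as flags on `docker run`. A sketch, with the name `capped` as a placeholder:

```shell
# Cap the container at half a CPU core and 256 MB of RAM;
# exceeding the memory limit can get the container OOM-killed.
docker run -d --name capped \
  --cpus="0.5" \
  --memory="256m" \
  nginx

# Check live usage against the configured limits
docker stats capped --no-stream
```

Setting limits locally is also a cheap way to preview how an app behaves under the constraints it will face on a shared server.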
Orchestration needs change the picture entirely. A single container run with docker run works for local development. Teams managing dozens of containers often move toward Docker Compose (for multi-container apps defined in a YAML file) or container orchestration platforms like Kubernetes for production scale.
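As a sketch of what that YAML file looks like, here is a hypothetical `docker-compose.yml` combining the Nginx and custom-app patterns from earlier; service names, ports, and the environment variable are illustrative:

```yaml
services:
  web:
    image: nginx
    ports:
      - "8080:80"
    depends_on:
      - api
  api:
    build: .            # build from the Dockerfile in this directory
    environment:
      - APP_ENV=production
```

Running `docker compose up -d` in the same directory starts both services together, on a shared network where they can reach each other by service name.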
The Skill and Environment Spectrum
A developer testing a local app might only need a single docker run command and never touch a Dockerfile. A DevOps engineer building a production pipeline will work with multi-stage Dockerfiles, image optimization, registry pushes, and container security scanning. Someone setting up a home media server lands somewhere in between.
The commands and concepts are the same — but how far you go, and which tradeoffs matter most, depends entirely on the workload you're containerizing, the infrastructure you're running it on, and how much operational complexity you're prepared to manage.