What Is a Container? Docker Explained with Pizza Boxes

Containers aren't as complicated as they sound. Think of them like pizza boxes — everything your app needs, packed up and ready to go anywhere.

By J. Martin · devops

Why Should You Care?

You write some code. It works on your laptop. You send it to a friend and it breaks. You send it to a server and it breaks differently. Everyone has seen this movie. The villain has a name: “it works on my machine.”

This is the single most common frustration in software development. Your code depends on things your laptop has that the server doesn’t — the right version of Node.js, a specific library, a config file sitting in exactly the right place. Move the code, and all those invisible assumptions shatter.

Containers solve this. Completely. And they’re not as complicated as they sound.

The Pizza Box Analogy

Imagine you make a great pizza. Someone asks for the recipe. You could hand them a list of ingredients and steps — but they might have a different oven, different flour, different altitude. The pizza might not turn out the same.

Now imagine you could put the entire finished pizza — dough, sauce, toppings, the exact oven temperature, even the plate — into a box. The box keeps everything sealed. Someone opens it on any table in any kitchen in any country, and the pizza is the same every time.

That’s a container.

A container packages your application with everything it needs to run: the code, the runtime (like Node.js or Python), the libraries, the config files, the environment variables. You seal it up. It runs the same everywhere — your laptop, your friend’s laptop, a server in Atlanta, a data center in Frankfurt.

The “oven” — the host machine — doesn’t matter anymore. The box is self-contained.

What Problem Does This Actually Solve?

Let me make this concrete. I run a blog (the one you’re reading right now). It’s a static site built with Astro, and it gets served by a lightweight web server called nginx. To run it on a server, I’d need to:

  1. Install nginx on the server
  2. Make sure it’s the right version
  3. Copy my built files to the right directory
  4. Write an nginx config file and put it in the right place
  5. Start nginx with the right flags

If the server already has a different version of nginx, or the config directory is somewhere else, or another app is already using the port I need — problems. Every server is a unique snowflake.
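
For contrast, here’s roughly what those five steps look like as shell commands on a Debian-style server. Treat this as a sketch, not a script to run blindly; package names and paths vary by distribution:

# 1-2. Install nginx and check which version the distro gives you
sudo apt-get update && sudo apt-get install -y nginx
nginx -v

# 3. Copy the built files into the web root
sudo cp -r dist/* /var/www/html/

# 4. Drop the config where this distro expects it
sudo cp nginx.conf /etc/nginx/conf.d/default.conf

# 5. Restart the server so it picks everything up
sudo systemctl restart nginx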

With a container, my entire Dockerfile looks like this:

FROM nginx:alpine
COPY dist/ /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]

Five lines. That’s the real Dockerfile for this blog. Let’s break down what each one does.

The Dockerfile: A Recipe Card

A Dockerfile is the recipe for building a container image. Each line is an instruction:

FROM nginx:alpine

Start from a base image. This says “begin with a minimal Linux system that already has nginx installed.” The :alpine part means it uses Alpine Linux, which is tiny — about 5MB. You don’t install nginx manually. Someone already made a well-tested image with it, and you build on top of their work.
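
The nginx:alpine image is bigger than bare Alpine because it bundles nginx, but it’s still tiny next to a full OS image. You can check for yourself (exact sizes vary by release):

docker pull nginx:alpine
docker image ls nginx:alpine   # the SIZE column shows how small the base is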

COPY dist/ /usr/share/nginx/html

Copy your app files into the image. My blog builds to a dist/ folder full of HTML, CSS, and JavaScript. This line puts those files exactly where nginx expects to find them.
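
Once the image is built (it’s tagged my-blog in the build command later in this post), you can peek inside and confirm the files landed where nginx expects them:

docker run --rm my-blog ls /usr/share/nginx/html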

COPY nginx.conf /etc/nginx/conf.d/default.conf

Copy your config. My custom nginx configuration goes where nginx will read it on startup.
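
That config isn’t reproduced in this post, but a minimal version, assuming nginx listens on port 3000 to match the EXPOSE line below, might look like this:

server {
    listen 3000;                    # match the port the container EXPOSEs
    root   /usr/share/nginx/html;   # where the COPY above put the files
    index  index.html;
}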

EXPOSE 3000

Document the port. This tells anyone reading the Dockerfile that the app listens on port 3000. It doesn’t actually open the port — that happens at runtime — but it’s good practice to declare it.
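
You can confirm which ports an image declares without reading its Dockerfile:

docker image inspect my-blog --format '{{.Config.ExposedPorts}}'
# prints something like: map[3000/tcp:{}]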

CMD ["nginx", "-g", "daemon off;"]

The startup command. When the container runs, this is what executes. It starts nginx in the foreground so the container stays alive.

That’s the whole thing. Five instructions. The result is a self-contained package that runs this blog identically whether I’m testing on my workstation or deploying to a VPS 800 miles away.

The Build Pipeline: Code to Running App

Here’s the progression from code to a live application. Each step has a clear purpose:

Source Code
    → Dockerfile (the recipe)
        → Image (the sealed box)
            → Container (the running app)

Source code is what you write — HTML, JavaScript, Python, whatever.

Dockerfile is the recipe that defines the environment and how to assemble everything.

Image is what you get after building. Think of it as the sealed pizza box. It’s a snapshot — frozen, immutable, ready to ship. You can store images, share them, and version them.

Container is what happens when you run an image. It’s a live process with its own isolated filesystem, network, and memory. You can run multiple containers from the same image: one sealed box, any number of identical pizzas served from it.
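
That’s easy to demonstrate. Assuming the my-blog image built in the next snippet, these commands start two independent containers from the same image, each on its own host port (the container names are made up for the example):

docker run -d -p 3001:3000 --name blog-one my-blog
docker run -d -p 3002:3000 --name blog-two my-blog
docker ps   # two containers, one image, separate ports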

Building and running looks like this:

# Build the image from the Dockerfile
docker build -t my-blog .

# Run a container from the image
docker run -d -p 3000:3000 my-blog

In the build command, -t my-blog gives the image a name. In the run command, -d runs the container in the background, and -p 3000:3000 maps port 3000 on your machine to port 3000 inside the container.

After running that second command, open http://localhost:3000 in your browser. Your app is live.
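
A few follow-up commands cover the rest of the day-to-day lifecycle:

docker ps                  # list running containers
docker logs <container>    # view output (use the ID or name from docker ps)
docker stop <container>    # stop a running container
docker rm <container>      # remove it once stopped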

Try It Yourself

You don’t need a real project to build your first container. Create a folder with two files:

index.html:

<!DOCTYPE html>
<html>
<head><title>My First Container</title></head>
<body>
  <h1>Hello from inside a container!</h1>
  <p>This page is being served by nginx, running in an isolated environment.</p>
</body>
</html>

Dockerfile:

FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Then build and run:

docker build -t hello-container .
docker run -d -p 8080:80 hello-container

Open http://localhost:8080. You just containerized a web application. The HTML file, the web server, the operating system — all sealed in a box. Move that box to any machine with Docker installed and it works identically.
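
If you’d rather verify from the terminal, curl works too:

curl http://localhost:8080
# prints the HTML page, served from inside the container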

A Real-World Example: Multi-Stage Builds

Simple containers are powerful, but real apps often need a build step. You don’t want your final container to include all the build tools — they make the image larger and widen its attack surface.

Here’s the real Dockerfile from one of my other projects, SkyMaxx USA — a Node.js e-commerce app:

FROM node:20-alpine AS build
WORKDIR /app
COPY package.json package-lock.json .npmrc ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/build ./build
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./package.json
ENV NODE_ENV=production
ENV HOST=0.0.0.0
ENV PORT=3000
EXPOSE 3000
CMD ["node", "build/server/index.js"]

This is a multi-stage build. Two FROM lines mean two stages:

Stage 1 (named build): Starts from a Node.js image, installs dependencies, copies the source code, and runs the build. This stage has all the development tools.

Stage 2 (the final image): Starts fresh from the same Node.js base, then copies only the build output from stage 1 using COPY --from=build. The final image never includes your source code, build tools, or dev dependencies. It’s lean and production-ready.

You don’t need to understand multi-stage builds right now. But notice the pattern: the container only carries what it needs to run, never what it needed to build. That’s the pizza box principle — the customer doesn’t need to see the kitchen.
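
One debugging trick worth knowing: the --target flag builds only up to a named stage, so you can drop into the build environment and poke around (the skymaxx-build tag is made up for the example):

# Build just the first stage, then open a shell inside it
docker build --target build -t skymaxx-build .
docker run --rm -it skymaxx-build sh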

Containers vs. Virtual Machines

You might have heard of virtual machines (VMs). They solve a similar problem — running software in isolation — but they’re heavier.

Virtual Machine:                Container:
┌──────────────────┐           ┌──────────────────┐
│    Your App      │           │    Your App      │
├──────────────────┤           ├──────────────────┤
│   Libraries      │           │   Libraries      │
├──────────────────┤           └──────┬───────────┘
│   Guest OS       │                  │
│  (entire Linux)  │           Shares host kernel
├──────────────────┤
│   Hypervisor     │
└──────────────────┘

A VM includes an entire operating system. Each VM might weigh 2-10GB before you even install your app, and booting one can take minutes.

A container shares the host’s kernel. No guest OS. A minimal container can be as small as 5MB and starts in under a second. That’s why you can comfortably run ten or twenty containers on a single machine where three or four VMs would start to strain.
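
You can feel that startup difference yourself. Timings vary by machine, but once the image is cached this usually finishes in well under a second:

time docker run --rm alpine echo "hello from a container"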

Docker vs. Podman: Same Boxes, Different Movers

You’ve probably heard “Docker” used as a synonym for containers. Docker popularized containers and its tooling is widespread. But Docker isn’t the only game in town.

I use Podman on my workstation. Podman runs the same container images, reads the same Dockerfiles, and uses nearly identical commands:

# Docker                          # Podman
docker build -t my-app .           podman build -t my-app .
docker run -d -p 3000:3000 my-app  podman run -d -p 3000:3000 my-app
docker ps                          podman ps

The commands are interchangeable. Many people alias docker to podman and forget they switched.
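
If you want to try that yourself, it’s one line in your shell profile:

alias docker=podman   # e.g. in ~/.bashrc or ~/.zshrc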

The key difference: Docker’s standard setup relies on a background daemon (a long-running service) that runs with root privileges. Podman is daemonless and runs rootless by default — each container runs as your user, not as root. From a security perspective, that’s a meaningful improvement.

Both tools build and run OCI-standard containers. OCI (Open Container Initiative) is the open standard that defines what a container image looks like. Any OCI-compatible tool can run any OCI image. The pizza box fits in any oven that follows the standard.

If you’re just starting out, use whichever is easier to install on your system. Docker has more tutorials online; Podman has the stronger default security posture. The skills transfer directly between them.

Why This Matters Beyond “Shipping Code”

Containers aren’t just a deployment trick. They change how you think about infrastructure:

Reproducibility. I can hand someone my Dockerfile and they’ll build an identical environment. No “install this version of that tool” spreadsheet. No wikis that go stale. The Dockerfile is the documentation, and it actually runs.

Isolation. Each container gets its own filesystem and network. If one container crashes, it doesn’t take down the others. I run my AI chat platform, my blog, my e-commerce site, and a market data service all on the same server — each in its own container, unable to interfere with the others.

Versioning. Container images can be tagged. my-app:v1.0, my-app:v1.1, my-app:latest. If a new version breaks, roll back to the previous image. The old box is still sitting there, sealed and ready.

Portability. I build images on my workstation in Georgia, export them as tar files, and load them on a VPS in a data center. Same image, different machine, identical behavior. The pizza box travels.
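
Those last two points map directly to commands. A sketch of the tag-and-ship workflow, with the image name and VPS hostname made up for the example:

# Versioning: tag the current build; older tags stay available for rollback
docker tag my-app:latest my-app:v1.1

# Portability: export the image as a tar file, copy it, load it elsewhere
docker save -o my-app.tar my-app:v1.1
scp my-app.tar user@my-vps.example.com:
docker load -i my-app.tar   # run this on the destination machine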

What You Learned

  • Containers solve the “works on my machine” problem by packaging an app with everything it needs to run
  • A Dockerfile is a recipe: base image, copy files, set config, define the startup command
  • The pipeline goes: source code → Dockerfile → image (sealed box) → container (running process)
  • Multi-stage builds keep final images lean by separating build tools from runtime
  • Containers share the host kernel, making them faster and lighter than virtual machines
  • Podman is a rootless, daemonless alternative to Docker that uses the same commands and standards
  • Container images follow the OCI standard — build once, run anywhere that speaks OCI

Next time someone says “it works on my machine,” you’ll know the answer: put it in a box.