tutorials

How to Run Docker on a Mini PC Home Server — Complete Setup Guide

By Mini PC Lab Team · February 1, 2026 · Updated March 27, 2026

This article contains affiliate links. If you purchase through our links, we may earn a commission at no extra cost to you. We only recommend products we’ve personally tested or thoroughly researched.


Docker is the fastest way to turn a mini PC into a useful home server. One command installs a service that used to take an hour to configure. This guide covers everything from a fresh Linux install to running your first containers — including the networking and storage patterns that matter for a production home server.

Before You Start

Requirements:

  • Mini PC running Debian 12 or Ubuntu 24.04 LTS (recommended base OS)
  • 8GB+ RAM (4GB minimum for Docker itself plus one or two containers)
  • 32GB+ storage for Docker images and container data
  • Internet connection
  • Estimated time: 30–45 minutes

Don’t have a mini PC yet? See our best mini PC for Docker guide for tested hardware at every budget.

Running Proxmox? Run Docker inside a Debian LXC container or a Debian VM. Don’t install Docker directly on the Proxmox host.


Step 1: Install a Fresh Linux OS

If starting from scratch on bare metal, Debian 12 (“Bookworm”) is the recommended base:

  1. Download the Debian 12 netinstall ISO from debian.org
  2. Write to USB with Rufus (Windows) or dd (macOS/Linux)
  3. Boot and install — select: standard system utilities, SSH server. Deselect desktop environments.

After install, SSH in or work from the console:

ssh youruser@[YOUR-IP]

Update first:

sudo apt update && sudo apt upgrade -y

Step 2: Install Docker Engine

Install from Docker's official apt repository — this is the method Docker's documentation recommends, and it keeps Docker updated through apt:

# Install dependencies
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the Docker repository
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine + Compose plugin
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

Add your user to the docker group (so you don’t need sudo for every docker command):

sudo usermod -aG docker $USER
newgrp docker  # Apply without logout

Verify the install:

docker run hello-world
# Should print "Hello from Docker!"

docker compose version
# Should print "Docker Compose version v2.x.x"

Step 3: Configure Docker Storage

By default, Docker stores everything in /var/lib/docker. If your OS drive is small, move it:

# Stop Docker first
sudo systemctl stop docker

# Create the data directory on your larger drive
sudo mkdir -p /data/docker

# Configure Docker daemon
sudo tee /etc/docker/daemon.json << EOF
{
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

# Move existing data (if any)
sudo rsync -aP /var/lib/docker/ /data/docker/

# Restart Docker
sudo systemctl start docker

The log-opts configuration above is important: without log limits, long-running containers accumulate gigabytes of logs. With max-size: 10m and max-file: 3, each container is capped at 30MB of logs total.
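One caution worth knowing: a syntax error in /etc/docker/daemon.json prevents the Docker daemon from starting at all. A quick sketch for validating the file before restarting Docker, using Python's built-in json.tool (the path assumes the daemon.json written in this step):

```shell
# Validate a daemon.json file before restarting Docker; a JSON syntax
# error in this file stops the daemon from starting.
check_daemon_json() {
  local f="$1"
  if [ ! -e "$f" ]; then
    echo "no file (Docker will use defaults)"
  elif python3 -m json.tool "$f" > /dev/null 2>&1; then
    echo "valid JSON"
  else
    echo "invalid JSON -- fix before restarting Docker"
  fi
}

check_daemon_json /etc/docker/daemon.json
```

If it reports invalid JSON, fix the file before running systemctl start docker — otherwise the daemon will refuse to come up.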


Step 4: Install Portainer (Optional)

Portainer is a web UI for managing Docker containers. It’s not required — docker compose from the terminal works fine — but it gives you a visual overview of running containers, logs, and resource usage that’s useful for home server management.

# Create a persistent volume for Portainer data
docker volume create portainer_data

# Run Portainer
docker run -d \
  -p 9443:9443 \
  --name portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

Access Portainer at https://[YOUR-IP]:9443. Create an admin account on first visit.


Step 5: Understand Docker Compose (How You’ll Run Everything)

Docker Compose is the right way to manage home server containers. Instead of long docker run commands, you write a docker-compose.yml file that describes your services, then run docker compose up -d to start them.

A minimal example — Nginx:

# ~/services/nginx/docker-compose.yml
services:
  nginx:
    image: nginx:alpine
    container_name: nginx
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    restart: unless-stopped

Create the directory, save the file, then start the service:

mkdir -p ~/services/nginx
cd ~/services/nginx
# Create the docker-compose.yml above, then:
docker compose up -d

The key patterns you’ll use:

services:
  myservice:
    image: someimage:tag        # always pin a specific tag, not :latest
    container_name: myservice   # consistent name for logs and exec
    environment:
      - PUID=1000               # run as your user (not root)
      - PGID=1000
    volumes:
      - /host/path:/container/path  # persist data outside the container
    ports:
      - "8080:80"               # host:container
    restart: unless-stopped     # restart on crash and server reboot
    networks:
      - proxy                   # put related containers on the same network

Step 6: Set Up a Reverse Proxy with Nginx Proxy Manager

Running multiple services on ports like :8080, :8096, :8123 gets messy. Nginx Proxy Manager lets you map clean domain names (or local names like plex.home) to each service with SSL.

# ~/services/nginx-proxy-manager/docker-compose.yml
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
      - "81:81"      # Admin UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

Access the admin UI at http://[YOUR-IP]:81. Default login: admin@example.com / changeme — change both immediately.


5 Essential Containers to Run First

Once Docker is running, these are the highest-value services for a mini PC home server:

1. Pi-hole (DNS ad blocking for your whole network)

services:
  pihole:
    image: pihole/pihole:2024.07.0
    container_name: pihole
    environment:
      TZ: 'America/New_York'
      WEBPASSWORD: 'yoursecurepassword'
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"
    restart: unless-stopped
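One common snag: on Ubuntu (and some Debian setups), systemd-resolved's stub listener already binds port 53 on 127.0.0.53, which prevents the Pi-hole container from starting. A rough check, assuming the stock /etc/systemd/resolved.conf location; if the stub is active, set DNSStubListener=no in that file and restart systemd-resolved before starting Pi-hole:

```shell
# Check whether systemd-resolved's DNS stub listener may conflict with
# Pi-hole binding port 53 on the host.
if [ -f /etc/systemd/resolved.conf ] && \
   ! grep -q '^DNSStubListener=no' /etc/systemd/resolved.conf; then
  msg="stub listener may be active: set DNSStubListener=no, then restart systemd-resolved"
else
  msg="no stub listener conflict detected"
fi
echo "$msg"
```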

2. Vaultwarden (self-hosted Bitwarden password manager)

services:
  vaultwarden:
    image: vaultwarden/server:1.32.0
    container_name: vaultwarden
    environment:
      WEBSOCKET_ENABLED: 'true'
    volumes:
      - ./vw-data:/data
    ports:
      - "8200:80"
    restart: unless-stopped

3. Uptime Kuma (service monitoring dashboard)

services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    container_name: uptime-kuma
    volumes:
      - ./uptime-kuma-data:/app/data
    ports:
      - "3001:3001"
    restart: unless-stopped

4. Heimdall (dashboard for all your services)

services:
  heimdall:
    image: lscr.io/linuxserver/heimdall:latest
    container_name: heimdall
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - ./config:/config
    ports:
      - "8090:80"
    restart: unless-stopped

5. Watchtower (automatic container updates)

services:
  watchtower:
    image: containrrr/watchtower:1.7.1
    container_name: watchtower
    environment:
      WATCHTOWER_CLEANUP: 'true'
      WATCHTOWER_SCHEDULE: '0 0 4 * * *'  # 4am daily
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
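Note that Watchtower uses a six-field cron expression with a leading seconds field, not the standard five-field crontab format. As a throwaway sanity check (not part of the service), you can label each field of the schedule above:

```shell
# Label each field of Watchtower's six-field cron spec (seconds first).
set -f                # disable globbing so the '*' fields stay literal
spec='0 0 4 * * *'    # the WATCHTOWER_SCHEDULE used above: 04:00:00 daily
labels=(seconds minutes hours day-of-month month day-of-week)
i=0
for field in $spec; do
  echo "${labels[$i]}: $field"
  i=$((i+1))
done
set +f
```

Copying a five-field crontab line into WATCHTOWER_SCHEDULE is a frequent mistake — Watchtower will reject it or fire at the wrong time.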

Useful Docker Commands

# List running containers
docker ps

# View logs for a container
docker logs -f container_name

# Restart a container
docker restart container_name

# Update a container (pull new image + recreate)
docker compose pull && docker compose up -d

# Reclaim disk space: remove stopped containers, unused networks, and all unused images
docker system prune -a
# Add --volumes to also remove unused volumes (careful: this deletes their data)

# Execute a command inside a running container
docker exec -it container_name bash

# View resource usage
docker stats

Troubleshooting

Container exits immediately after starting

Check the logs: docker logs container_name. Common causes: missing required environment variable, missing volume directory, port already in use.

Port conflict — “address already in use”

Another process or container is using that port. Check with sudo ss -tlnp | grep :PORT. Change the host port in docker-compose.yml (the left side of "8080:80").
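If ss isn't available, bash itself can probe a TCP port through its /dev/tcp pseudo-device. A rough helper (bash-only, checks localhost, and a hypothetical function name):

```shell
# port_in_use PORT -> prints "in use" if something is listening on that
# localhost TCP port, "free" otherwise. A successful connect via bash's
# /dev/tcp redirection means the port is taken.
port_in_use() {
  if (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null; then
    echo "in use"
  else
    echo "free"
  fi
}

port_in_use 8080
```

This only detects listeners reachable on 127.0.0.1; a container bound to a specific LAN interface won't show up, so prefer ss when you have it.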

Container can’t reach the internet

Check Docker’s default network: docker network ls. Verify that docker0 bridge interface exists: ip link show docker0. Restart Docker if it’s missing: sudo systemctl restart docker.

Containers don’t persist data after restart

Make sure volumes are defined in docker-compose.yml. Named volumes (mydata:/app/data) and bind mounts (./data:/app/data) both survive container restarts. Data only lives inside the container if you didn’t mount a volume — and it’s lost on docker compose down.


What to Do Next


Quick Price Summary


→ Check Current Price: Beelink EQ14 on Amazon — Intel N150, 6W idle, handles 10–15 Docker containers simultaneously
→ Check Current Price: Beelink SER9 PRO+ on Amazon — 8-core Ryzen 7, 8W idle, ideal for 20+ containers or mixed Docker + VM workloads
→ Check Current Price: GMKtec K11 on Amazon — Ryzen 9 8945HS, dual Intel NICs, upgradeable to 64GB DDR5