Building a Proxmox Cluster with 3 Mini PCs — Setup Guide | Mini PC Lab
By Mini PC Lab Team · January 20, 2026 · Updated March 27, 2026
This article contains affiliate links. If you purchase through our links, we may earn a commission at no extra cost to you. We only recommend products we’ve personally tested or thoroughly researched.

A three-node Proxmox cluster lets you migrate VMs between nodes without downtime, restart VMs automatically when a node fails, and use Ceph for distributed shared storage — all with consumer mini PCs that cost less than a single enterprise server. This guide covers the complete setup from three fresh Proxmox installs to a working high-availability cluster.
Before You Start
Requirements:
- 3 mini PCs with identical or similar specs (makes resource planning simpler)
- Each node: Proxmox VE 8.x installed, accessible via IP
- Each node: at least 16GB RAM for VMs; 32GB+ recommended
- Each node: at least one NVMe SSD (a second drive per node is recommended for Ceph OSD)
- A network switch with sufficient ports (gigabit minimum; 2.5GbE or 10GbE for Ceph storage traffic)
- Static IPs assigned to all three nodes
- Estimated time: 2–3 hours for complete cluster + Ceph setup
Recommended hardware: Three identical Beelink SER9 PRO+ units (~$380–480 each) give you 8 cores / 16 threads per node, 32GB LPDDR5X (soldered), and low idle power (~8W/node). Three nodes at 8W idle is 24W continuous, about 210 kWh and roughly $24–25 per year in electricity at typical US rates.
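The electricity figure is easy to sanity-check. A back-of-envelope sketch, assuming 24 W continuous draw and a $0.12/kWh rate (both assumptions; adjust for your hardware and utility):

```shell
# Back-of-envelope idle power cost (assumed: 3 nodes x 8 W, $0.12/kWh)
watts=24                                               # 3 nodes x 8 W idle
kwh=$(awk -v w="$watts" 'BEGIN { printf "%.0f", w * 8760 / 1000 }')
cost=$(awk -v k="$kwh" 'BEGIN { printf "%.0f", k * 0.12 }')
echo "~${kwh} kWh/year, ~\$${cost}/year"               # ~210 kWh/year, ~$25/year
```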
Not ready to build a cluster? See our best mini PC for Proxmox guide for single-node setup recommendations.
Understanding Proxmox Clustering
Why 3 nodes, not 2?
Proxmox clusters use quorum — a majority-vote system to decide cluster state. With 2 nodes, if the network link between them breaks, neither node can tell whether the other is truly down or just unreachable. To avoid a split-brain scenario (both nodes running the same VMs independently), each node loses quorum and stops running VMs to be safe.
With 3 nodes, 2 nodes always form a majority. If node 3 goes offline, nodes 1 and 2 have quorum and keep running. The failed node’s VMs migrate to the surviving nodes automatically (with HA configured).
Network architecture:
Management network: 192.168.1.0/24
node1: 192.168.1.51
node2: 192.168.1.52
node3: 192.168.1.53
Ceph/cluster network (separate VLAN or second NIC): 10.10.0.0/24
node1: 10.10.0.1
node2: 10.10.0.2
node3: 10.10.0.3
The separate Ceph network is optional but recommended — Ceph replication traffic is heavy and shouldn’t compete with VM traffic on the management network.
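If you use a second NIC, the Ceph interface on node 1 might look like this in /etc/network/interfaces — a sketch only; the NIC name enp2s0 is a placeholder (check yours with ip link):

```
# /etc/network/interfaces (excerpt) — dedicated NIC for Ceph traffic
auto enp2s0
iface enp2s0 inet static
    address 10.10.0.1/24
```

Repeat on nodes 2 and 3 with 10.10.0.2 and 10.10.0.3, then apply with ifreload -a (Proxmox ships ifupdown2).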
Step 1: Prepare Each Node
On each of the three nodes, complete a fresh Proxmox VE 8.x install. See our Proxmox installation guide for the full walkthrough.
After install on each node:
# Switch to community repos (no subscription required)
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" >> /etc/apt/sources.list
echo "# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise" > /etc/apt/sources.list.d/pve-enterprise.list
# Update
apt update && apt full-upgrade -y
# Set hostnames (do this on each respective node)
hostnamectl set-hostname pve01 # on node 1
hostnamectl set-hostname pve02 # on node 2
hostnamectl set-hostname pve03 # on node 3
Edit /etc/hosts on each node to map all three hostnames:
nano /etc/hosts
Add (using your actual IPs):
192.168.1.51 pve01
192.168.1.52 pve02
192.168.1.53 pve03
Do this on all three nodes. Proxmox uses hostnames for cluster communication.
Step 2: Create the Cluster on Node 1
On node 1 only:
pvecm create my-homelab-cluster
Verify the cluster is running:
pvecm status
# Should show: Cluster information, nodes: 1
Step 3: Join Nodes 2 and 3
On node 2:
# Join the cluster — enter node 1's IP and root password when prompted
pvecm add 192.168.1.51
On node 3:
pvecm add 192.168.1.51
Verify cluster formation:
# Run on any node
pvecm status
# Should show: Nodes: 3, Quorate: Yes
The Proxmox web UI now shows all three nodes in the left sidebar when you log in to any node at https://[ANY-NODE-IP]:8006.
Step 4: Configure Ceph Distributed Storage (Optional but Recommended)
Ceph creates a pool of storage distributed across all three nodes. VMs stored on Ceph can be live-migrated between nodes without shared NAS hardware.
4a. Install Ceph on Each Node
In the Proxmox web UI, click on each node → Ceph → Install (if not already installed). Select the “Quincy” or “Reef” release.
Or via terminal on each node:
pveceph install --repository no-subscription
4b. Create the Ceph Monitor on Each Node
A Ceph monitor tracks cluster state. Three monitors provide quorum.
# Run on each node
pveceph mon create
Verify monitors: Datacenter → Ceph → Monitor → you should see 3 monitors with “in quorum.”
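You can also confirm monitor quorum from any node's shell (output format may vary slightly between Ceph releases):

```
# All three monitors should appear under "quorum_names"
ceph quorum_status --format json-pretty | grep -A4 quorum_names
```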
4c. Add OSD (Storage) on Each Node
Each NVMe drive you want in the Ceph pool becomes an OSD. Ceph works best with dedicated drives separate from the OS drive.
Via the web UI: Node → Ceph → OSD → Create OSD → select your second NVMe disk
Via terminal:
# On each node — replace /dev/nvme1n1 with your data drive
pveceph osd create /dev/nvme1n1
After adding OSDs on all three nodes, wait for Ceph to sync. Check status:
ceph -s
# Look for: health: HEALTH_OK and all PGs active+clean
4d. Create a Ceph Pool for VMs
Datacenter → Ceph → Pools → Create:
- Name: vm-pool
- Size: 2 (2 replicas — data on 2 of 3 nodes; use 3 for better redundancy)
- Min. Size: 1 (caution: min_size 1 lets Ceph accept writes with only a single surviving copy; 2 is the safer setting)
After creating the pool, Datacenter → Storage → Add → RBD:
- ID: ceph
- Pool: vm-pool
- Nodes: all
This makes the Ceph pool available as storage for VMs across all nodes.
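The same pool and storage entry can be created from the shell. A sketch using the names from the UI steps above (verify the flags against your Proxmox version):

```
# Create the Ceph pool with the size/min_size chosen above
pveceph pool create vm-pool --size 2 --min_size 1
# Register it as RBD storage for VM disks and containers
pvesm add rbd ceph --pool vm-pool --content images,rootdir
```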
Step 5: Configure High Availability
HA automatically restarts VMs on surviving nodes when a node fails.
Create an HA Group:
Datacenter → HA → Groups → Add:
- Group ID: ha-group
- Nodes: select all three with equal priority
Enable HA for a VM:
Select a VM → More → Manage HA → Enable:
- Group: ha-group
- Max Restart: 3
When a node with an HA-enabled VM goes offline, Proxmox automatically starts that VM on one of the other nodes within 60–120 seconds.
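The same HA policy can be set from the shell with ha-manager (VM ID 100 is a placeholder — use your own VMID):

```
# Put VM 100 under HA management in the group created above
ha-manager add vm:100 --group ha-group --max_restart 3
# Check HA resource state across the cluster
ha-manager status
```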
Step 6: Test the Cluster
Test live migration:
- Create a VM on node 1 with its disk on Ceph storage
- Right-click the VM → Migrate → select node 2 → Migrate
The VM migrates to node 2 with ~1 second of downtime (for most VMs). VMs on local storage require offline migration; VMs on Ceph storage support live migration.
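Migration also works from the shell (VM ID 100 and target pve02 are placeholders):

```
# Live-migrate a running VM whose disk is on Ceph storage
qm migrate 100 pve02 --online
```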
Test HA failover:
- Enable HA on a test VM (per Step 5)
- Simulate node failure: run systemctl stop pve-cluster corosync on the node running the VM
- Wait 60–120 seconds — the VM should restart on another node automatically
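After the test, bring the "failed" node back into the cluster by restarting the services you stopped:

```
# On the node where services were stopped
systemctl start corosync pve-cluster
# Confirm the node has rejoined
pvecm status
```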
Maintenance Procedures
Rolling update (update one node at a time without downtime):
# 1. Migrate all VMs off the node being updated
# In web UI: Node → Bulk Actions → Migrate
# 2. Update the node
apt update && apt full-upgrade -y
reboot
# 3. Verify node rejoins cluster after reboot
pvecm status
# 4. Migrate VMs back, repeat for next node
Remove a node from the cluster:
# Run from a node that will REMAIN in the cluster — migrate VMs off the target node first
pvecm delnode pve03
# A deleted node must be reinstalled with a fresh Proxmox install before it can rejoin
Quick Price Summary
- Beelink SER9 PRO+ — Recommended 3-node cluster unit
- Beelink EQ14 — Budget node, dual Intel 2.5GbE NICs for Ceph traffic
- GMKtec K11 — Premium node, OCuLink expansion
Troubleshooting
Cluster shows “no quorum” after adding nodes
SSH is required for cluster join — verify SSH is enabled on all nodes and node 1 can reach nodes 2 and 3 by hostname. Verify /etc/hosts has all three hostnames on all nodes.
Ceph OSDs show “down”
Usually a permission or drive initialization issue. Check: ceph osd tree and journalctl -u ceph-osd@X (replace X with the OSD number). Re-creating the OSD after wiping the drive with ceph-volume lvm zap --destroy /dev/nvme1n1 resolves most issues.
VMs won’t migrate — “storage not shared”
HA live migration requires VMs to use shared storage (Ceph). VMs on local storage (local-lvm) can only be migrated offline. Move the disk to Ceph: VM → Hardware → disk → Move Disk → Target: ceph.
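The disk move also works from the shell (VM ID 100 and disk scsi0 are placeholders — substitute your own VMID and disk name):

```
# Move a VM's disk from local storage to the Ceph pool
qm move-disk 100 scsi0 ceph
```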
Recommended Hardware for a 3-Node Cluster
→ Check Current Price: Beelink SER9 PRO+ on Amazon — 8-core Ryzen 7, 8W idle (×3 = $24/year electricity), ideal cluster node
→ Check Current Price: Beelink EQ14 on Amazon — budget option, 6W idle, dual Intel 2.5GbE for Ceph traffic
→ Check Current Price: GMKtec K11 on Amazon — premium cluster node, dual Intel NICs, OCuLink for eGPU expansion
See also: best mini PC for Proxmox | how to install Proxmox on a mini PC