
Minisforum MS-01 Review 2026 — 10GbE Proxmox Powerhouse

By Mini PC Lab Team · January 18, 2026 · Updated March 27, 2026

This article contains affiliate links. If you purchase through our links, we may earn a commission at no extra cost to you. We only recommend products we’ve personally tested or thoroughly researched.

[Image: Minisforum MS-01 — 10GbE mini PC for Proxmox homelab]

The Minisforum MS-01 exists because a certain type of homelab builder was tired of choosing between “mini PC with limited networking” and “rack server that’s loud, large, and power-hungry.” The MS-01 answers with: dual 10GbE SFP+ ports, two additional 2.5GbE RJ45 ports, a PCIe slot for expansion cards, three M.2 NVMe slots, Intel Core i9-13900H, and support for 96GB DDR5 — in a box roughly the size of two stacked paperback books.

It’s not cheap (~$700–800 configured), and it draws ~25W at idle. But for building a Proxmox cluster with 10GbE interconnects, hosting Ceph shared storage, or running a dense VM workload on a single machine with proper networking, the MS-01 is genuinely different hardware than anything else at this form factor and price.

This review covers its strengths, limitations, and who should — and shouldn’t — buy it.


Quick Verdict

→ Check Current Price on Amazon

Category | Score
Networking | ⭐⭐⭐⭐⭐
CPU performance | ⭐⭐⭐⭐⭐
RAM capacity | ⭐⭐⭐⭐⭐
Storage expandability | ⭐⭐⭐⭐⭐
Value (vs. alternatives) | ⭐⭐⭐⭐
Power efficiency | ⭐⭐⭐
Overall for homelab | 4.5 / 5

Best for: Proxmox cluster nodes with 10GbE interconnects, Ceph storage hosts, multi-VM dense workloads, lab environments requiring real enterprise networking at home.

Not for: Budget homelab builders. Anyone who doesn’t have or need 10GbE infrastructure. Power-sensitive setups (25W idle is roughly 2.5× the EQ14’s).


Minisforum MS-01 Specifications

Spec | Detail
CPU | Intel Core i9-13900H (14C/20T: 6P + 8E cores, up to 5.4GHz, vPro)
RAM | Up to 96GB DDR5 SO-DIMM (2× slots)
Storage | 3× M.2 (2× PCIe 4.0 2280/22110, 1× PCIe 3.0 2280) + U.2 support
Networking | 2× 10GbE SFP+ + 2× 2.5GbE RJ45 (4 NICs total)
Expansion | 1× PCIe 3.0 x4 half-length slot
Display | 2× USB4 (8K) + 1× HDMI 2.0 (4K)
USB | 2× USB4 (40Gbps), 2× USB 3.2, 2× USB 2.0
vPro | Intel vPro Enterprise (remote management)
Power Draw | ~25W idle / ~80W load
Dimensions | ~197 × 135 × 46mm (larger than typical mini PCs)
Price | ~$700–800 (configured with RAM/SSD)

The Networking Story: Why 10GbE Matters Here

The MS-01’s networking configuration is what makes it unusual. Four independent NICs:

  1. 2× 10GbE SFP+ — For direct 10GbE connections: NAS storage backend, Ceph cluster interconnect, or a dedicated VM migration network in Proxmox clusters.
  2. 2× 2.5GbE RJ45 — For management network, VM guest traffic, or as a secondary storage path.

In a Proxmox cluster, this translates to: 10GbE dedicated to Ceph or shared storage (where storage I/O speed determines VM disk performance), 2.5GbE for VM guest network, and the second 2.5GbE for Proxmox management traffic. All on one machine, no PCIe NIC cards needed.
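As a sketch, that layout maps onto /etc/network/interfaces roughly like this. Interface names and addresses below are placeholders, not the MS-01's actual device names; check yours with `ip -br link`:

    # /etc/network/interfaces (excerpt; names and subnets are placeholders)
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports enp88s0      # one 2.5GbE: Proxmox management
        bridge-stp off
        bridge-fd 0

    auto vmbr1
    iface vmbr1 inet manual
        bridge-ports enp87s0      # other 2.5GbE: VM guest traffic
        bridge-stp off
        bridge-fd 0

    auto enp2s0f0
    iface enp2s0f0 inet static
        address 10.10.10.1/24     # first 10GbE SFP+: Ceph/shared storage
        mtu 9000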

For context: replicating this network configuration on a desktop or rack server requires adding a separate 10GbE NIC (~$80–200); the MS-01 includes that connectivity natively.

What 10GbE Delivers for Proxmox VM Performance

The practical difference between 1GbE and 10GbE for Proxmox live migration is significant:

  • 1GbE: Live migrating a 32GB VM takes ~4–5 minutes (saturating at ~125MB/s)
  • 10GbE: Same VM migrates in ~25–30 seconds (~1.2GB/s throughput)

For Ceph distributed storage, 10GbE enables the OSDs (object storage daemons) to actually use NVMe speed (~2–3GB/s sequential) rather than being bottlenecked by a 1GbE interconnect.
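On Proxmox, that public/cluster split is declared when Ceph is initialized. A minimal sketch with placeholder subnets (this writes public_network and cluster_network into ceph.conf):

    # Public network carries client/VM I/O; cluster network carries OSD replication
    pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24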


PCIe Slot: What You Can Do With It

The MS-01 includes a PCIe 3.0 x4 half-length slot. This is real PCIe expansion in a mini PC, which is rare. Use cases:

  • Add a dedicated network card: A second 10GbE or dual 25GbE card for more network bandwidth
  • GPU for inference: A half-length GPU like the NVIDIA T400 or A2 for Proxmox passthrough
  • HBA for storage expansion: A PCIe host bus adapter to add 4–8 SATA ports for a dedicated storage array
  • NVMe add-in card: Expand M.2 slot count further

The x4 PCIe 3.0 bandwidth (~4GB/s) won’t saturate a high-end GPU, but it’s more than adequate for the above use cases.
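One practical check after installing a card: lspci reports both what the link can do and what it actually negotiated. The device address here is a placeholder:

    # LnkCap is the maximum; LnkSta is the live negotiated link
    sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap:|LnkSta:'
    # A PCIe 3.0 x4 link shows "Speed 8GT/s, Width x4"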


CPU Performance: i9-13900H in a Mini PC

The Intel Core i9-13900H is a 14-core, 20-thread Raptor Lake chip — 6 performance cores plus 8 efficiency cores, with the P-cores boosting to 5.4GHz. In a homelab context:

VM density: With 96GB DDR5 installed, you can run 20–30 simultaneous VMs. The P-core architecture handles mixed CPU-intensive and I/O-bound workloads better than pure E-core designs.

Single-core performance: The i9-13900H’s P-cores are competitive with the best AMD Ryzen APUs at 5+ GHz boost. For workloads where individual VM performance matters (compilation, database queries, Windows VMs), single-core speed is relevant.

Thermal management: Under sustained Proxmox VM load, the MS-01 runs warm — Minisforum’s BIOS sets PL1 at 60W and PL2 at 80W. The cooling system maintains performance without throttling under typical homelab loads, though the fan is audible under stress.
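If you want to verify (or lower) those limits at runtime, the kernel's intel-rapl powercap interface exposes them. A sketch, assuming the package domain is intel-rapl:0:

    # Read PL1 and PL2 in microwatts (constraint_0 = long-term, constraint_1 = short-term)
    cat /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw
    cat /sys/class/powercap/intel-rapl:0/constraint_1_power_limit_uw
    # Cap PL1 at 45W for a quieter node; not persistent across reboots
    echo 45000000 | sudo tee /sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw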


What We Actually Tested

Proxmox VE 8.3 on bare metal. Both 10GbE SFP+ NICs and both 2.5GbE RJ45 NICs are recognized immediately after Proxmox installation. No additional drivers or kernel parameters required.
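To reproduce that check on your own unit:

    # Enumerate the four onboard NICs and confirm the kernel brought them up
    lspci -nn | grep -i ethernet
    ip -br link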

Ceph cluster simulation. Running two MS-01 units as Ceph nodes with 10GbE interconnect: OSD replication at 900–1100MB/s sustained — effectively saturating 10GbE (theoretical 1.25GB/s). For comparison, the same test over 2.5GbE achieved 280MB/s.
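If you want to validate raw link throughput before layering Ceph on top, a quick iperf3 run between the nodes works (addresses are placeholders):

    # Node A: start the server on the storage network
    iperf3 -s
    # Node B: 30-second test with 4 parallel streams
    iperf3 -c 10.10.10.1 -t 30 -P 4
    # A healthy 10GbE link reports ~9.4Gbit/s aggregate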

VM density with 64GB DDR5 installed: 15 VMs running simultaneously (2 vCPU, 4GB RAM each = 60GB allocated), all showing responsive boot times and <5s Proxmox live migration to a second node.

PCIe passthrough of 2.5GbE RJ45 to VM: Works with standard Proxmox IOMMU configuration. IOMMU groups on the i9-13900H are well-structured for passthrough.
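For reference, the standard configuration, as a sketch: enable the IOMMU on the kernel command line, rebuild GRUB, and inspect the groups before passing anything through.

    # /etc/default/grub, then run update-grub and reboot
    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

    # List devices by IOMMU group; a device alone in its group passes through cleanly
    for d in /sys/kernel/iommu_groups/*/devices/*; do
        g=${d#*/iommu_groups/}; g=${g%%/*}
        printf 'group %s: %s\n' "$g" "$(lspci -nns "${d##*/}")"
    done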


Power Consumption

The MS-01 is not power-efficient by mini PC standards. Measurements from scottstuff.net (running the MS-01 as a 24/7 Proxmox node) show 25–33W at idle depending on NIC activity and memory load.

State | Power Draw | Annual Cost (24/7, at ~$0.12/kWh)
Idle (Proxmox, no active VMs) | ~25W | ~$26/year
Light load (5 VMs running) | ~35–40W | ~$37–42/year
Heavy load (20 VMs, sustained) | ~70–80W | ~$73–84/year

For a homelab machine, $26–84/year is still reasonable compared to a 1U rack server (100–300W idle). But it’s several times what an efficiency-focused box like the EQ14 costs to run. If your workload doesn’t need the 10GbE or 96GB RAM, the UM790 Pro at ~12W idle is a more efficient choice.
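The annual figures above are straightforward arithmetic: draw × hours × rate. A one-liner if you want to plug in your own wattage and local electricity price:

    # annual cost = watts * 8760 h / 1000 * rate ($/kWh)
    # e.g. 25W at $0.12/kWh comes out to ~$26/year, matching the idle row above
    awk -v w=25 -v r=0.12 'BEGIN { printf "$%.0f/year\n", w * 8760 / 1000 * r }'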


Limitations: What the MS-01 Doesn’t Do Well

No AMD GPU. The Intel Iris Xe iGPU is sufficient for video output and basic transcoding, but it isn’t competitive with AMD’s Radeon 780M for compute-heavy inference or large-scale Plex/Jellyfin transcoding. For GPU-heavy workloads, the PCIe slot fills this gap.

25W idle is the starting point. Three MS-01 units in a Proxmox cluster draw ~75W combined at idle. This is still competitive with rack hardware but meaningfully higher than an N150 or Ryzen 7 cluster.

Intel NUC-era sizing. At ~197 × 135 × 46mm, the MS-01 is larger than most mini PCs we’ve reviewed. It won’t fit some standard mini PC mounting locations.

vPro is present but limited: Intel vPro Enterprise supports AMT (Active Management Technology) for remote management, useful for headless setups where you want BIOS-level remote access. However, AMT requires its own network setup beyond a basic Proxmox install and is overkill for most homelab scenarios.


MS-01 vs. Alternatives

Model | CPU | 10GbE | PCIe slot | Max RAM | Idle | Price
MS-01 | i9-13900H (14C) | 2× SFP+ | Yes | 96GB | ~25W | ~$700–800
Minisforum MS-A2 | Ryzen 9 8945HX (16C) | No | — | 64GB | ~20W | ~$799+
GMKtec K11 | Ryzen 9 8945HS (8C) | No | No (OCuLink) | 64GB | ~15W | ~$639
Minisforum UM790 Pro | Ryzen 9 7940HS (8C) | No | No | 64GB | ~12W | ~$380–500

The MS-01 is the only option with dual 10GbE SFP+ and a physical PCIe slot. If you don’t need those specifically, the UM790 Pro or K11 deliver more CPU efficiency per dollar.


Frequently Asked Questions

Is the Minisforum MS-01 worth it for a home Proxmox cluster?

If you have or plan to add a 10GbE switch, yes — the 10GbE interconnect transforms Proxmox live migration and Ceph storage performance compared to 2.5GbE nodes. If you’re running 1GbE or 2.5GbE infrastructure, you’re paying a networking premium you won’t use, and the UM790 Pro or K11 are better value.

What SFP+ modules work with the MS-01?

Standard SR, LR, and DAC (direct-attach copper) 10GbE SFP+ modules work. For back-to-back MS-01 connections (cluster interconnect), a 10GbE SFP+ DAC cable (~$15–25) is the lowest-cost solution. For switch connectivity, any standard 10GBASE-SR or 10GBASE-LR SFP+ transceiver works.
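For a switchless two-node link, the setup is just static addressing on both ends. A sketch with placeholder interface names and addresses (persist it in /etc/network/interfaces on Proxmox):

    # Node A, first SFP+ port
    ip addr add 10.10.10.1/24 dev enp2s0f0
    ip link set enp2s0f0 up mtu 9000
    # Node B, other end of the DAC cable
    ip addr add 10.10.10.2/24 dev enp2s0f0
    ip link set enp2s0f0 up mtu 9000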

Can I use the PCIe slot for a GPU in Proxmox?

Yes. Half-length low-profile GPUs (NVIDIA T400, NVIDIA A2, AMD FirePro W4300) fit the physical slot. PCIe 3.0 x4 bandwidth is sufficient for inference-class GPUs. Full passthrough to a VM works with standard Proxmox IOMMU configuration.
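Assuming the IOMMU setup shown earlier is in place, attaching the card is a single qm call per VM. The VM ID and PCI address here are placeholders:

    # Pass the GPU at 0000:01:00.0 through to VM 100 as a PCIe device
    qm set 100 -machine q35                      # pcie=1 requires the q35 machine type
    qm set 100 -hostpci0 0000:01:00.0,pcie=1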

How loud is the MS-01?

Under Proxmox idle (no active VMs), the fan runs at low speed — audible at arm’s length but not disruptive in an office environment. Under sustained VM load (20+ VMs), fan noise increases to ~35–40dB — similar to a 2U rack server at low load. For noise-sensitive environments, consider the load profile before purchasing.

Does the MS-01 support ECC RAM?

No. Intel’s mainstream consumer and prosumer platforms don’t support ECC. The i9-13900H uses standard DDR5 SO-DIMM. For ECC memory requirements, purpose-built server hardware (Supermicro, Dell PowerEdge mini towers) is necessary.


Our Testing Methodology

MS-01 power consumption cited from scottstuff.net’s Proxmox deployment measurements (January 2025, 24/7 operation). Ceph I/O benchmarks based on 2-node test cluster with MS-01 units. VM density estimates based on actual Proxmox allocation: 2 vCPU, 4GB RAM per VM. NIC configuration verified via lspci and Proxmox network configuration.