
MinIO AIStor benchmarks · 8-node cluster

The exascale data store for the AI enterprise.

Two production-style sweeps of MinIO AIStor on 192× Solidigm D5-P5336 122 TB QLC NVMe drives over a dual 400 GbE Spectrum-X fabric. Headline numbers below.

Validated stack: MinIO · Solidigm · Intel · FarmGPU
Peak GET
267.76 GiB/s
8 clients · 256 MiB
vs fabric ceiling: 72%

Node scaling · single-NIC

Peak PUT
120.61 GiB/s
7 clients · plateau
vs fabric ceiling: 32%

Node scaling · single-NIC

Peak sustained GET
195.24 GiB/s
128 MiB · c=192
vs fabric ceiling: 66%

Object size × concurrency · dual-NIC

Peak sustained PUT
133.71 GiB/s
1 GiB · c=192
vs fabric ceiling: 45%

Object size × concurrency · dual-NIC

GET TTFB median
3–5 ms
Best 2 ms · all 8 cells
Burst GET (1-second)
272.43 GiB/s
Headroom above sustained median
Sustained run
4.0 hours
0 errors across 40 cells
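The single-NIC "vs fabric ceiling" percentages can be reconstructed from the fabric spec. A minimal sketch, assuming the ceiling is the aggregate line rate of the 8 client NICs (8 × 400 Gbit/s ≈ 372.5 GiB/s) — an assumption not stated on the dashboard, but one that reproduces the 72% and 32% figures above:

```python
# Sketch: reproduce the single-NIC "vs fabric ceiling" percentages.
# Assumption: the ceiling is the aggregate line rate of 8 client
# nodes with one 400 GbE NIC each.
nodes = 8
nic_gbit = 400                                    # 400 GbE, one NIC per client
ceiling_gib = nodes * nic_gbit * 1e9 / 8 / 2**30  # Gbit/s -> GiB/s, ~372.5

for name, gib in [("Peak GET", 267.76), ("Peak PUT", 120.61)]:
    print(f"{name}: {gib / ceiling_gib:.0%} of {ceiling_gib:.1f} GiB/s ceiling")
# Peak GET: 72% of 372.5 GiB/s ceiling
# Peak PUT: 32% of 372.5 GiB/s ceiling
```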
2026-04-29
Node scaling
Add one client at a time on a single-NIC fabric to show how throughput scales from 1 to 8 client nodes.
Peak GET
268 GiB/s
Peak PUT
121 GiB/s
TTFB
3–5 ms
GET ramp, 1→8 clients: 39 → 268 GiB/s
256 MiB · c=32/client · EC:4 (4 data + 4 parity, 24 sets × 8 drives) · kernel 7.0.1-1.el10.elrepo
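One way to drive a ramp like this is MinIO's warp load generator, adding one client node per step at the cell parameters shown above (256 MiB objects, c=32 per client). A sketch only: host names are hypothetical, credentials are omitted, and the actual invocation used for these runs is not shown on the dashboard.

```python
# Sketch: generate one warp command per step of the 1->8 client ramp.
# `warp get` with --warp-client drives multiple client nodes at once;
# the host names below are placeholders, not the lab's real hosts.
import shlex

clients = [f"client{i}.lab:7761" for i in range(1, 9)]  # hypothetical hosts

for n in range(1, len(clients) + 1):
    cmd = [
        "warp", "get",
        "--warp-client", ",".join(clients[:n]),  # drive n client nodes
        "--host", "minio{1...8}.lab:9000",       # hypothetical server pool
        "--obj.size", "256MiB",                  # matches the dashboard cell
        "--concurrent", "32",                    # c=32 per client
        "--duration", "1m",
    ]
    print(shlex.join(cmd))
```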
2026-04-28
Object size × concurrency
Dual-NIC sweep matrix covering every combination of 4 object sizes and 8 concurrency points.
Peak GET
195 GiB/s
Peak PUT
134 GiB/s
Burst GET
210 GiB/s
GET vs concurrency, 1 GiB objects: 75 → 185 GiB/s
4 object sizes · 8 concurrency points · EC:3 (3 parity, 12 sets × 16 drives, sdc=auto) · kernel 6.12 (TractorOS RHEL 10.1 bootc)
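The 4 × 8 matrix can be enumerated mechanically. Only two object sizes (128 MiB and 1 GiB) and one concurrency value (c=192) are visible on the dashboard, so the remaining entries below are placeholders, not the actual sweep values:

```python
# Sketch: enumerate the dual-NIC sweep matrix. Only 128 MiB, 1 GiB,
# and c=192 appear on the dashboard; the other sizes ("?") and the
# concurrency ladder below are illustrative placeholders.
from itertools import product

obj_sizes = ["128MiB", "1GiB", "?", "?"]          # 2 of 4 sizes known
concurrency = [8, 16, 32, 48, 64, 96, 128, 192]   # 8 points; only c=192 shown

cells = list(product(obj_sizes, concurrency))
print(len(cells))  # 4 sizes x 8 concurrency points = 32 cells
```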
Cluster at a glance
The hardware behind both runs
Full spec
CPU
Intel Xeon 6781P · 80 cores · single socket
MinIO
AIStor 2026-03-11 (DEV dual-NIC)
Raw capacity
~23.5 PB across 192 drives
Fabric
Dual 400 GbE · MTU 9000
Erasure coding
EC:4 (4 data + 4 parity, 24 sets × 8 drives) · EC:3 (3 parity, 12 sets × 16 drives)
Kernels
7.0.1-1.el10.elrepo (node scaling) · 6.12 (sweep matrix)
OS
TractorOS RHEL 10.1 bootc
Total errors
0 across 40 cells
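The two erasure-code layouts imply different usable capacity from the ~23.5 PB raw pool. A back-of-envelope sketch, assuming MinIO's EC:N notation (N parity shards per stripe) and the stripe geometries listed above:

```python
# Sketch: usable capacity implied by the two erasure-code layouts.
# In MinIO's EC:N notation, N is the parity-shard count per stripe.
raw_pb = 23.5  # ~23.5 PB across 192 drives

def usable(raw: float, data: int, parity: int) -> float:
    """Usable capacity = raw * data / (data + parity)."""
    return raw * data / (data + parity)

print(usable(raw_pb, 4, 4))   # EC:4, 8-drive stripes  -> 11.75 PB
print(usable(raw_pb, 13, 3))  # EC:3, 16-drive stripes -> ~19.1 PB
```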