Mason Shopperly

MS-06 · built-systems · Build · Reviewed Apr 27, 2026

Multi-node CFD server

A small local cluster Mason built so OpenFOAM, Nektar++, and other CFD / numerical workloads get more cores, faster turnaround, and remote access, all under proper monitoring.

Role
Built and operated
Era
2024–ongoing
Status
ongoing
Tier
established
Tools
Linux · MPI · OpenFOAM · Nektar++ · remote ops · monitoring
Topology — four commodity machines on the home LAN: workstation (Mason's i7-11800H laptop), a matched i7-11800H laptop as compute node, a Dell OptiPlex 990 i7 tower, and a smaller Dell Vostro 3710 tower, running aeroAUTO / aeroBASE / CFDLab / Nektar++ / SBP-Gregory workloads. No rack; the network and patience do the work.

Why build the box

Serious external-aero CFD does not fit on a single laptop, and the work turns over often enough that renting cloud cycles every iteration is the wrong shape. The point was to own the compute: more cores than a laptop, turnaround that supports daily iteration, runnable from anywhere, with real visibility into what the cluster is doing.

What it is

A local multi-machine compute setup wired for parallel CFD and numerical workloads. MPI ties the nodes together; cases are launched from the workstation and watched while they run.
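The node wiring can be sketched as a plain MPI hostfile plus a launch line. This is a config sketch, not the project's actual files: the hostnames and slot counts below are illustrative, and the solver is just one example of an OpenFOAM run.

```sh
# hostfile — illustrative node names and slot counts, not the real LAN hostnames
laptop-a  slots=8
laptop-b  slots=8
optiplex  slots=4
vostro    slots=4

# Launch an OpenFOAM solver across the nodes, from a case directory
# that has already been split with decomposePar into 24 subdomains.
mpirun --hostfile hostfile -np 24 simpleFoam -parallel
```

The slot counts are what keep one machine from being oversubscribed while another idles; they should match each node's physical core count.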

Workloads it runs

The cluster is solver-agnostic; what runs on it is whatever the work needs.

What it earns

The thing the cluster pays back is iteration discipline. Running cases locally — and watching them run — surfaces the issues that a black-box rented run would hide: a residual that ticks up at restart, a y⁺ that drifts after remeshing, a node load that says one rank is doing all the work. The point of the cluster is not the FLOPs; it is the visibility.
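The "residual that ticks up at restart" is visible directly in an OpenFOAM solver log. A minimal sketch of that kind of monitoring, assuming the standard `Solving for <field>, Initial residual = ...` log format (the helper names here are illustrative, not the project's tooling):

```python
import re

# Matches the initial-residual line OpenFOAM solvers print each iteration,
# e.g.: Solving for Ux, Initial residual = 0.01, Final residual = 1e-06, ...
RESIDUAL_RE = re.compile(r"Solving for (\w+), Initial residual = ([0-9.eE+-]+)")

def parse_initial_residuals(log_text):
    """Return {field: [initial residuals, in order of appearance]}."""
    out = {}
    for field, value in RESIDUAL_RE.findall(log_text):
        out.setdefault(field, []).append(float(value))
    return out

def upticks(series, factor=10.0):
    """Indices where the residual jumps by more than `factor` over the previous step."""
    return [i for i in range(1, len(series)) if series[i] > factor * series[i - 1]]

sample = """\
Solving for Ux, Initial residual = 0.01, Final residual = 1e-06, No Iterations 5
Solving for Ux, Initial residual = 0.001, Final residual = 1e-07, No Iterations 4
Solving for Ux, Initial residual = 0.05, Final residual = 1e-06, No Iterations 6
"""
res = parse_initial_residuals(sample)
print(res["Ux"])           # [0.01, 0.001, 0.05]
print(upticks(res["Ux"]))  # [2] — the third step jumped, as a restart spike would
```

Tailing the live log through a filter like this is what turns "the run finished" into "the run converged".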

Workload — aeroAUTO Ahmed-25 symmetry-velocity (M25, t=3600), ParaView render from the cluster's stored case. Mesh-independence runs M0..M35 lived on this cluster; the render comes directly out of the case directory.
Workload — aeroBASE compressible Euler: Mach-2 wedge oblique-shock reflection, numerical schlieren. The compose-first descriptor pipeline writes runnable workspaces; the cluster runs them.
Workload — CFDLab analytical layer: modified-wavenumber analytical-probe figure. Numerics development is small enough to run anywhere; what the cluster adds is the parameter-sweep + CI footprint.
Workload — aeroBASE CYL Re=100 von-Kármán vortex shedding, rendered live from the cluster's case directory; this is what 'remote operation + monitoring' actually outputs.
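The parameter-sweep footprint mentioned for CFDLab can be sketched as a small grid expansion: each point in a Cartesian product of parameters becomes one case label for the cluster (or a CI matrix) to run. The names here are illustrative, not the project's actual API:

```python
from itertools import product

def sweep(grid):
    """Yield one parameter dict per point in the Cartesian product of `grid`."""
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def case_label(params):
    """Deterministic run-directory name for one parameter point."""
    return "_".join(f"{k}{params[k]}" for k in sorted(params))

# Hypothetical grid: two Reynolds numbers crossed with two mesh levels.
grid = {"Re": [100, 1000], "mesh": ["M0", "M25"]}
labels = [case_label(p) for p in sweep(grid)]
print(labels)  # ['Re100_meshM0', 'Re100_meshM25', 'Re1000_meshM0', 'Re1000_meshM25']
```

Keeping labels deterministic is the design choice that matters: a rerun of the same grid lands in the same directories, so stale and fresh results never mix.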

Working notes — what's open on this lane
