Mason Shopperly

MS-01 · built-systems · Treatise · Reviewed Apr 29, 2026

Sepal AI · reproducible aerospace analysis workflows

Aerospace engineering inside Sepal AI's task-based system — building reproducible analysis workflows, including constraint-driven fixed-wing UAV sizing, that someone without the original setup can re-run and trust.

Role
Aerospace Engineer · contract
Era
Nov 2025 – Mar 2026
Status
archived
Tier
established
Tools
Aerospace analysis · Fixed-wing UAV sizing · Reproducible analysis

Problem

A one-off aerospace analysis is half an answer. The result that gets shipped — sized, signed off, reviewed — has to be re-runnable by someone who did not set it up; otherwise the answer is lore the next iteration cannot stand on. The Sepal AI engagement was about doing analysis the team could run again, with the same inputs producing the same outputs every time.

Approach

The engagement ran inside Sepal AI's task-based engineering system. Each task carried explicit inputs, explicit assumptions, and a defined deliverable, so the work itself produced the audit trail rather than needing one bolted on after the fact.

The work centered on constraint-driven fixed-wing UAV sizing — the textbook chain from mission requirements through power loading and wing loading, through the constraint envelope (cruise, climb, stall, turn), to a defensible design point. Sizing of that kind is only as honest as the assumptions feeding it: the air density at altitude, the propeller efficiency band, the structural weight fraction, the payload envelope. Calling those out — by number, by source, in the same task — is half the deliverable.
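The chain can be sketched as a constraint diagram: each mission requirement becomes a curve of required power loading (P/W) against wing loading (W/S), and the design point sits on the envelope of those curves. The sketch below uses the standard level-flight, climb, and sustained-turn relations; every number in it is illustrative, not taken from the engagement.

```python
# Hypothetical constraint-analysis sketch for a small fixed-wing UAV.
# Stall caps wing loading; cruise, climb, and turn each demand a minimum
# power loading that varies with wing loading. All values are illustrative.
import math

RHO = 1.112          # air density at ~1000 m, kg/m^3 (ISA)
CL_MAX = 1.4         # max lift coefficient, hypothetical wing
CD0 = 0.035          # zero-lift drag coefficient
K = 0.045            # induced-drag factor, ~1/(pi * e * AR)
ETA_P = 0.65         # low end of an assumed propeller efficiency band
V_STALL = 12.0       # required stall speed, m/s
V_CRUISE = 22.0      # cruise speed, m/s
ROC = 3.0            # required rate of climb, m/s
N_TURN = 1.4         # sustained-turn load factor (~44 deg bank)

def q(v):
    """Dynamic pressure, Pa."""
    return 0.5 * RHO * v**2

# Stall fixes the maximum wing loading directly: W/S <= q(V_stall) * CL_max.
ws_stall = q(V_STALL) * CL_MAX

def pw_cruise(ws):
    """Power loading (W per N of weight) for steady level flight."""
    tw = q(V_CRUISE) * CD0 / ws + K * ws / q(V_CRUISE)
    return tw * V_CRUISE / ETA_P

def pw_climb(ws):
    """Level-flight demand plus the climb-rate term ROC/V."""
    tw = ROC / V_CRUISE + q(V_CRUISE) * CD0 / ws + K * ws / q(V_CRUISE)
    return tw * V_CRUISE / ETA_P

def pw_turn(ws):
    """Sustained turn: induced drag scales with the load factor squared."""
    tw = q(V_CRUISE) * CD0 / ws + K * N_TURN**2 * ws / q(V_CRUISE)
    return tw * V_CRUISE / ETA_P

# Sweep wing loading up to the stall limit; the binding constraint at each
# point is the max of the three curves, and the design point is the wing
# loading that minimizes that envelope.
candidates = [ws_stall * i / 50 for i in range(1, 51)]
envelope = [(ws, max(pw_cruise(ws), pw_climb(ws), pw_turn(ws)))
            for ws in candidates]
ws_dp, pw_dp = min(envelope, key=lambda p: p[1])
print(f"stall-limited W/S = {ws_stall:.1f} N/m^2")
print(f"design point: W/S = {ws_dp:.1f} N/m^2, P/W = {pw_dp:.2f} W/N")
```

With these particular numbers the climb requirement binds everywhere in range, so the design point lands at the stall-limited wing loading — which is exactly the kind of conclusion that only holds under the stated assumptions and inverts if, say, the propeller efficiency or stall-speed requirement moves.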

The discipline carried from there. Every input is named. Every assumption is logged. Every step is one a non-author can walk through. Where a number gets pulled from an external reference, the reference travels with the task; where an empirical correlation is used, its range of validity is part of the record. The workflow is the answer plus its receipts.
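One way to make that discipline concrete is to give the task record itself a shape: each input carries a value, a unit, and a source, and each assumption carries its stated range of validity. The sketch below is an illustrative schema, not Sepal AI's actual task format.

```python
# Illustrative task-record shape (hypothetical, not Sepal AI's schema):
# inputs travel with their sources, assumptions with their validity ranges,
# so a non-author re-running the task reads the record, not tribal knowledge.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Input:
    name: str
    value: float
    unit: str
    source: str                 # reference the number was pulled from

@dataclass(frozen=True)
class Assumption:
    name: str
    value: float
    valid_range: tuple          # (lo, hi) where the number/correlation holds
    rationale: str

@dataclass
class Task:
    deliverable: str
    inputs: list = field(default_factory=list)
    assumptions: list = field(default_factory=list)

    def check(self):
        """Fail loudly if any assumption sits outside its stated validity."""
        for a in self.assumptions:
            lo, hi = a.valid_range
            if not (lo <= a.value <= hi):
                raise ValueError(f"{a.name}={a.value} outside [{lo}, {hi}]")

task = Task(
    deliverable="constraint diagram + design point",
    inputs=[
        Input("air_density", 1.112, "kg/m^3", "ISA table, 1000 m"),
    ],
    assumptions=[
        Assumption("prop_efficiency", 0.65, (0.5, 0.8),
                   "low end of the efficiency band for small propellers"),
    ],
)
task.check()   # the record is the audit trail; a re-run starts here
```

The point of `check()` is that an assumption drifting outside its stated range fails the re-run immediately, instead of silently producing a design point built on a number that no longer applies.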

Result

A workflow shape that mirrors what aeroAUTO and the Ahmed-body work try to do on the open-source side: turn a one-off analysis into something you can hand off. The constraint diagram and design point are not the whole product — the path that produced them is, because that’s the part the next iteration runs against. Reproducibility is not a deliverable in itself; it is what makes every other deliverable trustable a second time, and a third.

What I’d do differently

Earlier review with the people who would re-run the workflow rather than the people who built it. Authors stop seeing their own assumptions; the second user is where the gaps actually surface. A scheduled dry-run by a non-author at the halfway point — clean machine, no in-flight context, only the task as written — would have caught the implicit dependencies a week sooner than they were caught.
