The Invariant Operator

Topology optimization of cold plates — heat management components in electronics — requires solving fluid flow through porous media thousands of times as the optimizer explores different geometries. Each solve is expensive: the Navier-Stokes-Brinkman equations on a complex domain. A surrogate that replaces the solver must be fast, accurate, and — crucially — mesh-invariant: it must work at any resolution, because the optimizer changes mesh density as it refines the design.

Fourier Neural Operators (FNOs) achieve all three. Compared to convolutional autoencoders and U-Nets, the FNO achieves mean squared error as low as 0.0017 with speedups up to 1000x over computational fluid dynamics. But the structural advantage is mesh invariance: the FNO learns in Fourier space, where the representation is independent of the discretization. A model trained on one mesh resolution generalizes to others without retraining.

The through-claim: mesh invariance is not a bonus feature — it’s what makes the surrogate usable for topology optimization. An optimizer that refines mesh density as it converges needs a solver that works at every resolution along the way. A convolutional model trained at resolution N fails at resolution 2N — it would need retraining every time the mesh changes. The FNO’s Fourier representation naturally separates the physics (which is resolution-independent) from the discretization (which is not). The 1000x speedup matters; the mesh invariance is what makes the speedup deployable.
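The mechanism can be sketched in a few lines. This is a minimal toy spectral layer in NumPy, not the FNO implementation discussed above; the function name `spectral_conv_1d`, the mode count `k`, and the random weights are illustrative assumptions. The point it demonstrates is the one in the paragraph: the learned weights act on Fourier modes, so the same weights apply unchanged at any grid resolution.

```python
import numpy as np

def spectral_conv_1d(u, weights):
    """Apply a learned filter to the lowest Fourier modes of u.

    u: real-valued signal sampled at any resolution n.
    weights: complex array of shape (k,), acting on the first k modes.
    The weights live in Fourier space, not on the grid, so they are
    independent of n -- this is the mesh-invariance property.
    """
    k = weights.shape[0]
    u_hat = np.fft.rfft(u)                  # grid -> Fourier coefficients
    out_hat = np.zeros_like(u_hat)
    out_hat[:k] = u_hat[:k] * weights       # act only on the low modes
    return np.fft.irfft(out_hat, n=u.shape[0])  # back to the grid

rng = np.random.default_rng(0)
weights = rng.normal(size=8) + 1j * rng.normal(size=8)  # k = 8 modes

# Same smooth input function sampled at two resolutions.
coarse_x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
fine_x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
coarse = spectral_conv_1d(np.sin(coarse_x), weights)
fine = spectral_conv_1d(np.sin(fine_x), weights)

# Identical weights at both resolutions; on the shared grid points
# (every second fine point) the outputs agree.
print(np.allclose(fine[::2], coarse))
```

Because `rfft` scales coefficients with `n` and `irfft` divides it back out, the layer represents the same continuous operator at every sampling rate; a convolutional kernel, by contrast, is defined in grid units and changes meaning when the mesh spacing changes.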
