The Stability Gate

Neural network solvers for mechanics learn to minimize energy — but minimizing energy is necessary, not sufficient. A solution can have low energy and still be physically impossible if it violates stability: a small perturbation would cause it to collapse to a completely different state. Classical variational mechanics has stability conditions — quasiconvexity, rank-one convexity, Legendre-Hadamard inequalities — that distinguish genuine energy minima from saddle points and unstable equilibria.
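The Legendre-Hadamard condition, for example, requires the elasticity tensor to be positive along all rank-one directions. A minimal numerical probe is sketched below; the function name and sampling strategy are illustrative, not part of any particular framework, and random sampling can only falsify the condition, never certify it.

```python
import numpy as np

def legendre_hadamard_holds(A, n_samples=1000, tol=1e-10, rng=None):
    """Probe the Legendre-Hadamard condition for a fourth-order
    elasticity tensor A (shape 3x3x3x3):

        A_ijkl a_i n_j a_k n_l >= 0  for all unit vectors a, n.

    Returns False as soon as a violating rank-one direction is found;
    True only means no violation was detected in the sampled directions.
    """
    rng = rng or np.random.default_rng(0)
    for _ in range(n_samples):
        a = rng.normal(size=3)
        a /= np.linalg.norm(a)
        n = rng.normal(size=3)
        n /= np.linalg.norm(n)
        value = np.einsum('ijkl,i,j,k,l->', A, a, n, a, n)
        if value < -tol:
            return False
    return True
```

For an isotropic tensor with Lamé parameters λ and μ, the sampled value reduces to μ + (λ + μ)(a·n)², so the check passes for μ > 0 and fails immediately for μ < 0 with λ + μ = 0.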

For Cosserat elasticity — the mechanics of microstructured materials where each material point has both position and orientation — neural networks learn deformation and director fields simultaneously. The loss function enforces equilibrium. But without stability validation, the network can converge to configurations that satisfy the equations of motion at saddle points rather than true minima.

The framework validates neural network solutions against classical stability conditions automatically, rejecting solutions that violate necessary stability criteria. The gate doesn’t change the solver — it filters its output.
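The filtering logic can be sketched as a thin wrapper over a set of necessary checks. The function and check names here are hypothetical; the point is only the structure: the gate never modifies the solution, it accepts or rejects it.

```python
def stability_gate(solution, checks, tol=1e-8):
    """Post-hoc validation gate.

    `checks` maps a name to a predicate `check(solution, tol) -> bool`
    encoding one necessary stability condition. The gate runs every
    check and rejects the solution if any fails, returning the names
    of the failed checks for diagnostics.
    """
    failures = [name for name, check in checks.items()
                if not check(solution, tol)]
    return len(failures) == 0, failures
```

A solver loop would call this once per converged solution and discard (or re-seed) rejected candidates, leaving the training procedure itself untouched.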

The through-claim: physics-informed neural networks that minimize energy inherit a specific failure mode from classical variational mechanics — the inability to distinguish minima from saddle points using gradient information alone. The gradient vanishes at both; only the curvature differs. Adding stability conditions as post-hoc validation is structurally different from adding them to the loss function. A loss-function penalty pushes solutions away from saddle points during training but cannot guarantee the optimizer avoids them. A validation gate rejects unstable solutions after training regardless of how they were found. The gate is more reliable precisely because it doesn't try to influence the search — it only judges the result.
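In a discretized setting the curvature test is an eigenvalue check on the Hessian of the energy: the gradient vanishes at both minima and saddles, so the sign of the smallest Hessian eigenvalue is the discriminator. A minimal sketch, assuming a symmetric Hessian matrix is available:

```python
import numpy as np

def classify_critical_point(hessian, tol=1e-8):
    """Classify a critical point of a discretized energy by the sign
    of the smallest eigenvalue of its (symmetric) Hessian. At a
    critical point the gradient is zero either way; only this
    curvature information separates minima from saddles."""
    lam_min = np.linalg.eigvalsh(hessian).min()
    if lam_min > tol:
        return 'minimum'
    if lam_min < -tol:
        return 'saddle or maximum'
    return 'degenerate'
```

For full neural fields the Hessian is never formed explicitly; in practice one would estimate the smallest eigenvalue with matrix-free methods (e.g. Lanczos on Hessian-vector products), but the accept/reject decision is the same.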
