The Located Physics

Stiff partial differential equations — those with sharp gradients, shock fronts, exponentially thin boundary layers — defeat uniform discretization. A mesh that’s fine enough to resolve the boundary layer wastes most of its points in smooth regions. Adaptive meshing solves this, but requires knowing where to refine.

GMM-PIELM learns where the physics is. Instead of placing radial basis function centers randomly or on a grid, the method learns a probability density function over the domain and samples kernel centers from it. The density concentrates in high-error regions — boundary layers, shock fronts — using weighted Expectation-Maximization. No gradient-based neural network training. The placement learns the structure of difficulty.

On singularly perturbed convection-diffusion with diffusion coefficient ν = 10⁻⁴, this approach achieves L₂ errors up to seven orders of magnitude lower than baseline random-center methods. It resolves exponentially thin boundary layers that uniform methods miss entirely.
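To see why uniform discretization fails at this scale, consider a standard 1-D singularly perturbed model problem, −ν u″ + u′ = 1 with u(0) = u(1) = 0, which has a boundary layer of width O(ν) at x = 1. The specific equation and the 5ν layer cutoff are assumptions for illustration, not necessarily the paper's exact test case.

```python
import numpy as np

nu = 1e-4

def u_exact(x, nu):
    # Exact solution of -nu*u'' + u' = 1, u(0) = u(1) = 0,
    # written to avoid overflow: the layer term e^{(x-1)/nu} is
    # negligible except within O(nu) of x = 1.
    return x - (np.exp((x - 1) / nu) - np.exp(-1 / nu)) / (1 - np.exp(-1 / nu))

# A 1000-point uniform grid has spacing ~1e-3, ten times wider than the layer.
n = 1000
x_uniform = np.linspace(0, 1, n)
in_layer = x_uniform > 1 - 5 * nu
print(in_layer.sum(), "of", n, "uniform points fall inside the layer")  # → 1 of 1000
```

With ν = 10⁻⁴, a thousand uniformly spaced points place exactly one of them inside the region where essentially all of the solution's curvature lives; a density learned from residuals would put most of its samples there instead.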

The through-claim: the location of physics in a domain is itself a learnable quantity. Standard approaches treat the computational domain as given and solve for the field variables within it. But the information density is wildly non-uniform — almost all the physics happens in a vanishing fraction of the space. Learning the distribution of computational effort before solving the equation inverts the usual order: first find where the answer matters, then compute it there. The seven-order improvement is not from a better solver. It’s from learning that the question “where should I look?” has a tractable answer.

