EXP-064 · Final · K=100 ceiling reached

Flow reconstruction from sparse sensors.

Pi-LNN combines DeepONet operator learning with CfC (closed-form continuous-time) memory to reconstruct a Re = 10,000 two-dimensional Kolmogorov flow from only K = 100 velocity sensors. The model never sees a full DNS field — its supervision is pointwise sensor MSE plus the residual of the Navier–Stokes equations.

7.80%
KE Relative Error
Kinetic energy error across the full T = [0, 5] window.
17.0 / 20.2 %
u / v Relative L²
‖pred − ref‖₂ / ‖ref‖₂; standard PINN field metric.
0.184
Divergence L²
Global incompressibility residual, ∇·u → 0.
0.962
kf Amplitude Ratio
Forcing-mode amplitude vs DNS; phase error −0.023 rad.
§ 01 — Problem setup

A real engineering sensor scene, reconstructed by physics alone.

We take the engineering scenario seriously: only point-wise velocity samples are available, and the only auxiliary signal is the governing PDE. Full DNS fields are used offline as a benchmark — never as supervision.

  • Flow
    Two-dimensional Kolmogorov turbulence on a strict periodic [0, 1]² domain.
  • Reynolds
    Re = 10,000. High-Re regime with a wide inertial range and rich small-scale vortices.
  • DNS reference
    N = 256 grid, 201 snapshots at dt = 0.025. Used only to extract sensor values and for offline diagnostics.
  • Forcing
    fx = A sin(2π kf y), A = 0.1, kf = 2.
  • Observation
    K = 100 sensors (QR-pivot placement), velocity components u, v only. Pressure p is unobserved and emerges as an internal physics field.
  • Training signal
    Sensor MSE + NS momentum + continuity residual. Nothing else.
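
The two training terms can be sketched in a few lines. This is not the experiment's code: the grid size, function names, and the spectral continuity residual are illustrative stand-ins, shown here for the divergence term only.

```python
import numpy as np

def divergence_residual(u, v, dx):
    """Continuity residual div(u) = du/dx + dv/dy on a periodic grid,
    with spectral derivatives via FFT (a sketch, not the training code)."""
    n = u.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    kx = k[None, :]   # x varies along axis 1 (meshgrid 'xy' convention)
    ky = k[:, None]   # y varies along axis 0
    du_dx = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u)))
    dv_dy = np.real(np.fft.ifft2(1j * ky * np.fft.fft2(v)))
    return du_dx + dv_dy

def sensor_mse(pred_uv, obs_uv):
    """Pointwise MSE at the K sensor locations (u, v components only)."""
    return np.mean((pred_uv - obs_uv) ** 2)

# A divergence-free test field: residual should vanish to machine precision
n, dx = 64, 1.0 / 64
x = np.linspace(0, 1, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u = np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y)
v = -np.cos(2 * np.pi * X) * np.sin(2 * np.pi * Y)   # => div = 0 analytically
res = divergence_residual(u, v, dx)
# Total loss would combine the terms, e.g. sensor_mse(...) + lam * np.mean(res**2),
# where lam is a hypothetical weighting constant.
```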
DNS vorticity field at Re=10000, t=5
DNS · Reference snapshot Re = 10,000 vorticity field at t = 5.0. The model never trains on this field; it is the offline yardstick against which every reconstruction figure below is measured.
§ 02 — Architecture

Two parallel paths, fused in a DeepONet-style readout.

A branch path turns the sensor stream into a continuous-time memory; a trunk path lifts each query (x, y, t) into an aligned condition feature. Cross-attention fuses them into a field value.

Pi-LNN architecture: a branch path turns sensor observations into spatial tokens via Fourier or RFF encoding, then a CfC closed-form continuous-time memory advances them through token attention with causal updates; a trunk path carries the query coordinates (x, y, t) with a temporal anchor; the DeepONet decoder fuses the two via query-token attention and emits the (u, v, p) field; the loss combines sensor MSE with NS momentum and continuity residuals; the SOAP preconditioned optimiser updates parameters; full-field DNS is never used as supervision.
Figure · End-to-end signal flow Branch (left) builds a dt-aware sensor memory; trunk (centre) carries the query (x, y, t) with a temporal anchor; the DeepONet decoder fuses them and emits (u, v, p). Training loss is sensor MSE plus NS momentum and continuity residuals; SOAP applies the preconditioned update. Full-field DNS is used only offline. Diagram errata: the third branch block reads “CIC Memory” — should be CfC Memory (closed-form continuous-time). A clean re-render is queued.
i.
Branch · Sensor memory Fourier embedding d = 128 → SpatialSetEncoder → token attention → CfC(dt) causal state, yielding a dt-aware token bank.
ii.
Trunk · Query path Continuous (x, y, t) with temporal anchor of tq; shares the same d = 256 latent space as the branch tokens.
iii.
Decoder · Operator fusion Cross-attention readout at operator rank = 256; output (u, v, p), with p recovered from the NS residual.
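
The query-token readout in step iii can be sketched as single-head cross-attention. The weight names and shapes below are assumptions for illustration (only d = 256 comes from the text); the real decoder's head count and parameterisation may differ.

```python
import numpy as np

def cross_attention_readout(query_feat, tokens, W_q, W_k, W_v):
    """Single-head query-token attention (sketch of the operator fusion step).
    query_feat: (d,) trunk feature for one (x, y, t) query
    tokens:     (T, d) dt-aware branch token bank
    """
    q = query_feat @ W_q                   # (d,)
    k = tokens @ W_k                       # (T, d)
    v = tokens @ W_v                       # (T, d)
    scores = k @ q / np.sqrt(q.shape[0])   # (T,) scaled dot-product
    w = np.exp(scores - scores.max())
    w /= w.sum()                           # softmax over tokens
    return w @ v                           # (d,) fused feature -> (u, v, p) head

d, T = 256, 100
rng = np.random.default_rng(0)
W_q, W_k, W_v = (rng.normal(scale=d**-0.5, size=(d, d)) for _ in range(3))
fused = cross_attention_readout(rng.normal(size=d), rng.normal(size=(T, d)),
                                W_q, W_k, W_v)
```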

Want the full graph?

Component-by-component nodes, time semantics, decoder zoom-in, and the closed-form CfC update are documented in the detailed architecture page.
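
For orientation before the detailed page: the closed-form CfC step has, in simplified form, the gating structure of Hasani et al.'s CfC cell, where a learned time-constant gate blends two state candidates so irregular dt is handled without an ODE solver. The stand-in maps f, g, tail below are toy placeholders, not the trained components.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cfc_update(h, x, dt, f, g, tail):
    """Simplified closed-form continuous-time step: sigma(-f*dt) gates between
    a short-horizon candidate g and a long-horizon candidate tail, making the
    update explicitly dt-aware. f, g, tail stand in for learned networks."""
    gate = sigmoid(-f(h, x) * dt)
    return gate * g(h, x) + (1.0 - gate) * tail(h, x)

# Toy stand-ins for the learned maps
f = lambda h, x: np.abs(h) + 1.0          # positive "decay rate"
g = lambda h, x: 0.5 * x                  # short-horizon candidate
tail = lambda h, x: np.tanh(h + x)        # long-horizon candidate

h = np.zeros(4)
for dt, x in [(0.025, np.ones(4)), (0.05, -np.ones(4))]:
    h = cfc_update(h, x, dt, f, g, tail)   # irregular dt handled natively
```

As dt grows, the gate closes and the state relaxes toward the long-horizon candidate, which is what makes the memory dt-aware.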

§ 03 — Results

Re = 10,000 reconstruction quality from K = 100.

Field structure, spectral content, and physics constraints are evaluated on the full T = [0, 5] window. Numbers are versus DNS; the model has not seen any of these full fields during training.

KE rel. err
7.80%
Energy across full window.
u rel. L²
17.0%
‖uₚ − u_ref‖₂ / ‖u_ref‖₂.
v rel. L²
20.2%
RMSE = 6.21 × 10⁻².
ω rel. L²
45.4%
The k² vorticity weighting amplifies high-frequency error.
div L²
0.184
Incompressibility residual.
kf phase
−0.023 rad
Forcing-mode phase error.
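
All relative L² figures above follow the same definition, ‖pred − ref‖₂ / ‖ref‖₂. A minimal sketch (the synthetic field is illustrative, not DNS data):

```python
import numpy as np

def rel_l2(pred, ref):
    """Relative L2 error: ||pred - ref||_2 / ||ref||_2 (the field metric used above)."""
    return np.linalg.norm(pred - ref) / np.linalg.norm(ref)

# Illustrative check: a uniform 10% overshoot gives exactly 10% relative error
ref = np.ones(100)
pred = ref * 1.1
err = rel_l2(pred, ref)   # -> 0.1
```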
Velocity field comparison at t=5
Field · Velocity at t = 5 Reconstruction tracks the large-scale shape and the dominant Kolmogorov streaks; residual error concentrates on vortex sheets and high-gradient interfaces.
Energy spectrum comparison
Spectrum · Energy E(k) Low-wavenumber energy band is recovered; the model rolls off above the K = 100 information barrier — a structural ceiling, not an optimisation gap (see §04).
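
The E(k) comparison can be reproduced offline with a standard shell-averaged spectrum. This sketch assumes a unit periodic domain and integer-wavenumber shells, normalised so the spectrum sums to the total kinetic energy; the test field is synthetic, chosen to peak at the kf = 2 forcing mode.

```python
import numpy as np

def energy_spectrum(u, v):
    """Shell-averaged 2D energy spectrum E(k): sum(E) equals the total
    kinetic energy 0.5 * mean(u^2 + v^2) by Parseval's theorem."""
    n = u.shape[0]
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    e_density = 0.5 * (np.abs(uh) ** 2 + np.abs(vh) ** 2) / n**4
    k = np.fft.fftfreq(n, d=1.0 / n)                  # integer wavenumbers
    kmag = np.sqrt(k[None, :] ** 2 + k[:, None] ** 2)
    shells = np.rint(kmag).astype(int)                # nearest-integer shell
    return np.bincount(shells.ravel(), weights=e_density.ravel())  # E[k]

n = 64
x = np.linspace(0, 1, n, endpoint=False)
X, Y = np.meshgrid(x, x)
u = np.sin(2 * np.pi * 2 * Y)      # single mode at k = 2, like the forcing
v = np.zeros_like(u)
E = energy_spectrum(u, v)          # peaks in the k = 2 shell
```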
§ 04 — Information-theoretic limit

K = 100 is a mathematical ceiling, not an optimisation gap.

The Re = 10,000 velocity field is highly sparse in a db4 wavelet basis (Gini ≈ 0.983). Compressed-sensing theory requires on the order of M ≈ s · log N ≈ 5,000 sensors for exact recovery. K = 100 is short by a factor of ~50.
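
The arithmetic behind these two numbers can be checked in a few lines. The Gini index below is the standard sparsity measure; the effective sparsity s is a hypothetical value chosen only to illustrate how the ≈ 5,000 figure arises, not the experiment's measured value.

```python
import numpy as np

def gini_sparsity(c):
    """Gini index of |coefficients|: 0 for a flat vector, -> 1 for a
    1-sparse one. Illustrates the sparsity measure quoted above."""
    c = np.sort(np.abs(np.asarray(c, dtype=float)))
    n = c.size
    i = np.arange(1, n + 1)
    return (2 * np.sum(i * c)) / (n * np.sum(c)) - (n + 1) / n

# Order-of-magnitude sensor count from M ~ s * log(N);
# s = 450 is a hypothetical effective wavelet sparsity.
N = 256 * 256        # wavelet coefficients of one 256^2 field
s = 450
M = s * np.log(N)    # on the order of 5,000, vs. K = 100 available
```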

Frequency band         Energy share   Wavelet DOF   Feasible at K = 100?          EXP-064 band err.
Low   (k ≤ 8)          94.4 %         ~196          Underdetermined — feasible    3.62 %
Mid   (8 < k ≤ 16)     4.8 %          ~588          Exceeds capacity              ≈ 100 %
High  (16 < k ≤ 32)    0.8 %          ~1,452        Far exceeds capacity          ≈ 100 %

Convergent evidence — the wall is real

Three independent turbulence-aware variants (multi-scale CfC time constants, frequency-tiered Fourier features, PINN causal weighting; EXP-067 / 068 / 069) all fail to break the mid- and high-band error floor. KE regresses by +1.9 to +12.3 percentage points, and the mid-band error stays ≈ 100 %. EXP-064 (K = 100, KE 7.80 %) is the global optimum under the current architecture. Further progress requires either more sensors (K ≥ 5,000) or a physics prior that, by construction, does not transfer to the engineering setting.

§ 05 — Resources

Read deeper, run it yourself, or follow the experiment chain.