Past content

Browse all available daily entries.

  • 2026-03-23

A superlinear SGD noise-curvature power law that reframes implicit regularization, Bayesian optimality of selective SSMs over Transformers for in-context learning, SLAY's physics-inspired spherical linear attention via Bernstein's theorem, and Rigollet's mean-field PDE framework for transformer training dynamics.

  • 2026-03-16

    Q-learning for controlled diffusions with near-optimality rates, a microstructural derivation of rough Bergomi from order flow, exact LQG equilibria with endogenous signals and Volterra information wedges, transformers trapped by simplicity bias on Boolean functions, and Tao & Davis launch a mathematics distillation challenge.

  • 2026-03-09

    Softmax gradient flow polarization explains attention sinks, f-divergence policy gradients fix exponential blowup, a 524M-param foundation model for trade microstructure, and Riemannian geometry reveals the optimal AMM rebalancing path.

  • 2026-02-28

    Research highlights: A Model-Free Universal AI, Conformalized Neural Networks for Federated Uncertainty Quantification under Dual Heterogeneity, and Mean Estimation from Coarse Data: Characterizations and Efficient Algorithms (+2 more).

  • 2026-02-27

Recent papers and talks on model agreement, risk-aware POMDP evaluation, and viscous HJB control, with direct implications for practical learning systems.

  • 2026-02-26

Three ML/finance papers on optimal transport, diffusion dynamics, and risk-adjusted prediction, plus two transport-focused talks.

  • 2026-02-25

This cycle highlights discrete diffusion control advances, distributionally robust online learning, and gap-dependent reinforcement-learning guarantees, plus two transport-focused videos.

  • 2026-02-24

    This cycle features one-step generative transport, scalable cooperative multi-agent gradient control, and AMM impulse-control finance, plus two flow-matching explainers worth studying.

  • 2026-02-23

    Research highlights: Training-Free Adaptation of Diffusion Models via Doob's *h*-Transform, Flow Matching with Injected Noise for Offline-to-Online Reinforcement Learning, and Autodeleveraging as Online Learning, plus 2 more.

  • 2026-02-22

    Optimal transport geometry and online-learning-style autodeleveraging dominate today’s feed, with paper-adjacent videos on GW alignment and perp market design.

  • 2026-02-21

    Today links a unified Hawkes-style microstructure theory, a proximal-geometry view of flow matching, and DeFi liquidation fee design, with concise companion talks on point processes and MEV market structure.

  • 2026-02-20

Today’s highlights connect robust offline-to-online RL transfer, fast continuous-denoising language generation, and formal online-learning views of crypto autodeleveraging, with two strong lecture companions on actor-critic and text diffusion.

  • 2026-02-19

    Today’s feed highlights new theory on diffusion self-training stability, practical convergence guarantees for average-reward TD learning, and Wiener-chaos implied-volatility calibration, plus two Stanford CS236 lectures that reinforce score-based and latent-variable fundamentals.

  • 2026-02-18

    A mathematically rich set spanning infinite-dimensional diffusion generation, transformer noise stability, and DeFi liquidation microstructure, plus two high-signal lectures connecting discrete diffusion and continuous noise operators.

  • 2026-02-17

    Today’s picks connect sharp discrete-diffusion sampling guarantees, bridge-based maximum-entropy RL, and a unified order-flow microstructure theory, with two complementary lectures on discrete flows and market impact.

  • 2026-02-16

    Today’s picks span provable RL-vs-SFT learning dynamics for sparse Boolean functions, reliability signals in conditional flows, and entropy-based language structure, plus two strong video lectures on flow matching and Fourier analysis.

  • 2026-02-15

    Today’s picks connect geometric control in flow matching, kinetic-energy diagnostics, and sampling-driven alignment dynamics, with two CS236 lectures reinforcing the objective-level foundations behind modern generative modeling.

  • 2026-02-14

    Today’s feed highlights a variance-based diffusion alignment objective, a CFM–IFM duality theorem, and a Bayesian-filtering acceleration for sequential flow matching, plus two high-signal diffusion lectures.

  • 2026-02-13

    Research highlights on IO-aware Sinkhorn, crypto microstructure explainability, sequential flow matching, and deterministic L3 order-book replay, plus one OT lecture.

  • 2026-02-12

    Research highlights on attention-as-OT, tilt matching for reward-steered generation, high-dimensional mean-field games, and Hawkes LOB market making.