Daily Feed - 2026-02-18

3 paper picks + 2 video picks (same bundle for Telegram/email).

Author-talk check: I searched YouTube using exact paper titles for today’s picks and did not find clear author/conference talks yet, so I included two high-signal topic-adjacent lectures.


Infinite-dimensional generative diffusions via Doob’s h-transform

Domain: ML / Generative Modeling Theory | Time cost: ~15min abstract+setup, ~60min full read

Intuition: Instead of deriving a reverse-time diffusion and hoping the approximation behaves, this paper builds the generative process by a principled change of measure on a reference diffusion. That gives a mathematically clean path to function-space / infinite-dimensional settings (where reverse-time constructions are often brittle).

Concrete punch: The construction uses a Doob h-transform on the reference path measure, with Radon–Nikodym derivative

dP^h / dP = h(T, X_T) / h(0, X_0),

and induced drift correction of the form

b^h(t, x) = b(t, x) + C ∇ log h(t, x),

where C is the diffusion covariance operator. The paper then links approximation of this forced process to a score-matching objective under verifiable conditions.
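As a toy illustration of the mechanism (my own sketch, not the paper's construction): for one-dimensional Brownian motion conditioned to hit a target at time T, the h-function is the Gaussian transition density, and the h-transform drift C ∇ log h reduces to the familiar Brownian-bridge drift.

```python
import math
import random

def bridge_drift(x, t, T, target):
    """Doob h-transform drift for 1-D Brownian motion conditioned on X_T = target.
    Taking h(t, x) = p(T - t, x, target), the Gaussian transition density,
    gives d/dx log h = (target - x) / (T - t): the Brownian-bridge drift."""
    return (target - x) / (T - t)

def simulate_bridge(x0, target, T=1.0, n_steps=1000, seed=0):
    """Euler-Maruyama simulation of the h-transformed (bridge) process."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, t = x0, 0.0
    for _ in range(n_steps - 1):  # stop one step early: the drift blows up at t = T
        x += bridge_drift(x, t, T, target) * dt + math.sqrt(dt) * rng.gauss(0.0, 1.0)
        t += dt
    return x
```

Simulated paths end concentrated near the target, which is exactly the behaviour the change of measure enforces on the reference process.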

Significance: This is a high-leverage bridge from finite-dimensional diffusion practice to principled infinite-dimensional generation (fields, trajectories, operator-valued states), with explicit control on measure mismatch.

Why it matches: Strong variational/measure-theoretic core, explicit mechanism rather than heuristics, and clean unification of diffusion modeling with control-style change-of-measure reasoning.


Noise Stability of Transformer Models

Domain: ML Theory / Analysis of Boolean Functions | Time cost: ~20min abstract+theory skim, ~70min full read

Intuition: Average sensitivity (single-coordinate flips) is too narrow to explain modern LLM simplicity biases. The paper promotes noise stability (correlated perturbations on all coordinates) as the right global robustness lens and derives tractable bounds for attention and ReLU blocks.

Concrete punch: The central object is the noise stability

Stab_ρ[f] = E[ f(x) f(y) ],   with (x, y) a ρ-correlated pair,

which generalizes beyond purely Boolean regimes. In Boolean analysis, average sensitivity (total influence) arises as a boundary derivative of stability:

I[f] = (d/dρ) Stab_ρ[f] evaluated at ρ = 1.

The paper couples this with covariance-interval propagation across layers and introduces a noise-stability regularizer that empirically accelerates grokking.
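The boundary-derivative identity is easy to check numerically on a small example. A sketch (my own, not from the paper) computing exact stability and total influence for majority-of-3 by enumeration:

```python
from itertools import product

def maj3(x):
    """Majority of three ±1 bits."""
    return 1 if sum(x) > 0 else -1

def stability(f, n, rho):
    """Exact Stab_rho[f] = E[f(x) f(y)], where each y_i independently equals
    x_i with probability (1 + rho)/2 and -x_i otherwise."""
    total = 0.0
    for x in product((-1, 1), repeat=n):
        for flips in product((0, 1), repeat=n):
            y = tuple(-xi if fl else xi for xi, fl in zip(x, flips))
            p = 1.0
            for fl in flips:
                p *= (1 - rho) / 2 if fl else (1 + rho) / 2
            total += p * f(x) * f(y)
    return total / 2 ** n

def total_influence(f, n):
    """Sum over coordinates i of Pr[f(x) != f(x with bit i flipped)]."""
    infl = 0
    for i in range(n):
        for x in product((-1, 1), repeat=n):
            flipped = list(x)
            flipped[i] = -flipped[i]
            infl += f(x) != f(tuple(flipped))
    return infl / 2 ** n
```

For Maj3 the total influence is 3/2, and a finite-difference slope of Stab_ρ at ρ = 1 recovers the same value.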

Significance: Gives a mathematically interpretable regularization target for robustness and simplicity in transformers, with a path from theorem-level quantities to train-time objectives.

Why it matches: Directly in your Boolean/Fourier taste zone, mechanism-first, and delivers a reusable spectral/robustness lens rather than benchmark-only claims.


Liquidation Dynamics in DeFi and the Role of Transaction Fees

Domain: Blockchain / Quant Finance Microstructure | Time cost: ~15min abstract+model skim, ~55min full read

Intuition: The paper models liquidation as a dynamic program where liquidators can manipulate a constant-product market maker (CPMM) oracle (Oracle Extractable Value, OEV) to trigger profitable liquidations. The key result is that transaction fees are not just “friction” — they can act as a security control variable.

Concrete punch: CPMM mechanics impose the constant-product invariant

x · y = k,

and with proportional fee γ a trade of size Δx returns only

Δy = y · (1 − γ)Δx / (x + (1 − γ)Δx),

so a manipulation path must pay fee-adjusted trading cost on each leg. The paper derives closed-form liquidation bounds and shows a fee regime where expected attack payoff crosses below zero: above a critical fee level, E[attack payoff] < 0. So fees can endogenously harden oracle manipulation channels, not merely reduce attacker margin.
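A minimal sketch (illustrative, not the paper's model) of why fees tax a push-and-revert oracle manipulation: each leg of a constant-product swap pays the fee, so a round trip that is break-even without fees becomes a strict loss with them.

```python
def swap(x, y, dx, fee):
    """Constant-product swap: pay dx of asset X into pool (x, y); a fraction
    `fee` of the input is taken off the top, and x*y = k holds on the rest."""
    dx_eff = dx * (1 - fee)
    dy = y * dx_eff / (x + dx_eff)
    return x + dx, y - dy, dy

def round_trip_cost(x, y, dx, fee):
    """Push the CPMM price by trading dx in, then immediately trade the
    proceeds back. Returns the manipulator's net loss in asset X."""
    x1, y1, dy = swap(x, y, dx, fee)
    y2, x2, dx_back = swap(y1, x1, dy, fee)  # reverse leg on the moved pool
    return dx - dx_back
```

With fee = 0 the round trip returns exactly dx; any positive fee makes the manipulation leg strictly costly, which is the lever the paper's attack-payoff analysis turns into a security parameter.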

Significance: Useful for protocol design: fee calibration becomes part of solvency and anti-manipulation policy, not only LP compensation policy.

Why it matches: Strong mechanism-level microstructure logic, explicit optimization framing, and direct blockchain-market-structure relevance.


Stanford CS236: Deep Generative Models I 2023 I Lecture 18 — Diffusion Models for Discrete Data

Domain: ML / Deep Generative Modeling (Video) | Time cost: 1h 00m

Intuition: High-quality lecture tying score-based intuition to discrete domains (text/tokens), exactly the conceptual gap behind current discrete diffusion work.

Concrete punch: A representative discrete noising kernel can be written as

q(x_t | x_{t−1}) = Cat(x_t ; p = x_{t−1} Q_t),

with Q_t a row-stochastic transition matrix, followed by denoising/score-style training objectives that recover efficient reverse updates for categorical states.
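As a concrete instance (a common D3PM-style choice, not necessarily the lecture's exact kernel), a uniform-resampling transition matrix pushed through many steps drives any categorical distribution toward uniform:

```python
import numpy as np

def uniform_transition(K, beta):
    """One-step transition matrix: keep the token with prob 1 - beta,
    otherwise resample uniformly over the K categories."""
    return (1 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

def noise_distribution(p0, betas):
    """Push a categorical distribution through successive noising steps."""
    p = np.asarray(p0, dtype=float)
    for beta in betas:
        p = p @ uniform_transition(len(p), beta)
    return p
```

Starting from a one-hot distribution, fifty steps at beta = 0.2 land essentially on the uniform stationary distribution, mirroring how continuous diffusion converges to its Gaussian prior.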

Significance: Clarifies when discrete diffusion can outperform autoregressive baselines and where sampling-speed/quality tradeoffs actually come from.

Why it matches: Stanford-level pedagogy, first-principles derivation style, and directly useful for your generative-model unification thread.


Noise Stability — Beyond the Boolean Cube

Domain: Math/ML Theory (Video) | Time cost: 59m

Intuition: A rigorous extension of Boolean noise-stability intuition into Gaussian/continuous settings, which is exactly the conceptual upgrade needed to reason about modern real-valued transformer representations.

Concrete punch: The Gaussian noise operator (Ornstein–Uhlenbeck semigroup) is

(T_ρ f)(x) = E_{g ~ N(0, I)}[ f(ρx + √(1 − ρ²) g) ],

with correlation parameter ρ ∈ [0, 1]. This provides a principled bridge from discrete influence-style arguments to continuous robustness analysis.
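A quick Monte Carlo sketch of T_ρ (my own illustration): for the linear function f(x) = x the operator contracts exactly to ρx, which the estimator recovers.

```python
import math
import random

def noise_operator(f, x, rho, n_samples=200_000, seed=0):
    """Monte Carlo estimate of (T_rho f)(x) = E_g[f(rho*x + sqrt(1-rho^2)*g)],
    with g a standard Gaussian."""
    rng = random.Random(seed)
    s = math.sqrt(1.0 - rho ** 2)
    total = sum(f(rho * x + s * rng.gauss(0.0, 1.0)) for _ in range(n_samples))
    return total / n_samples
```

The same estimator works for any bounded test function, which is what makes T_ρ a practical handle on "robustness under correlated perturbation" for real-valued representations.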

Significance: Gives reusable math for translating “robustness under perturbation” into analyzable operators, not just empirical stress tests.

Why it matches: High-signal theory talk, tight alignment with your Boolean-analysis interests, and directly complementary to today’s transformer noise-stability paper.


Source-discovery note

  • ArXiv: searched recent (6-12 month window) theory-heavy candidates across generative modeling, Boolean/transformer theory, and blockchain microstructure.
  • YouTube: searched exact paper-title author-talk matches first; none were clearly available yet for today’s fresh papers, so selected topic-adjacent high-quality lectures.
  • Hacker News / Lobsters: scanned recent (<1 week) hits; nothing met today’s concrete-punch + mechanism-first threshold.
