
Daily Feed - 2026-02-22


On sparsity, extremal structure, and monotonicity properties of Wasserstein and Gromov-Wasserstein optimal transport plans

Domain: ML / Optimal Transport / Geometry | Time cost: ~35min read

Intuition: Standard optimal transport (OT) is linear in the coupling, while Gromov-Wasserstein (GW) compares relational geometry and is quadratic in the coupling. This note asks a sharp structural question: when can GW optimizers be sparse (or even permutation-supported) instead of diffuse soft matchings?

Concrete punch: GW solves

  min over π ∈ Π(μ, ν) of ∬ |d_X(x, x') − d_Y(y, y')|² dπ(x, y) dπ(x', y'),

so unlike linear OT, geometry enters via pairwise distance distortions and the objective is quadratic in π. The paper highlights a conditionally negative semidefinite (CNSD) condition under which GW admits optimal plans that are sparse and can be supported on a permutation.
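The quadratic objective can be checked directly on a toy example. The sketch below (illustrative data, not the paper's construction) evaluates the GW distortion of a coupling on two 3-point metric spaces that are isometric up to relabeling: a permutation-supported coupling achieves zero distortion, while the diffuse uniform coupling does not.

```python
# Toy evaluation of the GW objective on two tiny metric spaces.
# All data here is hypothetical, for illustration only.

def gw_cost(dx, dy, pi):
    """Sum over i,i',j,j' of |dx[i][i'] - dy[j][j']|^2 * pi[i][j] * pi[i'][j']."""
    n, m = len(dx), len(dy)
    return sum(
        (dx[i][ip] - dy[j][jp]) ** 2 * pi[i][j] * pi[ip][jp]
        for i in range(n) for ip in range(n)
        for j in range(m) for jp in range(m)
    )

# Distance matrix of a 3-point space; dy is dx with points relabeled.
dx = [[0, 1, 2], [1, 0, 3], [2, 3, 0]]
perm = [2, 0, 1]                      # y_j corresponds to x_{perm[j]}
dy = [[dx[perm[a]][perm[b]] for b in range(3)] for a in range(3)]

# Permutation-supported coupling vs the uniform product coupling.
pi_perm = [[0.0] * 3 for _ in range(3)]
for j, i in enumerate(perm):
    pi_perm[i][j] = 1 / 3
pi_unif = [[1 / 9] * 3 for _ in range(3)]

print(gw_cost(dx, dy, pi_perm))   # 0.0: the permutation matches the isometry
print(gw_cost(dx, dy, pi_unif))   # strictly positive for the diffuse coupling
```

This is exactly the situation the sparsity result speaks to: when the geometry cooperates, the optimizer can live on a permutation rather than a dense coupling.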

Significance: If your alignment problem satisfies that structure, you can search for low-support correspondences instead of dense couplings—better interpretability and often better numerical behavior for downstream matching/alignment systems.

Why it matches: This is exactly your preferred pattern: variational objective + geometric invariants + a crisp condition that changes algorithmic design (sparse/permutation support vs diffuse transport).

Author talk search: No direct author talk found.


Neural Optimal Transport in Hilbert Spaces: Characterizing Spurious Solutions and Gaussian Smoothing

Domain: ML / Optimal Transport / Functional Data | Time cost: ~40min read

Intuition: Neural OT in infinite-dimensional spaces can produce spurious optima when regularity assumptions break. This paper pinpoints when those spurious solutions appear and shows that Gaussian smoothing restores well-posedness.

Concrete punch: The approach perturbs inputs by a Gaussian process term (Brownian-style smoothing). Main result: with a regular source measure, the smoothed semi-dual problem is well-posed and recovers a unique Monge map; success depends sharply on the kernel of the covariance operator.

Significance: Gives a practical recipe for OT on trajectories/functions: if your solver is unstable or mode-collapsing in functional spaces, structured smoothing is not just a trick—it has a theorem-level justification.

Why it matches: Strong fit to your continuous-time and operator-level taste (infinite-dimensional geometry, regularity, covariance operators) with immediate implications for time-series transport modeling.

Author talk search: No direct author talk found.


Autodeleveraging as Online Learning

Domain: Quant Finance / Market Microstructure / Online Learning | Time cost: ~30min read

Intuition: Autodeleveraging (ADL) in perpetuals is usually treated as exchange policy plumbing; this paper reframes it as sequential decision-making with regret guarantees. The venue chooses haircut actions each round to restore solvency with minimal over-liquidation of profitable traders.

Concrete punch: ADL is modeled as online learning over haircut decisions with the standard regret benchmark

  R_T = Σ_{t=1..T} ℓ_t(a_t) − min over a of Σ_{t=1..T} ℓ_t(a),

i.e. cumulative loss of the played haircut actions minus the loss of the best fixed action in hindsight.
On the 2025-10-10 Hyperliquid stress episode, reported regret is ~50% of an upper bound for production ADL vs ~2.6% for the optimized method; estimated over-liquidation gap improves from up to $51.7M to about $3M.
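The regret benchmark is mechanical to compute once per-round losses are defined. A minimal sketch, with made-up per-action over-liquidation losses (not the paper's data):

```python
# Regret of a sequence of played actions against the best fixed action
# in hindsight. Losses and choices below are illustrative only.

def regret(losses, played):
    """losses[t][a] = loss of action a at round t; played[t] = chosen action."""
    T = len(losses)
    online = sum(losses[t][played[t]] for t in range(T))
    best_fixed = min(
        sum(losses[t][a] for t in range(T))
        for a in range(len(losses[0]))
    )
    return online - best_fixed

losses = [
    [2, 5, 9],   # round 1: hypothetical per-haircut-action losses ($M)
    [7, 1, 4],   # round 2
    [3, 6, 2],   # round 3
]
played = [2, 0, 1]   # a production-style policy's choices

print(regret(losses, played))   # 22 online vs 12 for best fixed action -> 10
```

The paper's ~50% vs ~2.6% numbers are this quantity reported relative to an upper bound, computed on the actual stress-episode data rather than toy rounds.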

Significance: Turns ADL design from ad hoc queue rules into a measurable control problem with provable performance bounds—directly relevant for exchange risk engines and policy simulation.

Why it matches: Clean bridge from mechanism design to formal online optimization, with concrete dollar-scale consequences in real market events.

Author talk search: No direct author talk found.


Kengo Kato — “Entropic optimal transport and Gromov-Wasserstein alignment”

Domain: ML / Optimal Transport / Statistics | Time cost: 52min watch

Intuition: Seminar-style walkthrough of entropic regularization and GW alignment with a strong statistics lens: what is tractable, what is identifiable, and where regularization changes the geometry.

Concrete punch: Entropic OT adds an entropy penalty to the linear objective,

  min over π ∈ Π(μ, ν) of ⟨C, π⟩ + ε KL(π ‖ μ ⊗ ν),

which converts hard combinatorial transport into smooth, strongly convex optimization and enables scalable Sinkhorn-type computation. This is the operational counterpart to today's GW sparsity/permutation discussion.
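The Sinkhorn computation mentioned here is just alternating rescaling of the Gibbs kernel K = exp(−C/ε) until the coupling's marginals match. A minimal pure-Python sketch on an illustrative 2×2 cost matrix (not an example from the talk):

```python
import math

# Minimal Sinkhorn iteration for entropic OT: alternately rescale the
# rows and columns of K = exp(-C/eps) to match the target marginals.
# Cost matrix and marginals are illustrative only.

def sinkhorn(C, mu, nu, eps=0.1, iters=500):
    n, m = len(C), len(C[0])
    K = [[math.exp(-C[i][j] / eps) for j in range(m)] for i in range(n)]
    u = [1.0] * n
    v = [1.0] * m
    for _ in range(iters):
        u = [mu[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [nu[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    # Entropic coupling: pi[i][j] = u[i] * K[i][j] * v[j].
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

C = [[0.0, 1.0], [1.0, 0.0]]
mu = [0.5, 0.5]
nu = [0.5, 0.5]
pi = sinkhorn(C, mu, nu)
print([sum(row) for row in pi])   # row sums recover the source marginal mu
```

Smaller ε pushes the coupling toward the unregularized (sparser) optimizer; larger ε smooths it toward the product measure, which is exactly the geometry-vs-speed trade-off the talk discusses.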

Significance: Useful as the conceptual bridge from theorem statements to implementable solvers when you need both speed and geometry control.

Why it matches: High-signal, first-principles, mathematically explicit, and directly paper-adjacent.


Lighter vs Hyperliquid: Fees, Fairness, and ADLs — The Chopping Block

Domain: Blockchain Market Design / Perpetuals Microstructure | Time cost: 1h06min watch

Intuition: A concrete post-mortem-style discussion of liquidation pipelines, insurance buffers, and ADL fairness under stress. Best used as systems context alongside the ADL paper above.

Concrete punch: Toy ADL allocation framing used in practice: for shortfall S and profitable-account gains g_i, the proportional haircut is h_i = S · g_i / Σ_j g_j, so that Σ_i h_i = S. This makes fairness/solvency trade-offs explicit and measurable.
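The proportional rule is a one-liner; a sketch with illustrative dollar figures (not from the episode discussed):

```python
# Proportional ADL haircuts: each profitable account absorbs a share of
# the shortfall proportional to its gain. Numbers are illustrative only.

def proportional_haircuts(shortfall, gains):
    total = sum(gains)
    return [shortfall * g / total for g in gains]

h = proportional_haircuts(1_000_000, [4_000_000, 1_000_000, 5_000_000])
print(h)        # haircuts scale with each account's gain
print(sum(h))   # haircuts sum exactly to the shortfall
```

The regret-minimizing framing in the paper above can be read as searching over schedules like this one (and priority-queue variants) to minimize over-liquidation while still covering the shortfall.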

Significance: Helps connect formal regret-minimizing ADL policies to operational policy knobs exchanges actually expose (priority rules, fee buffers, haircut schedules).

Why it matches: Mechanism-first analysis with real market plumbing and direct relevance to your finance↔systems lens.


Notes on sourcing and recency

  • Papers are all from Feb 2026 (fresh frontier window).
  • YouTube requirement satisfied with 2 videos (one paper-adjacent technical seminar, one market-structure systems discussion).
  • HN/Lobsters were scanned; no <1-week discussion met the quality bar versus today’s selected set.
  • No direct author talks were found for today’s three papers at run time.
