Antonio Ocello (ENSAE)
Score-based generative models, also known as diffusion models, can be naturally formulated in continuous time as the time-reversal of a stochastic differential equation. In this talk, we present this formalism and highlight its connection with stochastic optimal control, where generation is interpreted as steering a reference diffusion toward a target distribution. Within this framework, we discuss the convergence bounds established by Conforti, Durmus, and Gentiloni-Silveri (2025, SIAM Journal on Mathematics of Data Science), providing non-asymptotic guarantees under minimal regularity assumptions.
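As a sketch of this continuous-time formalism (standard notation from the diffusion-model literature, not drawn from the talk itself), the forward noising process and its time reversal can be written as:

```latex
% Forward (noising) diffusion, run from the data distribution up to time T:
\mathrm{d}X_t = f(X_t, t)\,\mathrm{d}t + g(t)\,\mathrm{d}W_t,
\qquad X_0 \sim p_{\mathrm{data}}.

% Time reversal: generation integrates the reverse-time SDE, whose drift
% involves the score \nabla_x \log p_t of the forward marginals p_t:
\mathrm{d}\bar{X}_t
  = \bigl[f(\bar{X}_t, t) - g(t)^2\,\nabla_x \log p_t(\bar{X}_t)\bigr]\mathrm{d}t
  + g(t)\,\mathrm{d}\bar{W}_t .
```

In the control interpretation, the score term acts as the optimal feedback steering the reference diffusion back toward the data distribution.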
We then extend this perspective to discrete data generation via Discrete Markov Probabilistic Models (DMPMs). Here, the forward process is a continuous-time Markov chain on discrete states, and the reverse-time jump intensity is governed by a discrete analogue of the score function, characterized as a conditional expectation of the forward process. We present convergence guarantees in this discrete setting and illustrate their effectiveness on Bernoulli and binary MNIST data. This unified view connects diffusion models, optimal control, and discrete generative modeling within a rigorous convergence framework. This talk is based on joint work with Le-Tuyet-Nhi Pham, Dario Shariatian, Giovanni Conforti, and Alain Oliviero Durmus (ICML 2025, https://openreview.net/forum?id=biJiSMLGOV&noteId=e3zQbOpzXX).
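For intuition, the discrete analogue can be sketched with the standard time-reversal formula for continuous-time Markov chains (generic notation, an illustration rather than the paper's exact statement):

```latex
% Forward CTMC with jump rates Q_t(x, y) and marginal laws p_t.
% Its time reversal jumps from x to y at rate
\hat{Q}_t(x, y) = Q_{T-t}(y, x)\,\frac{p_{T-t}(y)}{p_{T-t}(x)},
% so the probability ratio p_{T-t}(y)/p_{T-t}(x) plays the role of the
% score, and can be expressed as a conditional expectation of the
% forward process given its current state.
```

Estimating this ratio is thus the discrete counterpart of score matching in the diffusion setting.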
Location
at BioSP