Journal article
License

CC-BY - Attribution

Analysis of Langevin Monte Carlo via Convex Optimization

Authors
Miasojedow, Błażej
Majewski, Szymon
Durmus, Alain
Publication date
2019
Abstract (EN)

In this paper, we provide new insights on the Unadjusted Langevin Algorithm. We show that this method can be formulated as a first-order optimization algorithm for an objective functional defined on the Wasserstein space of order 2. Using this interpretation and techniques borrowed from convex optimization, we give a non-asymptotic analysis of this method for sampling from a log-concave smooth target distribution on R^d. Based on this interpretation, we propose two new methods for sampling from a non-smooth target distribution. These new algorithms are natural extensions of the Stochastic Gradient Langevin Dynamics (SGLD) algorithm, a popular extension of the Unadjusted Langevin Algorithm for large-scale Bayesian inference. Using the optimization perspective, we provide a non-asymptotic convergence analysis for the newly proposed methods.
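The Unadjusted Langevin Algorithm analyzed in the abstract iterates x_{k+1} = x_k - γ ∇U(x_k) + sqrt(2γ) ξ_k, i.e. a gradient step on the potential U plus Gaussian noise. A minimal illustrative sketch follows; the standard-Gaussian target, step size, and iteration counts are assumptions for demonstration, not choices taken from the paper:

```python
import numpy as np

# Sketch of the Unadjusted Langevin Algorithm (ULA) for a smooth
# log-concave target pi(x) ~ exp(-U(x)). As an illustrative example
# (not from the paper) we take a standard Gaussian on R^2, so
# U(x) = ||x||^2 / 2 and grad U(x) = x.

def ula(grad_U, x0, step, n_iters, rng):
    """Run ULA: x_{k+1} = x_k - step*grad_U(x_k) + sqrt(2*step)*xi_k."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise
        samples[k] = x
    return samples

rng = np.random.default_rng(0)
samples = ula(grad_U=lambda x: x, x0=np.zeros(2), step=0.05,
              n_iters=20000, rng=rng)
# Discard burn-in; the empirical mean should be near 0 and the
# marginal variance near 1, up to an O(step) discretization bias
# (ULA is "unadjusted": no Metropolis correction is applied).
burned = samples[5000:]
print(burned.mean(axis=0), burned.var(axis=0))
```

The optimization viewpoint in the paper interprets exactly this iteration as a first-order scheme for a functional on the Wasserstein space of order 2, which is what enables the non-asymptotic analysis.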

Keywords (EN)
Unadjusted Langevin Algorithm
convex optimization
Bayesian inference
gradient flow
Wasserstein metric
PBN discipline
mathematics
Journal
Journal of Machine Learning Research
Volume
20
Issue
73
Pages
1-46
ISSN
1532-4435
Open access license
Attribution (CC-BY)