Bayesian Learning via Stochastic Gradient Langevin Dynamics

M. Welling, Y. Teh

2011 · DBLP: conf/icml/WellingT11
International Conference on Machine Learning · Cited by 2,712

TLDR

This paper proposes a new framework for Bayesian learning from large-scale datasets via iterative updates on small mini-batches. By injecting the right amount of Gaussian noise into a standard stochastic gradient optimization algorithm, the authors show that, as the step size is annealed, the iterates converge to samples from the true posterior distribution rather than to a point estimate.
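The update described above combines a mini-batch stochastic gradient of the log posterior with Gaussian noise whose variance matches the step size: theta ← theta + (eps/2) * (∇log p(theta) + (N/n) * Σ_i ∇log p(x_i | theta)) + N(0, eps). Below is a minimal sketch of this idea on a toy problem (inferring the mean of a Gaussian with a Gaussian prior); the step size is held fixed for simplicity, whereas the paper anneals it, and all variable names are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: x_i ~ N(theta_true, 1); prior theta ~ N(0, prior_var)
N = 1000
theta_true = 2.0
prior_var = 10.0
data = rng.normal(theta_true, 1.0, size=N)

def grad_log_prior(theta):
    # d/dtheta log N(theta | 0, prior_var)
    return -theta / prior_var

def grad_log_lik_minibatch(theta, batch):
    # d/dtheta sum_i log N(x_i | theta, 1) over the mini-batch
    return np.sum(batch - theta)

n = 32        # mini-batch size
eps = 1e-3    # step size (fixed here; the paper uses an annealing schedule)
theta = 0.0
burn_in = 2000
samples = []

for t in range(10000):
    batch = rng.choice(data, size=n, replace=False)
    # Mini-batch estimate of the full-data gradient of the log posterior
    grad = grad_log_prior(theta) + (N / n) * grad_log_lik_minibatch(theta, batch)
    # Langevin noise: variance equal to the step size
    noise = rng.normal(0.0, np.sqrt(eps))
    theta = theta + 0.5 * eps * grad + noise
    if t >= burn_in:
        samples.append(theta)

post_mean = np.mean(samples)
```

With a conjugate Gaussian prior the exact posterior mean is essentially the data mean here, so `post_mean` should land close to it; with a fixed step size the chain samples only approximately from the posterior, which is why the paper's annealing of `eps` matters.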
