MASAGA: A Linearly-Convergent Stochastic First-Order Method for Optimization on Manifolds



Authors: Reza Babanezhad, Issam H. Laradji, Alireza Shafaei, Mark W. Schmidt
Journal/Conference Name: ECML/PKDD
Paper Abstract: We consider the stochastic optimization of finite sums over a Riemannian manifold where the functions are smooth and convex. We present MASAGA, an extension of the stochastic average gradient variant SAGA on Riemannian manifolds. SAGA is a variance-reduction technique that typically outperforms methods that rely on expensive full-gradient calculations, such as the stochastic variance-reduced gradient method. We show that MASAGA achieves a linear convergence rate with uniform sampling, and we further show that MASAGA achieves a faster convergence rate with non-uniform sampling. Our experiments show that MASAGA is faster than the recent Riemannian stochastic gradient descent algorithm for the classic problem of finding the leading eigenvector corresponding to the maximum eigenvalue.
Date of publication: 2018
Code Programming Language: PyTorch
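For intuition about the SAGA-style Riemannian update described in the abstract, the following is a minimal PyTorch sketch of a variance-reduced stochastic step on the unit sphere for the leading-eigenvector problem. It is not the authors' reference implementation: the data, step size, projection-based vector transport, and normalization retraction are illustrative assumptions.

# Sketch of a SAGA-style update on the unit sphere for the leading eigenvector
# of a covariance matrix. All specific choices (step size, transport, retraction)
# are illustrative assumptions, not the paper's reference code.
import torch

torch.manual_seed(0)

n, d = 200, 20
Z = torch.randn(n, d)              # data points z_i
A = Z.t() @ Z / n                  # covariance; we seek its top eigenvector

def riem_grad(x, i):
    # Riemannian gradient of f_i(x) = -(z_i^T x)^2 on the sphere:
    # project the Euclidean gradient onto the tangent space at x.
    egrad = -2.0 * (Z[i] @ x) * Z[i]
    return egrad - (egrad @ x) * x            # (I - x x^T) egrad

def retract(x, v):
    # Retraction on the sphere: move in the tangent direction, renormalize.
    y = x + v
    return y / y.norm()

def transport(x, v):
    # Cheap vector transport by projection onto the tangent space at x
    # (an approximation to parallel transport, used only for illustration).
    return v - (v @ x) * x

x = torch.randn(d)
x = x / x.norm()

# SAGA memory: one stored Riemannian gradient per function, plus their mean.
memory = torch.stack([riem_grad(x, i) for i in range(n)])
mean_mem = memory.mean(dim=0)

eta = 0.01                                    # step size (illustrative)
for t in range(5000):
    i = torch.randint(n, (1,)).item()         # uniform sampling
    g = riem_grad(x, i)
    # Variance-reduced direction: g_i - stored_i + mean of stored gradients,
    # with the stored vectors transported to the current tangent space.
    v = g - transport(x, memory[i]) + transport(x, mean_mem)
    x = retract(x, -eta * v)
    # Refresh the memory slot for index i at the new iterate.
    new_g = riem_grad(x, i)
    mean_mem = mean_mem + (new_g - memory[i]) / n
    memory[i] = new_g

top_eig = torch.linalg.eigvalsh(A)[-1]
print("Rayleigh quotient:", (x @ A @ x).item(), "top eigenvalue:", top_eig.item())

Running the sketch, the Rayleigh quotient x^T A x should approach the largest eigenvalue of A, mirroring the leading-eigenvector experiment mentioned in the abstract; the non-uniform sampling variant would replace the uniform index draw with sampling proportional to per-function Lipschitz constants.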
