A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its authors.

Authors: Nicolas Le Roux, Mark W. Schmidt, Francis R. Bach
Journal/Conference Name: NIPS
Paper Abstract: We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.
Date of Publication: 2012
Code Programming Language: MATLAB
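The abstract describes the core idea behind the stochastic average gradient (SAG) method: store the most recently evaluated gradient of each individual function and step along the average of these stored gradients, which yields a linear (exponential) convergence rate on strongly convex finite sums. The following is a minimal Python sketch of that idea on a ridge-regularized least-squares problem, not the authors' MATLAB code; the step size, problem setup, and zero-initialized gradient memory are illustrative assumptions:

```python
import numpy as np

def sag(A, b, lam=0.1, step=None, iters=5000, seed=0):
    """Sketch of the stochastic average gradient idea: keep a memory of
    the last gradient seen for each example and step along their average.
    Objective: (1/n) * sum_i [ 0.5*(a_i . x - b_i)^2 ] + (lam/2)*||x||^2."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    if step is None:
        # Illustrative step size ~ 1/L, with L bounded by the largest
        # per-example Lipschitz constant (max row norm squared + lam).
        step = 1.0 / ((A ** 2).sum(axis=1).max() + lam)
    x = np.zeros(d)
    grads = np.zeros((n, d))   # gradient memory, one slot per example
    grad_sum = np.zeros(d)     # running sum of the stored gradients
    for _ in range(iters):
        i = rng.integers(n)
        # Fresh gradient of the i-th term at the current iterate.
        g = (A[i] @ x - b[i]) * A[i] + lam * x
        # Replace the stale stored gradient in O(d) time.
        grad_sum += g - grads[i]
        grads[i] = g
        # Step along the average of all stored gradients.
        x -= step * grad_sum / n
    return x

# Usage: compare against the closed-form ridge-regression solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 5))
b = rng.standard_normal(50)
lam = 0.1
x_sag = sag(A, b, lam=lam)
x_star = np.linalg.solve(A.T @ A / 50 + lam * np.eye(5), A.T @ b / 50)
print(np.linalg.norm(x_sag - x_star))
```

Note the design point from the abstract: unlike plain SGD, whose update uses only the current sample's gradient, each SAG step uses information from every example via the memory, at the same per-iteration cost of one fresh gradient.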

Copyright Researcher II 2021