Non-Uniform Stochastic Average Gradient Method for Training Conditional Random Fields

Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its authors.

Authors: Mark W. Schmidt, Reza Babanezhad, Mohamed Osama Ahmed, Aaron Defazio, Ann Clifton
Journal/Conference Name: International Conference on Artificial Intelligence and Statistics
Paper Abstract: We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance, and analyze the rate of convergence of the SAGA variant under non-uniform sampling. Our experimental results reveal that our method significantly outperforms existing methods in terms of the training objective, and performs as well or better than optimally-tuned stochastic gradient methods in terms of test error.
Date of publication: 2015
Code Programming Language: MATLAB
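
The abstract combines three ingredients: SAG's memory of per-example gradients, a structure-based reduction of that memory, and a non-uniform sampling scheme. As a rough illustration only, the MATLAB sketch below applies SAG with Lipschitz-proportional sampling to L2-regularized logistic regression, where each example's gradient is a scalar times its feature vector, so only one scalar per example needs to be stored (an analogue of the structure the paper exploits in the CRF gradient). The 50/50 uniform/Lipschitz sampling mixture, the step size 1/max(L), and all function and variable names are assumptions made for this sketch; the authors' actual MATLAB code trains CRFs, estimates the constants L_i online, and includes corrections that are omitted here.

% Minimal SAG sketch with non-uniform (Lipschitz-proportional) sampling.
% Illustrative only; not the authors' reference implementation.
function w = sag_nus_sketch(X, y, lambda, nIter)
  % X: n-by-p feature matrix, y: n-by-1 labels in {-1, +1}
  [n, p] = size(X);
  w = zeros(p, 1);

  % Per-example Lipschitz constants of the logistic-loss gradient:
  % L_i = 0.25 * ||x_i||^2 + lambda.
  L = 0.25 * sum(X.^2, 2) + lambda;
  probs = L / sum(L);            % sampling probabilities proportional to L_i
  cdf = cumsum(probs);

  g = zeros(n, 1);               % gradient memory: one scalar per example
  d = zeros(p, 1);               % running sum of stored (data-part) gradients
  seen = false(n, 1);
  m = 0;                         % number of distinct examples seen so far
  alpha = 1 / max(L);            % conservative constant step size (assumption)

  for t = 1:nIter
    % Mix uniform and Lipschitz-proportional sampling (assumed 50/50 mix).
    if rand < 0.5
      i = randi(n);
    else
      i = find(rand <= cdf, 1);
    end

    % Fresh gradient of the logistic loss at w is s * x_i, a scalar times
    % the feature vector, so storing s suffices (the memory-saving trick).
    s = -y(i) / (1 + exp(y(i) * (X(i, :) * w)));

    % Swap the old stored gradient for the new one inside the running sum.
    d = d + (s - g(i)) * X(i, :)';
    g(i) = s;
    if ~seen(i), seen(i) = true; m = m + 1; end

    % SAG step: average of stored gradients plus the fresh regularizer term.
    w = w - alpha * (d / m + lambda * w);
  end
end

For example, w = sag_nus_sketch(randn(100, 5), sign(randn(100, 1)), 1e-2, 5000) runs the sketch on random data. Sampling examples with large L_i more often is the non-uniform scheme the abstract reports to substantially improve practical performance over uniform sampling.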
