ASAGA: Asynchronous Parallel SAGA
Authors | Rémi Leblond, Fabian Pedregosa, Simon Lacoste-Julien
Journal/Conference Name | Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS 2017)
Paper Category | Artificial Intelligence
Paper Abstract | We describe ASAGA, an asynchronous parallel version of the incremental gradient algorithm SAGA that enjoys fast linear convergence rates. Through a novel perspective, we revisit and clarify a subtle but important technical issue present in a large fraction of the recent convergence rate proofs for asynchronous parallel optimization algorithms, and propose a simplification of the recently introduced "perturbed iterate" framework that resolves it. We thereby prove that ASAGA can obtain a theoretical linear speedup on multi-core systems even without sparsity assumptions. We present results of an implementation on a 40-core architecture illustrating the practical speedup as well as the hardware overhead.
Date of Publication | 2016 (preprint)
Code Programming Language | Scala
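The abstract summarizes ASAGA's core idea: several threads repeatedly sample a data point, compute its gradient against a lock-free (possibly inconsistent) read of the shared iterate, and apply the SAGA update, which combines the fresh gradient with a per-example gradient memory and the running average of that memory. The sketch below illustrates this scheme in Scala (the language of the linked code) on a toy least-squares problem. It is a minimal sketch, not the authors' implementation: it uses plain Hogwild-style array writes where the paper's analysis assumes atomic coordinate updates, and all names, problem sizes, and the step size are hypothetical.

```scala
import java.util.concurrent.ThreadLocalRandom

object AsagaSketch {

  def dot(u: Array[Double], v: Array[Double]): Double = {
    var s = 0.0; var k = 0
    while (k < u.length) { s += u(k) * v(k); k += 1 }
    s
  }

  def main(args: Array[String]): Unit = {
    val n = 1000             // number of data points (hypothetical)
    val d = 20               // dimension (hypothetical)
    val numThreads = 4
    val stepsPerThread = 50000
    val gamma = 0.1          // step size; untuned, chosen for this toy problem

    val seed = new scala.util.Random(0)
    val A = Array.fill(n, d)(seed.nextGaussian() / math.sqrt(d))
    val xStar = Array.fill(d)(seed.nextGaussian())
    val b = A.map(row => dot(row, xStar))   // noiseless least-squares targets

    // Shared state, read and written lock-free by all workers:
    val x = new Array[Double](d)       // current iterate ("perturbed" reads)
    val mem = new Array[Double](n)     // stored scalar f'_i (GLM: grad_i = f'_i * a_i)
    val gradAvg = new Array[Double](d) // running average of the stored gradients

    val workers = (0 until numThreads).map { _ =>
      new Thread(() => {
        val rnd = ThreadLocalRandom.current()
        var t = 0
        while (t < stepsPerThread) {
          val i = rnd.nextInt(n)
          val row = A(i)
          val fresh = dot(row, x) - b(i)  // f'_i at an inconsistent read of x
          val stale = mem(i)
          mem(i) = fresh                  // overwrite the gradient memory for i
          var k = 0
          while (k < d) {
            val diff = (fresh - stale) * row(k)
            // SAGA direction: grad_i(x) - alpha_i + average(alpha).
            // Plain writes here; the paper assumes atomic coordinate updates.
            x(k) -= gamma * (diff + gradAvg(k))
            gradAvg(k) += diff / n        // keep the average in sync with mem
            k += 1
          }
          t += 1
        }
      })
    }
    workers.foreach(_.start())
    workers.foreach(_.join())

    val dist = math.sqrt((0 until d).map(k => math.pow(x(k) - xStar(k), 2)).sum)
    println(f"||x - x*|| after ${numThreads * stepsPerThread} updates: $dist%.6f")
  }
}
```

Storing only the scalar f'_i per example, rather than a full gradient vector, is the standard generalized-linear-model trick for SAGA-type methods: it keeps the gradient memory at O(n) rather than O(nd) without changing the update.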