Error Feedback Fixes SignSGD and other Gradient Compression Schemes

Disclaimer: The code links provided for this paper are external links. Science Nest has no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Please contact us if you find a broken link.

Authors Sai Praneeth Karimireddy, Quentin Rebjock, Sebastian U. Stich, Martin Jaggi
Journal/Conference Name 36th International Conference on Machine Learning, ICML 2019
Paper Category
Paper Abstract Sign-based algorithms (e.g. signSGD) have been proposed as a biased gradient compression technique to alleviate the communication bottleneck in training large neural networks across multiple workers. We show simple convex counter-examples where signSGD does not converge to the optimum. Further, even when it does converge, signSGD may generalize poorly when compared with SGD. These issues arise because of the biased nature of the sign compression operator. We then show that using error-feedback, i.e. incorporating the error made by the compression operator into the next step, overcomes these issues. We prove that our algorithm EF-SGD with arbitrary compression operator achieves the same rate of convergence as SGD without any additional assumptions. Thus EF-SGD achieves gradient compression for free. Our experiments thoroughly substantiate the theory and show that error-feedback improves both convergence and generalization. Code can be found at \url{https://github.com/epfml/error-feedback-SGD}.
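The error-feedback idea described in the abstract can be sketched in a few lines: instead of discarding what the compressor loses, the residual is stored and added back into the next step. The sketch below is illustrative, not the paper's implementation; it uses a scaled-sign compressor on a toy deterministic quadratic, and all names (`scaled_sign`, `ef_signsgd`) are our own.

```python
import numpy as np

def scaled_sign(p):
    # Scaled-sign compressor: transmit only signs, rescaled by the
    # mean absolute value so the l1 mass of p is preserved.
    return (np.abs(p).sum() / p.size) * np.sign(p)

def ef_signsgd(grad, x0, lr=0.1, steps=500):
    """Error-feedback sketch: the compression residual e is carried
    over and added to the next (scaled) gradient before compressing."""
    x = x0.astype(float).copy()
    e = np.zeros_like(x)           # error memory, starts at zero
    for _ in range(steps):
        p = lr * grad(x) + e       # correct gradient with past error
        delta = scaled_sign(p)     # compressed update actually applied
        x -= delta
        e = p - delta              # remember what compression dropped
    return x

# Toy objective f(x) = 0.5 * ||x - b||^2, so grad(x) = x - b.
b = np.array([1.0, -2.0, 3.0])
x_final = ef_signsgd(lambda x: x - b, np.zeros(3))
```

Without the `e` term this reduces to (scaled) signSGD, which the paper shows can fail to converge on simple convex problems; the residual feedback is what restores SGD-like convergence.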
Date of publication 2019
Code Programming Language Jupyter Notebook
Comment

Copyright Researcher 2022