XGBoost: Scalable GPU Accelerated Learning


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Please contact us if you encounter a broken link.

Authors Rory Mitchell, Andrey Adinets, Thejaswi Rao, Eibe Frank
Journal/Conference Name arXiv
Paper Category
Paper Abstract We describe the multi-GPU gradient boosting algorithm implemented in the XGBoost library (this https URL). Our algorithm allows fast, scalable training on multi-GPU systems with all of the features of the XGBoost library. We employ data compression techniques to minimise the usage of scarce GPU memory while still allowing highly efficient implementation. Using our algorithm we show that it is possible to process 115 million training instances in under three minutes on a publicly available cloud computing instance. The algorithm is implemented using end-to-end GPU parallelism, with prediction, gradient calculation, feature quantisation, decision tree construction and evaluation phases all computed on device.
Date of publication 2018
Code Programming Language C++
Comment
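The abstract describes end-to-end GPU training: gradient calculation, feature quantisation, decision tree construction, and prediction all run on the device. As a rough illustration of how that algorithm is typically invoked, below is a minimal sketch using the XGBoost Python API. The synthetic data, the hyperparameter values, and the gpu_hist/gpu_predictor settings are assumptions about a library release contemporary with the paper, not details taken from the paper; recent XGBoost releases express the same configuration as device="cuda" with tree_method="hist".

# Minimal sketch (assumed API usage, not from the paper): GPU-accelerated
# gradient boosting via the XGBoost Python interface.
import numpy as np
import xgboost as xgb

# Synthetic binary classification data standing in for the large datasets
# used in the paper's experiments.
rng = np.random.default_rng(0)
X = rng.standard_normal((100_000, 50)).astype(np.float32)
y = (X[:, 0] + 0.1 * rng.standard_normal(100_000) > 0).astype(np.int32)

dtrain = xgb.DMatrix(X, label=y)

params = {
    "tree_method": "gpu_hist",     # histogram-based tree construction on the GPU
    "max_bin": 256,                # number of quantised feature bins (feature quantisation)
    "objective": "binary:logistic",
    "predictor": "gpu_predictor",  # keep prediction on the device as well
}

# With gpu_hist selected, gradient calculation, quantisation, tree
# construction and evaluation are all computed on the device.
booster = xgb.train(params, dtrain, num_boost_round=100)
predictions = booster.predict(dtrain)

Multi-GPU training of the kind described in the paper is exposed differently across library versions; in current releases it is usually driven through the xgboost.dask interface, with one worker per GPU running the same quantised histogram algorithm.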

Copyright Researcher 2022