Continuous control with deep reinforcement learning

Authors Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra
Journal/Conference Name International Conference on Learning Representations (ICLR) 2016
Paper Abstract We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end directly from raw pixel inputs.
Date of publication 2016 (arXiv preprint 2015)
Code Programming Language Multiple
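
The abstract describes an actor-critic algorithm built on the deterministic policy gradient, with exploration noise and slowly-tracking target networks. The sketch below illustrates those ingredients on a hypothetical 1-D toy task with linear/quadratic function approximators; the task, features, learning rates, and Gaussian exploration noise are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

# Toy sketch of the updates the abstract describes: a deterministic
# actor, a TD-trained critic, exploration noise, and soft target
# networks. All concrete choices here are illustrative assumptions.

rng = np.random.default_rng(0)

theta = 0.0                        # actor parameter: mu(s) = theta * s
w = 0.1 * rng.normal(size=3)       # critic: Q(s,a) = w . [s^2, s*a, a^2]
theta_targ, w_targ = theta, w.copy()

gamma, tau, lr = 0.9, 0.05, 0.05   # discount, target-update rate, step size

def mu(s, th):                     # deterministic policy
    return th * s

def features(s, a):
    return np.array([s * s, s * a, a * a])

def q(s, a, wv):                   # critic value
    return wv @ features(s, a)

for _ in range(5000):
    s = rng.uniform(-1.0, 1.0)
    # Noisy action for exploration (the paper uses Ornstein-Uhlenbeck
    # noise; plain Gaussian noise is used here for brevity).
    a = mu(s, theta) + 0.1 * rng.normal()
    r = -(a - s) ** 2              # toy reward: the best action is a = s
    s2 = rng.uniform(-1.0, 1.0)

    # Critic update: one-step TD target from the *target* actor/critic.
    y = r + gamma * q(s2, mu(s2, theta_targ), w_targ)
    td = y - q(s, a, w)
    w = w + lr * td * features(s, a)

    # Actor update: deterministic policy gradient dQ/da * da/dtheta,
    # evaluated at the noise-free action a = mu(s).
    dq_da = w[1] * s + 2.0 * w[2] * mu(s, theta)
    theta = theta + lr * dq_da * s

    # Soft ("Polyak") updates of the target networks.
    theta_targ = (1 - tau) * theta_targ + tau * theta
    w_targ = (1 - tau) * w_targ + tau * w

# With the reward above, the optimal deterministic policy is mu(s) = s,
# so theta should approach 1.
```

The full algorithm in the paper replaces these linear approximators with deep networks and samples minibatches from a replay buffer, but the structure of each update step is the same.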
