Noisy Networks for Exploration


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its authors.

Please contact us if you encounter a broken link.

Authors Meire Fortunato, Bilal Piot, Olivier Pietquin, Ian Osband, Vlad Mnih, Alex Graves, Demis Hassabis, Shane Legg, Remi Munos, Charles Blundell, Jacob Menick, Mohammad Gheshlaghi Azar
Journal/Conference Name ICLR 2018
Paper Category
Paper Abstract We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration. The parameters of the noise are learned with gradient descent along with the remaining network weights. NoisyNet is straightforward to implement and adds little computational overhead. We find that replacing the conventional exploration heuristics for A3C, DQN and dueling agents (entropy reward and $\epsilon$-greedy respectively) with NoisyNet yields substantially higher scores for a wide range of Atari games, in some cases advancing the agent from sub to super-human performance.
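The abstract describes replacing a standard linear layer's fixed weights with learned means plus learned noise scales multiplied by sampled noise. A minimal sketch of such a noisy linear layer is shown below, using NumPy and the factorised Gaussian noise scheme described in the paper; the class name, initialisation constants, and method names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class NoisyLinear:
    """Sketch of a NoisyNet-style linear layer: each weight w is
    replaced by mu + sigma * eps, where mu and sigma are learnable
    and eps is freshly sampled noise (factorised Gaussian scheme)."""

    def __init__(self, in_features, out_features, sigma0=0.5, seed=None):
        self.rng = np.random.default_rng(seed)
        bound = 1.0 / np.sqrt(in_features)
        # Learnable parameters: weight/bias means and noise scales.
        self.w_mu = self.rng.uniform(-bound, bound, (out_features, in_features))
        self.b_mu = self.rng.uniform(-bound, bound, out_features)
        self.w_sigma = np.full((out_features, in_features), sigma0 * bound)
        self.b_sigma = np.full(out_features, sigma0 * bound)
        self.in_features = in_features
        self.out_features = out_features
        self.sample_noise()

    @staticmethod
    def _f(x):
        # Noise-scaling function f(x) = sign(x) * sqrt(|x|).
        return np.sign(x) * np.sqrt(np.abs(x))

    def sample_noise(self):
        # Factorised noise: one vector per input and per output unit,
        # combined by an outer product instead of sampling every weight.
        eps_in = self._f(self.rng.standard_normal(self.in_features))
        eps_out = self._f(self.rng.standard_normal(self.out_features))
        self.w_eps = np.outer(eps_out, eps_in)
        self.b_eps = eps_out

    def forward(self, x):
        # Perturbed parameters: noise scales modulate the sampled noise.
        w = self.w_mu + self.w_sigma * self.w_eps
        b = self.b_mu + self.b_sigma * self.b_eps
        return x @ w.T + b
```

In training, `mu` and `sigma` would both be updated by gradient descent, while `sample_noise()` is called to resample the perturbation (e.g. per forward pass), so the policy itself stays stochastic without an external epsilon-greedy or entropy-bonus heuristic.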
Date of publication 2017
Code Programming Language Multiple

Copyright Researcher 2022