Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations

Authors: Nan Rosemary Ke, Mohammad Pezeshki, Tegan Maharaj, Aaron Courville, Chris Pal, Yoshua Bengio, David Krueger, Anirudh Goyal, Nicolas Ballas, János Kramár
Journal/Conference Name: 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings
Paper Abstract: We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization yields state-of-the-art results on permuted sequential MNIST.
Date of publication: 2016
Code Programming Language: Multiple

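The abstract above describes the zoneout update: at each timestep, every hidden unit either keeps its previous value or takes its newly computed value, chosen by an independent Bernoulli draw, and at test time the random mask is replaced by its expectation, giving a deterministic interpolation of the two states. The sketch below is a minimal NumPy illustration of that rule, not the authors' released code; the function name zoneout, the vanilla tanh RNN cell, and the rate z=0.15 are illustrative assumptions.

```python
import numpy as np

def zoneout(h_prev, h_new, z=0.15, training=True):
    """Zoneout on a hidden-state vector (illustrative sketch, not the authors' code).

    With probability z each unit keeps its previous value h_prev[i];
    otherwise it takes the newly computed value h_new[i]. At test time
    the stochastic mask is replaced by its expectation, i.e. a
    deterministic convex combination of the two states.
    """
    if training:
        keep_prev = np.random.random(h_prev.shape) < z  # per-unit Bernoulli(z) mask
        return np.where(keep_prev, h_prev, h_new)
    return z * h_prev + (1.0 - z) * h_new  # expected value of the stochastic update


def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One vanilla tanh RNN step: the candidate state before zoneout is applied."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)


# Toy usage: run a short sequence with zoneout applied at every timestep.
rng = np.random.RandomState(0)
T, n_in, n_hid = 5, 8, 16
X = rng.randn(T, n_in)
W_xh = rng.randn(n_in, n_hid) * 0.1
W_hh = rng.randn(n_hid, n_hid) * 0.1
b_h = np.zeros(n_hid)

h = np.zeros(n_hid)
for t in range(T):
    h_candidate = rnn_step(X[t], h, W_xh, W_hh, b_h)
    h = zoneout(h, h_candidate, z=0.15, training=True)
print(h.shape)  # (16,)
```

For gated architectures such as LSTMs, the paper applies zoneout separately to the cell and hidden states, typically with different rates; the single-vector form above is only the simplest case.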