Designing recurrent neural networks by unfolding an l1-l1 minimization algorithm


Disclaimer: The code links provided for this paper are external links. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Please contact us if you find a broken link here

Authors Nikos Deligiannis, Hung Duy Le, Huynh Van Luong
Journal/Conference Name Proceedings - International Conference on Image Processing, ICIP
Paper Abstract We propose a new deep recurrent neural network (RNN) architecture for sequential signal reconstruction. Our network is designed by unfolding the iterations of the proximal gradient method that solves the l1-l1 minimization problem. As such, our network leverages by design that signals have a sparse representation and that the difference between consecutive signal representations is also sparse. We evaluate the proposed model in the task of reconstructing video frames from compressive measurements and show that it outperforms several state-of-the-art RNN models.
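The abstract describes unfolding the iterations of a proximal gradient method into network layers. As a rough illustration of the idea (not the paper's actual architecture), the sketch below runs a few ISTA-style iterations warm-started from the previous frame's sparse code; all function names and hyperparameters are hypothetical, and plain soft-thresholding stands in for the paper's more elaborate l1-l1 proximal operator, which also pulls the solution toward the previous code.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: the proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def unfolded_step(y, A, h_prev, alpha=0.1, lam=0.01, n_iters=10):
    """One 'time step' of a hypothetical unfolded network: recover a sparse
    code h from measurements y ≈ A h, warm-started at the previous frame's
    code h_prev (exploiting that consecutive codes differ only sparsely)."""
    h = h_prev.copy()
    for _ in range(n_iters):
        # Gradient step on the data-fidelity term 0.5 * ||y - A h||^2
        grad = A.T @ (A @ h - y)
        # Proximal step; in the unfolded network, alpha and lam (and even A)
        # would become learnable, layer-specific parameters
        h = soft_threshold(h - alpha * grad, alpha * lam)
    return h
```

In the unfolded-network view, each loop iteration becomes one layer, and the hidden state passed between time steps is the previous frame's reconstruction.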
Date of publication 2019
Code Programming Language Python

Copyright Researcher 2022