Deep Predictive Coding for Spatiotemporal Representation Learning

Authors: Marcio Fonseca
Journal/Conference Name: Philosophical Transactions of the Royal Society B: Biological Sciences
Paper Category:
Paper Abstract: The recent advances and pitfalls of deep learning approaches reignited the debate about the importance of innate structures or inductive biases humans use to learn common sense with limited supervision. In machine learning parlance, common sense reasoning relates to the capacity of learning representations that disentangle hidden factors behind spatiotemporal sensory data. In this work, we hypothesise that the predictive coding theory of perception and learning from the neuroscience literature may be a good candidate for implementing such common sense inductive biases. We build upon a previous deep learning implementation of predictive coding by Lotter et al. (2016) and extend its application to the challenging task of inferring abstract, everyday human actions such as cooking and diving. Furthermore, we propose a novel application of the same architecture to process auditory data, and find that with a simple sensory substitution trick, the predictive coding model can learn useful representations. Our transfer learning experiments also demonstrate good generalisation of learned representations on the UCF-101 action classification dataset.
Date of publication: 2018
Code Programming Language: Jupyter Notebook
Comment:
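
The "sensory substitution trick" mentioned in the abstract is not spelled out on this page. As a rough illustration of the general idea, and not the authors' exact pipeline, the sketch below turns a raw waveform into a stack of log-spectrogram patches shaped like video frames, which a PredNet-style frame-prediction network could then consume. It uses only NumPy; the function name audio_to_frames and all defaults (sample rate, FFT size, frame size) are illustrative assumptions.

```python
import numpy as np

def audio_to_frames(waveform, n_fft=512, hop=256, frame_size=(64, 64), max_frames=10):
    """Sensory-substitution sketch: slice a log-spectrogram of a 1-D waveform
    into fixed-size 2-D patches so an image-sequence prediction model can
    treat them like video frames. Names and defaults are illustrative."""
    # Magnitude short-time Fourier transform with a Hann window.
    window = np.hanning(n_fft)
    n_windows = 1 + (len(waveform) - n_fft) // hop
    spec = np.empty((n_fft // 2 + 1, n_windows))
    for i in range(n_windows):
        segment = waveform[i * hop:i * hop + n_fft] * window
        spec[:, i] = np.abs(np.fft.rfft(segment))

    # Log-compress and rescale to [0, 1], like pixel intensities.
    spec = np.log1p(spec)
    spec = (spec - spec.min()) / (spec.max() - spec.min() + 1e-8)

    # Cut the time axis into non-overlapping frame-sized patches.
    height, width = frame_size
    frames = []
    for start in range(0, spec.shape[1] - width + 1, width):
        patch = spec[:height, start:start + width]
        frames.append(patch[..., np.newaxis])   # add a channel axis
        if len(frames) == max_frames:
            break
    return np.stack(frames)                     # shape (T, height, width, 1)


if __name__ == "__main__":
    # Toy input: five seconds of a 440 Hz tone plus noise at 16 kHz.
    sr = 16000
    t = np.linspace(0, 5, 5 * sr, endpoint=False)
    wave = np.sin(2 * np.pi * 440 * t) + 0.1 * np.random.randn(t.size)
    clip = audio_to_frames(wave)
    print(clip.shape)   # (4, 64, 64, 1) with these toy settings
```

A clip produced this way has the same (time, height, width, channels) layout as the video clips a frame-prediction model already handles, which is the essence of the substitution.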
