Show, Attend and Tell: Neural Image Caption Generation with Visual Attention


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its authors.

Please contact us if you encounter a broken link.

Authors: Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio
Journal/Conference Name: Proceedings of the 32nd International Conference on Machine Learning (ICML 2015)
Paper Abstract: Inspired by recent work in machine translation and object detection, we introduce an attention-based model that automatically learns to describe the content of images. We describe how we can train this model in a deterministic manner using standard backpropagation techniques and stochastically by maximizing a variational lower bound. We also show through visualization how the model is able to automatically learn to fix its gaze on salient objects while generating the corresponding words in the output sequence. We validate the use of attention with state-of-the-art performance on three benchmark datasets: Flickr8k, Flickr30k, and MS COCO. (A minimal soft-attention sketch follows this listing.)
Date of Publication: 2015
Code Programming Language: Multiple
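
The abstract describes two training regimes: a deterministic ("soft") attention that is differentiable end to end, and a stochastic ("hard") attention trained by maximizing a variational lower bound. For the hard variant, the bound follows from Jensen's inequality: log p(y | a) = log Σ_s p(s | a) p(y | s, a) ≥ Σ_s p(s | a) log p(y | s, a), where a denotes the image annotation vectors, s the sampled attention locations, and y the caption.

Below is a minimal, illustrative PyTorch sketch of the soft-attention step only: score each annotation vector against the previous decoder state, normalize the scores with a softmax, and take the expected context vector. The class name, tensor dimensions, and the single-layer scoring MLP are assumptions made for illustration, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    # Hypothetical module name and layer sizes, for illustration only.
    def __init__(self, feat_dim: int, hidden_dim: int, attn_dim: int):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)      # project annotation vectors a_i
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # project previous decoder state h_{t-1}
        self.score = nn.Linear(attn_dim, 1)                 # scalar relevance score per region

    def forward(self, features: torch.Tensor, hidden: torch.Tensor):
        # features: (batch, num_regions, feat_dim) -- CNN annotation vectors
        # hidden:   (batch, hidden_dim)            -- previous decoder state
        e = self.score(torch.tanh(
            self.feat_proj(features) + self.hidden_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                                      # (batch, num_regions)
        alpha = F.softmax(e, dim=1)                         # attention weights over regions
        context = (alpha.unsqueeze(-1) * features).sum(dim=1)  # expected context vector z_t
        return context, alpha

A quick usage check, with 196 regions standing in for a flattened 14x14 convolutional feature map:

attn = SoftAttention(feat_dim=512, hidden_dim=256, attn_dim=128)
features = torch.randn(4, 196, 512)       # batch of 4 images
hidden = torch.randn(4, 256)              # previous decoder state
context, alpha = attn(features, hidden)   # context: (4, 512), alpha: (4, 196)

Because every operation above is differentiable, this variant trains with standard backpropagation, which is what the abstract means by training "in a deterministic manner"; the stochastic variant instead samples a single region from alpha and optimizes the lower bound shown earlier.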
