Generative Adversarial Text to Image Synthesis
| Field | Value |
| --- | --- |
| Authors | Scott Reed, Xinchen Yan, Lajanugen Logeswaran, Zeynep Akata, Honglak Lee, Bernt Schiele |
| Journal/Conference Name | 33rd International Conference on Machine Learning, ICML 2016 |
| Paper Category | Artificial Intelligence |
| Paper Abstract | Automatic synthesis of realistic images from text would be interesting and useful, but current AI systems are still far from this goal. However, in recent years generic and powerful recurrent neural network architectures have been developed to learn discriminative text feature representations. Meanwhile, deep convolutional generative adversarial networks (GANs) have begun to generate highly compelling images of specific categories, such as faces, album covers, and room interiors. In this work, we develop a novel deep architecture and GAN formulation to effectively bridge these advances in text and image modeling, translating visual concepts from characters to pixels. We demonstrate the capability of our model to generate plausible images of birds and flowers from detailed text descriptions. |
| Date of Publication | 2016 |
| Code Programming Language | Multiple |
| Comment | |
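
The abstract describes a conditional GAN in which a learned text embedding conditions both a DCGAN-style generator and its discriminator. The following is a minimal PyTorch sketch of that conditioning pattern only; the layer sizes, the `Z_DIM`/`TXT_DIM`/`PROJ_DIM` dimensions, and the pre-computed `txt_emb` stand-in are illustrative assumptions, not the paper's exact configuration (the paper uses a learned character-level text encoder).

```python
# Sketch of a text-conditional GAN: noise + projected text embedding in the
# generator; image features fused with the replicated embedding in the
# discriminator. Dimensions are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

Z_DIM, TXT_DIM, PROJ_DIM = 100, 1024, 128

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.project_txt = nn.Sequential(nn.Linear(TXT_DIM, PROJ_DIM), nn.LeakyReLU(0.2))
        self.net = nn.Sequential(
            # (z + projected text) -> 64x64 RGB image via fractionally strided convs
            nn.ConvTranspose2d(Z_DIM + PROJ_DIM, 512, 4, 1, 0), nn.BatchNorm2d(512), nn.ReLU(True),
            nn.ConvTranspose2d(512, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, z, txt_emb):
        cond = self.project_txt(txt_emb)
        x = torch.cat([z, cond], dim=1).unsqueeze(-1).unsqueeze(-1)  # (N, Z+PROJ, 1, 1)
        return self.net(x)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2, True),
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512), nn.LeakyReLU(0.2, True),
        )
        self.project_txt = nn.Sequential(nn.Linear(TXT_DIM, PROJ_DIM), nn.LeakyReLU(0.2))
        # the text embedding is spatially replicated and fused with image features
        self.judge = nn.Sequential(nn.Conv2d(512 + PROJ_DIM, 1, 4, 1, 0), nn.Sigmoid())

    def forward(self, img, txt_emb):
        feat = self.features(img)                      # (N, 512, 4, 4)
        cond = self.project_txt(txt_emb)
        cond = cond[:, :, None, None].expand(-1, -1, 4, 4)
        return self.judge(torch.cat([feat, cond], dim=1)).view(-1)

# Smoke test with random data; txt_emb stands in for the paper's text encoder output.
z = torch.randn(2, Z_DIM)
txt_emb = torch.randn(2, TXT_DIM)
fake = Generator()(z, txt_emb)            # (2, 3, 64, 64)
score = Discriminator()(fake, txt_emb)    # (2,)
print(fake.shape, score.shape)
```

For brevity, the sketch omits two elements of the paper's full formulation: the matching-aware discriminator (GAN-CLS), which also scores real images paired with mismatched text as fake, and the GAN-INT variant, which trains on interpolations between text embeddings.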