Learning to Generate Wikipedia Summaries for Underserved Languages from Wikidata


Authors Lucie-Aimée Kaffee, Hady Elsahar, Pavlos Vougiouklis, Christophe Gravier, Frédérique Laforest, Jonathon Hare, Elena Simperl
Journal/Conference Name NAACL 2018
Paper Abstract While Wikipedia exists in 287 languages, its content is unevenly distributed among them. In this work, we investigate the generation of open domain Wikipedia summaries in underserved languages using structured data from Wikidata. To this end, we propose a neural network architecture equipped with copy actions that learns to generate single-sentence and comprehensible textual summaries from Wikidata triples. We demonstrate the effectiveness of the proposed approach by evaluating it against a set of baselines on two languages of different natures: Arabic, a morphologically rich language with a larger vocabulary than English, and Esperanto, a constructed language known for its easy acquisition.
Date of publication 2018
Code Programming Language Lua
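The released implementation for this paper is in Lua (Torch). As a rough, illustrative sketch of the kind of input such a system consumes, the plain-Lua snippet below linearizes a handful of Wikidata triples into a token sequence and marks entity labels as copyable spans, echoing the copy actions mentioned in the abstract. The example triples, the label table, and the <copy:...> marker notation are hypothetical stand-ins for illustration only; they are not taken from the authors' code or preprocessing.

-- Minimal sketch: flatten Wikidata triples into tokens, marking entity
-- labels so a decoder with copy actions could copy them verbatim rather
-- than generate them from a fixed vocabulary. All data below is toy data.

-- A few triples about Douglas Adams (subject, property, object).
local triples = {
  { s = "Q42", p = "P31",  o = "Q5" },      -- instance of: human
  { s = "Q42", p = "P106", o = "Q36180" },  -- occupation: writer
}

-- Toy lookup standing in for Wikidata labels in the target language.
local labels = {
  Q42 = "Douglas Adams", Q5 = "human", Q36180 = "writer",
  P31 = "instance of", P106 = "occupation",
}

-- Flatten each triple into "subject property object" tokens; entities
-- (Q-items) are wrapped as copyable spans, properties emit their labels.
local function linearize(ts)
  local tokens = {}
  for _, t in ipairs(ts) do
    for _, item in ipairs({ t.s, t.p, t.o }) do
      local label = labels[item] or item
      if item:sub(1, 1) == "Q" then
        table.insert(tokens, "<copy:" .. label .. ">")
      else
        table.insert(tokens, label)
      end
    end
  end
  return table.concat(tokens, " ")
end

print(linearize(triples))
-- <copy:Douglas Adams> instance of <copy:human> <copy:Douglas Adams> occupation <copy:writer>

Marking entity labels as copyable spans is one way to let a decoder reproduce rare names (a particular concern for a large-vocabulary language like Arabic) without inflating the output vocabulary.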
