Why We Need New Evaluation Metrics for NLG
Authors | Jekaterina Novikova, Ondřej Dušek, Amanda Cercas Curry, Verena Rieser |
Journal/Conference Name | EMNLP 2017 |
Paper Category | Artificial Intelligence |
Paper Abstract | The majority of NLG evaluation relies on automatic metrics, such as BLEU. In this paper, we motivate the need for novel, system- and data-independent automatic evaluation methods: We investigate a wide range of metrics, including state-of-the-art word-based and novel grammar-based ones, and demonstrate that they only weakly reflect human judgements of system outputs as generated by data-driven, end-to-end NLG. We also show that metric performance is data- and system-specific. Nevertheless, our results also suggest that automatic metrics perform reliably at system-level and can support system development by finding cases where a system performs poorly. |
Date of publication | 2017 |
Code Programming Language | R |
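The paper's central analysis compares automatic metric scores against human judgements at both the sentence and system level. The listed code language is R; the snippet below is a minimal, hypothetical sketch of how such a comparison might be set up in base R. It is not the authors' code, and the metric scores, ratings, and system labels are made-up illustrative values.

```r
# Hypothetical per-utterance data: automatic metric scores (e.g. BLEU-like,
# in [0, 1]) and corresponding human quality ratings. Illustrative only.
metric_scores <- c(0.12, 0.34, 0.08, 0.51, 0.27, 0.40, 0.19, 0.33)
human_ratings <- c(3,    4,    2,    5,    4,    3,    2,    4)

# Sentence-level agreement: Spearman rank correlation between the
# automatic metric and the human judgements.
# (With tied ratings, R warns that the exact p-value cannot be computed.)
sentence_level <- cor.test(metric_scores, human_ratings, method = "spearman")
print(sentence_level$estimate)  # rho
print(sentence_level$p.value)

# System-level view: aggregate scores per system before comparing,
# here for two hypothetical systems.
system_id    <- c(1, 1, 1, 1, 2, 2, 2, 2)
metric_means <- tapply(metric_scores, system_id, mean)
human_means  <- tapply(human_ratings, system_id, mean)
print(cbind(metric_means, human_means))
```

The sentence-level correlation illustrates the kind of weak metric-human agreement the paper reports, while the per-system aggregation reflects the system-level comparison at which, according to the abstract, automatic metrics perform more reliably.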