A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
| Field | Value |
| --- | --- |
| Authors | Dan Hendrycks, Kevin Gimpel |
| Journal/Conference Name | 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings |
| Paper Category | Artificial Intelligence |
| Paper Abstract | We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks. |
| Date of Publication | 2016 (arXiv preprint); presented at ICLR 2017 |
| Code Programming Language | Multiple |
| Comment | A minimal sketch of the maximum softmax probability (MSP) baseline described in the abstract appears below the table. |
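The abstract's core idea is that the maximum softmax probability (MSP) can serve as a confidence score, with low scores flagging likely misclassified or out-of-distribution inputs. The snippet below is a minimal sketch of that idea, not the authors' released code: it uses NumPy only, the synthetic logits and the 0.5 threshold are hypothetical placeholders, and a real application would take logits from a trained classifier and choose a threshold (or report threshold-free metrics) on held-out data.

```python
# Minimal sketch of the maximum softmax probability (MSP) baseline: the max of
# the softmax distribution is used as a confidence score, and low scores flag
# likely misclassified or out-of-distribution inputs. Logits are synthetic
# placeholders, not outputs of a real model.
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the class dimension.
    shifted = logits - logits.max(axis=axis, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=axis, keepdims=True)

def msp_scores(logits):
    # Confidence score per example: the maximum softmax probability.
    return softmax(logits).max(axis=-1)

def flag_low_confidence(logits, threshold=0.5):
    # Examples whose MSP falls below the threshold are flagged as likely
    # misclassified or out-of-distribution; the threshold is a free parameter.
    return msp_scores(logits) < threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical logits: "in-distribution" rows are peaked, "OOD" rows are flat.
    in_dist_logits = rng.normal(0.0, 1.0, size=(5, 10)) + np.eye(10)[:5] * 6.0
    ood_logits = rng.normal(0.0, 0.3, size=(5, 10))
    logits = np.vstack([in_dist_logits, ood_logits])
    print("MSP scores:", np.round(msp_scores(logits), 3))
    print("Flagged as low confidence:", flag_low_confidence(logits))
```

Rather than picking a single threshold, the paper summarizes MSP scores with threshold-independent metrics such as AUROC and AUPR, comparing scores on in-distribution test examples against scores on erroneously classified or out-of-distribution examples.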