Random Spiking and Systematic Evaluation of Defenses Against Adversarial Examples


Disclaimer: The code links provided for this paper are external. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by its author(s).

Please contact us if you encounter a broken link.

Authors Ninghui Li, Bruno Ribeiro, Sze Yiu Chau, Huangyi Ge
Journal/Conference Name CODASPY 2020 - Proceedings of the 10th ACM Conference on Data and Application Security and Privacy
Paper Category
Paper Abstract Image classifiers often suffer from adversarial examples, which are generated by strategically adding a small amount of noise to input images to trick classifiers into misclassification. Over the years, many defense mechanisms have been proposed, and different researchers have made seemingly contradictory claims on their effectiveness. We present an analysis of possible adversarial models, and propose an evaluation framework for comparing different defense mechanisms. As part of the framework, we introduce a more powerful and realistic adversary strategy. Furthermore, we propose a new defense mechanism called Random Spiking (RS), which generalizes dropout and introduces random noises in the training process in a controlled manner. Evaluations under our proposed framework suggest RS delivers better protection against adversarial examples than many existing schemes.
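
The abstract describes Random Spiking (RS) as a generalization of dropout that injects random noise during training in a controlled manner. Below is a minimal, illustrative sketch of such a layer in PyTorch; the class name RandomSpiking, the spiking probability p, and the uniform noise range are assumptions made for illustration and do not reproduce the authors' exact formulation from the paper or the linked code.

```python
# Illustrative sketch only: a dropout-like layer where selected units are
# replaced with random noise instead of being zeroed. Parameters and the
# noise distribution are assumptions, not the paper's exact method.
import torch
import torch.nn as nn


class RandomSpiking(nn.Module):
    """During training, each unit is 'spiked' (replaced by random noise)
    with probability p; at inference the layer is the identity."""

    def __init__(self, p: float = 0.1, noise_low: float = -1.0, noise_high: float = 1.0):
        super().__init__()
        self.p = p                    # probability that a unit is spiked
        self.noise_low = noise_low    # lower bound of the replacement noise
        self.noise_high = noise_high  # upper bound of the replacement noise

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training or self.p == 0.0:
            return x  # no noise injection at inference time
        spike_mask = torch.rand_like(x) < self.p
        noise = torch.empty_like(x).uniform_(self.noise_low, self.noise_high)
        return torch.where(spike_mask, noise, x)


# Example placement inside a small classifier (hypothetical architecture):
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    RandomSpiking(p=0.1),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 10),
)
```

Note how standard dropout is recovered as the special case where the replacement value is fixed at zero; replacing it with random noise is what the abstract refers to as introducing noise into training in a controlled manner.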
Date of publication 2020
Code Programming Language Python
Comment
