
dc.contributor.advisor  Ming, Jiang
dc.creator  Barton, Armon
dc.date.accessioned  2019-02-26T19:15:09Z
dc.date.available  2019-02-26T19:15:09Z
dc.date.created  2018-12
dc.date.issued  2018-12-12
dc.date.submitted  December 2018
dc.identifier.uri  http://hdl.handle.net/10106/27743
dc.description.abstract  Deep learning is becoming a technology central to the safety of cars, the security of networks, and the correct functioning of many other types of systems. Unfortunately, attackers can craft adversarial examples: small perturbations to inputs that trick deep neural networks into misclassifying them. Researchers have explored various defenses against such attacks, but many have been broken. The most robust approaches are Adversarial Training and its extension, Adversarial Logit Pairing; however, Adversarial Training requires generating and training on adversarial examples from every possible attack. This is not only expensive but also inherently vulnerable to novel attack strategies. We propose PadNet, a stacked defense against adversarial examples that does not require knowledge of the attacker's techniques. PadNet combines two novel techniques: Defensive Padding and Targeted Gradient Minimizing (TGM). Prior research suggests that adversarial examples exist near the decision boundary of the classifier. Defensive Padding reinforces the model's decision boundary by introducing into the training set a new class of augmented data that lies near the decision boundary, called the padding class. Targeted Gradient Minimizing produces low gradients from the input data point toward the decision boundary, making adversarial examples harder to find. In this study, we show that: 1) PadNet significantly increases robustness against adversarial examples compared to Adversarial Logit Pairing, and 2) PadNet adapts to various attacks without knowledge of the attacker's techniques, and therefore allows the training cost to remain fixed, unlike Adversarial Logit Pairing.
dc.format.mimetype  application/pdf
dc.language.iso  en_US
dc.subject  Deep learning
dc.subject  Secure machine learning
dc.subject  Adversarial examples
dc.title  Defending Neural Networks Against Adversarial Examples
dc.type  Thesis
dc.degree.department  Computer Science and Engineering
dc.degree.name  Doctor of Philosophy in Computer Science
dc.date.updated  2019-02-26T19:15:09Z
thesis.degree.department  Computer Science and Engineering
thesis.degree.grantor  The University of Texas at Arlington
thesis.degree.level  Doctoral
thesis.degree.name  Doctor of Philosophy in Computer Science
dc.type.material  text
dc.creator.orcid  0000-0002-5372-1480
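
The two components described in the abstract lend themselves to a brief illustration. Below is a minimal, hypothetical PyTorch sketch of a PadNet-style training loss. The function names, the midpoint-interpolation construction of padding examples, and the gradient-norm form of the TGM penalty are all assumptions drawn from the abstract's description, not the thesis's exact method.

import torch
import torch.nn.functional as F

def make_padding_examples(x, y, num_classes):
    # Defensive Padding (assumed form): mix cross-class pairs to obtain
    # points that plausibly lie near the decision boundary, and label them
    # with an extra "padding" class index. Midpoint interpolation is an
    # illustrative assumption.
    perm = torch.randperm(x.size(0))
    mask = y != y[perm]                       # keep only cross-class pairs
    x_mix = 0.5 * (x[mask] + x[perm][mask])
    y_pad = torch.full((x_mix.size(0),), num_classes, dtype=torch.long)
    return x_mix, y_pad

def tgm_penalty(model, x, y):
    # Targeted Gradient Minimizing (assumed form): penalize the norm of the
    # true-class logit's gradient with respect to the input, so the loss
    # surface stays flat along the directions an attacker would follow
    # toward the decision boundary.
    x = x.clone().requires_grad_(True)
    true_logit = model(x).gather(1, y.unsqueeze(1)).sum()
    (grad,) = torch.autograd.grad(true_logit, x, create_graph=True)
    return grad.flatten(1).norm(dim=1).mean()

def padnet_loss(model, x, y, num_classes, lam=0.1):
    # The model is assumed to output num_classes + 1 logits: the original
    # classes plus the padding class introduced by Defensive Padding.
    x_pad, y_pad = make_padding_examples(x, y, num_classes)
    logits = model(torch.cat([x, x_pad]))
    ce = F.cross_entropy(logits, torch.cat([y, y_pad]))
    return ce + lam * tgm_penalty(model, x, y)

Under these assumptions, labeling boundary-adjacent mixtures as an explicit extra class pushes the network to carve out a buffer region around each true class, while the TGM term suppresses the input gradients an attacker would exploit; neither term depends on any particular attack, which is consistent with the abstract's claim of a fixed training cost.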

