dc.contributor.advisor | Ming, Jiang
dc.creator | Barton, Armon
dc.date.accessioned | 2019-02-26T19:15:09Z
dc.date.available | 2019-02-26T19:15:09Z
dc.date.created | 2018-12
dc.date.issued | 2018-12-12
dc.date.submitted | December 2018
dc.identifier.uri | http://hdl.handle.net/10106/27743
dc.description.abstract | Deep learning is becoming a technology central to the safety of cars, the security of networks, and the correct functioning of many other types of systems. Unfortunately, attackers can create adversarial examples: small perturbations to inputs that trick deep neural networks into making a misclassification. Researchers have explored various defenses against this attack, but many of them have been broken. The most robust approaches are Adversarial Training and its extension, Adversarial Logit Pairing, but Adversarial Training requires generating and training on adversarial examples from every anticipated attack. This is not only expensive, but it is inherently vulnerable to novel attack strategies. We propose PadNet, a stacked defense against adversarial examples that does not require knowledge of the attack techniques used by the attacker. PadNet combines two novel techniques: Defensive Padding and Targeted Gradient Minimizing (TGM). Prior research suggests that adversarial examples exist near the decision boundary of the classifier. Defensive Padding is designed to reinforce the decision boundary of the model by introducing into the training set a new class of augmented data, called the padding class, that lies near the decision boundary. Targeted Gradient Minimizing is designed to produce low gradients from the input data point toward the decision boundary, thus making adversarial examples more difficult to find. In this study, we show that: 1) PadNet significantly increases robustness against adversarial examples compared to Adversarial Logit Pairing, and 2) PadNet is adaptable to various attacks without knowing the attacker's techniques, and therefore allows the training cost to be fixed, unlike Adversarial Logit Pairing.
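The abstract does not give implementation details. As a loose, hypothetical illustration of the idea behind Defensive Padding — augmenting the training set with a new "padding" class of points lying near the decision boundary — one could interpolate pairs of examples drawn from different classes and label the mixtures with a fresh class index. The function name, the interpolation scheme, and all parameters below are assumptions for illustration, not the dissertation's actual method:

```python
import numpy as np

def make_padding_class(x, y, num_classes, alpha=0.5, rng=None):
    """Hypothetical sketch of Defensive Padding's data augmentation.

    Interpolates randomly paired examples from *different* classes, so the
    mixtures tend to land near the decision boundary, and labels every
    mixture with a new class index (num_classes), the 'padding class'.
    """
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(x))          # random pairing of examples
    x2, y2 = x[idx], y[idx]
    mask = y != y2                         # only mix across class boundaries
    mixed = alpha * x[mask] + (1 - alpha) * x2[mask]
    pad_labels = np.full(mask.sum(), num_classes)  # new padding-class label
    x_aug = np.concatenate([x, mixed])
    y_aug = np.concatenate([y, pad_labels])
    return x_aug, y_aug
```

A classifier trained on `(x_aug, y_aug)` would then have `num_classes + 1` outputs; inputs falling into the padding class at test time could be treated as suspected adversarial examples.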
dc.format.mimetype | application/pdf
dc.language.iso | en_US
dc.subject | Deep learning
dc.subject | Secure machine learning
dc.subject | Adversarial examples
dc.title | Defending Neural Networks Against Adversarial Examples
dc.type | Thesis
dc.degree.department | Computer Science and Engineering
dc.degree.name | Doctor of Philosophy in Computer Science
dc.date.updated | 2019-02-26T19:15:09Z
thesis.degree.department | Computer Science and Engineering
thesis.degree.grantor | The University of Texas at Arlington
thesis.degree.level | Doctoral
thesis.degree.name | Doctor of Philosophy in Computer Science
dc.type.material | text
dc.creator.orcid | 0000-0002-5372-1480
Files in this item
- Name: BARTON-DISSERTATION-2018.pdf
- Size: 740.9 KB
- Format: PDF