Adversarial Robustness and Fairness in Deep Learning
dc.contributor.advisor | Goldstein, Tom | en_US |
dc.contributor.author | Cherepanova, Valeriia | en_US |
dc.contributor.department | Applied Mathematics and Scientific Computation | en_US |
dc.contributor.publisher | Digital Repository at the University of Maryland | en_US |
dc.contributor.publisher | University of Maryland (College Park, Md.) | en_US |
dc.date.accessioned | 2023-10-07T05:37:49Z | |
dc.date.available | 2023-10-07T05:37:49Z | |
dc.date.issued | 2023 | en_US |
dc.description.abstract | While deep learning has led to remarkable advancements across various domains, the widespread adoption of neural network models has brought forth significant challenges, such as vulnerability to adversarial attacks and model unfairness. These challenges have profound implications for privacy, security, and societal impact, requiring thorough investigation and the development of effective mitigation strategies. In this work we address both of these challenges. We study the adversarial robustness of deep learning models and explore defense mechanisms against poisoning attacks. We also investigate the sources of algorithmic bias and evaluate existing bias mitigation strategies in neural networks. Through this work, we aim to contribute to the understanding and enhancement of both the adversarial robustness and the fairness of deep learning systems. | en_US |
dc.identifier | https://doi.org/10.13016/dspace/abcb-jnd6 | |
dc.identifier.uri | http://hdl.handle.net/1903/30841 | |
dc.language.iso | en | en_US |
dc.subject.pqcontrolled | Artificial intelligence | en_US |
dc.subject.pquncontrolled | Adversarial Robustness | en_US |
dc.subject.pquncontrolled | Deep Learning | en_US |
dc.subject.pquncontrolled | Face Recognition | en_US |
dc.subject.pquncontrolled | Fairness | en_US |
dc.subject.pquncontrolled | Neural Networks | en_US |
dc.title | Adversarial Robustness and Fairness in Deep Learning | en_US |
dc.type | Dissertation | en_US |