Adversarial Robustness and Fairness in Deep Learning

dc.contributor.advisor: Goldstein, Tom
dc.contributor.author: Cherepanova, Valeriia
dc.contributor.department: Applied Mathematics and Scientific Computation
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2023-10-07T05:37:49Z
dc.date.available: 2023-10-07T05:37:49Z
dc.date.issued: 2023
dc.description.abstract: While deep learning has led to remarkable advancements across various domains, the widespread adoption of neural network models has brought forth significant challenges such as vulnerability to adversarial attacks and model unfairness. These challenges have profound implications for privacy, security, and societal impact, requiring thorough investigation and the development of effective mitigation strategies. In this work we address both of these challenges. We study the adversarial robustness of deep learning models and explore defense mechanisms against poisoning attacks. We also examine the sources of algorithmic bias and evaluate existing bias mitigation strategies in neural networks. Through this work, we aim to contribute to the understanding and enhancement of both the adversarial robustness and the fairness of deep learning systems.
dc.identifier: https://doi.org/10.13016/dspace/abcb-jnd6
dc.identifier.uri: http://hdl.handle.net/1903/30841
dc.language.iso: en
dc.subject.pqcontrolled: Artificial intelligence
dc.subject.pquncontrolled: Adversarial Robustness
dc.subject.pquncontrolled: Deep Learning
dc.subject.pquncontrolled: Face Recognition
dc.subject.pquncontrolled: Fairness
dc.subject.pquncontrolled: Neural Networks
dc.title: Adversarial Robustness and Fairness in Deep Learning
dc.type: Dissertation

Files

Original bundle
Name: Cherepanova_umd_0117E_23673.pdf
Size: 16.35 MB
Format: Adobe Portable Document Format