Robust Learning under Distributional Shifts

dc.contributor.advisor: Chellappa, Rama
dc.contributor.advisor: Feizi, Soheil
dc.contributor.author: Balaji, Yogesh
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2021-09-17T05:36:28Z
dc.date.available: 2021-09-17T05:36:28Z
dc.date.issued: 2021
dc.description.abstract: Designing robust models is critical for the reliable deployment of artificial intelligence systems. Deep neural networks perform exceptionally well on test samples drawn from the same distribution as the training set. However, they perform poorly when there is a mismatch between training and test conditions, a phenomenon called distributional shift. For instance, the perception system of a self-driving car can produce erratic predictions when it encounters a test sample with an illumination or weather condition not seen during training. Such inconsistencies are undesirable and can create life-threatening situations as these models are deployed in safety-critical applications. In this dissertation, we develop several techniques for effectively handling distributional shifts in deep learning systems. In the first part of the dissertation, we focus on detecting out-of-distribution shifts, which can be used to flag outlier samples at test time; we develop a likelihood estimation framework based on deep generative models for this task. In the second part, we study the domain adaptation problem, where the objective is to tune neural network models to adapt to a specific target distribution of interest; we design novel adaptation algorithms and analyze them under various settings. In the last part of the dissertation, we develop robust learning algorithms that can generalize to novel distributional shifts, focusing on two types of shifts: covariate shifts and adversarial shifts. All developed algorithms are rigorously evaluated on several benchmark datasets.
dc.identifier: https://doi.org/10.13016/p0ih-yx4j
dc.identifier.uri: http://hdl.handle.net/1903/27823
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: Deep learning
dc.subject.pquncontrolled: Distributional shifts
dc.subject.pquncontrolled: Domain adaptation
dc.subject.pquncontrolled: GANs
dc.subject.pquncontrolled: Generative models
dc.subject.pquncontrolled: Robust learning
dc.title: Robust Learning under Distributional Shifts
dc.type: Dissertation

Files

Original bundle

Name: Balaji_umd_0117E_21841.pdf
Size: 25.16 MB
Format: Adobe Portable Document Format