Practical Robust Learning Under Domain Shifts

dc.contributor.advisor: Shrivastava, Abhinav
dc.contributor.advisor: Davis, Larry
dc.contributor.author: Yang, Luyu
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2023-02-01T06:31:28Z
dc.date.available: 2023-02-01T06:31:28Z
dc.date.issued: 2022
dc.description.abstract: As devices are constantly upgraded, the data we capture shifts over time. Despite the domain shifts among images, we as humans can set the differences aside and still recognize the content. For machines, however, these shifts pose a much bigger challenge. Humans naturally adapt to visual changes in the environment without learning everything all over again, yet to make machines work in a changed environment we need new annotations from humans. The fundamental question is: can we make machines as adaptive as humans? This thesis works toward answering that question through advances in robust learning under domain shifts via domain adaptation. Our goal is to facilitate the transfer of information across domains while minimizing the need for human supervision.

To enable real systems with demonstrated robustness, the study of domain adaptation needs to move from ideals to realities. Current domain adaptation research rests on a few ideals that are not consistent with reality: i) the assumption that domains are perfectly sliced and that domain labels are available; ii) the assumption that annotations from the target domain should be treated the same as those from the source domain; iii) the assumption that samples from the target domain are always accessible. This thesis addresses the facts that true domain labels are hard to obtain, that target-domain labels can be exploited in better ways, and that in reality the target domain is often time-sensitive. In terms of problem settings, the thesis covers the following practically valuable scenarios: unsupervised multi-source domain adaptation, semi-supervised domain adaptation, and online domain adaptation. Three completed works are reviewed, one for each problem setting.

The first work proposes an adversarial learning strategy that learns a dynamic curriculum over source samples to maximize the utility of source labels from multiple domains. The model iteratively learns which domains or samples are best suited for aligning to the target. The intuition is to force the adversarial agent to constantly re-measure the transferability of latent domains over time in order to adversarially raise the error rate of the domain discriminator. The method removes the need for domain labels, yet outperforms other methods on four well-known benchmarks by significant margins.

The second work addresses the problem that current methods do not use target supervision effectively because they treat source and target supervision without distinction. The work points out that labeled target data needs to be distinguished from the source, and proposes to explicitly decompose the task into two sub-tasks: a semi-supervised learning task within the target domain and an unsupervised domain adaptation task across domains. By doing so, the two sub-tasks can better leverage their corresponding supervision and thus yield very different classifiers.

The third work is proposed in the context of online privacy, i.e., each online sample of the target domain is permanently deleted after being processed. The proposed framework utilizes labels from public data and predicts on unlabeled, sensitive private data. To tackle the inevitable distribution shift from public data to private data, the work proposes a novel domain adaptation algorithm that directly targets the fundamental challenge of this online setting: the lack of diverse source-target data pairs.
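To make the first contribution concrete, below is a minimal, hypothetical sketch of an adversarial curriculum over source samples: a weighting network assigns per-sample transferability weights that are trained to raise the error of a source-vs-target domain discriminator, so no domain labels are needed. This is not the thesis implementation; module names, dimensions, losses, and the weight normalization are illustrative assumptions written against PyTorch.

```python
# Illustrative sketch only; all module names, sizes, and losses are assumptions.
import torch
import torch.nn as nn

feat = nn.Sequential(nn.Linear(256, 128), nn.ReLU())          # shared feature extractor
disc = nn.Linear(128, 1)                                      # domain discriminator: source vs. target
weight_net = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())   # per-sample curriculum weights

bce = nn.BCEWithLogitsLoss(reduction="none")

def curriculum_step(x_src, x_tgt):
    f_src, f_tgt = feat(x_src), feat(x_tgt)
    d_src = disc(f_src).squeeze(1)                 # source logits (domain label 1)
    d_tgt = disc(f_tgt).squeeze(1)                 # target logits (domain label 0)
    err_src = bce(d_src, torch.ones_like(d_src))   # per-sample discriminator error on source

    w = weight_net(f_src).squeeze(1)
    w = w / (w.sum() + 1e-8)                       # normalize so the curriculum redistributes mass

    # Discriminator objective: minimize its (weighted) domain-classification error.
    loss_disc = (w.detach() * err_src).sum() + bce(d_tgt, torch.zeros_like(d_tgt)).mean()

    # Adversarial curriculum: choose weights that *raise* the discriminator's error,
    # i.e. up-weight source samples that already look target-like.
    loss_weights = -(w * err_src.detach()).sum()
    return loss_disc, loss_weights
```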
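For the second contribution, the sketch below shows one plausible reading of the decomposition into two sub-tasks on a shared backbone: a semi-supervised head trained on labeled plus pseudo-labeled target data, and an unsupervised-adaptation head trained on labeled source data with entropy minimization on unlabeled target data. The specific losses (confidence-thresholded pseudo-labeling, entropy minimization) and all names are assumptions, not the method proposed in the thesis.

```python
# Illustrative sketch only; heads, losses, and dimensions are placeholder assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

backbone = nn.Sequential(nn.Linear(256, 128), nn.ReLU())
clf_ssl = nn.Linear(128, 10)   # semi-supervised head: labeled + unlabeled target
clf_uda = nn.Linear(128, 10)   # unsupervised-DA head: labeled source + unlabeled target

def decomposed_losses(x_src, y_src, x_tgt_lab, y_tgt_lab, x_tgt_unl, tau=0.95):
    f_src, f_tl, f_tu = backbone(x_src), backbone(x_tgt_lab), backbone(x_tgt_unl)

    # Sub-task 1: semi-supervised learning inside the target domain.
    loss_ssl = F.cross_entropy(clf_ssl(f_tl), y_tgt_lab)
    probs = F.softmax(clf_ssl(f_tu).detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf.ge(tau).float()                    # keep only confident pseudo-labels
    loss_ssl = loss_ssl + (F.cross_entropy(clf_ssl(f_tu), pseudo, reduction="none") * mask).mean()

    # Sub-task 2: unsupervised adaptation across domains.
    loss_uda = F.cross_entropy(clf_uda(f_src), y_src)
    p_tu = F.softmax(clf_uda(f_tu), dim=1)
    loss_uda = loss_uda - (p_tu * torch.log(p_tu + 1e-8)).sum(dim=1).mean() * -1.0 if False else \
               loss_uda + (-(p_tu * torch.log(p_tu + 1e-8)).sum(dim=1)).mean()
    return loss_ssl, loss_uda
```

The two heads see different supervision and therefore tend to produce very different classifiers, which is the point of the decomposition.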
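For the third contribution, a minimal sketch of the online constraint is given below: each private batch is adapted on and predicted exactly once, then discarded, so no raw target data is retained. The entropy-minimization update is a stand-in illustration in the spirit of test-time adaptation, not the algorithm proposed in the thesis; `model`, `optimizer`, and the batch stream are assumed to exist and to be pretrained on the labeled public data.

```python
# Illustrative sketch only; the adaptation loss and the surrounding setup are assumptions.
import torch
import torch.nn.functional as F

def online_private_stream(model, optimizer, private_batches):
    predictions = []
    for x_private in private_batches:               # each private batch is available only once
        logits = model(x_private)
        p = F.softmax(logits, dim=1)
        loss = (-(p * torch.log(p + 1e-8)).sum(dim=1)).mean()   # prediction entropy
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()                            # lightweight on-the-fly adaptation
        predictions.append(logits.argmax(dim=1).detach())
        del x_private                               # raw private data is never retained
    return torch.cat(predictions)
```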
dc.identifier: https://doi.org/10.13016/vul8-xdbk
dc.identifier.uri: http://hdl.handle.net/1903/29541
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: distribution shift
dc.subject.pquncontrolled: domain adaptation
dc.subject.pquncontrolled: online learning
dc.subject.pquncontrolled: robust learning
dc.subject.pquncontrolled: semi-supervised learning
dc.subject.pquncontrolled: unsupervised learning
dc.title: Practical Robust Learning Under Domain Shifts
dc.type: Dissertation

Files

Original bundle
Name: Yang_umd_0117E_22791.pdf
Size: 17.73 MB
Format: Adobe Portable Document Format