RELIABLE MACHINE LEARNING: ROBUSTNESS, CALIBRATION, AND REPRODUCIBILITY

Date

2021

Abstract

Modern machine learning (ML) algorithms are being applied to a rapidly increasing number of tasks that affect the lives and well-being of people across the globe. Despite the successes of artificial intelligence (AI), these methods are not always reliable and are in fact often quite brittle. A wide range of recent ML algorithms have been shown to be vulnerable to adversarial attacks and to be over-confident even when their predictions are wrong. In this dissertation, we focus on the overall goal of making machine learning algorithms more reliable in terms of adversarial robustness, confidence calibration, and reproducibility.

In the first part of the thesis, we explore novel approaches to improving the adversarial robustness of deep neural networks. We present a method that combines feature regularization with attention-based feature prioritization to encourage the model to learn and rely only on robust features, i.e., features that are not manipulated by adversarial perturbations. We show that the resulting model is significantly more robust than models trained with existing methods.
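
A minimal sketch of the general idea, not the dissertation's exact loss: a training objective that adds an attention-weighted feature-regularization term to the cross-entropy loss, penalizing features that shift under adversarial perturbation. All names and tensor shapes here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def robust_feature_loss(logits_adv, labels, feat_clean, feat_adv, attn, reg_weight=1.0):
    """Cross-entropy on adversarial logits plus an attention-weighted L2 penalty
    that discourages features from changing under adversarial perturbation."""
    ce = F.cross_entropy(logits_adv, labels)
    # attn: per-feature importance weights in [0, 1], same shape as the features
    feat_penalty = (attn * (feat_adv - feat_clean) ** 2).mean()
    return ce + reg_weight * feat_penalty

# Toy usage with random tensors standing in for a network's outputs.
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
f_clean, f_adv = torch.randn(8, 64), torch.randn(8, 64)
attn = torch.sigmoid(torch.randn(8, 64))
loss = robust_feature_loss(logits, labels, f_clean, f_adv, attn)
```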

In the second part of the thesis, we discover that the current practice of training with one-hot labels under the cross-entropy loss is a major cause of the over-confident behavior of deep neural networks. We propose a generalized definition of confidence calibration that requires the entire output distribution to be calibrated. This leads to a novel label smoothing algorithm, called class-similarity-based label smoothing, which approximates a target distribution that is optimal for generalized confidence calibration. We show that a model trained with the proposed smoothed labels is significantly better calibrated than models trained with existing methods.
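
To make the idea concrete, here is a minimal sketch, under assumptions, of similarity-driven label smoothing: instead of spreading the smoothing mass uniformly over the other classes, each class's mass is distributed in proportion to its similarity to them. The similarity matrix below is random; the dissertation derives it from the data, and the exact formulation may differ.

```python
import torch

def class_similarity_smoothing(labels, similarity, eps=0.1):
    """labels: (N,) int class ids; similarity: (C, C) nonnegative, zero diagonal."""
    num_classes = similarity.size(0)
    one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
    sim = similarity / similarity.sum(dim=1, keepdim=True)  # row-normalize
    return (1.0 - eps) * one_hot + eps * sim[labels]

labels = torch.tensor([0, 2, 1])
similarity = torch.rand(4, 4)
similarity.fill_diagonal_(0.0)
targets = class_similarity_smoothing(labels, similarity)
# Each row of `targets` sums to 1 and places the smoothing mass on similar classes.
```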

In the third part of the thesis, we propose an approach that improves the calibration of robust models. We first learn a representation space using prototypical learning, which bases classification on the distances between a sample's representation and the representation of each class prototype. We then use this distance information to train a confidence prediction network that encourages the model to make calibrated predictions. We demonstrate through extensive experiments that our method improves calibration while maintaining comparable accuracy and adversarial robustness.
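
A minimal sketch of this setup, with illustrative names rather than the thesis code: classification scores come from (negative) distances to learned class prototypes, and a small confidence head maps the distance profile to a confidence score that can be trained for calibration.

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, feat_dim, num_classes):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Confidence head takes the vector of distances to every prototype.
        self.conf_head = nn.Sequential(nn.Linear(num_classes, 32), nn.ReLU(),
                                       nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, feats):
        # Squared Euclidean distance from each sample to each class prototype.
        dists = torch.cdist(feats, self.prototypes) ** 2
        logits = -dists                      # closer prototype -> higher score
        confidence = self.conf_head(dists)   # predicted confidence in [0, 1]
        return logits, confidence

model = PrototypeClassifier(feat_dim=64, num_classes=10)
logits, conf = model(torch.randn(8, 64))
```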

In the fourth part of the thesis, we tackle the problem of extracting reproducible, large-scale functional patterns for the whole brain from a group of fMRI subjects. Because of the non-linear nature of the signals and significant inter-subject variability, reliably extracting patterns that are reproducible across subjects is challenging. We propose a group-level model, called LEICA, that uses Laplacian eigenmaps as its main data-reduction step, preserving the correlation information in the original data as well as possible in a rigorous sense. The resulting nonlinear map is robust to noise in the data and to inter-subject variability. We show that LEICA detects functionally cohesive maps that are substantially more reproducible than those produced by state-of-the-art methods.
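
The following is a minimal sketch of the Laplacian eigenmaps data-reduction idea only, not the full LEICA group-level pipeline; it assumes a toy (time points x voxels) matrix, builds an affinity graph from voxel-wise correlations, and embeds the voxels with the bottom eigenvectors of the normalized graph Laplacian.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
data = rng.standard_normal((200, 50))      # 200 time points, 50 voxels (toy data)

corr = np.corrcoef(data.T)                  # voxel-by-voxel correlation matrix
affinity = np.clip(corr, 0.0, None)         # keep nonnegative edge weights
np.fill_diagonal(affinity, 0.0)

deg = affinity.sum(axis=1)
d_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-12))
laplacian = np.eye(len(deg)) - d_inv_sqrt @ affinity @ d_inv_sqrt  # normalized Laplacian

# The smallest nontrivial eigenvectors give a low-dimensional embedding of the voxels.
eigvals, eigvecs = eigh(laplacian)
embedding = eigvecs[:, 1:6]                 # drop the first (trivial) eigenvector
```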
