Reliability of Machine Learning Models in the Real World

Date

2023

Abstract

Neural networks have consistently demonstrated exceptional performance across a wide range of applications. Yet their deployment in adversarial settings remains limited by concerns about reliability. In this work, we first explore methods to verify a model's reliability in diverse scenarios, including classification, detection, auctions, and watermarking. We then discuss the challenges and limitations of these verification techniques in real-world settings and suggest potential remedies. We conclude by examining the reliability of neural networks through the lens of the model's implicit bias.

Our initial research investigated three settings where the reliability of deep learning models is critical: object detection, deep auctions, and model watermarking. We found that, without rigorous verification, these systems are vulnerable to accidents, manipulation of auction mechanisms, and intellectual property theft. To counteract this, we introduced verification algorithms tailored to each scenario.
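The verification algorithms above are tailored to their respective settings. As a generic illustration of what such a certificate computes, the following is a minimal sketch of interval bound propagation (IBP) for a toy ReLU classifier. IBP is a standard certification technique and stands in here for the scenario-specific algorithms described above; the network shape, input, and perturbation radius are all hypothetical.

```python
# Illustrative sketch only: certifying a toy ReLU classifier with interval
# bound propagation (IBP). Not the scenario-specific algorithms of this work.
import numpy as np

def forward(weights, biases, x):
    """Plain forward pass through a ReLU network given as weight/bias lists."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(W @ x + b, 0.0)
    return weights[-1] @ x + biases[-1]

def affine_bounds(W, b, lo, hi):
    """Propagate elementwise input bounds [lo, hi] through y = W @ x + b."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    mid = W @ center + b
    rad = np.abs(W) @ radius
    return mid - rad, mid + rad

def certify(weights, biases, x, eps):
    """Sound but incomplete check that the predicted class of x cannot change
    under any L-infinity perturbation of radius eps."""
    pred = int(np.argmax(forward(weights, biases, x)))
    lo, hi = x - eps, x + eps
    for W, b in zip(weights[:-1], biases[:-1]):
        lo, hi = affine_bounds(W, b, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = affine_bounds(weights[-1], biases[-1], lo, hi)
    # Certified iff the predicted logit's lower bound beats every other
    # logit's upper bound over the whole perturbation ball.
    return all(lo[pred] > hi[j] for j in range(len(hi)) if j != pred)

# Toy usage with random weights; a real certificate would use a trained model.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(3, 8))]
biases = [rng.normal(size=8), rng.normal(size=3)]
x = rng.normal(size=4)
print(certify(weights, biases, x, eps=0.01))
```

Because the bounds are sound, a returned certificate is a guarantee within the stated threat model; because they are loose, the check may fail to certify inputs that are in fact robust.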

However, while certificates affirm the resilience of our models within a predefined threat model, they do not guarantee real-world infallibility. Hence, in the subsequent section, we explored strategies for improving a model's adaptability to domain shifts. While pyramid adversarial training is effective at improving reliability under domain shift, it is computationally intensive. In response, we devised an alternative technique, universal pyramid adversarial training, which offers comparable benefits while being 30-70% more efficient (a rough sketch of the idea follows below). Finally, we try to understand the inherent non-robustness of neural networks through the lens of the model's implicit bias. Surprisingly, we found that the generalization ability of deep learning models comes almost entirely from the architecture rather than from the optimizer, contrary to common belief. This architectural bias may be a crucial factor in explaining the inherent non-robustness of neural networks.
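As referenced above, here is a minimal sketch of the amortization idea behind universal pyramid adversarial training, under stated assumptions: a single multi-scale ("pyramid") perturbation shared across the whole dataset is updated once per batch, rather than being recomputed from scratch for every example as in standard pyramid adversarial training. The scales, step sizes, clipping, and clean-plus-adversarial loss below are illustrative choices, not the exact procedure of this work.

```python
# Sketch of the universal-perturbation idea; hyperparameters are hypothetical.
import torch
import torch.nn.functional as F

def make_pyramid(image_size, scales=(1, 4, 16), device="cpu"):
    # One learnable perturbation per resolution level, shared by all examples.
    return [torch.zeros(1, 3, image_size // s, image_size // s,
                        device=device, requires_grad=True) for s in scales]

def render(pyramid, image_size):
    # Upsample each level to full resolution and sum into one perturbation.
    return sum(F.interpolate(p, size=(image_size, image_size),
                             mode="bilinear", align_corners=False)
               for p in pyramid)

def train_step(model, opt, x, y, pyramid, eps=8 / 255, step=1 / 255):
    size = x.shape[-1]
    # Adversary: one ascent step on the shared pyramid perturbation.
    delta = render(pyramid, size)
    adv_loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    grads = torch.autograd.grad(adv_loss, pyramid)
    with torch.no_grad():
        for p, g in zip(pyramid, grads):
            p.add_(step * g.sign()).clamp_(-eps, eps)
    # Learner: descend on clean loss plus loss under the shared perturbation.
    delta = render(pyramid, size).detach()
    loss = (F.cross_entropy(model(x), y)
            + F.cross_entropy(model((x + delta).clamp(0, 1)), y))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The efficiency gain in this sketch comes from reusing the shared perturbation across batches, so the adversary needs only one gradient step per batch instead of a full multi-step attack per example.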

Looking ahead, we intend to probe more deeply into how the implicit biases of neural networks give rise to their fragility. Moreover, we posit that refining these implicit biases could offer avenues for enhancing model reliability.
