Security Enhancement and Bias Mitigation for Emerging Sensing and Learning Systems

Date

2021

Abstract

Artificial intelligence (AI) has been applied to a growing range of practical tasks in recent years, facilitating many aspects of our daily lives. With AI-based sensing and learning systems, we can enjoy services such as automated decision making, computer-assisted medical diagnosis, and health monitoring. Because these algorithms have entered human society and influence our daily lives, important issues such as intellectual property protection, access control, privacy protection, and fairness/equity should be considered alongside predictive performance when the algorithms are developed. In this dissertation, we improve the design of emerging AI-based sensing and learning systems from the security and fairness perspectives.

The first part addresses the security protection of deep neural networks (DNNs). DNNs have become an emerging form of intellectual property for model owners and should be protected from unauthorized access and piracy to encourage healthy business investment and competition. Taking advantage of the intrinsic mechanisms of DNNs, we propose a novel framework that provides access control to trained DNNs, so that only authorized users can utilize them properly, thereby preventing piracy and illicit usage.
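To make the access-control idea concrete, below is a minimal sketch of one plausible key-based scheme, in which the protected model is trained only on inputs transformed with a secret key, so queries without the key yield degraded accuracy. The permutation transform and the function names here are illustrative assumptions, not the exact framework proposed in the dissertation.

```python
import numpy as np

# Hypothetical key-based access control: the owner trains the network on
# inputs permuted with a secret key, so raw (unauthorized) inputs fall
# outside the distribution the model performs well on. This is an
# illustrative scheme, not the dissertation's actual framework.

def make_key(seed: int, num_pixels: int) -> np.ndarray:
    """Derive a secret pixel-permutation key from a seed."""
    rng = np.random.default_rng(seed)
    return rng.permutation(num_pixels)

def authorize(image: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Apply the secret permutation; only key holders can produce
    inputs the protected model was trained to handle."""
    flat = image.reshape(-1)
    return flat[key].reshape(image.shape)

# Usage: the owner trains on authorize(x, key); an authorized user applies
# the same transform at inference time, while a pirate querying the model
# with raw images sees near-random accuracy.
key = make_key(seed=42, num_pixels=28 * 28)
x = np.random.rand(28, 28).astype(np.float32)
x_authorized = authorize(x, key)
```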

The second part addresses privacy protection in facial videos. Remote photoplethysmography (rPPG) can extract a person's physiological signal whenever his/her face is captured by a video camera, which raises privacy concerns from two aspects. First, an individual's health conditions may be unintentionally revealed from a facial recording without his/her explicit consent. To address this physiological-privacy issue, we develop PulseEdit, a novel and efficient algorithm that edits the physiological signals in facial videos without affecting their visual appearance, protecting the person's physiological signal from disclosure. Second, the research and development of rPPG technology also risks leaking identity privacy: developing rPPG algorithms usually requires public benchmark facial datasets, yet facial videos are highly sensitive and carry a high risk of identity leakage. We therefore develop an anonymization transform that removes the sensitive visual information identifying an individual while preserving the physiological information needed for rPPG analysis.
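For intuition, the toy sketch below illustrates the rPPG signal model that both PulseEdit and the anonymization transform build on: the heartbeat weakly modulates skin color, so spatially averaging a facial region per frame yields a pulse-like trace, and small per-frame edits can rewrite that trace without visible change. The perturbation rule here is a simplified placeholder, not the actual PulseEdit optimization.

```python
import numpy as np

# Toy rPPG signal model: spatially averaging the green channel of a facial
# region per frame yields a pulse-like temporal trace. A PulseEdit-style
# defense perturbs frames by amounts small enough to be imperceptible yet
# large enough to rewrite this trace; the per-frame offset below is an
# illustrative placeholder, not the dissertation's optimization.

def rppg_trace(frames: np.ndarray) -> np.ndarray:
    """frames: (T, H, W, 3) uint8 video of an (assumed) face region;
    returns the per-frame spatial mean of the green channel."""
    return frames[..., 1].mean(axis=(1, 2))

def perturb_pulse(frames: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Shift each frame's green channel so the extracted trace follows
    `target` (e.g., a flat line that hides the true pulse)."""
    current = rppg_trace(frames)
    delta = (target - current)[:, None, None]  # one offset per frame
    edited = frames.astype(np.float32)
    edited[..., 1] = np.clip(edited[..., 1] + delta, 0, 255)
    return edited.astype(np.uint8)

# Usage: flatten the trace so no heartbeat can be recovered downstream.
video = np.random.randint(0, 256, (100, 64, 64, 3), dtype=np.uint8)
flat_target = np.full(100, rppg_trace(video).mean())
protected = perturb_pulse(video, flat_target)
```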

In the last part, we investigate fairness in machine learning inference. Various fairness definitions have been proposed in prior art to ensure that decisions guided by machine learning models are equitable. Unfortunately, a "fair" model trained under these definitions is sensitive to the decision threshold: the fairness condition no longer holds once the threshold is tuned. To this end, we introduce the notion of threshold-invariant fairness, which enforces equitable performance across different groups independent of the decision threshold.
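The key observation is that if the prediction-score distributions of different groups match, then threshold-based fairness metrics hold at every operating point. Below is a hedged sketch of one way to encode this during training, via a differentiable histogram-matching regularizer; the exact loss used in the dissertation may differ.

```python
import torch

# Sketch of threshold-invariant fairness: penalize the mismatch between the
# two groups' score distributions so fairness holds at *every* decision
# threshold, not just the one used during training. The soft-histogram
# penalty is an illustrative regularizer, not the dissertation's exact loss.

def soft_histogram(scores: torch.Tensor, bins: int = 20,
                   bandwidth: float = 0.05) -> torch.Tensor:
    """Differentiable histogram of scores in [0, 1] via Gaussian kernels."""
    centers = torch.linspace(0.0, 1.0, bins, device=scores.device)
    weights = torch.exp(-0.5 * ((scores[:, None] - centers[None, :]) / bandwidth) ** 2)
    hist = weights.sum(dim=0)
    return hist / hist.sum()

def threshold_invariant_penalty(scores_a: torch.Tensor,
                                scores_b: torch.Tensor) -> torch.Tensor:
    """Squared distance between the two groups' score distributions."""
    return torch.sum((soft_histogram(scores_a) - soft_histogram(scores_b)) ** 2)

# Usage inside a training loop (lambda_fair is a tunable weight):
#   loss = task_loss + lambda_fair * threshold_invariant_penalty(s_a, s_b)
```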
