Investigating the Application of Interpretability Techniques to Computational Toxicology

Abstract

A barrier to the adoption of predictive models in drug design is their lack of interpretability. To this end, we examine, on three fronts, the interpretability of benchmark models for the 2014 Tox21 Data Challenge, an initiative in computational toxicology whose dataset comprises measurements across twelve toxicity assays. First, we assess the ability of the current benchmark metrics to describe model behavior and recommend an alternative set of metrics for the task. Second, we quantitatively and qualitatively evaluate the application of existing interpretability methods for machine learning models to this domain by measuring desirable properties of the explanations they produce. Third, we incorporate a recently described method for partial charge prediction as a novel input to a toxicological model and observe the resulting effects on model performance and interpretability.

Notes

Gemstone Team TOXIC
