Interpretable Deep Learning for Toxicity Prediction
Abstract
Team TOXIC (“Understanding Computational Toxicology”) seeks to apply interpretability techniques to machine learning models that predict drug safety. Such models have achieved reasonable accuracy and are used in industry for drug development. However, because they are not sufficiently grounded in chemical knowledge, they are not widely used in regulatory processes. To contribute toward a solution, we evaluate existing explanation methods for toxicity prediction models trained on open-source data sets. Additionally, we are working toward models that use more interpretable data representations. Ultimately, we hope to demonstrate a proof of concept for an interpretable drug-safety prediction model that can illustrate its reasoning.