
Investigating the Application of Interpretability Techniques to Computational Toxicology

dc.contributor.author      Banerjee, Aranya
dc.contributor.author      Boby, Kevin
dc.contributor.author      Lam, Samuel
dc.contributor.author      Polefrone, David
dc.contributor.author      San, Robert
dc.contributor.author      Schlunk, Erika
dc.contributor.author      Wynn, Sean
dc.contributor.author      Yancey, Colin
dc.description             Gemstone Team TOXIC	en_US
dc.description.abstract    A barrier to the incorporation of predictive models into drug design lies in their lack of interpretability. To this end, we examine the interpretability of benchmark models for the 2014 Tox21 Data Challenge, an initiative in the domain whose dataset comprises measurements across twelve toxicity experiments, on three fronts. First, regarding existing measures of model performance, we assess the ability of the current benchmark metrics to describe model behavior and recommend an alternative set of metrics for the task. Second, regarding existing interpretability methods for machine learning models, we quantitatively and qualitatively evaluate their application to this domain by measuring desirable properties of the explanations they produce. Third, we incorporate a recently described method for partial charge prediction as a novel input to a toxicological model and observe the resulting model performance and interpretability.	en_US
dc.subject                 Gemstone Team TOXIC	en_US
dc.title                   Investigating the Application of Interpretability Techniques to Computational Toxicology	en_US
dc.relation.isAvailableAt  Digital Repository at the University of Maryland
dc.relation.isAvailableAt  Gemstone Program, University of Maryland (College Park, Md)
