Show simple item record

dc.contributor.advisor  Memon, Atif  en_US
dc.contributor.author  Robbins, Bryan Thomas  en_US
dc.date.accessioned  2016-06-22T05:50:29Z
dc.date.available  2016-06-22T05:50:29Z
dc.date.issued  2016  en_US
dc.identifier  doi:10.13016/M2K189
dc.identifier.uri  http://hdl.handle.net/1903/18233
dc.description.abstract  Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These "infeasible" test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier.
To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.  en_US
dc.language.iso  en  en_US
dc.title  A Binary Classifier for Test Case Feasibility Applied to Automatically Generated Tests of Event-Driven Software  en_US
dc.type  Dissertation  en_US
dc.contributor.publisher  Digital Repository at the University of Maryland  en_US
dc.contributor.publisher  University of Maryland (College Park, Md.)  en_US
dc.contributor.department  Computer Science  en_US
dc.subject.pqcontrolled  Computer science  en_US
dc.subject.pquncontrolled  Automation  en_US
dc.subject.pquncontrolled  Graphical User Interfaces  en_US
dc.subject.pquncontrolled  Logistic Regression  en_US
dc.subject.pquncontrolled  Software Engineering  en_US
dc.subject.pquncontrolled  Software Testing  en_US
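The abstract describes a supervised logistic regression classifier trained on features derived from the event IDs of generated test cases. A minimal sketch of that idea is below; it is not the dissertation's actual feature set or pipeline. The event-ID dependency used to label the toy data (event e2 is only feasible after e1) is a hypothetical assumption for illustration, and the classifier is implemented as plain batch gradient descent rather than any specific library the author may have used.

```python
import math

# Hypothetical toy data: each test case is encoded by the binary presence
# of three event IDs (e1, e2, e3). A label of 1 marks an infeasible test
# case; here a case is infeasible when e2 appears without e1 (an assumed
# dependency, purely for illustration).
X = [[1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 1, 1], [1, 1, 1], [0, 0, 1]]
y = [0, 0, 1, 1, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.5, epochs=3000):
    """Fit a logistic regression model by batch gradient descent."""
    n, m = len(X[0]), len(X)
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * n, 0.0
        for xi, yi in zip(X, y):
            # Prediction error for this example drives the gradient.
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            gw = [g + err * xj for g, xj in zip(gw, xi)]
            gb += err
        # Average the gradients over the training suite and step.
        w = [wj - lr * g / m for wj, g in zip(w, gw)]
        b -= lr * gb / m
    return w, b

def predict(w, b, x):
    """Binary feasibility verdict: 1 = infeasible, 0 = feasible."""
    return int(sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b) >= 0.5)

w, b = train(X, y)
preds = [predict(w, b, xi) for xi in X]
```

Because the toy labels are linearly separable in the event-presence features, the trained classifier recovers them exactly; on real MBT output, feasibility labels for the training suite would come from actually executing the generated test cases.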

