A Framework for Benchmarking Graph-Based Artificial Intelligence

dc.contributor.advisor: Regli, William C.
dc.contributor.author: O'Sullivan, Kent Daniel
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2024-07-02T05:44:41Z
dc.date.available: 2024-07-02T05:44:41Z
dc.date.issued: 2024
dc.description.abstract: Graph-based Artificial Intelligence (GraphAI) encompasses AI problems formulated using graphs, operating on graphs, or relying on graph structures for learning. Contemporary Artificial Intelligence (AI) research explores how structured knowledge from graphs can enhance existing approaches to meet the real world’s demands for transparency, explainability, and performance. Characterizing GraphAI performance is challenging because different combinations of graph abstractions, representations, algorithms, and hardware acceleration techniques can trigger unpredictable changes in efficiency. Although benchmarks enable testing different GraphAI implementations, most cannot currently capture the complex interaction between effectiveness and efficiency, especially across dynamic knowledge graphs. This work proposes an empirical ‘grey-box’ approach to GraphAI benchmarking, providing a method that enables experimentally trading between effectiveness and efficiency across different combinations of graph abstractions, representations, algorithms, and hardware accelerators. A systematic literature review yields a taxonomy of GraphAI tasks and a collection of intelligence and security problems that interact with GraphAI. The taxonomy and problem survey guide the development of a framework that fuses empirical computer science with constraint theory in an approach to benchmarking that does not require invasive workload analyses or code instrumentation. We formalize a methodology for developing problem-centric GraphAI benchmarks and develop a tool to create graphs from OpenStreetMap data to fill a gap in the real-world mesh graph datasets required as benchmark inputs. Finally, this work provides a completed benchmark for the Population Segmentation intelligence and security problem, developed using the GraphAI benchmark problem development methodology. It provides experimental results that validate the utility of the GraphAI benchmark framework for evaluating whether, how, and when GraphAI acceleration should be applied to the population segmentation problem.
dc.identifier: https://doi.org/10.13016/gwgs-7jgd
dc.identifier.uri: http://hdl.handle.net/1903/33055
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: Artificial Intelligence
dc.subject.pquncontrolled: Benchmark
dc.subject.pquncontrolled: Graph
dc.subject.pquncontrolled: Intelligence and Security
dc.title: A Framework for Benchmarking Graph-Based Artificial Intelligence
dc.type: Thesis
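
The abstract describes a tool for deriving real-world mesh graphs from OpenStreetMap data to serve as benchmark inputs. As a rough illustration only, and not the tool developed in the thesis, the following Python sketch shows one common way such a road-network graph can be built, assuming the osmnx and networkx libraries and a hypothetical place name and output file:

# Illustrative sketch only: builds a road-network graph from OpenStreetMap
# data using osmnx. This is an assumed workflow for producing a mesh-style
# benchmark input, not the thesis's actual tool.
import osmnx as ox
import networkx as nx

# Download the drivable street network for a place; nodes are intersections,
# edges are road segments, giving a real-world mesh-like graph.
# "College Park, Maryland, USA" is a hypothetical example location.
G = ox.graph_from_place("College Park, Maryland, USA", network_type="drive")

# Collapse the directed multigraph to an undirected simple graph for
# algorithms and accelerators that expect one.
G_simple = nx.Graph(G.to_undirected())

print(f"nodes={G_simple.number_of_nodes()}, edges={G_simple.number_of_edges()}")

# Persist in a plain edge-list format so different GraphAI implementations
# (CPU, GPU, or graph-database backends) can load the same benchmark input.
nx.write_edgelist(G_simple, "college_park_road_graph.edgelist", data=False)

Exporting to a framework-neutral edge list keeps the benchmark input independent of any particular graph representation or hardware backend, in line with the abstract's goal of comparing GraphAI implementations on common inputs.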

Files

Original bundle

Name: OSullivan_umd_0117N_24337.pdf
Size: 2.36 MB
Format: Adobe Portable Document Format