Theses and Dissertations from UMD

Permanent URI for this community: http://hdl.handle.net/1903/2

New submissions to the thesis/dissertation collections are added automatically as they are received from the Graduate School. Currently, the Graduate School deposits all theses and dissertations from a given semester after the official graduation date. This means that there may be up to a 4-month delay in the appearance of a given thesis/dissertation in DRUM.

More information is available at Theses and Dissertations at University of Maryland Libraries.

Search Results

Now showing 1 - 1 of 1
  • Item
    A Framework for Benchmarking Graph-Based Artificial Intelligence
    (2024) O'Sullivan, Kent Daniel; Regli, William C; Computer Science; Digital Repository at the University of Maryland; University of Maryland (College Park, Md.)
    Graph-based Artificial Intelligence (GraphAI) encompasses AI problems formulated using graphs, operating on graphs, or relying on graph structures for learning. Contemporary Artificial Intelligence (AI) research explores how structured knowledge from graphs can enhance existing approaches to meet the real world’s demands for transparency, explainability, and performance. Characterizing GraphAI performance is challenging because different combinations of graph abstractions, representations, algorithms, and hardware acceleration techniques can trigger unpredictable changes in efficiency. Although benchmarks enable testing different GraphAI implementations, most cannot currently capture the complex interaction between effectiveness and efficiency, especially across dynamic knowledge graphs. This work proposes an empirical ‘grey-box’ approach to GraphAI benchmarking, providing a method that enables experimentally trading off effectiveness against efficiency across different combinations of graph abstractions, representations, algorithms, and hardware accelerators. A systematic literature review yields a taxonomy of GraphAI tasks and a collection of intelligence and security problems that interact with GraphAI. The taxonomy and problem survey guide the development of a framework that fuses empirical computer science with constraint theory in an approach to benchmarking that does not require invasive workload analyses or code instrumentation. We formalize a methodology for developing problem-centric GraphAI benchmarks and develop a tool to create graphs from OpenStreetMap data to fill a gap in real-world mesh graph datasets required for benchmark inputs. Finally, this work provides a completed benchmark for the Population Segmentation Intelligence and Security problem developed using the GraphAI benchmark problem development methodology. It provides experimental results that validate the utility of the GraphAI benchmark framework for evaluating whether, how, and when GraphAI acceleration should be applied to the population segmentation problem.
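
    The abstract mentions deriving real-world mesh graphs from OpenStreetMap data and benchmarking GraphAI workloads without instrumenting their code. The sketch below is purely illustrative and is not the dissertation's tool or methodology: it assumes the osmnx and networkx libraries and a hypothetical place name, builds a road-network graph from OpenStreetMap, and records an end-to-end wall-clock timing for one community-detection pass (one common, generic way to segment a graph's nodes).

    ```python
    # Illustrative sketch only; not the dissertation's framework or OSM tool.
    # Assumes osmnx and networkx are installed and network access is available.
    import time

    import networkx as nx
    import osmnx as ox

    # Build a road-network mesh graph from OpenStreetMap for an example place.
    # (The place name is a hypothetical input, not one taken from the dissertation.)
    G = ox.graph_from_place("College Park, Maryland, USA", network_type="drive")
    G_simple = nx.Graph(G)  # collapse the MultiDiGraph to a simple undirected graph

    # Time one candidate segmentation-style computation end to end,
    # without modifying or instrumenting the algorithm's code.
    start = time.perf_counter()
    communities = nx.algorithms.community.greedy_modularity_communities(G_simple)
    elapsed = time.perf_counter() - start

    print(f"nodes={G_simple.number_of_nodes()} edges={G_simple.number_of_edges()}")
    print(f"communities found: {len(communities)}  wall-clock time: {elapsed:.2f}s")
    ```

    Repeating such timings across different graph representations or hardware back ends is one simple way to compare efficiency for a fixed task, though the dissertation's framework formalizes this far more rigorously.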