Scaling Parallel Full-graph GNN Training to Thousands of GPUs

dc.contributor.advisor: Bhatele, Abhinav
dc.contributor.author: Ranjan, Aditya Kishore
dc.contributor.department: Computer Science
dc.contributor.publisher: Digital Repository at the University of Maryland
dc.contributor.publisher: University of Maryland (College Park, Md.)
dc.date.accessioned: 2025-08-08T12:36:26Z
dc.date.issued: 2025
dc.description.abstract: Graph neural networks (GNNs) have emerged as a potent class of neural networks capable of leveraging the connectivity and structure of real-world graphs to learn intricate properties and relationships between nodes. Many real-world graphs exceed the memory capacity of a single GPU due to their sheer size, and using GNNs on them requires techniques such as mini-batch sampling to scale. However, sampling can reduce accuracy in some cases, and sampling and data transfer from the CPU to the GPU can also slow down training. On the other hand, distributed full-graph training suffers from high communication overhead and load imbalance due to the irregular structure of graphs. In this thesis, we propose Plexus, a three-dimensional (3D) parallel approach for full-graph training that tackles these issues and scales to billion-edge graphs. Additionally, we introduce performance optimizations, including a permutation scheme for load balancing and a performance model that predicts the optimal 3D configuration. Plexus is evaluated on several graph datasets, and scaling results are shown for up to 2048 A100 GPUs on Perlmutter, which is 33% of the supercomputer, and 1024 MI250X GPUs on Frontier. Plexus achieves unprecedented speedups of 2.3-12.5x over existing methods and reduces the time to solution by 5.2-8.7x on Perlmutter and 7-54.2x on Frontier.
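The 3D decomposition named in the abstract can be pictured as blocking the neighborhood aggregation A @ H (the sparse-times-dense product that dominates each GNN layer) over a p1 x p2 x p3 process grid. The sketch below is a minimal single-process NumPy simulation of that idea, not the thesis's distributed implementation: the grid shape, block layout, and the function name aggregate_3d are illustrative assumptions.

import numpy as np

def aggregate_3d(A, H, p1, p2, p3):
    # Single-process simulation of a 3D-blocked product out = A @ H.
    # Each (i, j, k) triple plays the role of one GPU in a p1 x p2 x p3 grid:
    # it owns adjacency block A[i, j] and feature block H[j, k].
    N, F = H.shape
    assert N % p1 == 0 and N % p2 == 0 and F % p3 == 0
    rb, cb, fb = N // p1, N // p2, F // p3  # block sizes along each grid axis

    out = np.zeros((N, F))
    for i in range(p1):
        for k in range(p3):
            # The j-loop stands in for the reduction across the second grid
            # dimension that sums partial products from p2 GPUs.
            partial = np.zeros((rb, fb))
            for j in range(p2):
                A_blk = A[i*rb:(i+1)*rb, j*cb:(j+1)*cb]
                H_blk = H[j*cb:(j+1)*cb, k*fb:(k+1)*fb]
                partial += A_blk @ H_blk
            out[i*rb:(i+1)*rb, k*fb:(k+1)*fb] = partial
    return out

# Sanity check against the direct product on a small random graph.
rng = np.random.default_rng(0)
N, F = 12, 8
A = (rng.random((N, N)) < 0.3).astype(float)  # dense stand-in for a sparse adjacency
H = rng.standard_normal((N, F))
assert np.allclose(aggregate_3d(A, H, 2, 3, 2), A @ H)

In a real distributed run, the inner j-loop corresponds to a reduction (e.g., an all-reduce) across one dimension of the process grid; that collective is the communication overhead the abstract refers to, and the choice of (p1, p2, p3) trades it off against load balance, which is what a performance model for the optimal 3D configuration would predict.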
dc.identifier: https://doi.org/10.13016/qj2o-3qka
dc.identifier.uri: http://hdl.handle.net/1903/34393
dc.language.iso: en
dc.subject.pqcontrolled: Computer science
dc.subject.pquncontrolled: Graph Neural Networks
dc.subject.pquncontrolled: High Performance Computing
dc.subject.pquncontrolled: Machine Learning
dc.subject.pquncontrolled: Parallel Computing
dc.subject.pquncontrolled: Performance
dc.subject.pquncontrolled: Scalability
dc.title: Scaling Parallel Full-graph GNN Training to Thousands of GPUs
dc.type: Thesis

Files

Original bundle

Name: Ranjan_umd_0117N_25277.pdf
Size: 576.86 KB
Format: Adobe Portable Document Format