Optimization of Linked List Prefix Computations on Multithreaded GPUs Using CUDA

dc.contributor.author: Wei, Zheng
dc.contributor.author: JaJa, Joseph
dc.date.accessioned: 2010-07-29T03:44:58Z
dc.date.available: 2010-07-29T03:44:58Z
dc.date.issued: 2010-07-13
dc.description.abstract: We present a number of optimization techniques to compute prefix sums on linked lists and implement them on multithreaded GPUs using CUDA. Prefix computations on linked structures in general involve highly irregular, fine-grain memory accesses that are typical of many computations on linked lists, trees, and graphs. While the current generation of GPUs provides substantial computational power and extremely high memory bandwidth, they may appear at first to be geared primarily toward streamed, highly data-parallel computations. In this paper, we introduce an optimized multithreaded GPU algorithm for prefix computations based on a randomization process that reduces the problem to a large number of fine-grain computations. We map these fine-grain computations onto multithreaded GPUs in such a way that the processing cost per element is shown to be close to the best possible. Our experimental results show scalability for list sizes ranging from 1M to 256M nodes and significantly improve on recently published parallel implementations of list ranking, including implementations on the Cell processor, the MTA-8, and the NVIDIA GeForce 200 series. They also compare favorably to the performance of the best known CUDA algorithm for the scan operation on the Tesla C1060.
dc.identifier.uri: http://hdl.handle.net/1903/10600
dc.language.iso: en_US
dc.relation.ispartofseries: UMIACS; UMIACS-TR-2010-08
dc.title: Optimization of Linked List Prefix Computations on Multithreaded GPUs Using CUDA
dc.type: Technical Report
