SWARMGRAPH: Analyzing Large-Scale In-Memory Graphs on GPUs
December 12, 2020

Graph computation has attracted significant attention since much real-world data comes in the form of graphs. Conventional graph platforms are designed to process a single large graph at a time, while the simultaneous processing of many in-memory graphs, each small enough to fit into device memory, has been largely ignored. In fact, such a computation framework is needed because in-memory graphs are ubiquitous in many applications, e.g., code graphs, paper citation networks, chemical compound graphs, and biological graphs. In this paper, we design SWARMGRAPH, the first large-scale in-memory graph computation framework for Graphics Processing Units (GPUs). To accelerate single-graph computation, SWARMGRAPH introduces two new techniques: memory-aware data placement and kernel invocation reduction. These techniques leverage the fast memory components of the GPU, remove expensive global memory synchronization, and reduce costly kernel invocations. To rapidly compute many graphs, SWARMGRAPH pipelines graph loading and computation. Evaluations on large-scale real-world and synthetic graphs show that SWARMGRAPH outperforms state-of-the-art CPU- and GPU-based graph frameworks by orders of magnitude.