[2021 EuroSys] NextDoor: Accelerating graph sampling for graph machine learning using GPUs
In Graph Neural Network (GNN) training, existing approaches use CPUs to sample the graph before using GPUs to train the GNN, but sampling is a major overhead (up to 62% of training time). Nextdoor uses GPUs to accelerate graph sampling by up to 4x, and its main contributions are:
Simple abstractions & API to express diverse graph sampling algorithms
A new "transit parallel" approach to increase the parallelism of graph sampling
Optimizations (load balancing & caching) to improve GPU utilization
Takeaways from Shivaram's group meeting after discussing this paper include:
In evaluations, report absolute values in addition to relative numbers
Parallelization works very well on GPUs; other CPU-bound tasks may similarly be identified and restructured into highly parallel GPU tasks
Nextdoor introduces an abstraction/API that "bounds" graph sampling algorithms so that they can be properly parallelized
GNNs map the vertices of an input graph to embeddings in an N-dimensional space, such that the similarity between two vertices' embeddings indicates their similarity in the network
The embeddings are then used for many downstream tasks (e.g., product recommendation, clustering)
There are two types of GNNs, and this work focuses on the first one (they're more common):
Sampling-based GNNs sample the input graph and train on these samples
Whole-Graph-based GNNs train on the whole input graph directly
Workflow of Sampling-based GNNs: First, a graph sampling algorithm is used to sample the input graph, and the samples are then used for data parallel training
Currently, most implementations use CPUs for sampling, because the implementation is easier
Level of parallelism should be high (a naive one-thread-per-sample mapping yields far too few threads to saturate a GPU)
Accesses to the global memory should be coalesced and aligned
Shared memory and registers for each SM can be used as software-managed cache
Avoid warp divergence
Currently, graph sampling is done on CPUs because of the ease of implementation. Nextdoor attempts to provide both easy-to-implement and fast graph sampling.
Input to Nextdoor
A graph
An initial set of samples, each with one or more root vertices
User-defined functions to describe the algorithm
Output of Nextdoor: An expanded set of samples
Nextdoor abstractions:
A graph sampling application runs for k steps
At each step i,
A transit vertex for i is a vertex whose neighbors may be added to the sample
Sample mi of those neighbors
There are two types of sampling:
Individual transit sampling: Sample mi neighbors per-transit-node
Collective transit sampling: Sample mi neighbors per-sample
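The abstraction above can be made concrete with a small sketch. This is a hypothetical Python rendering of the user-defined functions for a 3-hop uniform random walk under individual transit sampling (the real NextDoor API is in C++/CUDA; all names here — `steps`, `sample_size`, `next_vertex`, `expand_samples` — are illustrative):

```python
import random

def steps():
    """k: number of sampling steps (here, a 3-hop walk)."""
    return 3

def sample_size(step):
    """m_i: neighbors sampled per transit vertex at step i (1 for a walk)."""
    return 1

def next_vertex(transit, neighbors, step):
    """User-defined function: pick one neighbor of the transit vertex."""
    return random.choice(neighbors)

def expand_samples(graph, roots):
    """Reference (sequential) semantics of individual transit sampling."""
    samples = []
    for root in roots:
        sample, transits = [root], [root]
        for step in range(steps()):
            added = []
            for t in transits:                      # each transit contributes m_i picks
                for _ in range(sample_size(step)):
                    added.append(next_vertex(t, graph[t], step))
            sample.extend(added)
            transits = added                        # this step's picks become next transits
        samples.append(sample)
    return samples
```

Collective transit sampling would differ only in that `next_vertex` would be invoked per sample rather than per transit vertex.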
Status quo: One thread for each sample -> poor parallelism. How can we increase the parallelism?
Sample parallel: In each thread, one neighbor of a transit vertex is sampled, and samples are assigned to consecutive threads
The parallelism is better
However, sample parallelism suffers from irregularity: accesses to global memory are random, and shared memory/registers cannot be used as caches
Transit parallel: Assign samples with common transits to consecutive threads
A GroupBy operation is needed to invert the sample-transit mapping to a transit-sample mapping
Here, consecutive threads access edges of the same transit vertices. Therefore, the global memory accesses are coalesced, and shared memory/registers can be used for caches
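The GroupBy inversion can be sketched as follows. This is a hypothetical Python illustration (the real system does this on the GPU): it turns each sample's list of transit vertices into a transit-to-samples map, so that all samples sharing a transit can be handled by consecutive threads that read the same adjacency list.

```python
from collections import defaultdict

def group_by_transit(sample_transits):
    """Invert a sample -> transits mapping into transit -> samples.

    sample_transits[s] lists the current transit vertices of sample s.
    """
    transit_to_samples = defaultdict(list)
    for sample_id, transits in enumerate(sample_transits):
        for transit in transits:
            transit_to_samples[transit].append(sample_id)
    return dict(transit_to_samples)
```

For example, if samples 0 and 1 both have transit 7, they end up adjacent in transit 7's list and are assigned to consecutive threads.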
The Nextdoor API exposes three levels of parallelism
Each transit is mapped to a threadblock
Each sample is assigned to a group of mi threads at step i
Each thread samples one neighbor
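The three-level mapping can be sketched with thread-id arithmetic. This is a hypothetical Python model (mirroring what `blockIdx`/`threadIdx` arithmetic would do on a real GPU; `thread_assignment` is an illustrative name): one "threadblock" per transit, a group of m_i threads per sample containing that transit, and one sampled neighbor per thread.

```python
def thread_assignment(transit_to_samples, m_i):
    """Return a (transit, sample, neighbor_slot) tuple per logical thread."""
    assignment = []
    for transit, samples in sorted(transit_to_samples.items()):
        for sample in samples:              # a group of m_i threads per sample
            for slot in range(m_i):         # one sampled neighbor per thread
                assignment.append((transit, sample, slot))
    return assignment
```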
Nextdoor chooses different kernel types, caching strategies, neighbor access strategies, and transit scheduling strategies for each transit vertex, based on the number of neighbors to be sampled from it; this helps best utilize memory and compute resources
The original paper also included some microbenchmarks of speedups of SP, TP, and the overhead of the GroupBy operation, etc.