[2019 SOSP] PipeDream: Generalized Pipeline Parallelism for DNN Training

One-line Summary

PipeDream uses pipeline parallelism, which combines intra-batch parallelism with inter-batch parallelization, to reduce the communication overhead in DNN training.

Paper Structure Outline

  1. Introduction

  2. Background & Related Work

    1. DNN Training

  3. Parallel Training in PipeDream

    1. Pipeline Parallelism

    2. Partitioning Layers Across Machines

    3. Work Scheduling

    4. Effective Learning

    5. GPU Memory Management

  4. Implementation

  5. Evaluation

    1. Experimental Setup

    2. PipeDream vs. Data Parallelism

    3. Value of Data Parallelism in stages

  6. Conclusion

Background & Motivation

Existing parallelization techniques (data, model, hybrid) use intra-batch parallelization, where each iteration of the optimization algorithm is parallelized across a set of workers. Data parallelism incurs high communication overhead at large scale, while vanilla model parallelism under-utilizes system resources.

Design and Implementation

Pipeline parallelism

With pipeline parallelism, multiple minibatches are injected into the pipeline one after another (Fig. 4).

Pipeline parallelism is superior for two reasons:

  1. Pipelining communicates less: In data parallelism, gradients/weights for the entire model are aggregated across all workers every iteration. PipeDream instead only communicates the intermediate output activations and input gradients between consecutive stages on different workers, peer-to-peer, which is much less data for large models.

  2. Pipelining overlaps computation and communication: Asynchronous communication of the intermediate activations and gradients results in significant overlap of communication with the computation of a subsequent minibatch (Fig. 5). A toy sketch of this overlap is shown below.
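As a toy illustration (not PipeDream's actual implementation), the sketch below uses two threads and a queue to stand in for two pipeline stages connected by an asynchronous peer-to-peer link: stage 0 hands off minibatch k's activations and immediately starts computing minibatch k+1, while stage 1 consumes earlier activations concurrently.

```python
# Toy illustration only (not PipeDream's code): two threads stand in for two
# pipeline stages, and a queue stands in for an asynchronous peer-to-peer
# link between them.
import threading
import queue
import time

NUM_MINIBATCHES = 4
activations = queue.Queue()  # inter-stage link (stand-in for an async send)

def stage0():
    for mb in range(NUM_MINIBATCHES):
        time.sleep(0.1)          # "compute" the forward pass for minibatch mb
        activations.put(mb)      # hand off activations without waiting on stage 1
        print(f"stage 0 finished forward pass for minibatch {mb}")

def stage1():
    for _ in range(NUM_MINIBATCHES):
        mb = activations.get()   # receive activations from stage 0
        time.sleep(0.1)          # "compute" the forward pass for minibatch mb
        print(f"stage 1 finished forward pass for minibatch {mb}")

t0 = threading.Thread(target=stage0)
t1 = threading.Thread(target=stage1)
t0.start(); t1.start()
t0.join(); t1.join()
```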

Shenanigan 1: Work/model partitioning

PipeDream partitions the training of a model into stages in a pipeline. Ideally, each stage has roughly the same throughput; otherwise, the slowest stage becomes a bottleneck and the pipeline suffers from load imbalance. The communication between stages should also be minimized to improve the overall throughput. To produce such a balanced pipeline, PipeDream's optimizer first does a short profiling run (~1000 minibatches) and records, for each layer:

  1. The total computation time across the forward & backward passes

  2. The size of the output activations & input gradients

  3. The size of the weight parameters

Its partitioning algorithm then minimizes the time taken by the slowest stage and outputs {a partitioning of layers into stages, the number of workers for each stage, and the optimal number of in-flight minibatches to keep the training pipeline busy}. A simplified sketch of this partitioning step follows.
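The sketch below is a heavily simplified version of that flavor of partitioning (not the paper's actual optimizer): it splits profiled per-layer compute times into contiguous stages so that the slowest stage is as fast as possible, via dynamic programming. It ignores communication costs and stage replication, which the real optimizer also models, and the per-layer times are hypothetical.

```python
# Simplified sketch: contiguous layer partitioning that minimizes the
# slowest stage's compute time (ignores communication cost & replication).
def partition(layer_times, num_stages):
    n = len(layer_times)
    prefix = [0.0] * (n + 1)
    for i, t in enumerate(layer_times):
        prefix[i + 1] = prefix[i] + t
    INF = float("inf")
    # dp[s][i]: best achievable max-stage-time using s stages for layers [0, i)
    dp = [[INF] * (n + 1) for _ in range(num_stages + 1)]
    cut = [[0] * (n + 1) for _ in range(num_stages + 1)]
    dp[0][0] = 0.0
    for s in range(1, num_stages + 1):
        for i in range(1, n + 1):
            for j in range(s - 1, i):          # last stage covers layers [j, i)
                cost = max(dp[s - 1][j], prefix[i] - prefix[j])
                if cost < dp[s][i]:
                    dp[s][i], cut[s][i] = cost, j
    # Recover the stage boundaries by walking the cut points backwards.
    bounds, i = [], n
    for s in range(num_stages, 0, -1):
        j = cut[s][i]
        bounds.append((j, i))
        i = j
    return dp[num_stages][n], list(reversed(bounds))

# Hypothetical profiled per-layer compute times (ms) from a short profiling run:
print(partition([4.0, 2.0, 3.0, 7.0, 1.0, 5.0], num_stages=3))
```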

Shenanigan 2: Work scheduling

Each stage must decide whether to perform a forward pass or a backward pass for a minibatch. During the startup phase, PipeDream admits enough minibatches to keep the pipeline full in steady state. Afterwards, each stage alternates between a forward pass for one minibatch and a backward pass for a different, earlier minibatch (the 1F1B schedule). For stages replicated with data parallelism, a round-robin variant (1F1B-RR) is used so that a minibatch's backward pass runs on the same worker as its forward pass. A minimal per-stage sketch is shown below.
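As a minimal sketch (assuming one worker per stage and no stage replication, so no round-robin is needed), the schedule for each stage can be generated as a few warm-up forward passes to fill the pipeline, followed by a strict alternation of one backward pass and one forward pass:

```python
# Minimal sketch of a per-stage 1F1B schedule (one worker per stage assumed).
def one_f_one_b(stage_id, num_stages, num_minibatches):
    warmup = num_stages - stage_id      # minibatches in flight at this stage
    schedule, next_fwd, next_bwd = [], 0, 0
    # Startup phase: admit enough forward passes to keep the pipeline full.
    for _ in range(min(warmup, num_minibatches)):
        schedule.append(("F", next_fwd)); next_fwd += 1
    # Steady state: alternate one backward pass with one forward pass.
    while next_bwd < num_minibatches:
        schedule.append(("B", next_bwd)); next_bwd += 1
        if next_fwd < num_minibatches:
            schedule.append(("F", next_fwd)); next_fwd += 1
    return schedule

for stage in range(3):
    print(f"stage {stage}:", one_f_one_b(stage, num_stages=3, num_minibatches=6))
```

The last stage alternates F and B immediately, while earlier stages do more warm-up forwards, matching the pipeline fill shown in the paper's figures.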

Shenanigan 3: Weight version mismatch

Naive pipelining leads to weight version mismatches. For example, in Fig. 4, minibatch 5's forward pass on stage 1 is performed after the weight update from minibatch 1's backward pass, whereas its backward pass is performed after the updates from minibatches 2-4 have also been applied. This discrepancy results in invalid gradients and may prevent the model from converging. PipeDream's workaround is weight stashing: each stage stores multiple versions of its weights, one for each active minibatch, and uses the version from a minibatch's forward pass for that minibatch's backward pass. Even with the extra memory footprint of the stashed versions, PipeDream's peak per-worker memory usage is on par with data parallelism. A sketch of the idea follows.
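A minimal sketch of the weight stashing idea (not PipeDream's code; the class and method names here are made up for illustration): each stage records the exact weight version used for a minibatch's forward pass and reuses it for that minibatch's backward pass, even if newer updates have been applied in between.

```python
# Sketch of weight stashing on a single pipeline stage (illustrative only).
import copy

class StageWithWeightStashing:
    def __init__(self, weights):
        self.weights = weights   # latest weights, updated by backward passes
        self.stash = {}          # minibatch id -> weight version used in its forward pass

    def forward(self, minibatch_id, inputs):
        # Stash the weight version this minibatch sees in its forward pass.
        self.stash[minibatch_id] = copy.deepcopy(self.weights)
        return self._forward_compute(self.weights, inputs)

    def backward(self, minibatch_id, grad_output):
        # Use the same weight version as the forward pass, then update.
        stashed = self.stash.pop(minibatch_id)
        grads = self._backward_compute(stashed, grad_output)
        self._apply_update(grads)
        return grads

    # Placeholders: a real stage would run the layer's forward/backward
    # kernels and an optimizer step here.
    def _forward_compute(self, weights, inputs): ...
    def _backward_compute(self, weights, grad_output): ...
    def _apply_update(self, grads): ...
```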

Evaluation

Links

Paper PDF
Presentation video at DATA + AI Summit Europe
Presentation video at SOSP '19
PipeDream on GitHub