Monotasks: Architecting for Performance Clarity in Data Analytics Frameworks

One-line Summary

Instead of breaking data analytics jobs into tasks that each pipeline many resources, the jobs are broken into monotasks, each of which uses a single resource. This makes it easier for users to reason about performance bottlenecks in data analytics frameworks while retaining performance.

Paper Structure Outline

  1. INTRODUCTION

  2. BACKGROUND

    1. Architecture of data analytics frameworks

    2. The challenge of reasoning about performance

  3. MONOTASKS ARCHITECTURE

    1. Design

    2. How are multitasks decomposed into monotasks?

    3. Scheduling monotasks on each worker

    4. How many multitasks should be assigned concurrently to each machine?

    5. How is memory access regulated?

  4. IMPLEMENTATION

  5. MONOTASKS PERFORMANCE

    1. Experimental setup

    2. Does getting rid of fine-grained pipelining hurt performance?

    3. When is MonoSpark slower than Spark?

    4. When is MonoSpark faster than Spark?

  6. REASONING ABOUT PERFORMANCE

    1. Modeling performance

    2. Predicting runtime on different hardware

    3. Predicting runtime with deserialized data

    4. Predicting with both hardware and software changes

    5. Understanding bottlenecks

    6. Can this model be used for Spark?

  7. LEVERAGING CLARITY: AUTO-CONFIGURATION

  8. LIMITATIONS AND OPPORTUNITIES

  9. RELATED WORK

  10. CONCLUSION

Background & Motivation

In current data analytics frameworks, it is very difficult for users to reason about the performance of their workloads, which in turn makes optimization difficult. The challenges of reasoning about performance include:

  1. Tasks have non-uniform resource use

  2. Concurrent tasks on a machine may contend

  3. Resource use occurs outside the control of the analytics framework (it is handled by the OS, e.g., the buffer cache for disk writes)

Design and Implementation

The traditional fine-grained pipelining used in today's tasks (multitasks) is replaced with statistical multiplexing across monotasks that each use a single resource. The decomposition of multitasks into monotasks is done internally by the framework without changing the existing API (a sketch follows below). To resolve the aforementioned issues, the monotasks design follows these principles:

  1. Each monotask uses one resource

  2. Monotasks execute in isolation

  3. Per-resource schedulers control contention

  4. Per-resource schedulers have complete control over each resource

In this paper, the authors present MonoSpark, which is essentially Apache Spark re-architected around the above design principles.
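
As an illustration of the decomposition (a hypothetical sketch of mine, not the authors' code; the class and step names are made up), a single multitask such as a map task that reads a block from disk, applies a function, and writes its output could become a small DAG of single-resource monotasks:

```scala
// Hypothetical sketch: one "multitask" (read -> compute -> write) decomposed
// into single-resource monotasks with explicit dependencies.
object Resource extends Enumeration {
  val Cpu, Disk, Network = Value
}

// A monotask uses exactly one resource and declares its dependencies.
case class Monotask(name: String, resource: Resource.Value, deps: Seq[Monotask] = Nil)

object DecompositionSketch {
  def main(args: Array[String]): Unit = {
    // A traditional map task would pipeline all three steps internally;
    // here each step is its own monotask, so per-resource time is visible.
    val readBlock   = Monotask("read input block", Resource.Disk)
    val computeMap  = Monotask("deserialize + apply map function", Resource.Cpu, Seq(readBlock))
    val writeOutput = Monotask("write output data", Resource.Disk, Seq(computeMap))

    // The local DAG scheduler submits a monotask to its resource's scheduler
    // once all of its dependencies have completed.
    Seq(readBlock, computeMap, writeOutput).foreach { m =>
      println(s"${m.name} -> ${m.resource}, after: ${m.deps.map(_.name).mkString(", ")}")
    }
  }
}
```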

On each worker, monotasks are scheduled using two layers of schedulers.

  • Top-level scheduler (local DAG scheduler): Manages the DAG of monotasks for each multitask. It tracks each monotask's dependencies and, once they all complete, submits the monotask to the scheduler for the resource it needs.

  • Low-level schedulers (dedicated per-resource schedulers for CPU, disk, and network): These are written at the application level rather than inside the OS, so resource use is not perfectly controlled.

When more monotasks are waiting for a resource than can run concurrently, the extra monotasks are queued. The queues implement round-robin over monotasks from different phases of the multitask DAG.
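
Below is a rough sketch (mine, not the spark-monotasks implementation; the ResourceScheduler class and its interface are hypothetical) of a per-resource scheduler that bounds concurrency and round-robins across per-phase queues:

```scala
import scala.collection.mutable

// Hypothetical per-resource scheduler: limits concurrency to what the resource
// supports (e.g., one outstanding request per disk) and round-robins across
// queues keyed by the phase of the multitask DAG each monotask belongs to.
class ResourceScheduler(maxConcurrency: Int) {
  private val queues = mutable.LinkedHashMap.empty[String, mutable.Queue[Runnable]]
  private var running = 0

  def submit(phase: String, work: Runnable): Unit = synchronized {
    queues.getOrElseUpdate(phase, mutable.Queue.empty[Runnable]).enqueue(work)
    maybeLaunch()
  }

  private def taskFinished(): Unit = synchronized { running -= 1; maybeLaunch() }

  // Launch queued monotasks while capacity remains, cycling over phases so
  // that no single phase of the DAG monopolizes the resource.
  private def maybeLaunch(): Unit = {
    while (running < maxConcurrency && queues.values.exists(_.nonEmpty)) {
      val (phase, queue) = queues.find { case (_, q) => q.nonEmpty }.get
      val work = queue.dequeue()
      queues.remove(phase); queues.put(phase, queue) // move this phase to the back
      running += 1
      new Thread(() => { try work.run() finally taskFinished() }).start()
    }
  }
}
```

In the real system there would be one such scheduler per resource (CPU, each disk, network), fed by the local DAG scheduler described above.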

The MonoSpark job scheduler works like the Spark job scheduler, except that it assigns more concurrent multitasks to each machine to improve resource utilization.

MonoSpark is compatible with Spark's public API: an existing application written against Spark can switch to MonoSpark with only a modification to its build file.
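
For example, with sbt the switch could be as small as changing one dependency line; the coordinates below are hypothetical placeholders for whatever a locally built spark-monotasks publishes, not real Maven artifacts:

```scala
// build.sbt (sketch)
// Before: stock Spark
// libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.2"

// After: a locally published MonoSpark build (hypothetical coordinates)
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.2-monotasks-SNAPSHOT"
```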

Evaluation

Three benchmark workloads are used: Sort, Big Data Benchmark, and Machine Learning.

When is MonoSpark slower than Spark?

  1. When a workload is not broken into sufficiently many multitasks: MonoSpark replaces fine-grained pipelining within each task with statistical multiplexing across monotasks, so with too few multitasks the pipelining becomes too coarse and performance suffers.

  2. Disk writes: In disk monotasks, all writes are flushed to disk to ensure that future disk monotasks get dedicated use of the disk, whereas Spark writes data to the OS buffer cache and does not force it to disk.

Why is MonoSpark faster than Spark in some cases?

  1. Per-resource schedulers control contention, which results in higher disk bandwidth for workloads that run on hard disk drives by avoiding unnecessary seeks.

  2. Per-resource schedulers allow monotasks to fully utilize the bottleneck resource without unnecessary contention.

Reasoning about performance

Because each monotask uses exactly one resource and runs in isolation, per-resource monotask runtimes can be summed and composed into a simple model of a job's ideal completion time, which is bounded by the most heavily used (bottleneck) resource. The authors use this model to predict runtimes on different hardware, with deserialized data, and under combined hardware and software changes, and to understand bottlenecks.
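
A minimal sketch of this kind of bottleneck model (my own simplification with made-up numbers, not the paper's exact equations):

```scala
// Hypothetical sketch of a monotask-based performance model: because every
// monotask uses exactly one resource, per-resource monotask times can be
// summed and divided by that resource's concurrency to get an ideal time;
// the job's predicted runtime is the ideal time of the bottleneck resource.
object PerformanceModelSketch {
  def main(args: Array[String]): Unit = {
    // Total seconds of monotask work measured per resource (made-up numbers).
    val totalWorkSeconds = Map("cpu" -> 3200.0, "disk" -> 1800.0, "network" -> 600.0)
    // Units of each resource a worker can use in parallel (made-up numbers).
    val concurrency = Map("cpu" -> 8, "disk" -> 2, "network" -> 1)

    val idealTime = totalWorkSeconds.map { case (r, work) => r -> work / concurrency(r) }
    val (bottleneck, predictedRuntime) = idealTime.maxBy(_._2)

    idealTime.foreach { case (r, t) => println(f"$r%-8s ideal time: $t%.0f s") }
    println(f"predicted runtime ~ $predictedRuntime%.0f s (bottleneck: $bottleneck)")
    // To predict runtime on different hardware, change the concurrency map
    // (e.g., more disks) and recompute; the measured monotask times are reused.
  }
}
```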

Links

  • Paper PDF
  • Presentation slides @ SOSP '17
  • Slides for CS34702 @ U Chicago
  • spark-monotasks on GitHub

Figure captions:

  • Notice how in query 1c, MonoSpark is 9% slower.
  • Predicting runtime on different hardware.
  • Predicting with both hardware and software changes: 4x more machines -> 10x improvement predicted with at most 23% error.
  • Monotasks schedulers automatically select ideal concurrency because they have better control.