SQL to Parallel Execution: The 47ms Journey
Same 10 algorithms, different scales
Whether it's 5GB in SQLite or 5TB in BigQuery, it's still hash joins and sort-merge joins under the hood.
The Challenge
Process 5.5TB of music streaming data to find the top 10 most-played songs since 2024.
The Query
SELECT s.title, s.artist, COUNT(*) AS plays
FROM songs s
JOIN listens l ON s.song_id = l.song_id
WHERE l.timestamp > '2024-01-01'
GROUP BY s.title, s.artist
ORDER BY plays DESC
LIMIT 10;
The Journey: From Text to Results in 47ms
What You're Seeing
Phase 1: Parsing (0-5ms)
The SQL text is tokenized and parsed into an Abstract Syntax Tree (AST). The query optimizer analyzes the AST to determine the most efficient execution strategy based on statistics about table sizes, indexes, and data distribution.
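As a toy illustration of the tokenizing step (this is a sketch, not how any real engine's parser is written), the SQL text can be split into keywords, identifiers, and literals before being grouped into AST nodes:

```python
import re

# Toy tokenizer sketch: quoted literals, dotted identifiers like s.title,
# bare words, then single-character operators/punctuation.
TOKEN_RE = re.compile(r"'[^']*'|\w+\.\w+|\w+|[(),;*<>=]")

def tokenize(sql):
    return TOKEN_RE.findall(sql)

query = "SELECT s.title FROM songs s WHERE l.timestamp > '2024-01-01'"
tokens = tokenize(query)
# A parser would then group these tokens into AST nodes such as
# Select(columns=[...], from_=..., where=Comparison(...)).
```

A real parser also validates grammar and resolves table/column names against the catalog; this sketch only shows the token stream the AST is built from.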
Phase 2: Tree Building (5-12ms)
The logical plan transforms into a physical execution tree. Each node represents a specific algorithmic operation. The tree structure determines the order of operations and data flow. Notice how LIMIT sits at the top: the engine can stop processing once it has 10 results.
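One common way to run such a tree is the pull-based iterator ("volcano") model, an assumption here rather than any specific engine's design: each operator pulls rows from its child, so a LIMIT at the top naturally stops the pipeline early.

```python
# Minimal operator-tree sketch using Python generators.
def scan(rows):
    # Leaf operator: produce rows one at a time.
    yield from rows

def filter_(child, pred):
    # Pass through only rows matching the predicate.
    return (r for r in child if pred(r))

def limit(child, n):
    # Stop pulling from the child after n rows: early termination.
    for i, row in enumerate(child):
        if i >= n:
            return
        yield row

rows = [{"plays": p} for p in range(100)]
top = list(limit(filter_(scan(rows), lambda r: r["plays"] % 2 == 0), 10))
# Only enough input is consumed to produce the first 10 matching rows.
```

Because `limit` returns after 10 rows, the upstream operators never touch the rest of the input, which is exactly why LIMIT at the top of the tree is cheap.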
Phase 3: Distribution (12-15ms)
The execution tree is decomposed into tasks that can run in parallel. Hash partitioning ensures that related data (same song_id for joins, same grouping keys) ends up on the same machine. This minimizes network traffic during execution.
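Hash partitioning can be sketched in a few lines (the MD5-based scheme below is an illustrative assumption; real systems use faster hash functions, but the idea is the same): the same key always hashes to the same machine.

```python
import hashlib

def partition(key, num_machines=100):
    # Stable hash of the partitioning key. Every row with the same
    # song_id maps to the same machine, so the join needs no further
    # network shuffling during execution.
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % num_machines

# A song row and all of its listen rows land on the same partition:
assert partition("song_42") == partition("song_42")
```

The same trick applies to the grouping keys, so each machine can compute its COUNT(*) aggregates locally.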
Phase 4: Parallel Execution (15-45ms)
100 machines work simultaneously, each processing its partition of the data:
- Green: Join operations matching songs with listens
- Amber: GROUP BY and aggregation counting plays
- Purple: Sorting and filtering for top results
Each machine processes ~55GB of data using the same algorithms we'll learn in CS145, just distributed across multiple nodes.
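The per-machine work above can be sketched as a hash join plus local aggregation (a simplified sketch with made-up sample rows, not any engine's actual code): build a hash table on the smaller side (songs), probe it with listens, filter by timestamp, and count plays.

```python
from collections import Counter

# One machine's partition of the data (illustrative sample rows).
songs = {1: ("Song A", "Artist X"), 2: ("Song B", "Artist Y")}
listens = [
    {"song_id": 1, "timestamp": "2024-03-01"},
    {"song_id": 1, "timestamp": "2024-05-02"},
    {"song_id": 2, "timestamp": "2023-12-31"},  # filtered out by WHERE
]

plays = Counter()
for l in listens:
    if l["timestamp"] > "2024-01-01":      # WHERE l.timestamp > '2024-01-01'
        key = songs.get(l["song_id"])      # hash-join probe into songs
        if key is not None:
            plays[key] += 1                # GROUP BY ... COUNT(*)

local_top10 = plays.most_common(10)        # this machine's partial result
```

Each machine runs exactly this loop over its own ~55GB slice and ships only its small `local_top10` list to the coordinator.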
Phase 5: Result Assembly (45-47ms)
Partial results from all machines converge to a coordinator node. The coordinator performs a final merge-sort to identify the global top 10 songs from the local top results of each machine.
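The final merge is tiny compared to the scan: because grouping keys were hash-partitioned, each song's total count lives on exactly one machine, so the coordinator only has to pick the 10 largest of at most 100 × 10 candidates. A sketch with made-up partial results:

```python
import heapq

# Each machine sends its local top songs (title, total plays). Hash
# partitioning by grouping key means no song appears on two machines,
# so no re-aggregation is needed at the coordinator.
local_results = [
    [("Song A", 900), ("Song B", 40)],   # machine 1's local top songs
    [("Song C", 500), ("Song D", 7)],    # machine 2's local top songs
]

candidates = [row for machine in local_results for row in machine]
global_top = heapq.nlargest(10, candidates, key=lambda r: r[1])
```

`heapq.nlargest` is an O(n log k) selection over at most ~1,000 rows, which is why this phase takes only a couple of milliseconds.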
Scale Reference
Comparison: Single Machine vs Distributed
| Aspect | Single Machine | 100 Machines |
| --- | --- | --- |
| Data Size | 55GB | 5.5TB |
| Execution Time | ~4.7 seconds | 47ms |
| Parallelism | 8 cores | 800 cores |
| Bottleneck | CPU/Memory | Coordination |
| Complexity | Simple | Distributed protocols |
Real-World Applications
This same pattern powers:
- Google Search: Scanning billions of pages in milliseconds
- Facebook Feed: Aggregating posts from millions of friends
- Netflix Recommendations: Processing the viewing history of 200M users
- Uber Matching: Finding the nearest drivers among millions of locations
- Credit Card Fraud: Analyzing transactions in real time
The journey from a simple SELECT * FROM songs to processing petabytes at Google or Meta builds on these same foundations.