Data Centers: The Backbone of Modern Databases

Concept. Latency in a modern data center spans eight orders of magnitude, from an L1 cache hit at 1 ns to a cross-continent round trip at 100 ms, and an algorithm's cost depends entirely on which tier its data lives in.

Intuition. A SELECT against a 10-billion-row Listens table doesn't run on one machine. It runs in a data center the size of a Costco, and the cost of every read depends on which tier the data lives in: L1 cache (1 ns), RAM (100 ns), SSD (100 µs), or a cross-continent network hop (100 ms). Eight orders of magnitude.

Latency Tiers: Eight Orders of Magnitude

Inside one data center, the same byte can be a nanosecond away or a hundred milliseconds away depending on where it lives. Every algorithm in this course is implicitly priced against this hierarchy.

Tier               Latency    Distance
L1 cache           ~1 ns      on-chip
RAM                ~100 ns    on-board
SSD                ~100 µs    same rack
Spinning disk      ~10 ms     same rack
Network (same DC)  ~500 µs    aisle away
Cross-region       ~50 ms     continent away
Cross-continent    ~100 ms    other side of Earth

That is eight orders of magnitude between the fastest and slowest read. An algorithm that touches the wrong tier pays up to 100,000,000× the price.
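The table above turns into a quick back-of-envelope calculator. A minimal sketch: the latencies are the approximate figures from the table, and the one-million-access workload is a hypothetical example, not a measurement.

```python
# Approximate per-access latencies from the table above, in nanoseconds.
TIER_LATENCY_NS = {
    "L1 cache":          1,            # ~1 ns
    "RAM":               100,          # ~100 ns
    "SSD":               100_000,      # ~100 µs
    "Network (same DC)": 500_000,      # ~500 µs
    "Spinning disk":     10_000_000,   # ~10 ms
    "Cross-region":      50_000_000,   # ~50 ms
    "Cross-continent":   100_000_000,  # ~100 ms
}

def total_time_seconds(accesses: int, tier: str) -> float:
    """Total time for `accesses` dependent (serial) round trips to one tier."""
    return accesses * TIER_LATENCY_NS[tier] / 1e9

# One million dependent lookups, priced against each tier.
for tier, ns in TIER_LATENCY_NS.items():
    print(f"{tier:18s} {total_time_seconds(1_000_000, tier):>14,.3f} s")
```

The same million lookups cost a millisecond in L1 and more than a day across continents, which is the whole argument of this section in one loop.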

Why a Database Course Cares

Every cost model we will use, every page-count argument, every join algorithm, is shaped by this hierarchy. A query plan is, in the end, a routing decision: which tier do you read from, how often, and in what order. Get the tier right and a billion-row query finishes in seconds. Get it wrong and the same query never finishes.
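A toy page-count model makes the "routing decision" concrete. The row width and page size below are illustrative assumptions, and charging one tier latency per page deliberately ignores bandwidth and caching; even so, the tier gap dominates everything else.

```python
# Toy page-count cost model: the same billion-row scan priced against
# two tiers. Row width and page size are assumed values for illustration.
ROWS       = 1_000_000_000
ROW_BYTES  = 100                              # assumed average row width
PAGE_BYTES = 8192                             # assumed page size
PAGES      = ROWS * ROW_BYTES // PAGE_BYTES   # ~12.2 million pages

RAM_LATENCY_S  = 100e-9   # ~100 ns per access
DISK_LATENCY_S = 10e-3    # ~10 ms per random page read on spinning disk

ram_scan  = PAGES * RAM_LATENCY_S    # ~1.2 seconds
disk_scan = PAGES * DISK_LATENCY_S   # ~34 hours of random I/O

print(f"pages:           {PAGES:,}")
print(f"in RAM:          {ram_scan:,.1f} s")
print(f"random disk I/O: {disk_scan / 3600:,.1f} hours")
```

Same query, same page count: in RAM it finishes in about a second; as random disk reads it takes over a day, which in practice means it never finishes.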


Google Data Centers

These are the nerve centers for BigQuery, Cloud SQL, and Spanner. Each query you run is a tiny cog in this vast machine.


Microsoft Azure Documentary

Take a look at Azure's infrastructure. It's the kind of behind-the-scenes tour that shows you where your data really lives.