Microschedules: Deconstructing Concurrent Transactions

What is a Microschedule?

Key Concept: A microschedule is a precise sequence of atomic Read/Write (R/W) and Lock (Req/Get/Unlock) actions executed by concurrent transactions.

How to Build a Microschedule

1. Decompose Into Actions

Break down each transaction into atomic operations using a lock protocol.

Lock Pattern: Req → Get → [R/W] → Unl
T1: Req(A)→Get(A)→R(A)→W(A)→Unl(A)
T2: Req(B)→Get(B)→R(B)→W(B)→Unl(B)
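One way to represent the decomposed actions is as plain tuples, sketched below (the `(transaction, operation, variable)` format is an assumption for illustration, not a fixed notation):

```python
# Each atomic action is a (transaction, operation, variable) tuple.
# Operations follow the lock protocol: Req -> Get -> R/W -> Unl.
T1 = [("T1", "Req", "A"), ("T1", "Get", "A"),
      ("T1", "R", "A"), ("T1", "W", "A"), ("T1", "Unl", "A")]
T2 = [("T2", "Req", "B"), ("T2", "Get", "B"),
      ("T2", "R", "B"), ("T2", "W", "B"), ("T2", "Unl", "B")]

# A microschedule is one interleaving of these action lists;
# the serial schedule simply concatenates them.
serial = T1 + T2
print(len(serial))  # -> 10
```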

2. Interleave for Speed

Mix actions from different transactions to maximize CPU efficiency.

Smart: While T1 waits for IO, run T2
Result: 31 units (interleaved) vs 48 units (serial)
35% speedup from overlapping operations!
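The quoted speedup follows directly from the two completion times; a quick check:

```python
serial_time = 48       # units, serial schedule
interleaved_time = 31  # units, interleaved schedule

# Speedup as percent reduction in completion time.
speedup_pct = (serial_time - interleaved_time) / serial_time * 100
print(round(speedup_pct))  # -> 35
```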

3. Track Dependencies

Construct a WaitsFor graph to ensure correctness.

Graph: T1 → T2 means T1 waits for a lock held by T2
Safety: No cycles = No deadlock ✓
Action: Detect & resolve if cycles appear
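Cycle detection on the WaitsFor graph can be done with a depth-first search. A minimal sketch, assuming the graph is given as an adjacency dict mapping each transaction to the transactions it waits for:

```python
def has_cycle(waits_for):
    """Detect a cycle in a WaitsFor graph given as {txn: [txns it waits for]}."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on current DFS path / finished
    color = {t: WHITE for t in waits_for}

    def dfs(t):
        color[t] = GRAY
        for u in waits_for.get(t, []):
            if color.get(u, WHITE) == GRAY:   # back edge onto current path: cycle
                return True
            if color.get(u, WHITE) == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in waits_for)

# No cycle: T1 waits for T2, T2 waits for T3.
print(has_cycle({"T1": ["T2"], "T2": ["T3"], "T3": []}))  # -> False
# Cycle: T1 and T2 wait on each other -> deadlock.
print(has_cycle({"T1": ["T2"], "T2": ["T1"]}))            # -> True
```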

Example 1: Serial Schedule (S1)

Serial Schedule with T3 → T2 → T1

Serial schedule S1: T3 then T2 then T1, completion = 48 units

Here's an example microschedule for S1, our serial schedule with T3 → T2 → T1, with completion = 48 units.

How to Read This Timeline

The Setup: 3 Transactions (T1, T2, T3) on 1 CPU Core

Time flows left → right; each transaction gets its own horizontal lane.

The Lock Pattern: Req → Get → [IO] → Unl
  • Req = Request lock
  • Get = Lock granted
  • IO = read start/end (Rs/Re) or write start/end (Ws/We)
  • Unl = Unlock (or Release)
Why This Matters: With 1 core for 3 transactions, the CPU constantly switches: when T1 waits for IO, T2 runs; when T2 waits for a lock, T3 runs. This interleaving maximizes CPU usage!

WaitsFor Graph: Detecting Deadlocks

Definitions

WaitsFor Graph

A directed graph representing lock dependencies between transactions in a concurrent execution schedule.

Components:
• Each node corresponds to a transaction
• Directed edges indicate waiting relationships
• Labels show the variable and time span of waiting

⚠ Deadlock Detection

Cycles within the WaitsFor graph signify deadlock.

Example: T1 waits for T2, T2 waits for T1 → Cycle = Deadlock

Impact of Deadlocks

Deadlocks can cause system stalls, leading to inefficiencies in resource utilization and transaction delays.

Consequences:
• System becomes unresponsive
• Transactions cannot proceed
• Manual intervention required

Example 2: Interleaved Schedule (S2) - Fast Execution

Interleaved Schedule with Optimized Performance

Interleaved schedule S2 with completion = 31 units

Here's an example microschedule for S2, our interleaved schedule, with completion = 31 units.

How to Read S2: Timeline, Locks & Dependencies

Timeline View: Microschedule Execution
  • Time flows left → right (0 to 31 units)
  • Each row = one transaction (T1, T2, T3)
Per-Transaction Locks: Lock Sequences
  • T2: T2.Req(A) at 2, T2.Get(A) at 3, T2.Unl(A) at 10
  • Note: T1.Get(A) waits for T2.Unl(A)
  • Lock pattern determines execution order
WaitsFor Graph: No Deadlock
  • Nodes: T1, T2, T3 (transactions)
  • Edge: T1→T2, T1 waits for T2 on variable A (time units 3-10)
  • No cycles detected = No deadlock ✓
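The S2 dependency above can be encoded with labeled edges; a sketch, where the representation (variable plus wait interval on each edge) and the Kahn-style acyclicity check are illustrative assumptions:

```python
# WaitsFor graph for S2: waiter -> {blocker: (variable, (start, end))}.
waits_for = {
    "T1": {"T2": ("A", (3, 10))},  # T1 waits for T2's lock on A, units 3-10
    "T2": {},
    "T3": {},
}

def is_acyclic(graph):
    """Kahn-style check: repeatedly retire transactions that wait on no one."""
    pending = {t: set(blockers) for t, blockers in graph.items()}
    while pending:
        free = [t for t, b in pending.items() if not b]
        if not free:
            return False  # every remaining txn waits on another: a cycle
        for t in free:
            del pending[t]
            for b in pending.values():
                b.discard(t)
    return True

print(is_acyclic(waits_for))  # -> True: no deadlock in S2
```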

Example 3: Interleaved Schedule (S3) - Different Microschedule

Different Interleaved Schedule for Same Transactions

Interleaved schedule S3 with completion = 45 units

Here's an example microschedule for S3, a different interleaved schedule for the same set of transactions, with completion = 45 units.

S3 vs S2: Different Microschedule, Same Transactions

Different Timing:

S3 takes 45 units vs S2's 31 units due to different lock acquisition order and contention patterns.

Different Lock Pattern:

More contention leads to longer waits and slower completion: same transactions, different execution strategy.

Key Insight:

The same transactions can have vastly different performance depending on the chosen microschedule.


Example 4: Deadlock Detection

When Cycles Create System Deadlock

Deadlock scenario with 2 transactions showing circular dependencies

Here's an example where the same transaction pattern could lead to deadlock under different timing.

Understanding Deadlock Scenarios

Circular Dependencies:
  • T1 waits for T2 to release resource B
  • T2 waits for T1 to release resource A
  • Result: Circular wait = Deadlock!
WaitsFor Graph Cycles:
  • T1 → T2 → T1 (cycle detected)
  • Each arrow represents a "waits for" relationship
  • Cycle means no transaction can proceed
  • System requires deadlock detection & resolution
Deadlock Complexity:
  • Can occur with 2+ transactions
  • More likely with ≥3 transactions, since longer chains of dependencies can close into cycles
  • Probability increases with more concurrent transactions
  • Prevention: careful resource ordering
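The resource-ordering prevention mentioned above can be sketched as follows (the `acquire_in_order` helper is hypothetical, not a standard API): if every transaction requests locks in the same fixed global order, no transaction can hold B while waiting for A, so a cycle like T1 → T2 → T1 cannot form.

```python
import threading

# One lock per resource; a fixed global order (here: sorted by name)
# prevents circular waits.
locks = {"A": threading.Lock(), "B": threading.Lock()}

def acquire_in_order(resources):
    """Hypothetical helper: acquire locks in sorted name order."""
    ordered = sorted(resources)
    for r in ordered:
        locks[r].acquire()
    return ordered

def release(resources):
    for r in resources:
        locks[r].release()

# Both transactions may request {A, B} in any order; regardless,
# each acquires A before B, so circular waiting is impossible.
held = acquire_in_order({"B", "A"})
print(held)  # -> ['A', 'B']
release(held)
```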

Performance: Why Interleaved Schedules Win

The Power of Overlapping CPU and IO

Performance comparison showing speedup benefits of interleaved schedules

Interleaved schedules provide significant advantages over serial schedules through smart resource utilization.

Speedup Analysis: CPU + IO Optimization

Core Idea: Overlap Operations
  • CPU operations run while waiting for IO completion
  • No idle CPU time during disk/network waits
  • Multiple transactions progress simultaneously
  • Maximum resource utilization achieved
Performance Impact:
  • Serial: T1 complete → T2 complete → T3 complete
  • Interleaved: T1, T2, T3 run concurrently
  • Higher IO lag = bigger speedup opportunity
  • More concurrent transactions = greater parallelism
Real-World Example:
  • 100 concurrent transactions
  • IO lag: 100 time units
  • Result: 24x speedup!
  • Serial: 10,000 units vs Interleaved: 417 units
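The quoted numbers are mutually consistent; a quick sanity check (the simple cost model, serial time = transactions × IO lag, is an assumption):

```python
n_txns = 100   # concurrent transactions
io_lag = 100   # time units of IO wait per transaction

serial_time = n_txns * io_lag   # 10,000 units: each txn pays its full IO lag
interleaved_time = 417          # units, as quoted for the interleaved schedule

speedup = serial_time / interleaved_time
print(round(speedup))  # -> 24
```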