Database Concurrency: A Guide to Locking Strategies

Published on 2026-04-09 11:22 by Frugle Me

#database #concurrency #locks

In a world where thousands of users might try to buy the last seat on a flight at the exact same second, databases need a way to stop one transaction from silently overwriting another's changes (the classic "lost update" problem). This is handled through locking.

Depending on your performance needs and the likelihood of conflict, you can choose different "levels" (granularity) and "philosophies" (optimistic vs. pessimistic) for your locks.


1. Lock Granularity: How Much Are You Locking?

Granularity refers to the size of the data resource you are locking. Choosing the right level is a balancing act between concurrency (how many people can work at once) and overhead (how much memory the database uses to track locks).

Table-Level Locking

  • What it is: The database locks the entire table (see the sketch after this list).
  • Best for: Massive batch updates or maintenance where you need to change every row.
  • Pros: Very low overhead; the database only needs to track one lock.
  • Cons: Terrible for concurrency. If User A is updating one row, User B cannot even read a different row in that same table.
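
As a concrete illustration, here is a minimal sketch of an explicit table lock, assuming PostgreSQL and the psycopg2 driver (the accounts table and connection string are hypothetical):

```python
import psycopg2

conn = psycopg2.connect("dbname=shop")  # hypothetical connection string

with conn:  # psycopg2 wraps this block in a single transaction
    with conn.cursor() as cur:
        # One lock covers the entire table; in this mode no other session
        # can read or write it until the transaction commits.
        cur.execute("LOCK TABLE accounts IN ACCESS EXCLUSIVE MODE")
        # Now it is safe to touch every row, e.g. a bulk interest adjustment.
        cur.execute("UPDATE accounts SET balance = balance * 1.02")
# The table lock is released automatically at commit.
```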

Page-Level Locking

  • What it is: The database locks a "page": a fixed-size block of storage, typically 8KB, that holds multiple rows (see the sketch after this list).
  • Best for: Systems that frequently access adjacent rows (like sequential IDs).
  • Pros: A middle ground between table and row locking.
  • Cons: Can lead to "false contention"—User B might be blocked because they want a row that happens to live on the same page as User A's row, even if the rows are unrelated.
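
Page locking is usually an internal storage-engine decision rather than something you ask for, but SQL Server does let you request it explicitly with the PAGLOCK table hint. A minimal sketch, assuming SQL Server and the pyodbc driver (the orders table and DSN are hypothetical):

```python
import pyodbc

conn = pyodbc.connect("DSN=shop")  # hypothetical DSN

cur = conn.cursor()
# PAGLOCK asks the engine for page locks instead of row locks. Adjacent
# IDs tend to live on the same page, so one lock can cover the range.
cur.execute(
    "UPDATE orders WITH (PAGLOCK) SET status = ? "
    "WHERE id BETWEEN ? AND ?",
    "shipped", 100, 199,
)
conn.commit()
```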

Row-Level Locking

  • What it is: The database locks only the specific row being modified (see the sketch after this list).
  • Best for: High-concurrency web applications (e.g., e-commerce, social media).
  • Pros: Maximum concurrency. Thousands of users can update different rows in the same table simultaneously.
  • Cons: High overhead. Tracking millions of individual row locks can consume significant CPU and memory.
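
In engines like PostgreSQL and InnoDB you get row locks for free: an ordinary UPDATE locks exactly the rows it matches. A minimal sketch, again assuming PostgreSQL and psycopg2 (table and column names are hypothetical):

```python
import psycopg2

def set_order_status(dsn, order_id, status):
    # The UPDATE implicitly locks only the matching row, so two sessions
    # calling this for different order_ids never block each other.
    with psycopg2.connect(dsn) as conn:  # commits on leaving the block
        with conn.cursor() as cur:
            cur.execute(
                "UPDATE orders SET status = %s WHERE id = %s",
                (status, order_id),
            )
```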

2. Locking Philosophies: Optimistic vs. Pessimistic

This isn't about the size of the lock, but when and how you apply it.

Pessimistic Locking

  • The Mindset: "I expect a fight."
  • How it works: You grab a lock the moment you read the data (e.g., SELECT ... FOR UPDATE). No other transaction can lock or modify that record until yours is done (see the sketch after this list).
  • When to use:
    • High-contention environments (e.g., a "Flash Sale" where 1,000 people want 1 item).
    • When the cost of a collision is too high (e.g., banking transfers).
  • Downside: It blocks others, which can lead to slow performance and deadlocks.
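
A minimal sketch of a pessimistic transfer, assuming PostgreSQL and psycopg2 (the accounts schema is hypothetical). SELECT ... FOR UPDATE locks both rows before any balance is read, so a concurrent transfer touching the same accounts simply waits its turn:

```python
import psycopg2

def transfer(conn, src, dst, amount):
    with conn:  # commit on success, roll back on any exception
        with conn.cursor() as cur:
            # Lock both rows up front. Locking in a consistent (sorted)
            # order is the standard trick for avoiding deadlocks.
            cur.execute(
                "SELECT id, balance FROM accounts "
                "WHERE id IN (%s, %s) ORDER BY id FOR UPDATE",
                (src, dst),
            )
            balances = dict(cur.fetchall())
            if balances[src] < amount:
                raise ValueError("insufficient funds")
            cur.execute(
                "UPDATE accounts SET balance = balance - %s WHERE id = %s",
                (amount, src),
            )
            cur.execute(
                "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                (amount, dst),
            )
```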

Optimistic Locking

  • The Mindset: "I expect peace."
  • How it works: You don't actually lock the database record while you're editing it. Instead, you keep a version number or timestamp. When you go to save, the database checks: "Is the version still the same as when I started?" (see the sketch after this list)
    • If Yes: Save the data and increment the version.
    • If No: Someone else changed it; your update fails, and you must retry.
  • When to use:
    • Low-contention environments (e.g., editing a user profile).
    • Web apps where users might leave a page open for a long time (pessimistic locks would time out or block others indefinitely).
  • Downside: If conflicts are frequent, "retry storms" can occur, wasting CPU cycles.
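
A minimal sketch of optimistic locking with a version column, assuming PostgreSQL and psycopg2 (the profiles table, including its version column, is hypothetical). The WHERE clause is the whole trick: the UPDATE matches zero rows if anyone bumped the version after we read it:

```python
import psycopg2

def update_bio(conn, user_id, new_bio, max_retries=3):
    for _ in range(max_retries):
        with conn.cursor() as cur:
            cur.execute(
                "SELECT bio, version FROM profiles WHERE id = %s",
                (user_id,),
            )
            _, version = cur.fetchone()
            # No lock is held here: the user can edit for as long as
            # they like without blocking anyone else.
            cur.execute(
                "UPDATE profiles SET bio = %s, version = version + 1 "
                "WHERE id = %s AND version = %s",
                (new_bio, user_id, version),
            )
            if cur.rowcount == 1:  # our version was still current
                conn.commit()
                return True
            conn.rollback()  # someone else saved first; retry
    return False  # all retries lost the race (the "retry storm" case)
```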

Summary Comparison Table

Feature     | Table-Level  | Row-Level  | Pessimistic       | Optimistic
----------- | ------------ | ---------- | ----------------- | ------------
Concurrency | Very Low     | Very High  | Low               | High
Overhead    | Very Low     | High       | Medium            | Low
Strategy    | Broad        | Precise    | Prevention        | Detection
Common Use  | Bulk Imports | Daily OLTP | Finance/Inventory | CMS/Profiles

Conclusion

Most modern applications default to Row-Level locking combined with Optimistic concurrency for general tasks. However, for critical sections like inventory or payments, switching to a Pessimistic approach ensures that data remains 100% consistent, even if it means a few users have to wait.
