Database Design & Internals

Database Transactions: The Isolation Level You're Probably Not Setting

Your database has four isolation levels, each with different guarantees about what concurrent transactions can see. Most developers never change the default -- and that default might not be what you think.

· 11 min read

The Bug That Only Shows Up Under Load

You have a balance check: read the account balance, verify it's sufficient, then debit. Works perfectly in testing. Works perfectly with one user. Then two users hit the endpoint at the same time, and suddenly your account balance goes negative -- even though you checked it first.

This isn't a code bug. Your logic is correct. The problem is that two transactions read the same balance simultaneously, both see enough funds, and both debit. Neither transaction knew the other existed.
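The interleaving is easy to reproduce deterministically in plain Python. This is a sketch, not SQL: the `withdraw` function, the barrier, and the lock are illustrative stand-ins for two concurrent transactions whose individual writes are atomic but whose checks use a stale read.

```python
import threading

balance = 100                    # shared "account row"
barrier = threading.Barrier(2)   # forces both transactions to read before either writes
write_lock = threading.Lock()    # each individual write is atomic, like a row write

def withdraw(amount):
    global balance
    current = balance            # 1. read the balance
    barrier.wait()               # both threads have now read 100
    if current >= amount:        # 2. check: looks sufficient to both
        with write_lock:
            balance -= amount    # 3. debit, like UPDATE ... SET balance = balance - amount

threads = [threading.Thread(target=withdraw, args=(80,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)  # -60: both checks passed against the same stale read
```

Each debit is individually correct; the bug lives in the gap between the read and the write.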

Welcome to isolation levels -- the setting that determines what concurrent transactions can see of each other's work. Most developers leave it at the database default and never think about it. That's fine until it isn't.


What "Isolation" Actually Means

In ACID, the "I" stands for Isolation. It answers a deceptively simple question: when Transaction A is in the middle of making changes, what can Transaction B see?

The answer isn't binary. Databases give you a spectrum of guarantees, each trading off between data correctness and performance.

The isolation spectrum, from least to most isolation: Read Uncommitted (no protection) → Read Committed (PostgreSQL default) → Repeatable Read (MySQL default) → Serializable (full protection).

Moving left to right, you get more safety but less concurrency. Moving right to left, you get more throughput but more anomalies. The right choice depends on what your data can tolerate.


The Four Anomalies You're Designing Around

Before we look at each level, you need to understand the problems they prevent. There are four classic anomalies that happen when transactions run concurrently.

📌 Dirty Read

Transaction A writes a value but hasn't committed yet. Transaction B reads that uncommitted value. If A rolls back, B has now acted on data that never existed. This is like reading someone's draft email -- they might delete it before sending.

📌 Non-Repeatable Read

Transaction A reads a row, does some work, then reads the same row again. Between those two reads, Transaction B committed a change to that row. A gets a different value the second time. The same query, in the same transaction, returns different results.

📌 Phantom Read

Transaction A queries rows matching a condition (e.g., WHERE status = 'active'). Transaction B inserts a new row that matches that condition and commits. When A re-runs the same query, a new "phantom" row appears that wasn't there before.

📌 Lost Update

Both transactions read the same value, both compute a new value based on it, and both write. The second write overwrites the first. The first transaction's update is silently lost. This is the "double-spend" problem.

Isolation Level   | Dirty Read | Non-Repeatable Read | Phantom Read
Read Uncommitted  | Possible   | Possible            | Possible
Read Committed    | Prevented  | Possible            | Possible
Repeatable Read   | Prevented  | Prevented           | Possible*
Serializable      | Prevented  | Prevented           | Prevented

✅ The asterisk on Repeatable Read

PostgreSQL's Repeatable Read actually prevents phantom reads too (it uses snapshot isolation under the hood). MySQL's InnoDB also mostly prevents them with gap locking. The SQL standard says phantoms are possible at this level, but most modern databases do better than the spec requires.


Walking Through Each Level

Read Uncommitted: The Wild West

At this level, your transaction can see everything -- including uncommitted changes from other transactions. If another transaction writes a value and then rolls back, you've already read data that never actually persisted.

Almost nobody uses this intentionally. It exists in the spec, and some databases support it (PostgreSQL accepts the syntax but silently behaves as Read Committed), but there's rarely a good reason to allow dirty reads. The performance gain over Read Committed is negligible.

When it makes sense: Honestly, almost never. Some analytics workloads use it for approximate counts where absolute accuracy doesn't matter, but even then, Read Committed is usually fine.

Read Committed: The Sensible Default

This is the default for PostgreSQL and Oracle. Your transaction only sees data that has been committed. No dirty reads. But if you read the same row twice in your transaction, you might get different values if someone committed a change between your reads.

This is sufficient for most web applications. Each SQL statement sees a fresh snapshot of committed data. You won't read garbage, but you might see changes happening mid-transaction.

The gotcha: If you do SELECT balance, then some application logic, then SELECT balance again, the value might have changed. If your logic depends on the balance staying the same across both reads, Read Committed won't protect you.

Repeatable Read: Snapshot Consistency

At this level, your transaction gets a snapshot of the database at the moment it starts. No matter what other transactions commit while you're running, you keep seeing the same data. Your two SELECT balance queries will always return the same value within the same transaction.

This is the default for MySQL's InnoDB engine.

The gotcha: You're reading from a snapshot, not current reality. If Transaction B commits a change while Transaction A is running at Repeatable Read, A won't see it. This is usually what you want -- but when it isn't, it's confusing. Your transaction is making decisions based on stale data, and it doesn't know it.
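Snapshot behavior can be observed without a database server. The following sketch uses Python's built-in sqlite3 module with SQLite in WAL mode, where a read transaction keeps the snapshot taken at its first read, which is analogous to (not identical to) Repeatable Read; the table and values mirror the article's example:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None means autocommit; we issue BEGIN/COMMIT ourselves
a = sqlite3.connect(path, isolation_level=None)
a.execute("PRAGMA journal_mode=WAL")  # WAL lets a writer proceed while a reader holds a snapshot
a.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
a.execute("INSERT INTO accounts VALUES (123, 100)")

b = sqlite3.connect(path, isolation_level=None)

a.execute("BEGIN")
first = a.execute("SELECT balance FROM accounts WHERE id = 123").fetchone()[0]

# Transaction B changes the row and commits while A is still open.
b.execute("UPDATE accounts SET balance = 40 WHERE id = 123")

second = a.execute("SELECT balance FROM accounts WHERE id = 123").fetchone()[0]
a.execute("COMMIT")
after = a.execute("SELECT balance FROM accounts WHERE id = 123").fetchone()[0]

print(first, second, after)  # 100 100 40: A kept its snapshot until it committed
```

A's second read returns the stale-but-consistent snapshot value; only after committing does it see B's update.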

Serializable: The Strongest Guarantee

Serializable means transactions behave as if they ran one after another, even though they actually run concurrently. The database enforces this by detecting conflicts and aborting one of the transactions if they would produce a result different from serial execution.

This is the safest, but it comes at a cost: more aborted transactions, more retries, and lower throughput.

When it makes sense: Financial transactions where correctness is non-negotiable. Inventory systems where overselling is unacceptable. Any scenario where "the data was stale" is not an acceptable excuse.


Try It Yourself

Use the simulator below to see how different isolation levels handle concurrent transactions. Start two transactions, read and write to the same row, and observe the differences.

[Interactive isolation level simulator: an accounts.balance row (starting at 100, committed), two transactions tx-a and tx-b, and a transaction log.]

Try these scenarios:

  1. Dirty Read: Set to Read Uncommitted. TX-A writes, then TX-B reads (sees uncommitted data). Rollback TX-A.
  2. Non-Repeatable Read: Set to Read Committed. TX-A reads, then TX-B writes and commits, then TX-A reads again (different value).
  3. Snapshot protection: Set to Repeatable Read. TX-A reads, TX-B writes and commits. TX-A reads again (same value from snapshot).
  4. Write conflict: Set to Serializable. TX-A writes, then TX-B tries to write (gets blocked).

The Default Trap

Here's what catches developers off guard: PostgreSQL and MySQL have different defaults.

Database       | Default Level   | Implication
PostgreSQL     | Read Committed  | Each statement sees latest committed data
MySQL (InnoDB) | Repeatable Read | Transaction sees snapshot from its start
SQL Server     | Read Committed  | Uses locking (not MVCC) by default
Oracle         | Read Committed  | Strong MVCC implementation

This means the exact same application code can behave differently depending on your database. A race condition that never happens on MySQL might appear the moment you migrate to PostgreSQL, or vice versa. Your ORM won't save you here -- it doesn't set isolation levels for you.

โš ๏ธ Migration landmine

If you're migrating from MySQL to PostgreSQL (or vice versa), explicitly test any code that does read-then-write patterns. The different default isolation levels mean your concurrent behavior will change. Set the isolation level explicitly in your code rather than relying on database defaults.


Choosing Your Isolation Level

The decision isn't "use the highest level everywhere." That kills throughput. The decision is: what can each part of your system tolerate?

Which isolation level do you need? Start with one question: does your transaction read a value and then make a decision based on it? If not, Read Committed is almost always enough. If it does, you need either row-level locking on the rows involved or a higher isolation level.

The Practical Pattern: SELECT FOR UPDATE

Most real-world concurrency problems don't need Serializable. They need row-level locking at Read Committed. This is the pattern experienced developers use:

BEGIN;
 
-- Lock the row while reading it
SELECT balance FROM accounts WHERE id = 123 FOR UPDATE;
 
-- Now you know nobody else can modify this row until you commit
-- Do your check and update safely
UPDATE accounts SET balance = balance - 50 WHERE id = 123;
 
COMMIT;

FOR UPDATE tells the database: "I'm going to modify this row, so lock it now." Any other transaction that tries to lock or modify the same row -- another SELECT ... FOR UPDATE, or a plain UPDATE or DELETE -- will block until you commit or roll back. This gives you Serializable-like behavior for specific rows without the overhead of full serialization.
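The blocking behavior can be demonstrated with Python's sqlite3 module. SQLite has no FOR UPDATE; its closest analogue is BEGIN IMMEDIATE, which takes the write lock up front (for the whole database rather than one row), so this is an analogy rather than the exact mechanism:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

a = sqlite3.connect(path, isolation_level=None)
a.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
a.execute("INSERT INTO accounts VALUES (123, 100)")

# timeout=0: fail immediately instead of waiting for the lock
b = sqlite3.connect(path, isolation_level=None, timeout=0)

a.execute("BEGIN IMMEDIATE")  # take the write lock up front, like SELECT ... FOR UPDATE
a.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 123")

blocked = False
try:
    b.execute("BEGIN IMMEDIATE")  # second writer is locked out until A finishes
except sqlite3.OperationalError:  # "database is locked"
    blocked = True

a.execute("COMMIT")
b.execute("BEGIN IMMEDIATE")      # now B acquires the lock
b.execute("COMMIT")

print(blocked)  # True: B could not take the lock while A held it
```

In a server database the same shape applies, but the lock is per-row, so unrelated accounts are unaffected.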

✅ The key insight

You don't need to change your global isolation level. Use Read Committed as default, and reach for SELECT ... FOR UPDATE in the specific queries where concurrent access would cause problems. This is surgical precision versus a global lock.


Common Mistakes

Mistake 1: Checking then acting without locking

-- WRONG: another transaction can change balance between these statements
SELECT balance FROM accounts WHERE id = 123;
-- application checks: if balance >= 50...
UPDATE accounts SET balance = balance - 50 WHERE id = 123;

The gap between SELECT and UPDATE is where race conditions live. Either use FOR UPDATE or do the check atomically:

-- RIGHT: atomic check-and-update
UPDATE accounts SET balance = balance - 50
WHERE id = 123 AND balance >= 50;
-- Check affected rows: 0 means insufficient funds
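The "check affected rows" step looks like this from application code. A minimal sketch using Python's sqlite3 module; the `debit` helper is illustrative, and any driver that reports affected rows (here via `cursor.rowcount`) works the same way:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (123, 100)")

def debit(conn, account_id, amount):
    """Debit only if funds are sufficient; the check and the write are one statement."""
    cur = conn.execute(
        "UPDATE accounts SET balance = balance - ? WHERE id = ? AND balance >= ?",
        (amount, account_id, amount),
    )
    return cur.rowcount == 1  # 0 rows updated means insufficient funds

ok1 = debit(conn, 123, 80)
ok2 = debit(conn, 123, 80)
print(ok1, ok2)  # True False: the second debit finds only 20 left and touches nothing
```

Because the comparison and the write happen in a single statement, there is no gap for another transaction to slip into.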

Mistake 2: Long transactions at high isolation

Higher isolation levels hold locks longer. A Serializable transaction that takes 30 seconds blocks other transactions for 30 seconds. Keep transactions short -- read, compute, write, commit. Don't do HTTP calls or slow computation inside a transaction.

Mistake 3: Ignoring serialization failures

At Serializable, the database will abort your transaction if it detects a conflict. Your application must catch that error (usually error code 40001) and retry. If you don't handle retries, users see random errors under load.
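A retry wrapper can be as small as the sketch below. The `SerializationFailure` class and `flaky_txn` are illustrative stand-ins; in real code you would catch your driver's exception for SQLSTATE 40001 (e.g. psycopg's SerializationFailure):

```python
import random
import time

class SerializationFailure(Exception):
    """Stand-in for your driver's error carrying SQLSTATE 40001."""

def run_with_retry(txn, max_attempts=5):
    """Run a transaction function, retrying on serialization failures."""
    for attempt in range(max_attempts):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts - 1:
                raise
            # exponential backoff with jitter, so retrying transactions don't re-collide
            time.sleep((2 ** attempt) * 0.01 + random.uniform(0, 0.01))

attempts = 0
def flaky_txn():
    global attempts
    attempts += 1
    if attempts < 3:
        raise SerializationFailure()  # simulate two aborted attempts under contention
    return "committed"

result = run_with_retry(flaky_txn)
print(result)  # committed (on the third attempt)
```

The transaction function must be safe to re-run from the top: re-read inside it rather than capturing values from a previous attempt.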


Key Takeaways

✅ The mental model

Isolation levels are about what lie you're willing to accept. Read Committed says "I might see changes mid-transaction, and that's OK." Serializable says "I refuse to see any inconsistency, even if it means retrying." The right choice depends on what your data can tolerate, not what sounds safest.

The decisions that matter:

  • Know your default: PostgreSQL uses Read Committed, MySQL uses Repeatable Read. Don't assume.
  • Read-then-write = danger zone: Any pattern that reads a value, makes a decision, and writes back is vulnerable to races at Read Committed.
  • Use FOR UPDATE surgically: Lock specific rows when needed instead of raising the global isolation level.
  • Keep transactions short: The longer your transaction, the more it blocks others, regardless of isolation level.
  • Handle retries at Serializable: If you choose the strongest level, build retry logic into your application.

Don't default to "use Serializable everywhere" -- that's the database equivalent of putting a global mutex on your entire application. Instead, understand where your race conditions actually live and protect those specific paths.
