Isolation Levels and Locking

Learning Focus

Use this lesson to select the right PostgreSQL isolation level and apply explicit row locks to avoid concurrency anomalies, build work queues, and reduce deadlocks.

Concept Overview

Concurrency problems happen when multiple transactions read and write the same data.

PostgreSQL uses MVCC (multi-version concurrency control): reads generally do not block writes, and writes generally do not block reads. But you still need to think about:

  • what snapshot of data a transaction can see (isolation)
  • when to lock rows for safe read-modify-write (locking)

PostgreSQL's default isolation level is READ COMMITTED, which is a good fit for many OLTP workloads.

Why is it important?

  • Correctness: prevent double-processing and lost updates
  • Predictability: stable reads in reporting/critical flows
  • Throughput: use the least strict isolation that meets correctness needs
  • Safety under contention: locking patterns prevent races

Where does it fit?

This is essential for:

  • payment and inventory workflows
  • work-queue consumers (parallel workers)
  • “select then update” logic
  • serializable correctness-critical workflows (requires retries)

Syntax & Rules

Core Isolation Levels (PostgreSQL)

PostgreSQL supports:

  • READ COMMITTED (default)
  • REPEATABLE READ
  • SERIALIZABLE

Note: READ UNCOMMITTED is accepted as syntax but behaves like READ COMMITTED in PostgreSQL.

Set isolation level for the current transaction:

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- statements
COMMIT;
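PostgreSQL also accepts the isolation level directly on BEGIN, which avoids the separate SET TRANSACTION statement:

```sql
BEGIN ISOLATION LEVEL REPEATABLE READ;
-- statements
COMMIT;
```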

Row Locking

Lock rows for update:

SELECT *
FROM accounts
WHERE account_id = 1
FOR UPDATE;

Add lock behavior modifiers:

SELECT * FROM jobs
WHERE status = 'queued'
FOR UPDATE NOWAIT;

SELECT * FROM jobs
WHERE status = 'queued'
FOR UPDATE SKIP LOCKED;

Available Options / Parameters

Isolation:

| Level | What you get | Typical use |
|---|---|---|
| READ COMMITTED | each statement sees a fresh committed snapshot | default OLTP |
| REPEATABLE READ | transaction sees one consistent snapshot | consistent reads inside one txn |
| SERIALIZABLE | strongest guarantee; may abort under contention | correctness-critical flows with retries |

Locking clauses:

| Clause | Meaning |
|---|---|
| FOR UPDATE | lock selected rows against concurrent updates/deletes |
| FOR NO KEY UPDATE | weaker than FOR UPDATE (advanced) |
| FOR SHARE | shared lock (blocks updates) |
| FOR KEY SHARE | weaker shared lock (advanced) |
| NOWAIT | fail immediately if lock can't be obtained |
| SKIP LOCKED | skip rows that are already locked (work queue pattern) |

Key Rules and Considerations

  • Under READ COMMITTED, two SELECTs in the same transaction can see different results if another transaction commits between them.
  • Under REPEATABLE READ, the snapshot is stable within the transaction.
  • Under SERIALIZABLE, PostgreSQL may abort a transaction with a serialization failure; this is expected and requires retry logic.
  • SELECT ... FOR UPDATE is the standard pattern for safe read-modify-write.

Step-by-Step Examples

These examples use two sessions. In psql, open two terminals:

  • Session A
  • Session B

Example 1: READ COMMITTED Can See New Commits Between Statements (Intermediate)

Setup (run once):

DROP TABLE IF EXISTS counters;
CREATE TABLE counters (id int PRIMARY KEY, v int NOT NULL);
INSERT INTO counters (id, v) VALUES (1, 0);

Session A:

BEGIN;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

SELECT v FROM counters WHERE id = 1;
-- keep transaction open

Expected output:

BEGIN
SET
 v
---
 0
(1 row)

Session B:

BEGIN;
UPDATE counters SET v = 1 WHERE id = 1;
COMMIT;

Expected output:

BEGIN
UPDATE 1
COMMIT

Back to Session A:

SELECT v FROM counters WHERE id = 1;
COMMIT;

Expected output:

 v
---
 1
(1 row)

COMMIT

Explanation:

  • In READ COMMITTED, each statement sees the latest committed state at statement start.

Example 2: REPEATABLE READ Freezes the Snapshot (Intermediate)

Reset the counter:

UPDATE counters SET v = 0 WHERE id = 1;

Session A:

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT v FROM counters WHERE id = 1;

Expected output:

BEGIN
SET
 v
---
 0
(1 row)

Session B (commit a change):

UPDATE counters SET v = 2 WHERE id = 1;

Expected output:

UPDATE 1

Back to Session A:

SELECT v FROM counters WHERE id = 1;
COMMIT;

Expected output:

 v
---
 0
(1 row)

COMMIT

Explanation:

  • REPEATABLE READ uses a transaction snapshot, so Session A keeps seeing the original value.
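One more behavior worth knowing: if Session A tries to write the row that Session B already changed and committed, REPEATABLE READ refuses rather than silently overwriting. In Session A, before its COMMIT:

```sql
UPDATE counters SET v = v + 1 WHERE id = 1;
-- ERROR:  could not serialize access due to concurrent update
-- The transaction must be rolled back and retried.
```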

Example 3: Use FOR UPDATE to Prevent Lost Updates (Advanced)

Setup:

DROP TABLE IF EXISTS accounts;
CREATE TABLE accounts (account_id int PRIMARY KEY, balance int NOT NULL);
INSERT INTO accounts (account_id, balance) VALUES (1, 100);

Session A:

BEGIN;
SELECT balance FROM accounts WHERE account_id = 1 FOR UPDATE;
-- imagine business logic here
UPDATE accounts SET balance = balance - 10 WHERE account_id = 1;
-- keep transaction open

Expected output:

BEGIN
 balance
---------
     100
(1 row)

UPDATE 1

Session B (tries to lock the same row):

BEGIN;
SELECT balance FROM accounts WHERE account_id = 1 FOR UPDATE NOWAIT;

Expected output (typical):

BEGIN
ERROR:  could not obtain lock on row in relation "accounts"

Session A commits:

COMMIT;

Session B can now proceed:

ROLLBACK;
BEGIN;
SELECT balance FROM accounts WHERE account_id = 1 FOR UPDATE;
COMMIT;

Expected output (example):

ROLLBACK
BEGIN
 balance
---------
      90
(1 row)

COMMIT

Explanation:

  • NOWAIT turns “wait on lock” into a fast failure (useful for interactive systems).
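Between "fail instantly" (NOWAIT) and "wait forever" (the default) there is a middle ground: the lock_timeout setting caps how long any statement waits for a lock.

```sql
-- Wait at most 2 seconds for locks in this session; a blocked
-- statement then fails with a lock timeout error (SQLSTATE 55P03).
SET lock_timeout = '2s';

BEGIN;
SELECT balance FROM accounts WHERE account_id = 1 FOR UPDATE;
COMMIT;
```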

Example 4: Work Queue with SKIP LOCKED (Advanced)

Setup a jobs table:

DROP TABLE IF EXISTS jobs;
CREATE TABLE jobs (
  job_id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  status text NOT NULL,
  created_at timestamptz NOT NULL DEFAULT now()
);

INSERT INTO jobs (status) VALUES ('queued'), ('queued'), ('queued');

Worker transaction:

BEGIN;

WITH picked AS (
  SELECT job_id
  FROM jobs
  WHERE status = 'queued'
  ORDER BY created_at
  LIMIT 2
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET status = 'processing'
WHERE job_id IN (SELECT job_id FROM picked)
RETURNING job_id;

COMMIT;

Expected output (example):

BEGIN
 job_id
--------
      1
      2
(2 rows)

COMMIT

Explanation:

  • SKIP LOCKED allows many workers to run concurrently without blocking.

Practical Use Cases

1) Safe Read-Modify-Write

BEGIN;
SELECT * FROM accounts WHERE account_id = 1 FOR UPDATE;
UPDATE accounts SET balance = balance - 10 WHERE account_id = 1;
COMMIT;

2) Parallel Workers

SELECT job_id FROM jobs WHERE status = 'queued' LIMIT 10 FOR UPDATE SKIP LOCKED;

3) Correctness-Critical Flows

BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
-- do work
COMMIT;

Common Mistakes & Troubleshooting

Mistake 1: Using SERIALIZABLE Without Retry Logic

What happens: transactions can abort with:

ERROR:  could not serialize access due to read/write dependencies among transactions

Fix: implement retries with backoff (see next lesson).
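The retry pattern itself is application-side and driver-agnostic. A minimal sketch, assuming a run-the-transaction callable and a SerializationFailure exception type (both names are illustrative; with psycopg the real exception is psycopg.errors.SerializationFailure, SQLSTATE 40001):

```python
import random
import time


class SerializationFailure(Exception):
    """Stand-in for the driver's SQLSTATE 40001 error (illustrative)."""


def with_retries(txn, max_attempts=5, base_delay=0.05):
    """Run txn(), retrying on serialization failures with jittered backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return txn()
        except SerializationFailure:
            if attempt == max_attempts:
                raise
            # Exponential backoff with jitter spreads out competing retries.
            time.sleep(base_delay * (2 ** attempt) * random.random())


# Demo: a transaction that fails twice, then succeeds.
attempts = {"n": 0}

def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationFailure()
    return "committed"

print(with_retries(flaky_txn))  # prints: committed
```

In real code, txn would open a SERIALIZABLE transaction, do its work, and COMMIT; the key point is that the whole transaction reruns, not just the failing statement.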


Mistake 2: Locking Too Many Rows

What happens: reduced concurrency and higher contention.

Fix: lock only what you need; use narrow predicates and LIMIT when appropriate.


Mistake 3: Forgetting NOWAIT/SKIP LOCKED for Work Queues

What happens: workers block each other.

Fix: use SKIP LOCKED (preferred) or NOWAIT depending on workflow.

Debugging tips:

  1. Confirm the isolation level inside your transaction: SHOW transaction_isolation;
  2. Use pg_stat_activity to find sessions waiting on locks.
  3. Use pg_locks to understand what is blocked.
  4. Reproduce concurrency issues with two sessions before changing production settings.
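The two views can be combined. pg_blocking_pids() (available since PostgreSQL 9.6) lists the sessions holding the locks a given backend is waiting on:

```sql
-- Show every session that is currently blocked, and by whom.
SELECT pid,
       state,
       wait_event_type,
       pg_blocking_pids(pid) AS blocked_by,
       left(query, 60)       AS query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```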

Best Practices

  • ✅ Default to READ COMMITTED; ❌ raise isolation “just in case”.
  • ✅ Use FOR UPDATE for read-modify-write; ❌ do read then update without locking.
  • ✅ Use SKIP LOCKED for worker queues; ❌ let workers block each other.
  • ✅ Use NOWAIT for interactive UX; ❌ wait indefinitely on hot rows.
  • ✅ Treat SERIALIZABLE failures as retryable; ❌ surface them to users without retries.

Hands-On Practice

Exercise 1 (Easy): Set isolation level

Task: Start a transaction and set isolation to REPEATABLE READ.

-- Your SQL here

Solution:

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
COMMIT;

Exercise 2 (Medium): Use FOR UPDATE

Task: Lock a row for update with FOR UPDATE.

-- Your SQL here

Solution:

BEGIN;
SELECT * FROM accounts WHERE account_id = 1 FOR UPDATE;
COMMIT;

Exercise 3 (Advanced): Work queue pick

Task: Pick 1 queued job using FOR UPDATE SKIP LOCKED and mark it processing.

-- Your SQL here

Solution:

BEGIN;
WITH picked AS (
  SELECT job_id
  FROM jobs
  WHERE status = 'queued'
  ORDER BY created_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
UPDATE jobs
SET status = 'processing'
WHERE job_id IN (SELECT job_id FROM picked)
RETURNING job_id;
COMMIT;

Connection to Other Concepts

| Concept | Why it matters |
|---|---|
| Transaction control | isolation and locks live inside transactions |
| Deadlocks and retries | strong isolation/locks can increase aborts; handle them |
| Indexes | narrow predicates + indexes reduce lock scope |
| Scheduling | background jobs often use SKIP LOCKED patterns |
| Triggers | triggers can acquire additional locks during writes |

Visual Learning Diagram

flowchart LR
A[Transaction] --> B[Isolation level\nREAD COMMITTED/REPEATABLE/SERIALIZABLE]
A --> C[Explicit locks\nFOR UPDATE]
C --> D[NOWAIT/SKIP LOCKED]
B --> E[Visibility rules\nwhat you can see]
C --> F[Concurrency rules\nwho can write]

classDef allNodes fill:#3e3e3e,stroke:#ffffff,stroke-width:2px,color:#f5f5f5
classDef highlight fill:#3e3e3e,stroke:#ffffff,stroke-width:4px,color:#f5f5f5
class A,B,C,D,E,F allNodes
class C highlight

Common Pitfalls

| Pitfall | Consequence | Prevention |
|---|---|---|
| SERIALIZABLE without retries | user-visible failures | implement retry loops |
| Over-locking | poor throughput | lock minimal rows |
| No work-queue locking | double-processing | FOR UPDATE SKIP LOCKED |
| Waiting forever on locks | hung requests | lock_timeout / NOWAIT |
| Assuming READ UNCOMMITTED exists | wrong mental model | Postgres treats it as READ COMMITTED |

Quick Reference

BEGIN; SET TRANSACTION ISOLATION LEVEL READ COMMITTED; COMMIT;
BEGIN; SET TRANSACTION ISOLATION LEVEL REPEATABLE READ; COMMIT;
SELECT * FROM t WHERE id=1 FOR UPDATE;
SELECT * FROM t WHERE status='queued' LIMIT 1 FOR UPDATE SKIP LOCKED;
SELECT * FROM t WHERE id=1 FOR UPDATE NOWAIT;

What's Next