Commit 1847929

Fix serialization anomalies due to race conditions on INSERT.
On insert, the CheckForSerializableConflictIn() test was performed before the page(s) which were going to be modified had been locked (with an exclusive buffer content lock). If another process acquired a relation SIReadLock on the heap and scanned to a page on which an insert was going to occur before the page was so locked, a rw-conflict would be missed, which could allow a serialization anomaly to go undetected. The window between the check and the page lock was small, so the bug was generally not noticed unless there was high concurrency with multiple processes inserting into the same table. This was reported by Peter Bailis as bug #11732, by Sean Chittenden as bug #13667, and by others.

The race condition was eliminated in heap_insert() by moving the check down below the acquisition of the buffer lock, which had been the very next statement. Because of the loop locking and unlocking multiple buffers in heap_multi_insert(), a check was added after all inserts were completed; the check before the start of the inserts was kept, because it can avoid a large amount of work by detecting a serialization anomaly before all of the inserts and the related WAL logging are performed.

While investigating this bug, other SSI bugs which were even harder to hit in practice were noticed and fixed, an unnecessary check (covered by another check, so redundant) was removed from heap_update(), and comments were improved.

Back-patch to all supported branches.

Kevin Grittner and Thomas Munro
1 parent 71b1a0a commit 1847929
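The core fix in heap_insert() is an ordering change: the conflict check moves below the acquisition of the exclusive buffer content lock, closing the window in which a concurrent scan could slip in. The pattern can be sketched outside PostgreSQL with a mutex standing in for the buffer content lock and a flag standing in for a concurrent scan that has taken an SIReadLock; all names here are illustrative, not PostgreSQL APIs.

```c
#include <pthread.h>
#include <stdbool.h>

/*
 * Minimal sketch (not PostgreSQL code) of the bug pattern the commit
 * fixes: a conflict check performed before acquiring the lock that is
 * supposed to protect it.  conflict_seen stands in for "a concurrent
 * scan took an SIReadLock and read the page".
 */
static pthread_mutex_t page_lock = PTHREAD_MUTEX_INITIALIZER;
static bool conflict_seen = false;

/* Buggy ordering: check, then lock.  A concurrent scan can set
 * conflict_seen in the window between the two, and the insert then
 * proceeds without ever observing it. */
static bool
insert_check_then_lock(void)
{
	bool		conflict = conflict_seen;	/* the conflict check */

	/* <-- race window: a reader can scan the page here, unnoticed */
	pthread_mutex_lock(&page_lock);			/* "buffer content lock" */
	/* ... place the tuple on the page ... */
	pthread_mutex_unlock(&page_lock);
	return conflict;
}

/* Fixed ordering (what the commit does in heap_insert): lock first,
 * then check, and hold the lock until the insert is in place, so no
 * scan can run between the check and the insert becoming visible. */
static bool
insert_lock_then_check(void)
{
	bool		conflict;

	pthread_mutex_lock(&page_lock);
	conflict = conflict_seen;				/* check under the lock */
	/* ... place the tuple on the page, still holding the lock ... */
	pthread_mutex_unlock(&page_lock);
	return conflict;
}
```

In the buggy ordering, a scan that sets the flag between the check and the lock is never observed; holding the lock across both the check and the insert makes that interleaving impossible.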

File tree

2 files changed: 69 additions & 31 deletions


src/backend/access/heap/heapam.c

Lines changed: 65 additions & 26 deletions
@@ -2035,26 +2035,31 @@ heap_insert(Relation relation, HeapTuple tup, CommandId cid,
 	 */
 	heaptup = heap_prepare_insert(relation, tup, xid, cid, options);
 
+	/*
+	 * Find buffer to insert this tuple into.  If the page is all visible,
+	 * this will also pin the requisite visibility map page.
+	 */
+	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
+									   InvalidBuffer, options, bistate,
+									   &vmbuffer, NULL);
+
 	/*
 	 * We're about to do the actual insert -- but check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
 	 *
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the page between this check and the insert
+	 * being visible to the scan (i.e., an exclusive buffer content lock is
+	 * continuously held from this point until the tuple insert is visible).
+	 *
 	 * For a heap insert, we only need to check for table-level SSI locks. Our
 	 * new tuple can't possibly conflict with existing tuple locks, and heap
 	 * page locks are only consolidated versions of tuple locks; they do not
-	 * lock "gaps" as index page locks do.  So we don't need to identify a
-	 * buffer before making the call.
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call, which makes for a faster check.
 	 */
 	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
 
-	/*
-	 * Find buffer to insert this tuple into.  If the page is all visible,
-	 * this will also pin the requisite visibility map page.
-	 */
-	buffer = RelationGetBufferForTuple(relation, heaptup->t_len,
-									   InvalidBuffer, options, bistate,
-									   &vmbuffer, NULL);
-
 	/* NO EREPORT(ERROR) from here till changes are logged */
 	START_CRIT_SECTION();
 
@@ -2278,13 +2283,26 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 
 	/*
 	 * We're about to do the actual inserts -- but check for conflict first,
-	 * to avoid possibly having to roll back work we've just done.
+	 * to minimize the possibility of having to roll back work we've just
+	 * done.
 	 *
-	 * For a heap insert, we only need to check for table-level SSI locks. Our
-	 * new tuple can't possibly conflict with existing tuple locks, and heap
+	 * A check here does not definitively prevent a serialization anomaly;
+	 * that check MUST be done at least past the point of acquiring an
+	 * exclusive buffer content lock on every buffer that will be affected,
+	 * and MAY be done after all inserts are reflected in the buffers and
+	 * those locks are released; otherwise there is a race condition.  Since
+	 * multiple buffers can be locked and unlocked in the loop below, and it
+	 * would not be feasible to identify and lock all of those buffers before
+	 * the loop, we must do a final check at the end.
+	 *
+	 * The check here could be omitted with no loss of correctness; it is
+	 * present strictly as an optimization.
+	 *
+	 * For heap inserts, we only need to check for table-level SSI locks. Our
+	 * new tuples can't possibly conflict with existing tuple locks, and heap
 	 * page locks are only consolidated versions of tuple locks; they do not
-	 * lock "gaps" as index page locks do.  So we don't need to identify a
-	 * buffer before making the call.
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call, which makes for a faster check.
 	 */
 	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
 
@@ -2444,6 +2462,22 @@ heap_multi_insert(Relation relation, HeapTuple *tuples, int ntuples,
 		ndone += nthispage;
 	}
 
+	/*
+	 * We're done with the actual inserts.  Check for conflicts again, to
+	 * ensure that all rw-conflicts in to these inserts are detected.  Without
+	 * this final check, a sequential scan of the heap may have locked the
+	 * table after the "before" check, missing one opportunity to detect the
+	 * conflict, and then scanned the table before the new tuples were there,
+	 * missing the other chance to detect the conflict.
+	 *
+	 * For heap inserts, we only need to check for table-level SSI locks. Our
+	 * new tuples can't possibly conflict with existing tuple locks, and heap
+	 * page locks are only consolidated versions of tuple locks; they do not
+	 * lock "gaps" as index page locks do.  So we don't need to specify a
+	 * buffer when making the call.
+	 */
+	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
+
 	/*
 	 * If tuples are cachable, mark them for invalidation from the caches in
 	 * case we abort.  Note it is OK to do this after releasing the buffer,
@@ -2730,6 +2764,11 @@ heap_delete(Relation relation, ItemPointer tid,
 	/*
 	 * We're about to do the actual delete -- check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
+	 *
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the page between this check and the delete
+	 * being visible to the scan (i.e., an exclusive buffer content lock is
+	 * continuously held from this point until the tuple delete is visible).
 	 */
 	CheckForSerializableConflictIn(relation, &tp, buffer);
 
@@ -3299,12 +3338,6 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 		goto l2;
 	}
 
-	/*
-	 * We're about to do the actual update -- check for conflict first, to
-	 * avoid possibly having to roll back work we've just done.
-	 */
-	CheckForSerializableConflictIn(relation, &oldtup, buffer);
-
 	/* Fill in transaction status data */
 
 	/*
@@ -3493,14 +3526,20 @@ heap_update(Relation relation, ItemPointer otid, HeapTuple newtup,
 	}
 
 	/*
-	 * We're about to create the new tuple -- check for conflict first, to
+	 * We're about to do the actual update -- check for conflict first, to
 	 * avoid possibly having to roll back work we've just done.
 	 *
-	 * NOTE: For a tuple insert, we only need to check for table locks, since
-	 * predicate locking at the index level will cover ranges for anything
-	 * except a table scan.  Therefore, only provide the relation.
+	 * This is safe without a recheck as long as there is no possibility of
+	 * another process scanning the pages between this check and the update
+	 * being visible to the scan (i.e., exclusive buffer content lock(s) are
+	 * continuously held from this point until the tuple update is visible).
+	 *
+	 * For the new tuple the only check needed is at the relation level, but
+	 * since both tuples are in the same relation and the check for oldtup
+	 * will include checking the relation level, there is no benefit to a
+	 * separate check for the new tuple.
 	 */
-	CheckForSerializableConflictIn(relation, NULL, InvalidBuffer);
+	CheckForSerializableConflictIn(relation, &oldtup, buffer);
 
 	/*
 	 * At this point newbuf and buffer are both pinned and locked, and newbuf

src/backend/storage/lmgr/predicate.c

Lines changed: 4 additions & 5 deletions
@@ -3209,22 +3209,21 @@ ReleasePredicateLocks(bool isCommit)
 		return;
 	}
 
+	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
+
 	Assert(!isCommit || SxactIsPrepared(MySerializableXact));
 	Assert(!isCommit || !SxactIsDoomed(MySerializableXact));
 	Assert(!SxactIsCommitted(MySerializableXact));
 	Assert(!SxactIsRolledBack(MySerializableXact));
 
 	/* may not be serializable during COMMIT/ROLLBACK PREPARED */
-	if (MySerializableXact->pid != 0)
-		Assert(IsolationIsSerializable());
+	Assert(MySerializableXact->pid == 0 || IsolationIsSerializable());
 
 	/* We'd better not already be on the cleanup list. */
 	Assert(!SxactIsOnFinishedList(MySerializableXact));
 
 	topLevelIsDeclaredReadOnly = SxactIsReadOnly(MySerializableXact);
 
-	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
-
 	/*
 	 * We don't hold XidGenLock lock here, assuming that TransactionId is
 	 * atomic!
@@ -4361,7 +4360,7 @@ CheckTableForSerializableConflictIn(Relation relation)
 	LWLockAcquire(SerializablePredicateLockListLock, LW_EXCLUSIVE);
 	for (i = 0; i < NUM_PREDICATELOCK_PARTITIONS; i++)
 		LWLockAcquire(FirstPredicateLockMgrLock + i, LW_SHARED);
-	LWLockAcquire(SerializableXactHashLock, LW_SHARED);
+	LWLockAcquire(SerializableXactHashLock, LW_EXCLUSIVE);
 
 	/* Scan through target list */
 	hash_seq_init(&seqstat, PredicateLockTargetHash);
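After this commit, heap_multi_insert() makes two calls to CheckForSerializableConflictIn(): an optional one before the loop, purely an optimization, and an authoritative one after all inserts are reflected in the buffers. A minimal sketch of that two-check strategy follows, with a plain flag standing in for a rw-conflict raised by a concurrent scan; all names are illustrative, not PostgreSQL APIs.

```c
#include <stdbool.h>

/*
 * Hypothetical sketch (not PostgreSQL code) of the heap_multi_insert()
 * checking strategy: an early check that may save work but cannot be
 * relied on, plus a final, authoritative check once every insert is in
 * place.  any_conflict stands in for a rw-conflict flagged by a
 * concurrent scan at some arbitrary point in time.
 */
static bool any_conflict = false;

static bool
multi_insert(int *pages, const int *tuples, int n)
{
	/* Early check: may detect a conflict before any work is done, but a
	 * scan could still arrive during the loop below, so this alone does
	 * not definitively prevent an anomaly. */
	if (any_conflict)
		return true;

	for (int i = 0; i < n; i++)
	{
		/* lock page i, insert, unlock page i; because each page is
		 * unlocked before the next is locked, no single check inside
		 * the loop covers all pages at once */
		pages[i] = tuples[i];
	}

	/* Final, authoritative check: runs after all inserts are visible,
	 * so a conflicting scan that started at any point during the loop
	 * is detected here. */
	return any_conflict;
}
```

The early check corresponds to the "present strictly as an optimization" comment in the diff; only the final check is required for correctness, because it runs after the last buffer lock would have been released.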
