Commit 00283ca

Use SnapshotDirty rather than an active snapshot to probe index endpoints.
If there are lots of uncommitted tuples at the end of the index range, get_actual_variable_range() ends up fetching each one and doing an MVCC visibility check on it, until it finally hits a visible tuple. This is bad enough in isolation, considering that we don't need an exact answer only an approximate one. But because the tuples are not yet committed, each visibility check does a TransactionIdIsInProgress() test, which involves scanning the ProcArray. When multiple sessions do this concurrently, the ensuing contention results in horrid performance loss. 20X overall throughput loss on not-too-complicated queries is easy to demonstrate in the back branches (though someone's made it noticeably less bad in HEAD).

We can dodge the problem fairly effectively by using SnapshotDirty rather than a normal MVCC snapshot. This will cause the index probe to take uncommitted tuples as good, so that we incur only one tuple fetch and test even if there are many such tuples. The extent to which this degrades the estimate is debatable: it's possible the result is actually a more accurate prediction than before, if the endmost tuple has become committed by the time we actually execute the query being planned. In any case, it's not very likely that it makes the estimate a lot worse.

SnapshotDirty will still reject tuples that are known committed dead, so we won't give bogus answers if an invalid outlier has been deleted but not yet vacuumed from the index. (Because btrees know how to mark such tuples dead in the index, we shouldn't have a big performance problem in the case that there are many of them at the end of the range.) This consideration motivates not using SnapshotAny, which was also considered as a fix.

Note: the back branches were using SnapshotNow instead of an MVCC snapshot, but the problem and solution are the same.

Per performance complaints from Bartlomiej Romanski, Josh Berkus, and others. Back-patch to 9.0, where the issue was introduced (by commit 40608e7).
1 parent d7cd6a9 commit 00283ca

File tree

1 file changed: 21 additions, 4 deletions


src/backend/utils/adt/selfuncs.c

Lines changed: 21 additions & 4 deletions
@@ -4955,6 +4955,7 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 		HeapTuple	tup;
 		Datum		values[INDEX_MAX_KEYS];
 		bool		isnull[INDEX_MAX_KEYS];
+		SnapshotData SnapshotDirty;
 
 		estate = CreateExecutorState();
 		econtext = GetPerTupleExprContext(estate);
@@ -4977,6 +4978,7 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 		slot = MakeSingleTupleTableSlot(RelationGetDescr(heapRel));
 		econtext->ecxt_scantuple = slot;
 		get_typlenbyval(vardata->atttype, &typLen, &typByVal);
+		InitDirtySnapshot(SnapshotDirty);
 
 		/* set up an IS NOT NULL scan key so that we ignore nulls */
 		ScanKeyEntryInitialize(&scankeys[0],
@@ -4993,8 +4995,23 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 		/* If min is requested ... */
 		if (min)
 		{
-			index_scan = index_beginscan(heapRel, indexRel, SnapshotNow,
-										 1, 0);
+			/*
+			 * In principle, we should scan the index with our current
+			 * active snapshot, which is the best approximation we've got
+			 * to what the query will see when executed.  But that won't
+			 * be exact if a new snap is taken before running the query,
+			 * and it can be very expensive if a lot of uncommitted rows
+			 * exist at the end of the index (because we'll laboriously
+			 * fetch each one and reject it).  What seems like a good
+			 * compromise is to use SnapshotDirty.  That will accept
+			 * uncommitted rows, and thus avoid fetching multiple heap
+			 * tuples in this scenario.  On the other hand, it will reject
+			 * known-dead rows, and thus not give a bogus answer when the
+			 * extreme value has been deleted; that case motivates not
+			 * using SnapshotAny here.
+			 */
+			index_scan = index_beginscan(heapRel, indexRel,
+										 &SnapshotDirty, 1, 0);
 			index_rescan(index_scan, scankeys, 1, NULL, 0);
 
 			/* Fetch first tuple in sortop's direction */
@@ -5025,8 +5042,8 @@ get_actual_variable_range(PlannerInfo *root, VariableStatData *vardata,
 		/* If max is requested, and we didn't find the index is empty */
 		if (max && have_data)
 		{
-			index_scan = index_beginscan(heapRel, indexRel, SnapshotNow,
-										 1, 0);
+			index_scan = index_beginscan(heapRel, indexRel,
+										 &SnapshotDirty, 1, 0);
 			index_rescan(index_scan, scankeys, 1, NULL, 0);
 
 			/* Fetch first tuple in reverse direction */
