
Commit 4ac5d33

Fix extreme skew detection in Parallel Hash Join.
After repartitioning the inner side of a hash join that would have exceeded the allowed size, we check if all the tuples from a parent partition moved to one child partition. That is evidence that it contains duplicate keys and later attempts to repartition will also fail, so we should give up trying to limit memory (for lack of a better fallback strategy).

A thinko prevented the check from working correctly in partition 0 (the one that is partially loaded into memory already). After repartitioning, we should check for extreme skew if the *parent* partition's space_exhausted flag was set, not the child partition's. The consequence was repeated futile repartitioning until per-partition data exceeded various limits including "ERROR: invalid DSA memory alloc request size 1811939328", OS allocation failure, or temporary disk space errors. (We could also do something about some of those symptoms, but that's material for separate patches.)

This problem only became likely when PostgreSQL 16 introduced support for Parallel Hash Right/Full Join, allowing NULL keys into the hash table. Repartitioning always leaves NULL in partition 0, no matter how many times you do it, because the hash value is all zero bits. That's unlikely for other hashed values, but they might still have caused wasted extra effort before giving up.

Back-patch to all supported releases.

Reported-by: Craig Milhiser <craig@milhiser.com>
Reviewed-by: Andrei Lepikhov <lepihov@gmail.com>
Discussion: https://postgr.es/m/CA%2BwnhO1OfgXbmXgC4fv_uu%3DOxcDQuHvfoQ4k0DFeB0Qqd-X-rQ%40mail.gmail.com
Parent: e90d108 · Commit: 4ac5d33

File tree

1 file changed: +12 −5 lines


src/backend/executor/nodeHash.c

Lines changed: 12 additions & 5 deletions
```diff
@@ -1244,32 +1244,39 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 		if (BarrierArriveAndWait(&pstate->grow_batches_barrier,
 								 WAIT_EVENT_HASH_GROW_BATCHES_DECIDE))
 		{
+			ParallelHashJoinBatch *old_batches;
 			bool		space_exhausted = false;
 			bool		extreme_skew_detected = false;
 
 			/* Make sure that we have the current dimensions and buckets. */
 			ExecParallelHashEnsureBatchAccessors(hashtable);
 			ExecParallelHashTableSetCurrentBatch(hashtable, 0);
 
+			old_batches = dsa_get_address(hashtable->area, pstate->old_batches);
+
 			/* Are any of the new generation of batches exhausted? */
 			for (int i = 0; i < hashtable->nbatch; ++i)
 			{
-				ParallelHashJoinBatch *batch = hashtable->batches[i].shared;
+				ParallelHashJoinBatch *batch;
+				ParallelHashJoinBatch *old_batch;
+				int			parent;
 
+				batch = hashtable->batches[i].shared;
 				if (batch->space_exhausted ||
 					batch->estimated_size > pstate->space_allowed)
-				{
-					int			parent;
-
 					space_exhausted = true;
 
+				parent = i % pstate->old_nbatch;
+				old_batch = NthParallelHashJoinBatch(old_batches, parent);
+				if (old_batch->space_exhausted ||
+					batch->estimated_size > pstate->space_allowed)
+				{
 					/*
 					 * Did this batch receive ALL of the tuples from its
 					 * parent batch? That would indicate that further
 					 * repartitioning isn't going to help (the hash values
 					 * are probably all the same).
 					 */
-					parent = i % pstate->old_nbatch;
 					if (batch->ntuples == hashtable->batches[parent].shared->old_ntuples)
 						extreme_skew_detected = true;
 				}
```
