
Commit 2a67b5a

Fix oversized memory allocation in Parallel Hash Join
When calculating the maximum number of buckets, take into account
that the result is later rounded up to the next power of 2.

Reported-by: Karen Talarico
Bug: #16925
Discussion: https://postgr.es/m/16925-ec96d83529d0d629%40postgresql.org
Author: Thomas Munro, Andrei Lepikhov, Alexander Korotkov
Reviewed-by: Alena Rybakina
Backpatch-through: 12
1 parent 5ef34a8 · commit 2a67b5a

File tree

1 file changed: +10 −2 lines changed


src/backend/executor/nodeHash.c

Lines changed: 10 additions & 2 deletions
@@ -1155,6 +1155,7 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 				double		dtuples;
 				double		dbuckets;
 				int			new_nbuckets;
+				uint32		max_buckets;
 
 				/*
 				 * We probably also need a smaller bucket array. How many
@@ -1167,9 +1168,16 @@ ExecParallelHashIncreaseNumBatches(HashJoinTable hashtable)
 				 * array.
 				 */
 				dtuples = (old_batch0->ntuples * 2.0) / new_nbatch;
+				/*
+				 * We need to calculate the maximum number of buckets to
+				 * stay within the MaxAllocSize boundary.  Round the
+				 * maximum number to the previous power of 2 given that
+				 * later we round the number to the next power of 2.
+				 */
+				max_buckets = pg_prevpower2_32((uint32)
+											   (MaxAllocSize / sizeof(dsa_pointer_atomic)));
 				dbuckets = ceil(dtuples / NTUP_PER_BUCKET);
-				dbuckets = Min(dbuckets,
-							   MaxAllocSize / sizeof(dsa_pointer_atomic));
+				dbuckets = Min(dbuckets, max_buckets);
 				new_nbuckets = (int) dbuckets;
 				new_nbuckets = Max(new_nbuckets, 1024);
 				new_nbuckets = pg_nextpower2_32(new_nbuckets);
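
To see why the cap must be rounded down first, here is a minimal standalone sketch of the arithmetic, not PostgreSQL source: it assumes MaxAllocSize = 0x3fffffff (its value in PostgreSQL) and an 8-byte dsa_pointer_atomic, and uses hypothetical helpers next_pow2/prev_pow2 as stand-ins for pg_nextpower2_32/pg_prevpower2_32. The old cap, MaxAllocSize / sizeof(dsa_pointer_atomic) = 134217727, is not itself a power of 2, so a clamped estimate could still be rounded up past it.

/*
 * Standalone sketch of the overshoot fixed by this commit.  Assumptions:
 * MaxAllocSize = 0x3fffffff as in PostgreSQL, sizeof(dsa_pointer_atomic)
 * = 8; next_pow2/prev_pow2 are hypothetical stand-ins for
 * pg_nextpower2_32 and pg_prevpower2_32.
 */
#include <stdint.h>
#include <stdio.h>

#define MAX_ALLOC_SIZE	((size_t) 0x3fffffff)	/* PostgreSQL's MaxAllocSize */
#define PTR_SIZE		8						/* assumed sizeof(dsa_pointer_atomic) */

/* Round up to the next power of 2 (stand-in for pg_nextpower2_32). */
static uint32_t
next_pow2(uint32_t v)
{
	v--;
	v |= v >> 1;
	v |= v >> 2;
	v |= v >> 4;
	v |= v >> 8;
	v |= v >> 16;
	return v + 1;
}

/* Round down to the previous power of 2 (stand-in for pg_prevpower2_32). */
static uint32_t
prev_pow2(uint32_t v)
{
	return next_pow2(v / 2 + 1);
}

int
main(void)
{
	/* Old cap: 134217727 (0x07FFFFFF), not a power of 2. */
	uint32_t	old_cap = (uint32_t) (MAX_ALLOC_SIZE / PTR_SIZE);
	/* New cap: rounded down to 67108864 (0x04000000) before use. */
	uint32_t	new_cap = prev_pow2(old_cap);
	/* A tuple-based bucket estimate large enough to hit the cap. */
	uint32_t	dbuckets = old_cap;

	uint32_t	old_nbuckets = next_pow2(dbuckets);	/* rounds past the cap */
	uint32_t	new_nbuckets = next_pow2(dbuckets < new_cap ? dbuckets : new_cap);

	printf("old: %u buckets -> %zu bytes (MaxAllocSize is %zu)\n",
		   old_nbuckets, (size_t) old_nbuckets * PTR_SIZE, MAX_ALLOC_SIZE);
	printf("new: %u buckets -> %zu bytes\n",
		   new_nbuckets, (size_t) new_nbuckets * PTR_SIZE);
	return 0;
}

Under these assumptions the old path requests 134217728 buckets, i.e. 1073741824 bytes, one byte past MaxAllocSize, while the new path settles at 67108864 buckets (536870912 bytes), safely in bounds.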
