Commit ff4cbc1

Fix possible "invalid memory alloc request size" failure in nodeHash.c.
Limit the size of the hashtable pointer array to not more than MaxAllocSize. We've seen reports of failures due to this in HEAD/9.5, and it seems possible in older branches as well. The change in NTUP_PER_BUCKET in 9.5 may have made the problem more likely, but surely it didn't introduce it.

Tomas Vondra, slightly modified by me
1 parent 9955798 commit ff4cbc1

1 file changed: +4, -2 lines

src/backend/executor/nodeHash.c

@@ -460,10 +460,12 @@ ExecChooseHashTableSize(double ntuples, int tupwidth, bool useskew,
     * Set nbuckets to achieve an average bucket load of NTUP_PER_BUCKET when
     * memory is filled.  Set nbatch to the smallest power of 2 that appears
     * sufficient.  The Min() steps limit the results so that the pointer
-    * arrays we'll try to allocate do not exceed work_mem.
+    * arrays we'll try to allocate do not exceed work_mem nor MaxAllocSize.
     */
-   max_pointers = (work_mem * 1024L) / sizeof(void *);
+   max_pointers = (work_mem * 1024L) / sizeof(HashJoinTuple);
+   max_pointers = Min(max_pointers, MaxAllocSize / sizeof(HashJoinTuple));
    /* also ensure we avoid integer overflow in nbatch and nbuckets */
+   /* (this step is redundant given the current value of MaxAllocSize) */
    max_pointers = Min(max_pointers, INT_MAX / 2);
 
    if (inner_rel_bytes > hash_table_bytes)
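
To see the arithmetic in isolation, here is a minimal standalone C sketch of the clamp this patch introduces. It is not PostgreSQL source: MAX_ALLOC_SIZE, work_mem_kb, and the use of a bare void * in place of HashJoinTuple are stand-ins chosen for illustration (in the backend, MaxAllocSize is 0x3fffffff bytes and work_mem is expressed in kilobytes).

#include <stdio.h>

#define MAX_ALLOC_SIZE 0x3fffffffULL        /* stand-in for MaxAllocSize (~1 GB - 1) */
#define Min(x, y) ((x) < (y) ? (x) : (y))   /* same idiom as the backend's Min() */

int
main(void)
{
    /* Assumed example setting: work_mem = 64 GB, expressed in kilobytes. */
    unsigned long long work_mem_kb = 64ULL * 1024 * 1024;
    unsigned long long max_pointers;

    /* How many bucket-header pointers would fit within work_mem? */
    max_pointers = (work_mem_kb * 1024) / sizeof(void *);

    /*
     * Without this cap, allocating max_pointers * sizeof(void *) bytes for
     * the bucket array could exceed the allocator's per-request limit,
     * which is what produced "invalid memory alloc request size".
     */
    max_pointers = Min(max_pointers, MAX_ALLOC_SIZE / sizeof(void *));

    printf("bucket array capped at %llu pointers (%llu bytes)\n",
           max_pointers, max_pointers * (unsigned long long) sizeof(void *));
    return 0;
}

With the cap in place, a very large work_mem can no longer push the requested bucket array past what a single allocation request will accept; the existing Min() against INT_MAX / 2 stays as overflow protection, which, as the new comment in the hunk notes, is redundant given the current value of MaxAllocSize.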
