Commit 586250c

Fix hash_search to avoid corruption of the hash table on out-of-memory.
An out-of-memory error during expand_table() on a palloc-based hash table would leave a partially-initialized entry in the table. This would not be harmful for transient hash tables, since they'd get thrown away anyway at transaction abort. But for long-lived hash tables, such as the relcache hash, this would effectively corrupt the table, leading to crash or other misbehavior later.

To fix, rearrange the order of operations so that table enlargement is attempted before we insert a new entry, rather than after adding it to the hash table.

Problem discovered by Hitoshi Harada, though this is a bit different from his proposed patch.
1 parent: c47b408
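
To make the reordering concrete, here is a minimal sketch in C. It is a hypothetical toy table, not dynahash: the names (table, entry, grow, insert) are invented, and it reports allocation failure by returning NULL rather than throwing an error. Its insert path follows the same rule the patch enforces: try to enlarge the table first, then allocate and fully initialize the entry, and only then link it into a bucket.

/*
 * Hypothetical illustration, not PostgreSQL code: a toy chained hash
 * table whose insert path follows the ordering the commit adopts.
 * Anything that can fail to allocate runs before the table is touched,
 * so a failure never leaves a half-initialized entry linked in.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct entry
{
    struct entry *next;
    long    key;
    long    value;              /* caller fills this in after insert */
} entry;

typedef struct table
{
    entry **buckets;
    size_t  nbuckets;
    size_t  nentries;
} table;

/* Double the bucket array and rehash; on allocation failure return 0 and
 * leave the table untouched (we just run at a higher fill factor). */
static int
grow(table *t)
{
    size_t  newn = t->nbuckets * 2;
    entry **newb = calloc(newn, sizeof(entry *));

    if (newb == NULL)
        return 0;
    for (size_t i = 0; i < t->nbuckets; i++)
    {
        entry  *e = t->buckets[i];

        while (e != NULL)
        {
            entry  *next = e->next;
            size_t  b = (size_t) e->key % newn;

            e->next = newb[b];
            newb[b] = e;
            e = next;
        }
    }
    free(t->buckets);
    t->buckets = newb;
    t->nbuckets = newn;
    return 1;
}

/* Insert key, returning the new entry (caller fills ->value) or NULL on
 * out-of-memory.  Order matters: grow first, allocate second, link last. */
static entry *
insert(table *t, long key)
{
    /* 1. Enlarge while the table is still fully consistent. */
    if (t->nentries / t->nbuckets >= 1)
        (void) grow(t);         /* failure here is not fatal */

    /* 2. Allocate and initialize the entry off to the side. */
    entry  *e = malloc(sizeof(entry));

    if (e == NULL)
        return NULL;            /* table not modified, still valid */
    e->key = key;
    e->value = 0;

    /* 3. Only now link it in; nothing past this point can fail. */
    size_t  b = (size_t) key % t->nbuckets;

    e->next = t->buckets[b];
    t->buckets[b] = e;
    t->nentries++;
    return e;
}

int
main(void)
{
    table   t = {calloc(4, sizeof(entry *)), 4, 0};

    for (long k = 0; t.buckets != NULL && k < 10; k++)
    {
        entry  *e = insert(&t, k);

        if (e != NULL)
            e->value = k * k;   /* "caller fills the data field" */
    }
    printf("%zu entries in %zu buckets\n", t.nentries, t.nbuckets);
    return 0;
}

The difference in error reporting is the crux of the bug: dynahash's palloc-based allocator throws an error on out-of-memory instead of returning NULL, so the expansion attempt had to move ahead of any modification of the table. The toy version's "return 0 and keep going" corresponds to the NOTE in the patch about simply running at a higher fill factor.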


src/backend/utils/hash/dynahash.c

Lines changed: 25 additions & 16 deletions
@@ -817,6 +817,27 @@ hash_search_with_hash_value(HTAB *hashp,
     hctl->accesses++;
 #endif
 
+    /*
+     * If inserting, check if it is time to split a bucket.
+     *
+     * NOTE: failure to expand table is not a fatal error, it just means we
+     * have to run at higher fill factor than we wanted. However, if we're
+     * using the palloc allocator then it will throw error anyway on
+     * out-of-memory, so we must do this before modifying the table.
+     */
+    if (action == HASH_ENTER || action == HASH_ENTER_NULL)
+    {
+        /*
+         * Can't split if running in partitioned mode, nor if frozen, nor if
+         * table is the subject of any active hash_seq_search scans. Strange
+         * order of these tests is to try to check cheaper conditions first.
+         */
+        if (!IS_PARTITIONED(hctl) && !hashp->frozen &&
+            hctl->nentries / (long) (hctl->max_bucket + 1) >= hctl->ffactor &&
+            !has_seq_scans(hashp))
+            (void) expand_table(hashp);
+    }
+
     /*
      * Do the initial lookup
      */
@@ -937,24 +958,12 @@ hash_search_with_hash_value(HTAB *hashp,
             currBucket->hashvalue = hashvalue;
             hashp->keycopy(ELEMENTKEY(currBucket), keyPtr, keysize);
 
-            /* caller is expected to fill the data field on return */
-
             /*
-             * Check if it is time to split a bucket. Can't split if running
-             * in partitioned mode, nor if table is the subject of any active
-             * hash_seq_search scans. Strange order of these tests is to try
-             * to check cheaper conditions first.
+             * Caller is expected to fill the data field on return. DO NOT
+             * insert any code that could possibly throw error here, as doing
+             * so would leave the table entry incomplete and hence corrupt the
+             * caller's data structure.
              */
-            if (!IS_PARTITIONED(hctl) &&
-                hctl->nentries / (long) (hctl->max_bucket + 1) >= hctl->ffactor &&
-                !has_seq_scans(hashp))
-            {
-                /*
-                 * NOTE: failure to expand table is not a fatal error, it just
-                 * means we have to run at higher fill factor than we wanted.
-                 */
-                expand_table(hashp);
-            }
 
             return (void *) ELEMENTKEY(currBucket);
     }
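
The rewritten comment in the second hunk also sharpens the contract for callers: once HASH_ENTER hands back a newly created entry, the caller must fill in the payload without running any code in between that could throw an error. A sketch of that caller-side pattern, using the real hash_search()/HASH_ENTER API but invented names (MyCacheEntry, lookup_or_create), might look like this; it would be compiled as part of a backend module, not standalone:

/*
 * Hypothetical caller of the dynahash API; MyCacheEntry and
 * lookup_or_create are invented names, hash_search() and HASH_ENTER are
 * the real interface.
 */
#include "postgres.h"
#include "utils/hsearch.h"

typedef struct MyCacheEntry
{
    Oid     relid;              /* hash key; must be first in the struct */
    int     refcount;           /* payload the caller is expected to fill */
} MyCacheEntry;

static MyCacheEntry *
lookup_or_create(HTAB *cache, Oid relid)
{
    MyCacheEntry *entry;
    bool    found;

    /*
     * With this commit, any bucket split (and hence any palloc failure)
     * happens inside hash_search() before the new entry is linked in.
     */
    entry = (MyCacheEntry *) hash_search(cache, &relid, HASH_ENTER, &found);

    if (!found)
    {
        /*
         * Fill the payload immediately.  Per the new comment in dynahash.c,
         * nothing that could elog(ERROR) may run between hash_search() and
         * this initialization, or an error would leave an incomplete entry
         * in a long-lived table.
         */
        entry->refcount = 0;
    }
    return entry;
}

The key field leads the struct because dynahash copies keysize bytes of the key into the start of the returned entry (the keycopy() call visible in the hunk above); everything after the key is the payload the caller must initialize itself.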
