Commit f66fcc5

Committed by Amit Kapila
Fix an uninitialized access in hash_xlog_squeeze_page().
Commit 861f86b changed hash_xlog_squeeze_page() to start reading the write
buffer conditionally but forgot to initialize it, leading to an
uninitialized access.

Reported-by: Alexander Lakhin
Author: Hayato Kuroda
Reviewed-by: Alexander Lakhin, Amit Kapila
Discussion: http://postgr.es/m/62ed1a9f-746a-8e86-904b-51b9b806a1d9@gmail.com
1 parent aa11a9c commit f66fcc5

3 files changed: 32 additions, 1 deletion

src/backend/access/hash/hash_xlog.c

Lines changed: 1 addition & 1 deletion
@@ -632,7 +632,7 @@ hash_xlog_squeeze_page(XLogReaderState *record)
 	XLogRecPtr	lsn = record->EndRecPtr;
 	xl_hash_squeeze_page *xldata = (xl_hash_squeeze_page *) XLogRecGetData(record);
 	Buffer		bucketbuf = InvalidBuffer;
-	Buffer		writebuf;
+	Buffer		writebuf = InvalidBuffer;
 	Buffer		ovflbuf;
 	Buffer		prevbuf = InvalidBuffer;
 	Buffer		mapbuf;
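
The fix is the one-line initialization above. As a minimal sketch of the
failure mode, assuming simplified stand-ins for the replay machinery
(replay_squeeze, need_write_buffer, and the constant 42 are hypothetical;
the real code obtains the buffer via XLogReadBufferForRedo() and is far
more involved):

	/*
	 * Sketch of the bug pattern, not the real replay code: a local Buffer
	 * is assigned only on one path but tested unconditionally afterwards.
	 */
	#include <stdbool.h>
	#include <stdio.h>

	typedef int Buffer;			/* PostgreSQL's Buffer is an int */
	#define InvalidBuffer 0
	#define BufferIsValid(buf) ((buf) != InvalidBuffer)

	static void
	replay_squeeze(bool need_write_buffer)
	{
		Buffer		writebuf = InvalidBuffer;	/* the fix: without this
												 * initializer, the test below
												 * reads an uninitialized value
												 * when need_write_buffer is
												 * false */

		if (need_write_buffer)
			writebuf = 42;		/* stand-in for XLogReadBufferForRedo() */

		/* cleanup code inspects the variable on every path */
		if (BufferIsValid(writebuf))
			printf("release write buffer %d\n", writebuf);
	}

	int
	main(void)
	{
		replay_squeeze(true);
		replay_squeeze(false);	/* safe only because of the initializer */
		return 0;
	}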

src/test/regress/expected/hash_index.out

Lines changed: 14 additions & 0 deletions
@@ -298,6 +298,20 @@ ROLLBACK;
 INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 500) as i;
 CHECKPOINT;
 VACUUM hash_cleanup_heap;
+TRUNCATE hash_cleanup_heap;
+-- Insert tuples to both the primary bucket page and overflow pages.
+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 500) as i;
+-- Fill overflow pages by "dead" tuples.
+BEGIN;
+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1500) as i;
+ROLLBACK;
+-- And insert some tuples again. During squeeze operation, these will be moved
+-- to other overflow pages and also allow overflow pages filled by dead tuples
+-- to be freed. Note the main purpose of this test is to test the case where
+-- we don't need to move any tuple from the overflow page being freed.
+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 50) as i;
+CHECKPOINT;
+VACUUM hash_cleanup_heap;
 -- Clean up.
 DROP TABLE hash_cleanup_heap;
 -- Index on temp table.

src/test/regress/sql/hash_index.sql

Lines changed: 17 additions & 0 deletions
@@ -284,6 +284,23 @@ INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 500) as i;
 CHECKPOINT;
 VACUUM hash_cleanup_heap;
 
+TRUNCATE hash_cleanup_heap;
+
+-- Insert tuples to both the primary bucket page and overflow pages.
+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 500) as i;
+-- Fill overflow pages by "dead" tuples.
+BEGIN;
+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 1500) as i;
+ROLLBACK;
+-- And insert some tuples again. During squeeze operation, these will be moved
+-- to other overflow pages and also allow overflow pages filled by dead tuples
+-- to be freed. Note the main purpose of this test is to test the case where
+-- we don't need to move any tuple from the overflow page being freed.
+INSERT INTO hash_cleanup_heap SELECT 1 FROM generate_series(1, 50) as i;
+
+CHECKPOINT;
+VACUUM hash_cleanup_heap;
+
 -- Clean up.
 DROP TABLE hash_cleanup_heap;
