Commit abdf81a
Fix sscanf limits in pg_dump
Make sure that the string parsing is limited by the size of the destination buffer. The buffer is bounded by MAXPGPATH, and thus the limit must be inserted via preprocessor expansion and the buffer increased by one to account for the terminator. There is no risk of overflow here, since in this case the buffer scanned is smaller than the destination buffer.

Backpatch all the way down to 9.6.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/B14D3D7B-F98C-4E20-9459-C122C67647FB@yesql.se
Backpatch-through: 9.6
1 parent d36bdc4 commit abdf81a

File tree

1 file changed (+2, -2 lines)

src/bin/pg_dump/pg_backup_directory.c

Lines changed: 2 additions & 2 deletions
@@ -458,11 +458,11 @@ _LoadBlobs(ArchiveHandle *AH)
 	/* Read the blobs TOC file line-by-line, and process each blob */
 	while ((cfgets(ctx->blobsTocFH, line, MAXPGPATH)) != NULL)
 	{
-		char		fname[MAXPGPATH];
+		char		fname[MAXPGPATH + 1];
 		char		path[MAXPGPATH];
 
 		/* Can't overflow because line and fname are the same length. */
-		if (sscanf(line, "%u %s\n", &oid, fname) != 2)
+		if (sscanf(line, "%u %" CppAsString2(MAXPGPATH) "s\n", &oid, fname) != 2)
 			exit_horribly(modulename, "invalid line in large object TOC file \"%s\": \"%s\"\n",
 						  fname, line);
