Commit 5c97243
Redesign handling of SIGTERM/control-C in parallel pg_dump/pg_restore.
Formerly, Unix builds of pg_dump/pg_restore would trap SIGINT and similar signals and set a flag that was tested in various data-transfer loops. This was prone to errors of omission (cf commit 3c8aa66); and even if the client-side response was prompt, we did nothing that would cause long-running SQL commands (e.g. CREATE INDEX) to terminate early. Also, the master process would effectively do nothing at all upon receipt of SIGINT; the only reason it seemed to work was that in typical scenarios the signal would also be delivered to the child processes. We should support termination when a signal is delivered only to the master process, though.

Windows builds had no console interrupt handler, so they would just fall over immediately at control-C, again leaving long-running SQL commands to finish unmolested.

To fix, remove the flag-checking approach altogether. Instead, allow the Unix signal handler to send a cancel request directly and then exit(1). In the master process, also have it forward the signal to the children. On Windows, add a console interrupt handler that behaves approximately the same. The main difference is that a single execution of the Windows handler can send all the cancel requests since all the info is available in one process, whereas on Unix each process sends a cancel only for its own database connection.

In passing, fix an old problem that DisconnectDatabase tends to send a cancel request before exiting a parallel worker, even if nothing went wrong. This is at least a waste of cycles, and could lead to unexpected log messages, or maybe even data loss if it happened in pg_restore (though in the current code the problem seems to affect only pg_dump). The cause was that after a COPY step, pg_dump was leaving libpq in PGASYNC_BUSY state, causing PQtransactionStatus() to report PQTRANS_ACTIVE. That's normally harmless because the next PQexec() will silently clear the PGASYNC_BUSY state; but in a parallel worker we might exit without any additional SQL commands after a COPY step. So add an extra PQgetResult() call after a COPY to allow libpq to return to PGASYNC_IDLE state.

This is a bug fix, IMO, so back-patch to 9.3 where parallel dump/restore were introduced.

Thanks to Kyotaro Horiguchi for Windows testing and code suggestions.

Original-Patch: <7005.1464657274@sss.pgh.pa.us>
Discussion: <20160602.174941.256342236.horiguchi.kyotaro@lab.ntt.co.jp>
Parent: 0a1485f
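The two mechanisms described in the message can be illustrated with small C sketches. These are not the committed parallel.c / pg_dump.c code; the handler and helper names (handle_sigterm, arm_cancel_handler, worker_cancel, finish_copy_out) and the use of plain signal() are assumptions for illustration. PQgetCancel() and PQcancel() are the real libpq calls involved, and PQcancel() is documented as safe to invoke from a signal handler when its error buffer is a local variable.

/*
 * Sketch of the Unix-side cancel-on-signal idea: the handler sends a cancel
 * request for this process's own connection and then exits.  In the real
 * code the master's handler additionally forwards the signal to its workers.
 */
#include <signal.h>
#include <stdlib.h>

#include "libpq-fe.h"

static PGcancel *volatile worker_cancel = NULL;	/* hypothetical global */

static void
handle_sigterm(int signo)
{
	char		errbuf[256];

	/* Ask the backend to abandon whatever SQL command is in progress */
	if (worker_cancel != NULL)
		(void) PQcancel(worker_cancel, errbuf, sizeof(errbuf));

	/* Per the commit message: send the cancel, then exit(1) */
	exit(1);
}

static void
arm_cancel_handler(PGconn *conn)
{
	/* Capture the cancel info up front so the handler needs no libpq state */
	worker_cancel = PQgetCancel(conn);

	signal(SIGINT, handle_sigterm);
	signal(SIGTERM, handle_sigterm);
	signal(SIGQUIT, handle_sigterm);
}

The "extra PQgetResult()" part of the fix amounts to draining one more result after the COPY data loop, so libpq leaves PGASYNC_BUSY before the worker disconnects. Again a hedged sketch under the assumptions above, not the exact pg_dump code:

#include <stdio.h>

#include "libpq-fe.h"

static void
finish_copy_out(PGconn *conn)		/* hypothetical helper */
{
	PGresult   *res;

	/* The first PQgetResult() returns the COPY command's completion status */
	res = PQgetResult(conn);
	if (PQresultStatus(res) != PGRES_COMMAND_OK)
		fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
	PQclear(res);

	/*
	 * Extra call: expected to return NULL, letting libpq return to
	 * PGASYNC_IDLE so that PQtransactionStatus() stops reporting
	 * PQTRANS_ACTIVE and DisconnectDatabase() won't send a needless cancel.
	 */
	res = PQgetResult(conn);
	if (res != NULL)
		PQclear(res);			/* not expected, but don't leak it */
}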

8 files changed, +504 -168 lines changed

src/bin/pg_dump/compress_io.c

Lines changed: 0 additions & 9 deletions
@@ -183,9 +183,6 @@ size_t
 WriteDataToArchive(ArchiveHandle *AH, CompressorState *cs,
                    const void *data, size_t dLen)
 {
-    /* Are we aborting? */
-    checkAborting(AH);
-
     switch (cs->comprAlg)
     {
         case COMPR_ALG_LIBZ:
@@ -355,9 +352,6 @@ ReadDataFromArchiveZlib(ArchiveHandle *AH, ReadFunc readF)
     /* no minimal chunk size for zlib */
     while ((cnt = readF(AH, &buf, &buflen)))
     {
-        /* Are we aborting? */
-        checkAborting(AH);
-
         zp->next_in = (void *) buf;
         zp->avail_in = cnt;
 
@@ -418,9 +412,6 @@ ReadDataFromArchiveNone(ArchiveHandle *AH, ReadFunc readF)
 
     while ((cnt = readF(AH, &buf, &buflen)))
     {
-        /* Are we aborting? */
-        checkAborting(AH);
-
         ahwrite(buf, 1, cnt, AH);
     }
 