472 | 472 | <para>
473 | 473 | The server's checkpointer process automatically performs
474 | 474 | a checkpoint every so often. A checkpoint is begun every <xref
475 |     | - linkend="guc-checkpoint-segments"> log segments, or every <xref
476 |     | - linkend="guc-checkpoint-timeout"> seconds, whichever comes first.
477 |     | - The default settings are 3 segments and 300 seconds (5 minutes), respectively.
    | 475 | + linkend="guc-checkpoint-timeout"> seconds, or if
    | 476 | + <xref linkend="guc-max-wal-size"> is about to be exceeded,
    | 477 | + whichever comes first.
    | 478 | + The default settings are 5 minutes and 128 MB, respectively.
478 | 479 | If no WAL has been written since the previous checkpoint, new checkpoints
479 | 480 | will be skipped even if <varname>checkpoint_timeout</> has passed.
480 | 481 | (If WAL archiving is being used and you want to put a lower limit on how
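
As a rough sketch of how the two triggers described above can be inspected and
adjusted on a server that has max_wal_size (the '1GB' value below is purely
illustrative, not a recommendation):

    SHOW checkpoint_timeout;                 -- 5min by default
    SHOW max_wal_size;                       -- 128MB by default after this change
    -- Both settings can be changed with a configuration reload.
    ALTER SYSTEM SET max_wal_size = '1GB';
    SELECT pg_reload_conf();
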
486 | 487 | </para>
487 | 488 |
488 | 489 | <para>
489 |     | - Reducing <varname>checkpoint_segments</varname> and/or
490 |     | - <varname>checkpoint_timeout</varname> causes checkpoints to occur
    | 490 | + Reducing <varname>checkpoint_timeout</varname> and/or
    | 491 | + <varname>max_wal_size</varname> causes checkpoints to occur
491 | 492 | more often. This allows faster after-crash recovery, since less work
492 | 493 | will need to be redone. However, one must balance this against the
493 | 494 | increased cost of flushing dirty data pages more often. If
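
To judge how often checkpoints actually occur, and whether they are driven by
the timeout or by WAL volume, the standard pg_stat_bgwriter view can be
queried; a minimal sketch (column names as in the releases this patch targets):

    -- Time-driven vs. size-driven ("requested") checkpoints since the last
    -- statistics reset; a large checkpoints_req share suggests that
    -- max_wal_size is the effective trigger.
    SELECT checkpoints_timed, checkpoints_req, stats_reset
    FROM pg_stat_bgwriter;
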
510 | 511 | parameter. If checkpoints happen closer together than
511 | 512 | <varname>checkpoint_warning</> seconds,
512 | 513 | a message will be output to the server log recommending increasing
513 |     | - <varname>checkpoint_segments</varname>. Occasional appearance of such
    | 514 | + <varname>max_wal_size</varname>. Occasional appearance of such
514 | 515 | a message is not cause for alarm, but if it appears often then the
515 | 516 | checkpoint control parameters should be increased. Bulk operations such
516 | 517 | as large <command>COPY</> transfers might cause a number of such warnings
517 |     | - to appear if you have not set <varname>checkpoint_segments</> high
    | 518 | + to appear if you have not set <varname>max_wal_size</> high
518 | 519 | enough.
519 | 520 | </para>
520 | 521 |
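
A quick way to compare the warning threshold with the current size limit,
sketched with the standard pg_settings view:

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('checkpoint_warning', 'max_wal_size');
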
525 | 526 | <xref linkend="guc-checkpoint-completion-target">, which is
526 | 527 | given as a fraction of the checkpoint interval.
527 | 528 | The I/O rate is adjusted so that the checkpoint finishes when the
528 |     | - given fraction of <varname>checkpoint_segments</varname> WAL segments
529 |     | - have been consumed since checkpoint start, or the given fraction of
530 |     | - <varname>checkpoint_timeout</varname> seconds have elapsed,
531 |     | - whichever is sooner. With the default value of 0.5,
    | 529 | + given fraction of
    | 530 | + <varname>checkpoint_timeout</varname> seconds have elapsed, or before
    | 531 | + <varname>max_wal_size</varname> is exceeded, whichever is sooner.
    | 532 | + With the default value of 0.5,
532 | 533 | <productname>PostgreSQL</> can be expected to complete each checkpoint
533 | 534 | in about half the time before the next checkpoint starts. On a system
534 | 535 | that's very close to maximum I/O throughput during normal operation,
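
As a worked example of the spreading behaviour, assuming checkpoints are being
triggered by the timeout rather than by WAL volume:

    -- Rough target length of a checkpoint's write-out phase:
    -- checkpoint_timeout * checkpoint_completion_target,
    -- i.e. 300 s * 0.5 = 150 s with the defaults.
    SELECT (SELECT setting::int   FROM pg_settings WHERE name = 'checkpoint_timeout')
         * (SELECT setting::float FROM pg_settings WHERE name = 'checkpoint_completion_target')
           AS target_write_seconds;
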
545 | 546 | </para>
546 | 547 |
547 | 548 | <para>
548 |     | - There will always be at least one WAL segment file, and will normally
549 |     | - not be more than (2 + <varname>checkpoint_completion_target</varname>) * <varname>checkpoint_segments</varname> + 1
550 |     | - or <varname>checkpoint_segments</> + <xref linkend="guc-wal-keep-segments"> + 1
551 |     | - files. Each segment file is normally 16 MB (though this size can be
552 |     | - altered when building the server). You can use this to estimate space
553 |     | - requirements for <acronym>WAL</acronym>.
554 |     | - Ordinarily, when old log segment files are no longer needed, they
555 |     | - are recycled (that is, renamed to become future segments in the numbered
556 |     | - sequence). If, due to a short-term peak of log output rate, there
557 |     | - are more than 3 * <varname>checkpoint_segments</varname> + 1
558 |     | - segment files, the unneeded segment files will be deleted instead
559 |     | - of recycled until the system gets back under this limit.
    | 549 | + The number of WAL segment files in the <filename>pg_xlog</> directory depends on
    | 550 | + <varname>min_wal_size</>, <varname>max_wal_size</> and
    | 551 | + the amount of WAL generated in previous checkpoint cycles. When old log
    | 552 | + segment files are no longer needed, they are removed or recycled (that is,
    | 553 | + renamed to become future segments in the numbered sequence). If, due to a
    | 554 | + short-term peak of log output rate, <varname>max_wal_size</> is
    | 555 | + exceeded, the unneeded segment files will be removed until the system
    | 556 | + gets back under this limit. Below that limit, the system recycles enough
    | 557 | + WAL files to cover the estimated need until the next checkpoint, and
    | 558 | + removes the rest. The estimate is based on a moving average of the number
    | 559 | + of WAL files used in previous checkpoint cycles. The moving average
    | 560 | + is increased immediately if the actual usage exceeds the estimate, so it
    | 561 | + accommodates peak usage rather than average usage to some extent.
    | 562 | + <varname>min_wal_size</> puts a minimum on the number of WAL files
    | 563 | + recycled for future use; that much WAL is always recycled for future use,
    | 564 | + even if the system is idle and the WAL usage estimate suggests that little
    | 565 | + WAL is needed.
    | 566 | + </para>
    | 567 | +
    | 568 | + <para>
    | 569 | + Independently of <varname>max_wal_size</varname>,
    | 570 | + <xref linkend="guc-wal-keep-segments"> + 1 most recent WAL files are
    | 571 | + kept at all times. Also, if WAL archiving is used, old segments cannot be
    | 572 | + removed or recycled until they are archived. If WAL archiving cannot keep up
    | 573 | + with the pace that WAL is generated, or if <varname>archive_command</varname>
    | 574 | + fails repeatedly, old WAL files will accumulate in <filename>pg_xlog</>
    | 575 | + until the situation is resolved. A slow or failed standby server that
    | 576 | + uses a replication slot will have the same effect (see
    | 577 | + <xref linkend="streaming-replication-slots">).
560 | 578 | </para>
561 | 579 |
562 | 580 | <para>
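
To relate these rules to what is actually on disk, the sketch below counts
segment files in pg_xlog (16 MB each unless the server was built with a
different segment size) and checks the two causes of WAL accumulating beyond
max_wal_size that are mentioned above: archiving trouble and replication
slots. It assumes a release where pg_stat_archiver and pg_replication_slots
exist, and superuser rights for pg_ls_dir:

    -- Approximate current WAL footprint.
    SELECT count(*)      AS segment_files,
           count(*) * 16 AS approx_mb_used
    FROM pg_ls_dir('pg_xlog') AS t(f)
    WHERE f ~ '^[0-9A-F]{24}$';

    -- Repeated archive_command failures keep old segments around.
    SELECT failed_count, last_failed_wal, last_failed_time
    FROM pg_stat_archiver;

    -- A slow or failed standby using a replication slot has the same effect.
    SELECT slot_name, active, restart_lsn
    FROM pg_replication_slots;
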
571 | 589 | master because restartpoints can only be performed at checkpoint records.
572 | 590 | A restartpoint is triggered when a checkpoint record is reached if at
573 | 591 | least <varname>checkpoint_timeout</> seconds have passed since the last
574 |     | - restartpoint. In standby mode, a restartpoint is also triggered if at
575 |     | - least <varname>checkpoint_segments</> log segments have been replayed
576 |     | - since the last restartpoint.
    | 592 | + restartpoint, or if WAL size is about to exceed
    | 593 | + <varname>max_wal_size</>.
577 | 594 | </para>
578 | 595 |
579 | 596 | <para>
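
On a standby, the same two parameters pace restartpoints; a sketch for
checking them there, along with the replay position (using the pre-10
function names that match the pg_xlog naming above):

    SELECT name, setting, unit
    FROM pg_settings
    WHERE name IN ('checkpoint_timeout', 'max_wal_size');

    SELECT pg_is_in_recovery(), pg_last_xlog_replay_location();
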