Commit 533c657

Merge branch 'PGPROEE9_6_MULTIMASTER' of https://gitlab.postgrespro.ru/pgpro-dev/postgrespro into PGPROEE9_6_MULTIMASTER

2 parents f0ca8e8 + 2eee5ee

4 files changed: 75 additions, 48 deletions

contrib/mmts/README.md

Lines changed: 10 additions & 10 deletions

````diff
@@ -1,16 +1,13 @@
 # `PostgreSQL multimaster`
 
-`multimaster` is a PostgreSQL extension with a set of patches that turns PostgreSQL into a synchronous shared-nothing cluster to provide Online Transaction Processing (OLTP) scalability and high availability with automatic disaster recovery.
-
-
-## Features
+`multimaster` is a PostgreSQL extension with a set of patches that turns PostgreSQL into a synchronous shared-nothing cluster to provide Online Transaction Processing (OLTP) scalability and high availability with automatic disaster recovery. As compared to a standard PostgreSQL master-slave cluster, a cluster configured with the `multimaster` extension offers the following benefits:
 
 * Cluster-wide transaction isolation
-* Synchronous logical replication
-* DDL Replication
-* Fault tolerance
-* Automatic node recovery
-
+* Synchronous logical replication
+* DDL replication
+* Working with temporary tables on each cluster node
+* Fault tolerance and automatic node recovery
+* PostgreSQL online upgrades
 
 ## Overview
 
@@ -49,7 +46,10 @@ For details on the `multimaster` internals, see the [Architecture](/contrib/mmts
 
 * `multimaster` can only replicate one database per cluster.
 
-* The replicated tables must have primary keys. Otherwise, `multimaster` cannot perform logical replication.
+* The replicated tables must have primary keys or replica identity. Otherwise, `multimaster` cannot perform logical replication. Unlogged tables are not replicated, as in the standard PostgreSQL.
+
+* Sequence generation.
+To avoid conflicts between unique identifiers on different nodes, `multimaster` modifies the default behavior of sequence generators. For each node, ID generation is started with the node number and is incremented by the number of nodes in each iteration. For example, in a three-node cluster, 1, 4, and 7 IDs are allocated to the objects written onto the first node, while 2, 5, and 8 IDs are reserved for the second node.
 
 * DDL replication.
 While `multimaster` replicates data on the logical level, DDL is replicated on the statement level, which causes distributed commits of the same statement on different nodes. As a result, complex DDL scenarios, such as stored procedures and temporary tables, may work differently as compared to the standard PostgreSQL.
````
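The sequence-generation scheme added above (each node starts at its node number and steps by the cluster size) can be sketched as follows; the helper name is illustrative and not part of `multimaster`, which applies this rule inside the sequence generators themselves:

```python
def node_ids(node_id, num_nodes, count):
    """First `count` sequence values allocated on node `node_id` in a
    cluster of `num_nodes` nodes: start at the node number, then step
    by the number of nodes on each iteration."""
    return [node_id + i * num_nodes for i in range(count)]

# Three-node cluster, per the README example:
print(node_ids(1, 3, 3))  # [1, 4, 7]  -- objects written on node 1
print(node_ids(2, 3, 3))  # [2, 5, 8]  -- reserved for node 2
```

The interleaved ranges never collide, so nodes can hand out IDs without coordinating.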

contrib/mmts/doc/administration.md

Lines changed: 38 additions & 38 deletions

````diff
@@ -72,60 +72,57 @@ To use `multimaster`, you need to install Postgres Pro Enterprise on all nodes o
 
 ## Setting up a Multi-Master Cluster
 
+You must have superuser rights to set up a multi-master cluster.
+
 After installing Postgres Pro Enterprise on all nodes, you need to configure the cluster with `multimaster`. Suppose you are setting up a cluster of three nodes, with ```node1```, ```node2```, and ```node3``` domain names.
 To configure your cluster with `multimaster`, complete these steps on each cluster node:
 
 1. Set up the database to be replicated with `multimaster`:
-
-* If you are starting from scratch, initialize a cluster, create an empty database `mydb` and a new user `myuser`, as usual:
+* If you are starting from scratch, initialize a cluster, create an empty database `mydb` and a new user `myuser`, as usual:
 ```
 initdb -D ./datadir
 pg_ctl -D ./datadir -l ./pg.log start
 createdb myuser -h localhost
 createdb mydb -O myuser -h localhost
 pg_ctl -D ./datadir -l ./pg.log stop
 ```
-
-* If you already have a database `mydb` running on the `node1` server, initialize new nodes from the working node using `pg_basebackup`. On each cluster node you are going to add, run:
+* If you already have a database `mydb` running on the `node1` server, initialize new nodes from the working node using `pg_basebackup`. On each cluster node you are going to add, run:
 ```
 pg_basebackup -D ./datadir -h node1 mydb
 ```
-For details, on `pg_basebackup`, see [pg_basebackup](https://www.postgresql.org/docs/9.6/static/app-pgbasebackup.html).
-
-1. Modify the ```postgresql.conf``` configuration file, as follows:
-* Set up PostgreSQL parameters related to replication.
-
-```
-wal_level = logical
-max_connections = 100
-max_prepared_transactions = 300
-max_wal_senders = 10 # at least the number of nodes
-max_replication_slots = 10 # at least the number of nodes
-```
-You must change the replication level to `logical` as `multimaster` relies on logical replication. For a cluster of N nodes, enable at least N WAL sender processes and replication slots. Since `multimaster` implicitly adds a `PREPARE` phase to each `COMMIT` transaction, make sure to set the number of prepared transactions to N*max_connections. Otherwise, prepared transactions may be queued.
+For details, see [pg_basebackup](https://www.postgresql.org/docs/9.6/static/app-pgbasebackup.html).
 
-* Make sure you have enough background workers allocated for each node:
-
-```
-max_worker_processes = 250
-```
-For example, for a three-node cluster with `max_connections` = 100, `multimaster` may need up to 206 background workers at peak times: 200 workers for connections from the neighbor nodes, two workers for walsender processes, two workers for walreceiver processes, and two workers for the arbiter sender and receiver processes. When setting this parameter, remember that other modules may also use background workers at the same time.
-
-* Add `multimaster`-specific options:
-
-```postgres
-multimaster.max_nodes = 3 # cluster size
-multimaster.node_id = 1 # the 1-based index of the node in the cluster
-multimaster.conn_strings = 'dbname=mydb user=myuser host=node1, dbname=mydb user=myuser host=node2, dbname=mydb user=myuser host=node3'
+2. Modify the ```postgresql.conf``` configuration file, as follows:
+* Change the isolation level for transactions to `repeatable read`:
+```
+default_transaction_isolation = "repeatable read"
+```
+`multimaster` supports only the `repeatable read` isolation level. You cannot set up `multimaster` with the default `read committed` level.
+* Set up PostgreSQL parameters related to replication.
+```
+wal_level = logical
+max_connections = 100
+max_prepared_transactions = 300
+max_wal_senders = 10 # at least the number of nodes
+max_replication_slots = 10 # at least the number of nodes
+```
+You must change the replication level to `logical` as `multimaster` relies on logical replication. For a cluster of N nodes, enable at least N WAL sender processes and replication slots. Since `multimaster` implicitly adds a `PREPARE` phase to each `COMMIT` transaction, make sure to set the number of prepared transactions to N*max_connections. Otherwise, prepared transactions may be queued.
+* Make sure you have enough background workers allocated for each node:
+```
+max_worker_processes = 250
+```
+For example, for a three-node cluster with `max_connections` = 100, `multimaster` may need up to 206 background workers at peak times: 200 workers for connections from the neighbor nodes, two workers for walsender processes, two workers for walreceiver processes, and two workers for the arbiter sender and receiver processes. When setting this parameter, remember that other modules may also use background workers at the same time.
+* Add `multimaster`-specific options:
+```postgres
+multimaster.max_nodes = 3 # cluster size
+multimaster.node_id = 1 # the 1-based index of the node in the cluster
+multimaster.conn_strings = 'dbname=mydb user=myuser host=node1, dbname=mydb user=myuser host=node2, dbname=mydb user=myuser host=node3'
 # comma-separated list of connection strings to neighbor nodes
-```
-
-> **Important:** The `node_id` variable takes natural numbers starting from 1, without any gaps in numbering. For example, for a cluster of five nodes, set node IDs to 1, 2, 3, 4, and 5. In the `conn_strings` variable, make sure to list the nodes in the order of their IDs. The `conn_strings` variable must be the same on all nodes.
-
+```
+> **Important:** The `node_id` variable takes natural numbers starting from 1, without any gaps in numbering. For example, for a cluster of five nodes, set node IDs to 1, 2, 3, 4, and 5. In the `conn_strings` variable, make sure to list the nodes in the order of their IDs. The `conn_strings` variable must be the same on all nodes.
 Depending on your network environment and usage patterns, you may want to tune other `multimaster` parameters. For details on all configuration parameters available, see [Tuning Configuration Parameters](#tuning-configuration-parameters).
 
-1. Allow replication in `pg_hba.conf`:
-
+3. Allow replication in `pg_hba.conf`:
 ```
 host myuser all node1 trust
 host myuser all node2 trust
````
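The sizing rules quoted in the hunk above (prepared transactions sized at N*`max_connections`, and the 206-worker peak for a three-node cluster with `max_connections` = 100) can be sketched as a quick calculator; the function name and the breakdown comments are illustrative, not part of `multimaster`:

```python
def mm_sizing(nodes, max_connections):
    """Rule-of-thumb sizing from the documentation text above."""
    return {
        # multimaster implicitly adds a PREPARE phase to each COMMIT,
        # so allow nodes * max_connections prepared transactions
        "max_prepared_transactions": nodes * max_connections,
        # peak workers: connections from each neighbor node, plus one
        # walsender and one walreceiver per neighbor, plus the arbiter
        # sender and receiver processes
        "peak_background_workers": (nodes - 1) * max_connections
        + 2 * (nodes - 1)
        + 2,
    }

print(mm_sizing(3, 100))
# {'max_prepared_transactions': 300, 'peak_background_workers': 206}
```

This reproduces both worked examples from the text: `max_prepared_transactions = 300` in the sample config and the 206-worker peak (200 + 2 + 2 + 2).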
````diff
@@ -135,17 +132,18 @@ To configure your cluster with `multimaster`, complete these steps on each clust
 host replication all node3 trust
 ```
 
-1. Start PostgreSQL:
+4. Start PostgreSQL:
 
 ```
 pg_ctl -D ./datadir -l ./pg.log start
 ```
 
-1. When PostgreSQL is started on all nodes, connect to any node and create the `multimaster` extension:
+When PostgreSQL is started on all nodes, connect to any node and create the `multimaster` extension:
 ```
 psql -h node1
 > CREATE EXTENSION multimaster;
 ```
+The `CREATE EXTENSION` query is replicated to all the cluster nodes.
 
 To ensure that `multimaster` is enabled, check the ```mtm.get_cluster_state()``` view:
 ```
````
````diff
@@ -240,7 +238,9 @@ Suppose we have a working cluster of three nodes, with ```node1```, ```node2```,
 * Make sure the `pg_hba.conf` files allows replication to the new node.
 
 **See Also**
+
 [Setting up a Multi-Master Cluster](#setting-up-a-multi-master-cluster)
+
 [Monitoring Cluster Status](#monitoring-cluster-status)
 
 
````
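The constraints stated in the **Important** note of the administration changes above (1-based, gap-free node IDs; `conn_strings` listing one connection string per node, in node-ID order, identical on every node) can be sketched as a configuration sanity check; the helper is hypothetical:

```python
def my_conn_string(node_id, max_nodes, conn_strings):
    """Validate the Important-note constraints and return this node's
    own connection string (illustrative helper, not a multimaster API)."""
    nodes = [s.strip() for s in conn_strings.split(",")]
    # one connection string per node, listed in node-ID order
    assert len(nodes) == max_nodes, "one connection string per node"
    # node IDs are natural numbers starting from 1, without gaps
    assert 1 <= node_id <= max_nodes, "node_id must be in 1..max_nodes"
    return nodes[node_id - 1]

conn = ("dbname=mydb user=myuser host=node1, "
        "dbname=mydb user=myuser host=node2, "
        "dbname=mydb user=myuser host=node3")
print(my_conn_string(2, 3, conn))  # dbname=mydb user=myuser host=node2
```

Because `conn_strings` is identical everywhere, each node locates its own entry purely by its `node_id`.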

contrib/mmts/multimaster.c

Lines changed: 16 additions & 0 deletions

````diff
@@ -4489,10 +4489,13 @@ typedef struct MtmGucEntry
 
 static HTAB *MtmGucHash = NULL;
 static dlist_head MtmGucList = DLIST_STATIC_INIT(MtmGucList);
+static inline void MtmGucUpdate(const char *key, char *value);
 
 static void MtmGucInit(void)
 {
 	HASHCTL hash_ctl;
+	char *current_role;
+	MemoryContext oldcontext;
 
 	MemSet(&hash_ctl, 0, sizeof(hash_ctl));
 	hash_ctl.keysize = GUC_KEY_MAXLEN;
@@ -4502,6 +4505,16 @@ static void MtmGucInit(void)
 			MTM_GUC_HASHSIZE,
 			&hash_ctl,
 			HASH_ELEM | HASH_CONTEXT);
+
+	/*
+	 * If the current role is not equal to MtmDatabaseUser, then set it
+	 * before any other GUC vars.
+	 */
+	oldcontext = MemoryContextSwitchTo(TopMemoryContext);
+	current_role = GetConfigOptionByName("session_authorization", NULL, false);
+	if (strcmp(MtmDatabaseUser, current_role) != 0)
+		MtmGucUpdate("session_authorization", current_role);
+	MemoryContextSwitchTo(oldcontext);
 }
 
 static void MtmGucDiscard()
@@ -4595,6 +4608,9 @@ char* MtmGucSerialize(void)
 	dlist_iter iter;
 	int nvars = 0;
 
+	if (!MtmGucHash)
+		MtmGucInit();
+
 	serialized_gucs = makeStringInfo();
 
 	dlist_foreach(iter, &MtmGucList)
````
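A toy model of what the C hunks above accomplish: when the session role differs from the user `multimaster` connected as, `session_authorization` is recorded before any other GUC, so statements replayed on other nodes run under the correct role. The SET-string format below is an assumption for illustration, not the exact output of `MtmGucSerialize()`:

```python
def serialize_gucs(changed, current_role, database_user):
    """Toy model of the GUC-ordering rule (illustrative only).

    changed: list of (guc_name, value) pairs in arbitrary order.
    """
    out = []
    # mirror the C change: put session_authorization first if the
    # current role differs from the user multimaster connected as
    if current_role != database_user:
        out.append(("session_authorization", current_role))
    out += [(k, v) for k, v in changed if k != "session_authorization"]
    return "; ".join(f"SET {k} TO {v}" for k, v in out)

print(serialize_gucs([("statement_timeout", "'1min'")], "user1", "postgres"))
# SET session_authorization TO user1; SET statement_timeout TO '1min'
```

Ordering matters because every later GUC must be applied in the context of the already-restored role, which is exactly why the regression test below switches users before creating tables.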

contrib/mmts/sql/multimaster.sql

Lines changed: 11 additions & 0 deletions

````diff
@@ -0,0 +1,11 @@
+create user user1;
+create schema user1;
+alter schema user1 owner to user1;
+
+\c "user=user1 dbname=regression"
+create table user1.test(i int primary key);
+create table user1.test2(i int primary key);
+
+\c "user=user1 dbname=regression port=5433"
+select * from test;
+
````