diff --git a/README.md b/README.md
index 9035b51..8889e1a 100644
--- a/README.md
+++ b/README.md
@@ -1,33 +1,85 @@
 # PostgreSQL Database Replication
-Basic ready-to-use PostgreSQL cluster, which implements asynchronous primary-secondary data replication within a pair of preconfigured database containers.
+Basic ready-to-use PostgreSQL cluster that implements asynchronous Primary-Secondary data replication within a pair of preconfigured database containers, plus optional load balancing.
 ## Package Implementation Specifics
-The presented PostgreSQL Replication solution is built upon Jelastic certified stack template for **PostgreSQL**. It operates two database containers (primary and secondary, one per role) and makes data from primary DB server to be asynchronously replicated to a standby one.
+The presented PostgreSQL Replication solution is based on the Virtuozzo Application Platform (VAP) certified stack templates built:
+ - for the **[PostgreSQL](https://www.postgresql.org/)** database
+ - for the **[Pgpool-II](https://www.pgpool.net/mediawiki/index.php/Main_Page)** load balancer
+
+By default, the package operates two database containers (Primary and Secondary, one per role) and asynchronously replicates data from the Primary DB server to the standby one.
+In front of the cluster, a scalable load balancer layer of Pgpool-II nodes can be added to provide load balancing, monitoring, and management of the database cluster.

+*(image: cluster topology diagram with Primary, Secondary, and Pgpool-II nodes)*

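+As a quick way to confirm that the asynchronous replication described above is working, the standard PostgreSQL status views can be queried on the database containers. This is only a minimal sketch: the `webadmin` user is the one created by the package, and each command should be run on the respective node.
+
+```bash
+# On the Primary node: list connected standbys; sync_state should report "async"
+psql -U webadmin -d postgres -c 'SELECT client_addr, state, sync_state FROM pg_stat_replication;'
+
+# On the Secondary node: returns "t" while the node is acting as a standby
+psql -U webadmin -d postgres -c 'SELECT pg_is_in_recovery();'
+```
+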
-Within the package, each database container receives the [vertical scaling](https://docs.jelastic.com/automatic-vertical-scaling) up to **32 dynamic cloudlets** (or 4 GiB of RAM and 12.8 GHz of CPU) that are provided dynamically based on the incoming load. Subsequently, you can change the resource allocation limit by following the above-linked guide.
+Within the package, each database container receives [vertical scaling](https://www.virtuozzo.com/application-platform-docs/automatic-vertical-scaling/) up to **32 dynamic cloudlets** (or 4 GiB of RAM and 12.8 GHz of CPU) that are provided dynamically based on the incoming load. The load balancer node receives **6 dynamic cloudlets** by default. Subsequently, you can change the resource allocation limit by following the above-linked guide.
 ## How to Install PostgreSQL Database Replication Package
-In order to get PostgreSQL Database Replication solution instantly deployed, click the **Deploy to Jelastic** button below and specify your email address within the opened widget. Then choose one of the [Jelastic Public Cloud](https://jelastic.cloud) providers (in case you don’t have an account at the appropriate platform, it will be created automatically) and press **Install**.
+To get the PostgreSQL Database Replication solution instantly deployed, click the **Deploy to Cloud** button below and specify your email address within the opened widget. Then choose one of the [Virtuozzo Public Cloud](https://www.virtuozzo.com/application-platform-partners/) providers (in case you don’t have an account at the appropriate platform, it will be created automatically) and press **Install**.
-[![Deploy](images/deploy-to-jelastic.png)](https://jelastic.com/install-application/?manifest=https://raw.githubusercontent.com/jelastic-jps/postgres/master/manifest.yaml)
+[![Deploy](images/deploy-to-cloud.png)](https://jelastic.com/install-application/?manifest=https://raw.githubusercontent.com/jelastic-jps/postgres/v2.0.0/manifest.yaml)
-To install the package manually, log in to the Jelastic dashboard with your credentials and [import](https://docs.jelastic.com/environment-import) link to the [**_manifest.yaml_**](https://github.com/jelastic-jps/postgres/blob/master/manifest.yaml) file (alternatively, you can locate this package via [Jelastic Marketplace](https://docs.jelastic.com/marketplace), *Clusters* section)
+To install the package manually, log in to the Virtuozzo Application Platform dashboard with your credentials and [import](https://www.virtuozzo.com/application-platform-docs/environment-import/) the link to the [**_manifest.yaml_**](https://github.com/jelastic-jps/postgres/blob/master/manifest.yaml) file (alternatively, you can locate this package in the [VAP Marketplace](https://www.virtuozzo.com/application-platform-docs/marketplace/), *Clusters* section).
-![postgresql-replication-installation](images/postgresql-replication-installation.png)
+

+*(image: package installation dialog in the dashboard)*

-Within the opened installation window, type *Environment* name and optional *Display Name* ([environment alias](https://docs.jelastic.com/environment-aliases)). Also, select the preferable [*Region*](https://docs.jelastic.com/environment-regions) (if several ones are available) and click **Install**.
-Wait a few minutes for Jelastic to prepare your environment and set up the required replication configurations. When finished, you’ll be shown the appropriate notification with data for PostgreSQL administration interface access.
+Within the opened installation window, choose the PostgreSQL database version among the available ones, type the *Environment* name and an optional *Display Name* ([environment alias](https://www.virtuozzo.com/application-platform-docs/environment-aliases/)). Also, select the preferred [*Region*](https://www.virtuozzo.com/application-platform-docs/environment-regions/) (if several are available) and click **Install**.
+If required, you can disable the Pgpool-II load balancer layer with the respective toggle.
+
+Wait a few minutes for VAP to prepare your environment and set up the required replication configurations. When finished, you’ll be shown the appropriate notification with the data for PostgreSQL administration interface access.
+
+

+*(image: installation success message)*

-![postgresql-replication-success-message](images/postgresql-replication-success-message.png)
 This information will also be sent to you via email.
-To find more details on PostgreSQL Replication package installation and use, refer to the [article](http://blog.jelastic.com/2017/05/25/master-slave-postgresql-replication-automatic-installation/).
+### Cluster Entry Point
+
+If no Pgpool-II nodes were added to the cluster topology, use the Primary node to access the cluster. If the load balancing layer was deployed in front of the DB cluster, any of the Pgpool-II nodes can be used as the entry point.
+
+### Cluster Management
+
+In VAP, the PostgreSQL cluster components can be managed either via the [CLI](https://www.virtuozzo.com/application-platform-docs/ssh-access/) or the UI.
+
+#### Database Management
+
+Database nodes have a built-in administration panel, phpPgAdmin. Use only the one on the Primary node.
+

+*(image: phpPgAdmin panel)*

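+Besides the panel, routine checks can be done from the CLI mentioned above. A minimal sketch, assuming an SSH session on the Primary node and the `webadmin` user shown in the success message:
+
+```bash
+# List the databases available on the Primary node
+psql -U webadmin -d postgres -c '\l'
+
+# Show current client connections and their state
+psql -U webadmin -d postgres -c 'SELECT datname, usename, client_addr, state FROM pg_stat_activity;'
+```
+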
+
+If required, a separate node with the more advanced PostgreSQL management software [pgAdmin 4](https://www.pgadmin.org/) can be installed by importing the [manifest](https://github.com/jelastic-jps/pgadmin/blob/master/manifest.yaml) from the VAP collection.
+

+*(image: pgAdmin 4 panel)*

+
+#### Pgpool-II Management
+
+Pgpool-II nodes can also be managed via the user-friendly built-in administration panel [pgpoolAdmin](https://www.pgpool.net/docs/pgpoolAdmin/index_en.html).
+

+*(image: pgpoolAdmin panel)*

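+Backend status can also be checked without the panel by sending Pgpool-II `SHOW` commands through a regular psql connection to a Pgpool-II node. A minimal sketch; the hostname is a placeholder, and `webadmin` is the database user from the success message:
+
+```bash
+# "SHOW pool_nodes" is answered by Pgpool-II itself and reports each backend's role and status
+psql -h <pgpool-node-hostname> -p 5432 -U webadmin -d postgres -c 'SHOW pool_nodes;'
+```
+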
+
+The Pgpool-II admin panel provides the ability to tune:
+ - load balancing, even at the database level: you can specify how requests to each database should be processed and balanced
+ - connection pools
+ - logging
+ - replication
+ - debugging
+ - failover and failback
+ - etc.
+
+
+To find more details on PostgreSQL Replication package installation and use, refer to the [article](https://www.virtuozzo.com/company/blog/postgresql-auto-clustering-master-slave-replication/).
diff --git a/README.md.backup b/README.md.backup
new file mode 100644
index 0000000..8889e1a
--- /dev/null
+++ b/README.md.backup
@@ -0,0 +1,85 @@
+# PostgreSQL Database Replication
+
+Basic ready-to-use PostgreSQL cluster that implements asynchronous Primary-Secondary data replication within a pair of preconfigured database containers, plus optional load balancing.
+
+## Package Implementation Specifics
+
+The presented PostgreSQL Replication solution is based on the Virtuozzo Application Platform (VAP) certified stack templates built:
+ - for the **[PostgreSQL](https://www.postgresql.org/)** database
+ - for the **[Pgpool-II](https://www.pgpool.net/mediawiki/index.php/Main_Page)** load balancer
+
+By default, the package operates two database containers (Primary and Secondary, one per role) and asynchronously replicates data from the Primary DB server to the standby one.
+In front of the cluster, a scalable load balancer layer of Pgpool-II nodes can be added to provide load balancing, monitoring, and management of the database cluster.
+

+*(image: cluster topology diagram with Primary, Secondary, and Pgpool-II nodes)*

+
+Within the package, each database container receives [vertical scaling](https://www.virtuozzo.com/application-platform-docs/automatic-vertical-scaling/) up to **32 dynamic cloudlets** (or 4 GiB of RAM and 12.8 GHz of CPU) that are provided dynamically based on the incoming load. The load balancer node receives **6 dynamic cloudlets** by default. Subsequently, you can change the resource allocation limit by following the above-linked guide.
+
+## How to Install PostgreSQL Database Replication Package
+
+To get the PostgreSQL Database Replication solution instantly deployed, click the **Deploy to Cloud** button below and specify your email address within the opened widget. Then choose one of the [Virtuozzo Public Cloud](https://www.virtuozzo.com/application-platform-partners/) providers (in case you don’t have an account at the appropriate platform, it will be created automatically) and press **Install**.
+
+[![Deploy](images/deploy-to-cloud.png)](https://jelastic.com/install-application/?manifest=https://raw.githubusercontent.com/jelastic-jps/postgres/v2.0.0/manifest.yaml)
+
+To install the package manually, log in to the Virtuozzo Application Platform dashboard with your credentials and [import](https://www.virtuozzo.com/application-platform-docs/environment-import/) the link to the [**_manifest.yaml_**](https://github.com/jelastic-jps/postgres/blob/master/manifest.yaml) file (alternatively, you can locate this package in the [VAP Marketplace](https://www.virtuozzo.com/application-platform-docs/marketplace/), *Clusters* section).
+

+*(image: package installation dialog in the dashboard)*

+
+
+Within the opened installation window, choose the PostgreSQL database version among the available ones, type the *Environment* name and an optional *Display Name* ([environment alias](https://www.virtuozzo.com/application-platform-docs/environment-aliases/)). Also, select the preferred [*Region*](https://www.virtuozzo.com/application-platform-docs/environment-regions/) (if several are available) and click **Install**.
+If required, you can disable the Pgpool-II load balancer layer with the respective toggle.
+
+Wait a few minutes for VAP to prepare your environment and set up the required replication configurations. When finished, you’ll be shown the appropriate notification with the data for PostgreSQL administration interface access.
+

+*(image: installation success message)*

+
+
+This information will also be sent to you via email.
+
+### Cluster Entry Point
+
+If no Pgpool-II nodes were added to the cluster topology, use the Primary node to access the cluster. If the load balancing layer was deployed in front of the DB cluster, any of the Pgpool-II nodes can be used as the entry point.
+
+### Cluster Management
+
+In VAP, the PostgreSQL cluster components can be managed either via the [CLI](https://www.virtuozzo.com/application-platform-docs/ssh-access/) or the UI.
+
+#### Database Management
+
+Database nodes have a built-in administration panel, phpPgAdmin. Use only the one on the Primary node.
+

+*(image: phpPgAdmin panel)*

+
+If required, a separate node with the more advanced PostgreSQL management software [pgAdmin 4](https://www.pgadmin.org/) can be installed by importing the [manifest](https://github.com/jelastic-jps/pgadmin/blob/master/manifest.yaml) from the VAP collection.
+

+*(image: pgAdmin 4 panel)*

+
+#### Pgpool-II Management
+
+Pgpool-II nodes can also be managed via the user-friendly built-in administration panel [pgpoolAdmin](https://www.pgpool.net/docs/pgpoolAdmin/index_en.html).
+

+*(image: pgpoolAdmin panel)*

+ +Pgpool-II admin panel provides an ability to tune: + - load balancing and even at database level. It means that you can specify how the requests to every database should be processed and balanced + - connection pools + - logging + - replication + - debugging + - failover and failback + - etc. + + +To find more details on PostgreSQL Replication package installation and use, refer to the [article](https://www.virtuozzo.com/company/blog/postgresql-auto-clustering-master-slave-replication/). diff --git a/addons/auto-cluster.yaml b/addons/auto-cluster.yaml index ca2f84c..976d651 100644 --- a/addons/auto-cluster.yaml +++ b/addons/auto-cluster.yaml @@ -1,6 +1,6 @@ type: update id: postgres-master-slave-auto-cluster -baseUrl: https://raw.githubusercontent.com/jelastic-jps/postgres/master +baseUrl: https://raw.githubusercontent.com/jelastic-jps/postgres/v2.0.0 logo: /images/postgres-70x70.png name: PostgreSQL Primary-Secondary Auto-Cluster @@ -10,24 +10,117 @@ nodeGroupAlias: onInstall: init +globals: + pgpoolintpass: ${settings.pgpoolintpass:[fn.password]} + postgresqlConf: '/var/lib/pgsql/data/postgresql.conf' + +onBeforeRemoveNode[sqldb]: + if (nodes.sqldb.length == 1): + install: + type: update + name: PostgreSQL Primary-Secondary Auto-Cluster + id: postgres-master-slave-auto-cluster + +onAfterResetServicePassword[sqldb]: + - copyPcpassFile + +onAfterResetNodePassword[sqldb]: + - copyPcpassFile + onAfterScaleOut[sqldb]: + - setMaxWalSenders + - copyPcpassFile + - if ('${settings.is_pgpool2}' == 'true'): + - adjustConfigs4PgpoolUser - getPswd + - getNodes - forEach(event.response.nodes): - initSlave: - id: ${@i.id} + initSecondary: + id: ${@i.id} ip: ${@i.address} + - forEach(nodes.sqldb): + if (${@i.id} != ${nodes.sqldb.master.id}): + cmd[${@i.id}]: jcm updateHbaConf ${globals.nodes_address} ${@i.address} + - if ('${settings.is_pgpool2}' == 'true'): + - forEach(event.response.nodes): + - addPgNodesToPgPool: + pgaddress: ${@i.address} + - forEach(pgpoolnode:nodes.pgpool): + - generateAndTransferSSHKeys: + id: ${@pgpoolnode.id} + - addToKnownHosts + - forEach(event.response.nodes): + - addAppNameToConinfo: + id: ${@i.id} + address: ${@i.address} + +onBeforeScaleIn[pgpool]: + - forEach(pgpoolnode:event.response.nodes): + - removeWatchdogConfig: + pgpoolAddress: ${@pgpoolnode.address} + - cmd[sqldb]: |- + sed -ci -e '/${@pgpoolnode.address}/d' /var/lib/pgsql/data/pg_hba.conf; + sed -ci -e '/postgres@node${@pgpoolnode.id}-/d' /var/lib/pgsql/.ssh/authorized_keys; + systemctl reset-failed pgpool-II.service + user: root + - cmd[sqldb,pgpool]: jem service restart + +onAfterScaleOut[pgpool]: + - preparePgpoolNodes + - cmd[${event.response.nodes.join(id,)}]: rm -f ~/.ssh/id_rsa ~/.ssh/id_rsa.pub + - forEach(pgpoolnode:event.response.nodes): + - generateAndTransferSSHKeys: + id: ${@pgpoolnode.id} + - cmd[pgpool]: jcm enableWatchdog + user: root + - forEach(pgpoolnode:nodes.pgpool): + - cmd[sqldb]: grep -q '${@pgpoolnode.address}' /var/lib/pgsql/data/pg_hba.conf || sed -ci -e '1i host all pgpool ${@pgpoolnode.address}/32 trust' /var/lib/pgsql/data/pg_hba.conf; + - getPgPoolNodesCount + - setPgpoolNodeId: + pgPoolNode: ${@pgpoolnode.id} + - addWatchdogConfig: + pgpoolAddress: ${@pgpoolnode.address} + - addToKnownHosts + - fixPgpassPermissions -onBeforeScaleIn[sqldb]: - - forEach(event.response.nodes): - cmd[${nodes.sqldb.master.id}]: jcm removeReplicaHost ${@i.address} +onAfterScaleIn[sqldb]: + - getNodes + - forEach(nodes.sqldb): + cmd[${@i.id}]: jcm updateHbaConf ${globals.nodes_address} ${@i.address} + 
- if ('${settings.is_pgpool2}' == 'true'): + - forEach(event.response.nodes): + - removePgNodesFromPgPool: + pgaddress: ${@i.address} + - cmd[pgpool]: |- + sed -ci -e '/^${@i.address}/d' ~/.ssh/known_hosts + - fixPgpassPermissions + +onAfterScaleIn[pgpool]: + - if (nodes.pgpool.length == 1): + - cmd[pgpool]: jcm disableWatchdog + - removeWatchdogConfig: + pgpoolAddress: ${nodes.pgpool.master.address} + - cmd[pgpool]: |- + rm -f /etc/pgpool-II/pgpool_node_id + sed -ci -e 's/use_watchdog/#use_watchdog/' /etc/pgpool-II/pgpool.conf + jem service restart onAfterClone: - script: delete MANIFEST.id; return {result:0, jps:MANIFEST}; - - install: ${response.jps} - envName: ${event.response.env.envName} - settings: - nodeGroup: ${settings.nodeGroup} - clone: true + - install: + jps: ${baseUrl}/addons/auto-cluster.yaml + envName: ${event.response.env.envName} + settings: + pgpoolintpass: ${settings.pgpoolintpass} + nodeGroup: ${settings.nodeGroup} + is_pgpool2: ${settings.is_pgpool2} + clone: true + +onBeforeMigrate: + if (${env.status} != 1): + stopEvent: + type: warning + message: Migration of stopped PostgreSQL Primary-Secondary Auto-Cluster is not supported. onAfterMigrate: init: @@ -35,52 +128,204 @@ onAfterMigrate: actions: init: - #- env.control.AddContainerEnvVars[sqldb]: - # vars: {"KEY_PASS":"${fn.password}"} + - setMaxWalSenders - if (${settings.clone:false} || ${this.update:false}): - cmd[${nodes.sqldb.master.id}]: jcm removeAllReplicaHosts + - getPswd + - getNodes - forEach(nodes.sqldb): - if (${@i.id} != ${nodes.sqldb.master.id}): - cmd[${nodes.sqldb.master.id}]: |- jcm addReplicaHost ${@i.address} &>> /var/log/run.log sudo jem service reload - cmd[${@i.id}]: |- - #jcm updateReplicaHost ${nodes.sqldb.master.address} &>> /var/log/run.log - jcm updatePrimaryConnInfo &>> /var/log/run.log + jcm updatePrimaryConnInfo &>>/var/log/run.log + jcm updateHbaConf ${globals.nodes_address} ${@i.address} sudo jem service restart + - if ('${settings.is_pgpool2}' == 'true'): + - cmd[pgpool]: |- + jcm cleanupNodesFromPgpool2Conf &>>/var/log/run.log + systemctl reset-failed pgpool-II.service + user: root - else: - setNodeDisplayName[${nodes.sqldb.master.id}]: Primary - - getPswd + - initPrimary + - getNodes - forEach(nodes.sqldb): - if (${@i.id} != ${nodes.sqldb.master.id}): - initSlave: + - if (${@i.id} != ${nodes.sqldb.master.id}): + - initSecondary: id: ${@i.id} ip: ${@i.address} - + - cmd[${@i.id}]: |- + jcm updateHbaConf ${globals.nodes_address} ${@i.address} + sudo jem service reload + - if ('${settings.is_pgpool2}' == 'true'): + - adjustConfigs4PgpoolUser + - cmd[${nodes.sqldb.master.id}]: psql -U webadmin -d postgres -c "CREATE USER pgpool REPLICATION LOGIN CONNECTION LIMIT -1 ENCRYPTED PASSWORD '${globals.pgpoolintpass}' IN ROLE pg_monitor;" + - preparePgpoolNodes + - copyPcpassFile + - if ('${settings.is_pgpool2}' == 'true'): + - forEach(nodes.sqldb): + - addPgNodesToPgPool: + pgaddress: ${@i.address} + - forEach(pgpoolnode:nodes.pgpool): + - generateAndTransferSSHKeys: + id: ${@pgpoolnode.id} + - cmd[sqldb]: sed -ci -e '1i host all pgpool ${@pgpoolnode.address}/32 md5' /var/lib/pgsql/data/pg_hba.conf; + - if (nodes.pgpool.length > 1): + - cmd[pgpool]: |- + jcm enableWatchdog; + systemctl reset-failed pgpool-II.service; + user: root + - forEach(pgpoolnode:nodes.pgpool): + - getPgPoolNodesCount + - setPgpoolNodeId: + pgPoolNode: ${@pgpoolnode.id} + - addWatchdogConfig: + pgpoolAddress: ${@pgpoolnode.address} + - cmd[sqldb]: jem service restart + - cmd[pgpool]: |- + systemctl reset-failed 
pgpool-II.service; + jem service restart + user: root + - forEach(nodes.sqldb): + - addAppNameToConinfo: + id: ${@i.id} + address: ${@i.address} + - cmd[${nodes.pgpool.master.id}]: cat /var/lib/pgsql/.pcppass |awk -F ':' '{print $4}' + - setGlobals: + pgpoolPasswd: '${response.out}' + successPath: /text/success-pgpool.md?_r=${fn.random} + - else: + - setGlobals: + successPath: /text/success.md?_r=${fn.random} + - fixPgpassPermissions + + addToKnownHosts: + - forEach(pgnode:nodes.sqldb): + - cmd[pgpool]: |- + ssh-keygen -R ${@pgnode.address} + ssh-keyscan ${@pgnode.address} 2>&1 1>> ~/.ssh/known_hosts + + copyPcpassFile: + - cmd[${nodes.sqldb.master.id}]: cat /var/lib/pgsql/.pgpass + - cmd[sqldb]: echo '${response.out}' > /var/lib/pgsql/.pgpass + + generateAndTransferSSHKeys: + - cmd[${this.id}]: |- + [ -f "/var/lib/pgsql/.ssh/id_rsa.pub" ] || ssh-keygen -q -t rsa -N '' <<< $'\ny' + - cmd[${this.id}]: |- + cat /var/lib/pgsql/.ssh/id_rsa.pub + - cmd[sqldb]: + grep -q '${response.out}' /var/lib/pgsql/.ssh/authorized_keys || echo '${response.out}' >> /var/lib/pgsql/.ssh/authorized_keys + + preparePgpoolNodes: + - cmd[pgpool]: |- + [ -f ~/.pgpoolkey ] || echo 'defaultpgpoolkey' > ~/.pgpoolkey; chmod 600 ~/.pgpoolkey + - cmd[pgpool]: |- + sed -ci -e "s/^sr_check_password.*/sr_check_password = '${globals.pgpoolintpass}'/" /etc/pgpool-II/pgpool.conf + sed -ci -e "s/^health_check_password.*/health_check_password = '${globals.pgpoolintpass}'/" /etc/pgpool-II/pgpool.conf + pg_enc -m -f /etc/pgpool-II/pgpool.conf -u pgpool ${globals.pgpoolintpass} + chown -R postgres:postgres /etc/pgpool-II + systemctl reset-failed pgpool-II.service + user: root + + adjustConfigs4PgpoolUser: + - cmd[sqldb]: |- + jem service stop; + source /etc/jelastic/metainf.conf; + if [ "$COMPUTE_TYPE_VERSION" -ge "13" ] ; then + echo 'wal_keep_size = 2048' >> ${globals.postgresqlConf} + else + echo "wal_keep_segments = 256" >> ${globals.postgresqlConf} + fi + grep -q "^wal_log_hints" ${globals.postgresqlConf} || echo 'wal_log_hints = on' >> ${globals.postgresqlConf}; + jem service start; + user: root + + initPrimary: + - cmd[${nodes.sqldb.master.id}]: jcm initPrimary &>> /var/log/run.log + - getPswd + getPswd: - - cmd[${nodes.sqldb.master.id}]: |- - jcm initMaster &>> /var/log/run.log - jcm getPswd + - cmd[${nodes.sqldb.master.id}]: jcm getPswd - setGlobals: - pswd: ${response.out} - - initSlave: + pswd: ${response.out} + + initSecondary: - setNodeDisplayName[${this.id}]: Secondary - cmd[${nodes.sqldb.master.id}]: |- jcm addReplicaHost ${this.ip} &>> /var/log/run.log sudo jem service reload - cmd[${this.id}]: |- jcm setPswd ${globals.pswd} - jcm initSlave ${nodes.sqldb.master.address} &>> /var/log/run.log - -startPage: ${nodes.sqldb.master.url} -success: | - **Admin Panel**: [${nodes.sqldb.master.url}](${nodes.sqldb.master.url}) - **User**: webadmin - **Password**: ${globals.pswd} + jcm initSecondary &>> /var/log/run.log - * [Database Replication with PostgreSQL](https://docs.jelastic.com/postgresql-database-replication/) - * [Remote Access to PostgreSQL](https://docs.jelastic.com/remote-access-postgres/) - * [Import and Export Dump to PostgreSQL](https://docs.jelastic.com/dump-postgres/) + getNodes: + - script: | + var resp = jelastic.env.control.GetEnvInfo('${env.envName}', session); + if (resp.result != 0) return resp; + var nodes_address = []; + for (var l = 0, k = resp.nodes; l < k.length; l++) { + if (k[l].nodeGroup == 'sqldb') { + nodes_address.push(k[l].address); + } + } + return { result: 0, nodes_address: 
nodes_address.join(",") } + - setGlobals: + nodes_address: ${response.nodes_address} + addPgNodesToPgPool: + - forEach(pgpoolnode:nodes.pgpool): + - cmd[${@pgpoolnode.id}]: jcm addPgNodeToPgpool2Conf ${this.pgaddress} &>>/var/log/run.log + + removePgNodesFromPgPool: + - forEach(pgpoolnode:nodes.pgpool): + - cmd[${@pgpoolnode.id}]: jcm removePgNodeFromPgpool2Conf ${this.pgaddress} &>>/var/log/run.log + user: root + + getPgPoolNodesCount: + - cmd[${nodes.pgpool.master.id}]: grep '^wd_port[0-9]*' /etc/pgpool-II/pgpool.conf|wc -l + - setGlobals: + pgPoolCount: ${response.out} + + setPgpoolNodeId: + - cmd[${this.pgPoolNode}]: let PGPOOLCOUNT='${globals.pgPoolCount}+1'; [ -f "/etc/pgpool-II/pgpool_node_id" ] || jcm setPgpoolNodeId $PGPOOLCOUNT + user: root + + addWatchdogConfig: + - cmd[pgpool]: let PGPOOLCOUNT='${globals.pgPoolCount}+1'; jcm addWatchdogConfig $PGPOOLCOUNT ${this.pgpoolAddress} + user: root + + removeWatchdogConfig: + - cmd[pgpool]: jcm removeWatchdogConfig ${this.pgpoolAddress} + user: root + fixPgpassPermissions: + - cmd[sqldb]: chmod 600 /var/lib/pgsql/.pgpass + user: root + + addAppNameToConinfo: + - cmd[${nodes.pgpool.master.id}]: grep '^backend_hostname' /etc/pgpool-II/pgpool.conf |grep ${this.address}|awk '{print $1}'|grep -o [[:digit:]].* + - setGlobals: + pgNodeNumber: ${response.out} + - cmd[${this.id}]: sed -ci -e "s/port=5432 user=replication'/port=5432 user=replication application_name=server${globals.pgNodeNumber}'/g" ${globals.postgresqlConf} + user: root + + setMaxWalSenders: + - cmd[sqldb]: |- + MAX_WAL_SENDERS=$(grep '^MAX_WAL_SENDERS' /usr/local/sbin/jcm | grep -o [0-9]*) + if [[ "${nodes.sqldb.length}" -ge "${MAX_WAL_SENDERS}" ]] ; then + sed -ci -e 's/^[[:space:]]*MAX_WAL_SENDERS.*/MAX_WAL_SENDERS=${nodes.sqldb.length}/' /usr/local/sbin/jcm + fi + MAX_WAL_SENDERS=$(grep '^max_wal_senders' ${globals.postgresqlConf}|grep -o [0-9]*) + if [[ "${nodes.sqldb.length}" -ge "${MAX_WAL_SENDERS}" ]] ; then + sed -ci -e 's/^[[:space:]]*max_wal_senders.*/max_wal_senders = ${nodes.sqldb.length}/' ${globals.postgresqlConf} + jem service restart + fi + user: root + +startPage: ${nodes.sqldb.master.url} +success: + email: ${globals.successPath} + text: ${globals.successPath} diff --git a/addons/cluster.json b/addons/cluster.json index 946b1d8..98869c0 100644 --- a/addons/cluster.json +++ b/addons/cluster.json @@ -1,20 +1,64 @@ { - "jps": "https://raw.githubusercontent.com/jelastic-jps/postgres/master/addons/auto-cluster.yaml", - "defaultState": false, - "skipOnEnvInstall": true, - "nodeGroupData": { - "scalingMode": "STATELESS", - "skipNodeEmails": true, - "isRedeploySupport": true, - "isResetServicePassword": "NODEGROUP" + "convertable": false, + "skipOnEnvInstall": true, + "isClusterDependency": true, + "jps": "https://raw.githubusercontent.com/jelastic-jps/postgres/v2.0.0/addons/auto-cluster.yaml", + "description": "

Primary-Secondary with Scalable Secondaries: Pre-configured PostgreSQL database cluster with primary-secondary replication. New LB and DB nodes are automatically added into the cluster upon scaling.
", + "defaultState": false, + "nodeGroupData": { + "scalingMode": "STATELESS", + "skipNodeEmails": true, + "isRedeploySupport": true, + "isResetServicePassword": "NODEGROUP" + }, + "settings": { + "data": { + "is_pgpool2": false }, - "validation": { - "minCount": 2, - "minCloudlets": 4, - "scalingMode": "STATELESS" - }, - "recommended": { - "cloudlets": 32 - }, - "description": "

Primary-Secondary with Scalable Secondaries:
Pre-configured PostgreSQL database cluster with primary-secondary replication. New nodes are automatically added into the cluster as secondaries. Learn More" + "fields": [ + { + "type": "toggle", + "caption": "Pgpool-II", + "name": "is_pgpool2", + "value": false + } + ] + }, + "validation": { + "minCount": 2, + "minCloudlets": 4, + "scalingMode": "STATELESS", + "rules": [ + { + "is_pgpool2": { + "true": { + "setGlobals": { + "pgpool2Count": 1 + } + } + } + } + ] + }, + "requires": [ + "pgpool2" + ], + "extraNodes": [ + { + "nodeGroup": "pgpool", + "nodeType": "pgpool2", + "count": "${globals.pgpool2Count:0}", + "isClusterSupport": false, + "isDeploySupport": false, + "skipNodeEmails": true, + "isClusterDependency": false, + "isResetServicePassword": "NODEGROUP", + "applyQuotas": true, + "validation": { + "minCount": 1, + "maxCount": 3, + "scalingMode": "STATEFUL" + } + } + ] } diff --git a/images/deploy-to-cloud.png b/images/deploy-to-cloud.png new file mode 100644 index 0000000..70c6554 Binary files /dev/null and b/images/deploy-to-cloud.png differ diff --git a/images/pgadmin.png b/images/pgadmin.png new file mode 100644 index 0000000..db9c97e Binary files /dev/null and b/images/pgadmin.png differ diff --git a/images/pgpool-admin.png b/images/pgpool-admin.png new file mode 100644 index 0000000..463455e Binary files /dev/null and b/images/pgpool-admin.png differ diff --git a/images/pgpool-postgres-single-region-big-tip-black-font.svg b/images/pgpool-postgres-single-region-big-tip-black-font.svg new file mode 100644 index 0000000..42300d0 --- /dev/null +++ b/images/pgpool-postgres-single-region-big-tip-black-font.svg @@ -0,0 +1,1900 @@ + +image/svg+xml + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +Secondary +Primary +Secondary +Pgpool-II +Pgpool-II +Active +Standby +Monitor +Watchdog +Pgpool-II +Standby +Watchdog + \ No newline at end of file diff --git a/images/phppgadmin.png b/images/phppgadmin.png new file mode 100644 index 0000000..04493d9 Binary files /dev/null and b/images/phppgadmin.png differ diff --git a/images/postgresql-replication-installation.png b/images/postgresql-replication-installation.png index 915be52..4776ad4 100755 Binary files a/images/postgresql-replication-installation.png and b/images/postgresql-replication-installation.png differ diff --git a/images/postgresql-replication-success-message.png b/images/postgresql-replication-success-message.png index e1b9159..d10d69d 100755 Binary files a/images/postgresql-replication-success-message.png and b/images/postgresql-replication-success-message.png differ diff --git a/manifest.yaml b/manifest.yaml index 872a393..be440a9 100755 --- a/manifest.yaml +++ b/manifest.yaml @@ -1,8 +1,8 @@ type: install version: 1.7 id: postgres-master-slave -baseUrl: https://raw.githubusercontent.com/jelastic-jps/postgres/master -homepage: http://docs.jelastic.com/postgresql-database-replication +baseUrl: https://raw.githubusercontent.com/jelastic-jps/postgres/v2.0.0 +homepage: https://www.virtuozzo.com/application-platform-docs/postgresql-database-replication logo: /images/postgres-70x70.png name: PostgreSQL Primary-Secondary Cluster description: @@ -14,22 +14,30 @@ description: categories: - apps/clustered-dbs - apps/clusters + - apps/databases 
settings: fields: - caption: Version - name: nodeType - type: list - values: - - value: postgres9 - caption: PostgreSQL 9 - - value: postgres10 - caption: PostgreSQL 10 - - value: postgres11 - caption: PostgreSQL 11 - - value: postgres12 - caption: PostgreSQL 12 - default: postgres12 + - caption: Version + name: nodeType + type: list + values: + - value: postgres11 + caption: PostgreSQL 11 + - value: postgres12 + caption: PostgreSQL 12 + - value: postgres13 + caption: PostgreSQL 13 + - value: postgres14 + caption: PostgreSQL 14 + - value: postgres15 + caption: PostgreSQL 15 + default: postgres15 + - type: toggle + caption: Pgpool-II enabled + name: is_pgpool2 + tooltip:

Pgpool-II Load Balancer: Scalable and Highly Available load balancer layer to distribute requests and manage PostgreSQL replication topology. New LB and DB nodes are automatically added into the cluster upon scaling.
+ value: true nodes: cloudlets: 32 @@ -37,17 +45,23 @@ nodes: scalingMode: STATELESS nodeType: ${settings.nodeType} password: ${fn.password} - cluster: true - skipNodeEmails: true - + cluster: + is_pgpool2: ${settings.is_pgpool2} + +onInstall: + - cmd[${nodes.sqldb.master.id}]: jcm getPswd + - setGlobals: + pswd: ${response.out} + - if ('${settings.is_pgpool2}' == 'true'): + - cmd[${nodes.pgpool.master.id}]: cat /var/lib/pgsql/.pcppass |awk -F ':' '{print $4}' + - setGlobals: + pgpoolPasswd: '${response.out}' + successPath: /text/success-pgpool.md?_r=${fn.random} + - else: + - setGlobals: + successPath: /text/success.md?_r=${fn.random} + startPage: ${nodes.sqldb.master.url} success: email: false - text: | - **Admin Panel**: [${nodes.sqldb.master.url}](${nodes.sqldb.master.url}) - **User**: webadmin - **Password**: ${nodes.sqldb.password} - - * [Database Replication with PostgreSQL](https://docs.jelastic.com/postgresql-database-replication/) - * [Remote Access to PostgreSQL](https://docs.jelastic.com/remote-access-postgres/) - * [Import and Export Dump to PostgreSQL](https://docs.jelastic.com/dump-postgres/) + text: ${globals.successPath} diff --git a/scripts/jcm b/scripts/jcm deleted file mode 100644 index a84e6a7..0000000 --- a/scripts/jcm +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash -MASTER_IP=$4 -SLAVE_IP=$2 -DB_PASSWORD=$3 -PGSQL_DATA="/var/lib/pgsql/data" - - -if [ "${1}" == "master" ]; then - #set up master - PGPASSWORD=${DB_PASSWORD} psql -Uwebadmin postgres -c "CREATE USER replication REPLICATION LOGIN CONNECTION LIMIT -1 ENCRYPTED PASSWORD '${DB_PASSWORD}';"; - sudo /etc/init.d/postgresql stop - sed -i "1i host replication replication ${SLAVE_IP}/32 trust" ${PGSQL_DATA}/pg_hba.conf; - sed -i "s|.*wal_level.*|wal_level = hot_standby|g" ${PGSQL_DATA}/postgresql.conf; - sed -i "s|.*max_wal_senders.*|max_wal_senders = 8|g" ${PGSQL_DATA}/postgresql.conf; - sed -i "s|.*wal_keep_segments.*|wal_keep_segments = 32|g" ${PGSQL_DATA}/postgresql.conf; - sed -i "s|.*archive_mode.*|archive_mode = on|g" ${PGSQL_DATA}/postgresql.conf; - sed -i "s|.*archive_command.*| archive_command = 'cd .'|g" ${PGSQL_DATA}/postgresql.conf; - sudo /etc/init.d/postgresql start -fi - -if [ "${1}" == "slave" ]; then - #set up slave - sudo /etc/init.d/postgresql stop - rm -rf ${PGSQL_DATA}; - PGPASSWORD=${DB_PASSWORD} pg_basebackup -h ${MASTER_IP} -D ${PGSQL_DATA} -U replication -v -P; - sed -i "1i host replication replication ${MASTER_IP}/32 trust" ${PGSQL_DATA}/pg_hba.conf; - sed -i "s|.*hot_standby.*|hot_standby = on|g" ${PGSQL_DATA}/postgresql.conf; - sed -i "153 a wal_level = hot_standby" ${PGSQL_DATA}/postgresql.conf; - sed -i "s|.*archive_mode.*|archive_mode = on|g" ${PGSQL_DATA}/postgresql.conf; - sed -i "s|.*archive_command.*| archive_command = 'cd .'|g" ${PGSQL_DATA}/postgresql.conf; - major=$(psql --version | cut -d' ' -f3 | cut -d'.' 
-f1) - [ "$major" -lt "12" ] && { - sed -i "s|.*max_wal_senders.*|max_wal_senders = 1|g" ${PGSQL_DATA}/postgresql.conf; - echo "standby_mode = on" > ${PGSQL_DATA}/recovery.conf; - echo "primary_conninfo = 'host=${MASTER_IP} port=5432 user=replication password=${DB_PASSWORD}'" >> ${PGSQL_DATA}/recovery.conf; - echo "trigger_file = '/tmp/postgresql.trigger.5432'" >> ${PGSQL_DATA}/recovery.conf; - } || { - sed -i "s|.*max_wal_senders.*|max_wal_senders = 8|g" ${PGSQL_DATA}/postgresql.conf; - echo "primary_conninfo = 'host=${MASTER_IP} port=5432 user=replication password=${DB_PASSWORD}'" >> ${PGSQL_DATA}/postgresql.conf; - echo "promote_trigger_file = '/tmp/postgresql.trigger.5432'" >> ${PGSQL_DATA}/postgresql.conf; - touch ${PGSQL_DATA}/standby.signal - } - sudo /etc/init.d/postgresql start -fi diff --git a/tests/status.yaml b/tests/status.yaml index 1ae5d88..eeb3244 100644 --- a/tests/status.yaml +++ b/tests/status.yaml @@ -2,7 +2,7 @@ type: update id: postgres-master-slave-test name: PostreSQL Primary-Secondary Test homepage: https://www.postgresql.org/ -baseUrl: https://raw.githubusercontent.com/jelastic-jps/postgres/master +baseUrl: https://raw.githubusercontent.com/jelastic-jps/postgres/v2.0.0 logo: /images/postgres-70x70.png description: Test of Primary-Secondary cluster status diff --git a/text/success-pgpool.md b/text/success-pgpool.md new file mode 100644 index 0000000..cb32297 --- /dev/null +++ b/text/success-pgpool.md @@ -0,0 +1,18 @@ +**Admin Panel**: [${nodes.sqldb.master.url}](${nodes.sqldb.master.url}) +**User**: webadmin +**Password**: ${globals.pswd} + +**You can connect to PostgreSQL cluster through the Pgpool-II leader node**: + +**Pgpool-II Leader Node**: node${nodes.pgpool.master.id}-${env.domain}:5432 +**User**: webadmin +**Password**: ${globals.pswd} + +**Use these credentials to manage the Pgpool-II nodes in PgpoolAdmin**: +**PgpoolAdmin Url**: [${nodes.pgpool.master.url}](${nodes.pgpool.master.url}) +**PgpoolAdmin User**: postgres +**PgpoolAdmin Password**: ${globals.pgpoolPasswd} + +* [Database Replication with PostgreSQL](https://docs.jelastic.com/postgresql-database-replication/) +* [Remote Access to PostgreSQL](https://docs.jelastic.com/remote-access-postgres/) +* [Import and Export Dump to PostgreSQL](https://docs.jelastic.com/dump-postgres/) diff --git a/text/success.md b/text/success.md new file mode 100644 index 0000000..9994111 --- /dev/null +++ b/text/success.md @@ -0,0 +1,6 @@ +**Admin Panel**: [${nodes.sqldb.master.url}](${nodes.sqldb.master.url}) +**User**: webadmin +**Password**: ${globals.pswd} +* [Database Replication with PostgreSQL](https://docs.jelastic.com/postgresql-database-replication/) +* [Remote Access to PostgreSQL](https://docs.jelastic.com/remote-access-postgres/) +* [Import and Export Dump to PostgreSQL](https://docs.jelastic.com/dump-postgres/)