# Postgres_cluster

Various experiments with PostgreSQL clustering performed at PostgresPro.

This is a mirror of the postgres repo with several changes to the core and a few extra extensions.

## Core changes:

* Transaction manager interface (eXtensible Transaction Manager, xtm). A generic interface to plug in distributed transaction engines. More info at https://wiki.postgresql.org/wiki/DTM and http://www.postgresql.org/message-id/flat/F2766B97-555D-424F-B29F-E0CA0F6D1D74@postgrespro.ru.
* Distributed deadlock detection API.
* Fast 2PC patch. More info at http://www.postgresql.org/message-id/flat/74355FCF-AADC-4E51-850B-47AF59E0B215@postgrespro.ru

## New extensions:

* pg_dtm. Transaction management by interaction with a standalone coordinator (Arbiter, or dtmd). https://wiki.postgresql.org/wiki/DTM#DTM_approach
* pg_tsdtm. Coordinator-less transaction management by tracking commit timestamps.
* multimaster. Synchronous multi-master replication based on logical decoding and pg_dtm.
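
As a rough illustration of how these extensions get wired into a node, here is a hypothetical postgresql.conf fragment. Only shared_preload_libraries is a standard PostgreSQL setting; the library names and the dtm.* / multimaster.* parameter names below are assumptions made for illustration, so consult each extension's own README for the actual settings.

```
# Hypothetical sketch: library and parameter names (dtm.*, multimaster.*)
# are illustrative assumptions; see each extension's README for real GUCs.
shared_preload_libraries = 'pg_dtm, multimaster'   # load the extensions at startup

# pg_dtm: where the standalone arbiter (dtmd) is listening (assumed names)
dtm.host = 'server1.example.com'
dtm.port = 5431

# multimaster: this node's id and the connection strings of all peers (assumed names)
multimaster.node_id = 1
multimaster.conn_strings = 'host=server1.example.com dbname=postgres, host=server2.example.com dbname=postgres, host=server3.example.com dbname=postgres'
```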

## Changed extension:

* postgres_fdw. Added support for pg_dtm.
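
For a sense of what this enables from SQL, the sketch below uses only standard postgres_fdw DDL; the part specific to this repo is that, with pg_dtm active on both nodes, a transaction spanning the local and the foreign table is coordinated by the DTM. Host, table, and column names are made up for the example.

```sql
-- Standard postgres_fdw setup; names below are illustrative only.
CREATE EXTENSION postgres_fdw;

CREATE TABLE accounts (uid int, balance int);          -- local shard

CREATE SERVER shard2 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'server2.example.com', port '5432', dbname 'postgres');

CREATE USER MAPPING FOR CURRENT_USER SERVER shard2;

CREATE FOREIGN TABLE accounts_shard2 (uid int, balance int)
    SERVER shard2 OPTIONS (table_name 'accounts');      -- remote shard

BEGIN;
UPDATE accounts        SET balance = balance - 100 WHERE uid = 1;  -- local
UPDATE accounts_shard2 SET balance = balance + 100 WHERE uid = 2;  -- remote
COMMIT;  -- with pg_dtm this commit is coordinated across both nodes
```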

## Deploying

To deploy and test postgres over a cluster we use ansible. In each extension directory one can find a tests subdirectory where we store the tests and deploy scripts.

### Running tests on local cluster

To use them one needs an ansible hosts file with the following groups:

farms/cluster.example:
```
[clients] # the benchmark will start simultaneously on these nodes
server0.example.com
[nodes] # all these nodes will run postgres; dtmd/master will be deployed to the first one
server1.example.com
server2.example.com
server3.example.com
```

After you have a proper hosts file you can deploy everything to the servers:

```shell
# cd pg_dtm/tests
# ansible-playbook -i farms/sai deploy_layouts/cluster.yml
```

To perform a dtmbench run:

```shell
# ansible-playbook -i farms/sai perf.yml -e nnodes=3 -e nconns=100
```

Here nnodes is the number of nodes that will be used for the test, and nconns is the number of connections to the backend.

### Running tests on Amazon EC2

In the case of the Amazon cloud there is no need for a specific hosts file. Instead, we use the script farms/ec2.py to get the current instances running on your account. To use that script you need to specify your account key and access_key in ~/.boto.cfg (or in any other place described at http://boto.cloudhackers.com/en/latest/boto_config_tut.html).
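
For reference, a minimal credentials file in the boto format looks like the sketch below (the key values are placeholders; see the boto configuration tutorial linked above for all supported locations and options):

```
[Credentials]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
```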

To create VMs in the cloud run:
```shell
# ansible-playbook -i farms/ec2.py deploy_layouts/ec2.yml
```
After that you should wait a few minutes until information about those instances appears in the Amazon API. Then you can deploy postgres as usual:
```shell
# ansible-playbook -i farms/ec2.py deploy_layouts/cluster-ec2.yml
```
And to run a benchmark:
```shell
# ansible-playbook -i farms/sai perf-ec2.yml -e nnodes=3 -e nconns=100
```