<strong>⚡ Blazing-fast Postgres cloning and branching 🐘</strong><br /><br />
🛠️ Build powerful dev/test environments.<br />
🔃 Cover 100% of DB migrations with CI tests.<br />
💡 Quickly verify ChatGPT ideas to get rid of hallucinations.<br /><br />
Available for any PostgreSQL, including self-managed instances and managed<sup>*</sup> services such as AWS RDS, GCP CloudSQL, Supabase, and Timescale.<br /><br />
Can be installed and used anywhere: all clouds and on-premises.
</div>
<br />
</div>
---
<sub><sup>*</sup>For managed PostgreSQL cloud services like AWS RDS or Heroku, direct physical connection and PGDATA access aren't possible. In these cases, DBLab should run on a separate VM within the same region. It will routinely auto-refresh its data, effectively acting as a database-as-a-service solution. This setup then offers thin database branching ideal for development and testing.</sub>
## Why DBLab?
- Build dev/QA/staging environments using full-scale, production-like databases.
- Provide temporary full-size database clones for SQL query analysis and optimization (see also: [SQL optimization chatbot Joe](https://gitlab.com/postgres-ai/joe)).
- Automatically test database changes in CI/CD pipelines, minimizing risks of production incidents.
- Rapidly validate ChatGPT or other LLM concepts, check for hallucinations, and iterate towards effective solutions.
For example, cloning a 1 TiB PostgreSQL database takes just about 10 seconds. On a single machine, you can have dozens of independent clones running simultaneously, supporting extensive development and testing activities without any added hardware costs.
- Visit [Postgres.ai Console](https://console.postgres.ai/), set up your first organization and provision a DBLab Standard Edition (DBLab SE) to any cloud or on-prem
  - [Pricing](https://postgres.ai/pricing) (starting at $62/month)
  - [Doc: How to install DBLab SE](https://postgres.ai/docs/how-to-guides/administration/install-dle-from-postgres-ai)
- Demo: https://demo.aws.postgres.ai:446/instance (use the token `demo_token` to access); a CLI sketch for working with a running instance follows this list
- Looking for a free version? Install DBLab Community Edition by [following this tutorial](https://postgres.ai/docs/tutorials/database-lab-tutorial)
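For orientation, here is a minimal sketch of day-to-day work with a running DBLab instance via the `dblab` CLI. The URL, token, environment ID, and credentials below are placeholder values in the spirit of the tutorials linked above; check `dblab --help` and the docs for the exact flags supported by your version.

```bash
# Point the CLI at a DBLab instance (URL, token, and environment ID are placeholders)
dblab init --environment-id=tutorial --url=http://localhost:2345 --token=secret_token --insecure

# Create a thin clone with its own Postgres credentials, then list existing clones
dblab clone create --username dblab_user_1 --password secret_password --id my_first_clone
dblab clone list

# Destroy the clone once the experiment is finished
dblab clone destroy my_first_clone
```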
## How it works
Thin cloning is fast because it is based on [Copy-on-Write (CoW)](https://en.wikipedia.org/wiki/Copy-on-write#In_computer_storage). DBLab employs two technologies for enabling thin cloning: [ZFS](https://en.wikipedia.org/wiki/ZFS) (default) and [LVM](https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)).
Using ZFS, DBLab routinely takes new snapshots of the data directory, managing a collection of them and removing old or unused ones. When requesting a fresh clone, users have the option to select their preferred snapshot.
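To make the CoW mechanics more concrete, here is a rough illustration of what snapshot-based thin cloning looks like at the ZFS level. DBLab automates all of this; the pool and dataset names below are made up for the example.

```bash
# Take a point-in-time snapshot of the dataset holding PGDATA
zfs snapshot dblab_pool/pgdata@version_20240101

# Create a writable clone from that snapshot: it shares all unchanged blocks with
# the snapshot, so it appears almost instantly and initially costs only metadata
zfs clone dblab_pool/pgdata@version_20240101 dblab_pool/clone_001

# Inspect datasets, snapshots, and clones; only blocks modified inside a clone
# consume additional space
zfs list -t all -r dblab_pool
```

Each clone's directory is then served by its own Postgres container, which is why even a 1 TiB database can be cloned in seconds.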
Read more:
- [How it works](https://postgres.ai/products/how-it-works)
- [Questions and answers](https://postgres.ai/docs/questions-and-answers)
## Where to start
- [DBLab tutorial for any PostgreSQL database](https://postgres.ai/docs/tutorials/database-lab-tutorial)
- [DBLab tutorial for Amazon RDS](https://postgres.ai/docs/tutorials/database-lab-tutorial-amazon-rds)
- [How to install DBLab SE using Postgres.ai Console](https://postgres.ai/docs/how-to-guides/administration/install-dle-from-postgres-ai)
- [How to install DBLab SE using AWS Marketplace](https://postgres.ai/docs/how-to-guides/administration/install-dle-from-aws-marketplace)
## Case studies
- Qiwi: [How Qiwi Controls the Data to Accelerate Development](https://postgres.ai/resources/case-studies/qiwi)
- GitLab: [How GitLab iterates on SQL performance optimization workflow to reduce downtime risks](https://postgres.ai/resources/case-studies/gitlab)
## Features
- Blazing-fast cloning of Postgres databases – clone in seconds, irrespective of database size
- Theoretical max number of snapshots/clones: 2<sup>64</sup> ([ZFS](https://en.wikipedia.org/wiki/ZFS), default)
- Maximum size of PostgreSQL data directory: 256 quadrillion zebibytes, or 2<sup>128</sup> bytes ([ZFS](https://en.wikipedia.org/wiki/ZFS), default)
- Support & technologies
  - Supported PostgreSQL versions: 9.6–15
  - Thin cloning ([CoW](https://en.wikipedia.org/wiki/Copy-on-write)) technologies: [ZFS](https://en.wikipedia.org/wiki/ZFS) and [LVM](https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux))
  - UI for manual tasks and API & CLI for automation
  - Packaged in Docker containers for all components
- Postgres containers
  - Popular extensions including contrib modules, pgvector, HypoPG, and many others ([docs](https://postgres.ai/docs/database-lab/supported-databases#extensions-included-by-default))
  - Customization capabilities for containers ([docs](https://postgres.ai/docs/database-lab/supported-databases#how-to-add-more-extensions))
  - Docker container and Postgres config parameters in the DBLab config
- Source database requirements
  - Location flexibility: self-managed Postgres, AWS RDS, GCP CloudSQL, Azure, etc.; no source adjustments needed
  - No ZFS or Docker requirements for source databases
- Data provisioning & retrieval
  - Physical (pg_basebackup, WAL-G, pgBackRest) and logical (dump/restore) provisioning
  - Partial data retrieval in logical mode (specific databases/tables)
  - Continuous update in physical mode
  - Periodic full refresh in logical mode without downtime
- Recovery & management
  - Fast Point in Time Recovery (PITR) for physical mode
  - Auto-deletion of unused clones
  - Snapshot retention policies in the DBLab configuration
- Clones
  - "Deletion protection" for preventing clone deletion
  - Persistent clones that withstand DBLab restarts
  - "Reset" command for data version switching
  - Resource quotas: CPU, RAM
- Monitoring & security
  - `/healthz` API endpoint (no auth) and extended `/status` endpoint ([API docs](https://api.dblab.dev)); see the usage sketch after this list
  - Netdata module for insights
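To illustrate the monitoring endpoints mentioned in the list above, here is a hedged sketch of polling them with `curl`. The host, port, and token are assumptions for a local setup; verify the endpoint paths and the auth header name against the [API docs](https://api.dblab.dev).

```bash
# Liveness probe; no authentication required (host and port are assumptions)
curl --silent http://localhost:2345/healthz

# Extended status (clones, pools, retrieval state); assumes the instance's
# verification token is passed in a Verification-Token header (check the API docs)
curl --silent --header "Verification-Token: secret_token" http://localhost:2345/status
```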
## How to contribute
### Support us on GitHub/GitLab
The simplest way to show your support is by giving us a star on GitHub or GitLab! ⭐

### Spread the word
- Shoot out a tweet and mention [@Database_Lab](https://twitter.com/Database_Lab)
- Share this repo's link on your favorite social media platform
### Share your experience
If DBLab has been a vital tool for you, tell the world about your journey. Use the logo from the `./assets` folder for a visual touch. Whether it's in documents, presentations, applications, or on your website, let everyone know you trust and use DBLab.
HTML snippet for lighter backgrounds:
<p>
### Participate in development
Check out our [contributing guide](./CONTRIBUTING.md) for more details.
## License

DBLab source code is licensed under the OSI-approved open source license GNU Affero General Public License version 3 (AGPLv3).
Reach out to the Postgres.ai team if you want a trial or commercial license that does not contain the GPL clauses: [Contact page](https://postgres.ai/contact).
Making DBLab more accessible to engineers around the globe is a great help for the project. Check details in the [translation section of contributing guide](./CONTRIBUTING.md#Translation).
This README is available in the following translations: