-PostgreSQL does not support parameters for identifiers. If you need to have dynamic database, schema, table, or column names (e.g. in DDL statements) use pg-format package for handling escaping these values to ensure you do not have SQL injection!
+PostgreSQL does not support parameters for identifiers. If you need to have dynamic database, schema, table, or column names (e.g. in DDL statements), use the [pg-format](https://www.npmjs.com/package/pg-format) package to handle escaping these values and ensure you do not have SQL injection!
Parameters passed as the second argument to `query()` will be converted to raw data types using the following rules:
@@ -123,7 +123,7 @@ console.log(res.rows[0]) // ['Brian', 'Carlson']
### Types
-You can pass in a custom set of type parsers to use when parsing the results of a particular query. The `types` property must conform to the [Types](/api/types) API. Here is an example in which every value is returned as a string:
+You can pass in a custom set of type parsers to use when parsing the results of a particular query. The `types` property must conform to the [Types](/apis/types) API. Here is an example in which every value is returned as a string:
```js
const query = {
diff --git a/docs/pages/features/ssl.mdx b/docs/pages/features/ssl.mdx
index 95683aca1..2c5e7bd9e 100644
--- a/docs/pages/features/ssl.mdx
+++ b/docs/pages/features/ssl.mdx
@@ -50,3 +50,17 @@ const config = {
},
}
```
+
+## Channel binding
+
+If the PostgreSQL server offers SCRAM-SHA-256-PLUS (i.e. channel binding) for TLS/SSL connections, you can enable this as follows:
+
+```js
+const client = new Client({ ...config, enableChannelBinding: true })
+```
+
+or
+
+```js
+const pool = new Pool({ ...config, enableChannelBinding: true })
+```
diff --git a/docs/pages/features/transactions.mdx b/docs/pages/features/transactions.mdx
index 492cbbe0e..4433bd3e4 100644
--- a/docs/pages/features/transactions.mdx
+++ b/docs/pages/features/transactions.mdx
@@ -36,4 +36,4 @@ try {
} finally {
client.release()
}
-```
\ No newline at end of file
+```
diff --git a/docs/pages/features/types.mdx b/docs/pages/features/types.mdx
index 808d2e185..36e8b7035 100644
--- a/docs/pages/features/types.mdx
+++ b/docs/pages/features/types.mdx
@@ -4,7 +4,7 @@ title: Data Types
import { Alert } from '/components/alert.tsx'
-PostgreSQL has a rich system of supported [data types](https://www.postgresql.org/docs/9.5/static/datatype.html). node-postgres does its best to support the most common data types out of the box and supplies an extensible type parser to allow for custom type serialization and parsing.
+PostgreSQL has a rich system of supported [data types](https://www.postgresql.org/docs/current/datatype.html). node-postgres does its best to support the most common data types out of the box and supplies an extensible type parser to allow for custom type serialization and parsing.
## strings by default
diff --git a/docs/pages/guides/_meta.json b/docs/pages/guides/_meta.json
index 3889a0992..777acb4e2 100644
--- a/docs/pages/guides/_meta.json
+++ b/docs/pages/guides/_meta.json
@@ -1,5 +1,6 @@
{
"project-structure": "Suggested Code Structure",
"async-express": "Express with Async/Await",
+ "pool-sizing": "Pool Sizing",
"upgrading": "Upgrading"
}
diff --git a/docs/pages/guides/async-express.md b/docs/pages/guides/async-express.md
index 982fdc50c..a44c15289 100644
--- a/docs/pages/guides/async-express.md
+++ b/docs/pages/guides/async-express.md
@@ -26,7 +26,7 @@ import { Pool } from 'pg'
const pool = new Pool()
-export const query = (text, params) => pool.query(text, params);
+export const query = (text, params) => pool.query(text, params)
```
Then I will install [express-promise-router](https://www.npmjs.com/package/express-promise-router) and use it to define my routes. Here is my `routes/user.js` file:
diff --git a/docs/pages/guides/pool-sizing.md b/docs/pages/guides/pool-sizing.md
new file mode 100644
index 000000000..5c7ddaad8
--- /dev/null
+++ b/docs/pages/guides/pool-sizing.md
@@ -0,0 +1,48 @@
+---
+title: Pool Sizing
+---
+
+If you're using a [pool](/apis/pool) in an application with multiple instances of your service running (common in most cloud/container environments currently), you'll need to think a bit about the `max` parameter of your pool across all services and all _instances_ of all services which are connecting to your Postgres server.
+
+This can get pretty complex depending on your cloud environment. Further nuance is introduced by things like pg-bouncer, RDS connection proxies, etc., which do their own connection pooling and connection multiplexing. So, it's definitely worth thinking about. Let's run through a few setups. While certainly not exhaustive, these examples hopefully prompt you to think about what's right for your setup.
+
+## Simple apps, dev mode, fixed instance counts, etc.
+
+If your app isn't running in a k8s-style environment with auto-scaling containers, lambdas, cloud functions, etc., you can do some "napkin math" for the `max` pool config you can use. Let's assume your Postgres instance is configured to allow a maximum of 200 connections at any one time. You know your service is going to run on 4 instances. You could set the `max` pool size to 50, but then if all your services are saturated waiting on database connections, you won't be able to connect to the database from any mgmt tools or scale up your services without changing config/code to adjust the max size.
+
+In this situation, I'd probably set the `max` to 20 or 25. This leaves you plenty of headroom for scaling more instances, and realistically, if your app is starved for db connections, you probably want to take a look at your queries and make them execute faster, add caching, or do something else to reduce the load on the database. I once worked on a more reporting-heavy application with a limited number of users, but each user ran 5-6 queries at a time, each taking 100-200 milliseconds. In that situation, I upped the `max` to 50. Typically, though, I don't bother setting it to anything other than the default of `10` as that's usually _fine_.
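+
+As a rough sketch, here's that napkin math in code. The numbers below are the assumptions from above (a 200-connection server, 4 fixed instances), not universal recommendations:
+
+```js
+import { Pool } from 'pg'
+
+// 200 server connections / 4 instances = 50 per instance at the absolute ceiling,
+// but capping each pool at 25 leaves half the server's connections free for
+// mgmt tools, migrations, and scaling out a couple more instances.
+const pool = new Pool({ max: 25 })
+```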
+
+## Auto-scaling, cloud-functions, multi-tenancy, etc.
+
+If the number of instances of your services which connect to your database is more dynamic and based on things like load, auto-scaling containers, or running in cloud-functions, you need to be a bit more thoughtful about what your max might be. Often in these environments, there will be another database pooling proxy in front of the database like pg-bouncer or the RDS-proxy, etc. I'm not sure how all these function exactly, and they all have some trade-offs, but let's assume you're not using a proxy. Then I'd be pretty cautious about how large you set any individual pool. If you're running an application under pretty serious load where you need dynamic scaling or lots of lambdas spinning up and sending queries, your queries are likely fast and you should be fine setting the `max` to a low value like 10 -- or just leave it alone, since `10` is the default.
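+
+A minimal sketch of that reasoning (the autoscaler ceiling and server limit below are illustrative assumptions, not recommendations):
+
+```js
+import { Pool } from 'pg'
+
+// Worst case, total connections = (max per pool) x (max concurrent instances),
+// so with dynamic scaling, derive the per-instance cap from that worst case.
+const assumedMaxInstances = 20 // upper bound enforced by your autoscaler
+const serverConnectionLimit = 200
+const pool = new Pool({ max: Math.floor(serverConnectionLimit / assumedMaxInstances) }) // max: 10
+```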
+
+## pg-bouncer, RDS-proxy, etc.
+
+I'm not sure of all the pooling services for Postgres. I haven't used any myself. Throughout the years of working on `pg`, I've addressed issues caused by various proxies behaving differently than an actual Postgres backend. There are also gotchas with things like transactions. On the other hand, plenty of people run these with much success. In this situation, I would just recommend using some small but reasonable `max` value like the default value of `10` as it can still be helpful to keep a few TCP sockets from your services to the Postgres proxy open.
+
+## Conclusion, tl;dr
+
+It's a bit of a complicated topic and doesn't have much impact on things until you need to start scaling. At that point, your number of connections _still_ probably won't be your scaling bottleneck. It's worth thinking about a bit, but mostly I'd just leave the pool size at the default of `10` until you run into trouble: hopefully you never do!
diff --git a/docs/pages/guides/project-structure.md b/docs/pages/guides/project-structure.md
index 94dcc1a30..5f53a4183 100644
--- a/docs/pages/guides/project-structure.md
+++ b/docs/pages/guides/project-structure.md
@@ -31,8 +31,8 @@ import { Pool } from 'pg'
const pool = new Pool()
-export const query = (text, params, callback) => {
- return pool.query(text, params, callback)
+export const query = (text, params) => {
+ return pool.query(text, params)
}
```
@@ -41,10 +41,10 @@ That's it. But now everywhere else in my application instead of requiring `pg` d
```js
// notice here I'm requiring my database adapter file
// and not requiring node-postgres directly
-import * as db from '../db.js'
+import * as db from '../db/index.js'
app.get('/:id', async (req, res, next) => {
- const result = await db.query('SELECT * FROM users WHERE id = $1', [req.params.id]
+ const result = await db.query('SELECT * FROM users WHERE id = $1', [req.params.id])
res.send(result.rows[0])
})
@@ -85,13 +85,13 @@ export const query = async (text, params) => {
console.log('executed query', { text, duration, rows: res.rowCount })
return res
}
-
+
export const getClient = () => {
return pool.connect()
}
```
-Okay. Great - the simplest thing that could possibly work. It seems like one of our routes that checks out a client to run a transaction is forgetting to call `done` in some situation! Oh no! We are leaking a client & have hundreds of these routes to go audit. Good thing we have all our client access going through this single file. Lets add some deeper diagnostic information here to help us track down where the client leak is happening.
+Okay. Great - the simplest thing that could possibly work. It seems like one of our routes that checks out a client to run a transaction is forgetting to call `release` in some situations! Oh no! We are leaking a client & have hundreds of these routes to go audit. Good thing we have all our client access going through this single file. Let's add some deeper diagnostic information here to help us track down where the client leak is happening.
```js
export const query = async (text, params) => {
diff --git a/docs/pages/guides/upgrading.md b/docs/pages/guides/upgrading.md
index e3bd941c8..6a09d2ec1 100644
--- a/docs/pages/guides/upgrading.md
+++ b/docs/pages/guides/upgrading.md
@@ -5,13 +5,13 @@ slug: /guides/upgrading
# Upgrading to 8.0
-node-postgres at 8.0 introduces a breaking change to ssl-verified connections. If you connect with ssl and use
+node-postgres at 8.0 introduces a breaking change to ssl-verified connections. If you connect with ssl and use
```
const client = new Client({ ssl: true })
```
-and the server's SSL certificate is self-signed, connections will fail as of node-postgres 8.0. To keep the existing behavior, modify the invocation to
+and the server's SSL certificate is self-signed, connections will fail as of node-postgres 8.0. To keep the existing behavior, modify the invocation to
```
const client = new Client({ ssl: { rejectUnauthorized: false } })
@@ -37,7 +37,7 @@ If your application still relies on these they will be _gone_ in `pg@7.0`. In or
// old way, deprecated in 6.3.0:
// connection using global singleton
-pg.connect(function(err, client, done) {
+pg.connect(function (err, client, done) {
client.query(/* etc, etc */)
done()
})
@@ -50,10 +50,10 @@ pg.end()
// new way, available since 6.0.0:
// create a pool
-var pool = new pg.Pool()
+const pool = new pg.Pool()
// connection using created pool
-pool.connect(function(err, client, done) {
+pool.connect(function (err, client, done) {
client.query(/* etc, etc */)
done()
})
@@ -102,11 +102,12 @@ If you do **not** pass a callback `client.query` will return an instance of a `P
`client.query` has always accepted any object that has a `.submit` method on it. In this scenario the client calls `.submit` on the object, delegating execution responsibility to it. In this situation the client also **returns the instance it was passed**. This is how [pg-cursor](https://github.com/brianc/node-pg-cursor) and [pg-query-stream](https://github.com/brianc/node-pg-query-stream) work. So, if you need the event emitter functionality on your queries for some reason, it is still possible because `Query` is an instance of `Submittable`:
```js
-import { Client, Query } from 'pg'
+import pg from 'pg'
+const { Client, Query } = pg
const query = client.query(new Query('SELECT NOW()'))
-query.on('row', row => {})
-query.on('end', res => {})
-query.on('error', res => {})
+query.on('row', (row) => {})
+query.on('end', (res) => {})
+query.on('error', (res) => {})
```
`Query` is considered a public, documented part of the API of node-postgres and this form will be supported indefinitely.
diff --git a/docs/pages/index.mdx b/docs/pages/index.mdx
index d785d327f..bcaaaecd6 100644
--- a/docs/pages/index.mdx
+++ b/docs/pages/index.mdx
@@ -3,6 +3,8 @@ title: Welcome
slug: /
---
+import { Logo } from '/components/logo.tsx'
+
node-postgres is a collection of node.js modules for interfacing with your PostgreSQL database. It has support for callbacks, promises, async/await, connection pooling, prepared statements, cursors, streaming results, C/C++ bindings, rich type parsing, and more! Just like PostgreSQL itself there are a lot of features: this documentation aims to get you up and running quickly and in the right direction. It also tries to provide guides for more advanced & edge-case topics allowing you to tap into the full power of PostgreSQL from node.js.
## Install
@@ -15,11 +17,26 @@ $ npm install pg
node-postgres continued development and support is made possible by the many [supporters](https://github.com/brianc/node-postgres/blob/master/SPONSORS.md).
+Special thanks to [Medplum](https://www.medplum.com/) for sponsoring node-postgres for a whole year!
+
+