# Database connections
Databases can handle a limited number of concurrent connections. Each connection requires RAM, which means that simply increasing the database connection limit without scaling available resources:

- ✔ might allow more processes to connect, but
- ✘ significantly affects database performance, and can result in the database being shut down due to an out-of-memory error
The way your application manages connections also impacts performance. This guide describes how to approach connection management in serverless environments and long-running processes.
This guide focuses on relational databases and how to configure and tune the Prisma ORM connection pool (MongoDB uses the MongoDB driver connection pool).
## Long-running processes
Examples of long-running processes include Node.js applications hosted on a service like Heroku or a virtual machine. Use the following checklist as a guide to connection management in long-running environments:
- Start with the recommended pool size (`connection_limit`) and tune it
- Make sure you have one global instance of `PrismaClient`
### Recommended connection pool size

The recommended connection pool size (`connection_limit`) to start with for long-running processes is the **default pool size** (`num_physical_cpus * 2 + 1`) ÷ **number of application instances**.

`num_physical_cpus` refers to the number of CPUs of the machine your application is running on.
If you have **one** application instance:

- The default pool size applies by default (`num_physical_cpus * 2 + 1`) - you do not need to set the `connection_limit` parameter.
- You can optionally tune the pool size.
If you have **multiple** application instances:

- You must manually set the `connection_limit` parameter. For example, if your calculated pool size is 10 and you have 2 instances of your app, the `connection_limit` parameter should be no more than 5.
- You can optionally tune the pool size.
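The formula above can be sketched as a small helper. The function name and structure here are illustrative, not part of the Prisma API:

```typescript
// Per-instance pool size for long-running processes:
// default pool size (num_physical_cpus * 2 + 1) divided by the
// number of application instances, rounded down.
// Illustrative helper - not part of the Prisma API.
function recommendedPoolSize(numPhysicalCpus: number, appInstances: number): number {
  const defaultPoolSize = numPhysicalCpus * 2 + 1
  return Math.floor(defaultPoolSize / appInstances)
}

// A 4-CPU machine with a single instance keeps the default of 9;
// the same machine running 2 instances should use at most 4 per instance.
console.log(recommendedPoolSize(4, 1)) // 9
console.log(recommendedPoolSize(4, 2)) // 4
```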
### `PrismaClient` in long-running applications
In long-running applications, we recommend that you:
- ✔ Create one instance of `PrismaClient` and re-use it across your application
- ✔ Assign `PrismaClient` to a global variable in dev environments only to prevent hot reloading from creating new instances
#### Re-using a single `PrismaClient` instance

To re-use a single instance, create a module that exports a `PrismaClient` object:

```ts
import { PrismaClient } from '@prisma/client'

const prisma = new PrismaClient()

export default prisma
```
The object is cached the first time the module is imported. Subsequent requests return the cached object rather than creating a new `PrismaClient`:
```ts
import prisma from './client'

async function main() {
  const allUsers = await prisma.user.findMany()
}

main()
```
You do not have to replicate the example above exactly - the goal is to make sure `PrismaClient` is cached. For example, you can instantiate `PrismaClient` in the `context` object that you pass into an Express app.
#### Do not explicitly `$disconnect()`

You do not need to explicitly `$disconnect()` in the context of a long-running application that is continuously serving requests. Opening a new connection takes time and can slow down your application if you disconnect after each query.
#### Prevent hot reloading from creating new instances of `PrismaClient`

Frameworks like Next.js support hot reloading of changed files, which enables you to see changes to your application without restarting. However, if the framework refreshes the module responsible for exporting `PrismaClient`, this can result in additional, unwanted instances of `PrismaClient` in a development environment.
As a workaround, you can store `PrismaClient` as a global variable in development environments only, as global variables are not reloaded:
```ts
import { PrismaClient } from '@prisma/client'

const globalForPrisma = globalThis as unknown as { prisma: PrismaClient }

export const prisma = globalForPrisma.prisma || new PrismaClient()

if (process.env.NODE_ENV !== 'production') globalForPrisma.prisma = prisma
```
The way that you import and use Prisma Client does not change:
```ts
import { prisma } from './client'

async function main() {
  const allUsers = await prisma.user.findMany()
}

main()
```
## Serverless environments (FaaS)
Examples of serverless environments include Node.js functions hosted on AWS Lambda, Vercel or Netlify Functions. Use the following checklist as a guide to connection management in serverless environments:
- Familiarize yourself with the serverless connection management challenge
- Set the pool size (`connection_limit`) based on whether you have an external connection pooler, and optionally tune the pool size
- Instantiate `PrismaClient` outside the handler and do not explicitly `$disconnect()`
- Configure function concurrency and handle idle connections
### The serverless challenge
In a serverless environment, each function creates its own instance of `PrismaClient`, and each client instance has its own connection pool.

Consider the following example, where a single AWS Lambda function uses `PrismaClient` to connect to a database, with a `connection_limit` of **3**:
A traffic spike causes AWS Lambda to spawn two additional lambdas to handle the increased load. Each lambda creates an instance of `PrismaClient`, each with a `connection_limit` of **3**, which results in a maximum of **9** connections to the database:
200 concurrent functions (and therefore 600 possible connections) responding to a traffic spike 📈 can exhaust the database connection limit very quickly. Furthermore, any functions that are paused keep their connections open by default and block them from being used by another function.
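The arithmetic behind these numbers is simple to sketch. The helper below is illustrative, not an API:

```typescript
// Worst-case number of database connections opened by serverless functions:
// each concurrent function instance holds its own pool of connection_limit
// connections. Illustrative helper - not a Prisma or AWS API.
function worstCaseConnections(concurrentFunctions: number, connectionLimit: number): number {
  return concurrentFunctions * connectionLimit
}

console.log(worstCaseConnections(3, 3))   // 9   - the three lambdas above
console.log(worstCaseConnections(200, 3)) // 600 - the larger traffic spike
```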
To avoid this:

- Start by setting the `connection_limit` to `1`
- If a smaller pool size is not enough, consider using an external connection pooler like PgBouncer
### Recommended connection pool size
The recommended pool size (`connection_limit`) in serverless environments depends on:
- Whether you are using an external connection pooler
- Whether your functions are designed to send queries in parallel
#### Without an external connection pooler
If you are not using an external connection pooler, start by setting the pool size (`connection_limit`) to 1, then optimize. Each incoming request starts a short-lived Node.js process, and many concurrent functions with a high `connection_limit` can quickly exhaust the database connection limit during a traffic spike.
The following example demonstrates how to set the `connection_limit` to 1 in your connection URL:

**PostgreSQL**

```
postgresql://USER:PASSWORD@HOST:PORT/DATABASE?schema=public&connection_limit=1
```

**MySQL**

```
mysql://USER:PASSWORD@HOST:PORT/DATABASE?connection_limit=1
```
If you are using AWS Lambda and not configuring a `connection_limit`, refer to the following GitHub issue for information about the expected default pool size: https://github.com/prisma/docs/issues/667
#### With an external connection pooler
If you are using an external connection pooler, use the default pool size (`num_physical_cpus * 2 + 1`) as a starting point and then tune the pool size. The external connection pooler should prevent a traffic spike from overwhelming the database.
#### Optimizing for parallel requests
If you rarely or never exceed the database connection limit with the pool size set to 1, you can further optimize the connection pool size. Consider a function that sends queries in parallel:
```ts
const result = await Promise.all([
  query1,
  query2,
  query3,
  query4,
  // ...
])
```
If the `connection_limit` is 1, this function is forced to send queries serially (one after the other) rather than in parallel. This slows down the function's ability to process requests, and may result in pool timeout errors. Tune the `connection_limit` parameter until a traffic spike:
- Does not exhaust the database connection limit
- Does not result in pool timeout errors
### `PrismaClient` in serverless environments

#### Instantiate `PrismaClient` outside the handler

Instantiate `PrismaClient` outside the scope of the function handler to increase the chances of reuse. As long as the handler remains 'warm' (in use), the connection is potentially reusable:
```ts
import { PrismaClient } from '@prisma/client'

const client = new PrismaClient()

export async function handler() {
  /* ... */
}
```
#### Do not explicitly `$disconnect()`

You do not need to explicitly `$disconnect()` at the end of a function, as there is a possibility that the container might be reused. Opening a new connection takes time and slows down your function's ability to process requests.
### Other serverless considerations

#### Container reuse
There is no guarantee that subsequent nearby invocations of a function will hit the same container - for example, AWS can choose to create a new container at any time.
Code should assume the container to be stateless and create a connection only if it does not exist - Prisma Client JS already implements this logic.
#### Zombie connections

Containers that are marked "to be removed" and are not being reused still keep a connection open, and can stay in that state for some time (exactly how long is not documented by AWS). This can lead to sub-optimal utilization of database connections.
A potential solution is to clean up idle connections (`serverless-mysql` implements this idea, but it cannot be used with Prisma ORM).
#### Concurrency limits

Depending on your serverless concurrency limit (the number of serverless functions running in parallel), you might still exhaust your database's connection limit. This can happen when too many functions are invoked concurrently, each with its own connection pool. To prevent this, set your serverless concurrency limit to a number lower than your database's maximum connection limit divided by the number of connections used by each function invocation, leaving headroom if you also want to connect from other clients for other purposes.
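This rule of thumb can be sketched as a small calculation. The function name and the reserved-connections parameter are illustrative assumptions, not Prisma or AWS APIs:

```typescript
// Upper bound for the serverless concurrency limit:
// (database connection limit - connections reserved for other clients)
// divided by connections used per function invocation, rounded down.
// Illustrative helper - not a Prisma or AWS API.
function maxFunctionConcurrency(
  dbConnectionLimit: number,
  connectionsPerInvocation: number,
  reservedConnections: number = 0
): number {
  return Math.floor(
    (dbConnectionLimit - reservedConnections) / connectionsPerInvocation
  )
}

// A database allowing 100 connections, connection_limit=1 per function,
// and 10 connections reserved for migrations or admin tools:
console.log(maxFunctionConcurrency(100, 1, 10)) // 90
```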
## Optimizing the connection pool

If the query engine cannot process a query in the queue before the time limit, you will see connection pool timeout exceptions in your log. A connection pool timeout can occur if:
- Many users are accessing your app simultaneously
- You send a large number of queries in parallel (for example, using `await Promise.all()`)
If you consistently experience connection pool timeouts after configuring the recommended pool size, you can further tune the `connection_limit` and `pool_timeout` parameters.
### Increasing the pool size
Increasing the pool size allows the query engine to process a larger number of queries in parallel. Be aware that your database must be able to support the increased number of concurrent connections, otherwise you will exhaust the database connection limit.
To increase the pool size, manually set the `connection_limit` to a higher number:
```prisma
datasource db {
  provider = "postgresql"
  url      = "postgresql://johndoe:mypassword@localhost:5432/mydb?schema=public&connection_limit=40"
}
```
Note: Setting the `connection_limit` to 1 in serverless environments is a recommended starting point, but this value can also be tuned.
### Increasing the pool timeout
Increasing the pool timeout gives the query engine more time to process queries in the queue. You might consider this approach in the following scenario:
- You have already increased the `connection_limit`.
- You are confident that the queue will not grow beyond a certain size; otherwise you will eventually run out of RAM.
To increase the pool timeout, set the `pool_timeout` parameter to a value larger than the default (10 seconds):
```prisma
datasource db {
  provider = "postgresql"
  url      = "postgresql://johndoe:mypassword@localhost:5432/mydb?connection_limit=5&pool_timeout=20"
}
```
### Disabling the pool timeout

Disabling the pool timeout prevents the query engine from throwing an exception after `x` seconds of waiting for a connection, and allows the queue to build up. You might consider this approach in the following scenario:
- You are submitting a large number of queries for a limited time - for example, as part of a job to import or update every customer in your database.
- You have already increased the `connection_limit`.
- You are confident that the queue will not grow beyond a certain size; otherwise you will eventually run out of RAM.
To disable the pool timeout, set the `pool_timeout` parameter to `0`:
```prisma
datasource db {
  provider = "postgresql"
  url      = "postgresql://johndoe:mypassword@localhost:5432/mydb?connection_limit=5&pool_timeout=0"
}
```
## External connection poolers
Connection poolers like Prisma Accelerate and PgBouncer prevent your application from exhausting the database's connection limit.
If you would like to use the Prisma CLI to perform other actions on your database (e.g. migrations and introspection), you will need to add an environment variable that provides a direct connection to your database in the `datasource.directUrl` property in your Prisma schema:
```env
# Connection URL to your database using PgBouncer.
DATABASE_URL="postgres://root:password@127.0.0.1:54321/postgres?pgbouncer=true"

# Direct connection URL to the database used for migrations
DIRECT_URL="postgres://root:password@127.0.0.1:5432/postgres"
```
You can then update your `schema.prisma` to use the new direct URL:
```prisma
datasource db {
  provider  = "postgresql"
  url       = env("DATABASE_URL")
  directUrl = env("DIRECT_URL")
}
```
More information about the `directUrl` field can be found here.
### Prisma Accelerate

Prisma Accelerate is a managed external connection pooler built by Prisma that is integrated into the Prisma Data Platform and handles connection pooling for you.
### PgBouncer

PostgreSQL only supports a certain number of concurrent connections, and this limit can be reached quite fast when service usage goes up - especially in serverless environments.

PgBouncer holds a connection pool to the database and proxies incoming client connections by sitting between Prisma Client and the database. This reduces the number of processes the database has to handle at any given time. PgBouncer passes on a limited number of connections to the database and queues additional connections for delivery when connections become available. To use PgBouncer, see Configure Prisma Client with PgBouncer.
### AWS RDS Proxy
Due to the way AWS RDS Proxy pins connections, it does not provide any connection pooling benefits when used together with Prisma Client.