docs: upstream
planetscale-actions-bot committed Aug 27, 2024
1 parent ec48309 commit 0c09a35
Showing 1 changed file with 18 additions and 7 deletions.
25 changes: 18 additions & 7 deletions docs/reference/planetscale-system-limits.md
@@ -1,7 +1,7 @@
---
title: 'PlanetScale system limits'
subtitle: 'Learn about system limits that PlanetScale puts in place to protect your database.'
-date: '2024-07-12'
+date: '2024-08-27'
---

## Table limits
@@ -43,12 +43,13 @@ PlanetScale has enforced some system limits to prevent long-running queries or t

The following table details these limits:

-| Type | Limit |
-| -------------------------------------------- | ----- |
-| Per-query rows returned, updated, or deleted | 100k |
-| Per-query DML timeout | 30s |
-| Per-query `SELECT` timeout | 900s |
-| Per-transaction timeout | 20s |
+| Type | Limit |
+| -------------------------------------------- | ------ |
+| Per-query rows returned, updated, or deleted | 100k |
+| Per-query result set total size | 64 MiB |
+| Per-query DML timeout | 30s |
+| Per-query `SELECT` timeout | 900s |
+| Per-transaction timeout | 20s |

### Recommendations for handling query limits

@@ -58,6 +59,16 @@ These limits are enforced for the safety of your database. However, we do unders

We recommend trying to break up large queries, e.g. through pagination.
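
To make that concrete, here is a minimal keyset-pagination sketch; the `orders` table and its columns are hypothetical, not taken from PlanetScale's docs:

```sql
-- First page: cap the number of rows returned.
SELECT id, customer_id, total
FROM orders
ORDER BY id
LIMIT 1000;

-- Next page: resume after the last id seen. Unlike OFFSET,
-- this seeks directly on the primary key, so later pages stay
-- fast and every result set remains well under the row limit.
SELECT id, customer_id, total
FROM orders
WHERE id > 1000  -- last id from the previous page
ORDER BY id
LIMIT 1000;
```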

**What should I do if I have a query that returns more than 64 MiB of data?**

If your schema relies on storing large amounts of variable-length data in `JSON`, `BLOB`, or `TEXT` columns, and individual values regularly exceed a few MiB in size, you should strongly consider storing that data outside of your database instead, such as in an object storage solution.

Storing large values in variable-length columns can also limit the number of rows a single query can return.

For example, the table above lists a 100k limit on rows returned per query, but if your result set's size exceeds the 64 MiB limit, you may receive an error like the following while retrieving far fewer than 100k rows: `resource_exhausted: grpc: received message larger than max (<RESULT_SET_IN_BYTES> vs. 67108864)`.

Similarly, a large `INSERT` or `UPDATE` against these column types may fail with the error message `grpc: trying to send message larger than max (<QUERY_SIZE_IN_BYTES> vs. 67108864)`, in which case you will need to reduce the size of the query.
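
If the data must stay in the database, one pattern that can help (a sketch only, using a hypothetical `documents` table with a large `body` column) is to keep the large column out of bulk queries and fetch it row by row:

```sql
-- Bulk query: select only lightweight columns, so the result set
-- size is driven by row count rather than MiB-sized values.
SELECT id, title, updated_at
FROM documents
ORDER BY id
LIMIT 1000;

-- Fetch a large value one row at a time, only when it is needed,
-- keeping each individual result set far below the 64 MiB cap.
SELECT body
FROM documents
WHERE id = 42;
```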

**What should I do if I have a query that runs longer than 30 seconds?**

For queries that run longer than 30 seconds, we recommend breaking them up into multiple shorter queries. For analytics queries that cannot be broken up, see "How should I handle analytical queries?" below.
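
For DML specifically, one common pattern (sketched here against a hypothetical `events` table) is to run the statement in bounded batches from application code, repeating until it affects no rows:

```sql
-- Each batch touches at most 5,000 rows and finishes well inside
-- the 30s DML timeout; rerun until 0 rows are affected.
DELETE FROM events
WHERE created_at < '2024-01-01'
LIMIT 5000;
```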