diff --git a/_partials/_chunk-interval.md b/_partials/_chunk-interval.md
new file mode 100644
index 0000000000..4f84258bfc
--- /dev/null
+++ b/_partials/_chunk-interval.md
@@ -0,0 +1,18 @@
+$PG builds indexes on the fly during ingestion. This means that to add a new entry to an index,
+a significant portion of the index needs to be traversed during every row insertion. When the index does not fit
+into memory, it is constantly flushed to disk and read back. This wastes I/O resources that would otherwise
+be used for writing the heap/WAL data to disk.
+
+The default chunk interval is 7 days. However, best practice is to set `chunk_interval` so that prior to processing,
+the indexes for the chunks you are currently ingesting into fit within 25% of main memory. For example, on a system with 64
+GB of memory, if index growth is approximately 2 GB per day, a 1-week chunk interval is appropriate. If index growth is
+around 10 GB per day, use a 1-day interval.
+
+You set `chunk_interval` when you [create a $HYPERTABLE][hypertable-create-table], or change it by calling
+[`set_chunk_time_interval`][chunk_interval] on an existing $HYPERTABLE.
+
+
+
+[best-practices]: /use-timescale/:currentVersion:/hypertables/#best-practices-for-time-partitioning
+[chunk_interval]: /api/:currentVersion:/hypertable/set_chunk_time_interval/
+[hypertable-create-table]: /api/:currentVersion:/hypertable/create_table/
diff --git a/use-timescale/hypertables/improve-query-performance.md b/use-timescale/hypertables/improve-query-performance.md
index 1a95f50697..a30839227d 100644
--- a/use-timescale/hypertables/improve-query-performance.md
+++ b/use-timescale/hypertables/improve-query-performance.md
@@ -6,6 +6,7 @@ keywords: [hypertables, indexes, chunks]
 ---
 
 import OldCreateHypertable from "versionContent/_partials/_old-api-create-hypertable.mdx";
+import ChunkInterval from "versionContent/_partials/_chunk-interval.mdx";
 
 # Improve hypertable and query performance
 
@@ -27,11 +28,7 @@ Adjusting your hypertable chunk interval can improve performance in your databas
 
 1. **Choose an optimum chunk interval**
 
-   The default chunk interval is 7 days. You can set a custom interval when you create a hypertable.
-   Best practice is that prior to processing, one chunk of data takes up 25% of main memory, including the indexes
-   from each active hypertable. For example, if you write approximately 2 GB of data per day to a database with 64
-   GB of memory, set `chunk_interval` to 1 week. If you write approximately 10 GB of data per day on the same
-   machine, set the time interval to 1 day. For more information, see [best practices for time partitioning][best-practices].
+   <ChunkInterval />
 
    In the following example you create a table called `conditions` that stores time values in the `time` column and has chunks that store data for a `chunk_interval` of one day:
 
diff --git a/use-timescale/hypertables/index.md b/use-timescale/hypertables/index.md
index 487b0137d8..aac6553468 100644
--- a/use-timescale/hypertables/index.md
+++ b/use-timescale/hypertables/index.md
@@ -6,6 +6,7 @@ keywords: [hypertables]
 ---
 
 import HypertableIntro from 'versionContent/_partials/_hypertable-intro.mdx';
+import ChunkInterval from "versionContent/_partials/_chunk-interval.mdx";
 
 # Hypertables
 
@@ -63,11 +64,9 @@ to fit into memory so you can insert and query recent data without reading from
 disk. However, having too many small and sparsely filled chunks can affect query
 planning time and compression.
 
-Best practice is to set `chunk_interval` so that prior to processing, one chunk of data
-takes up 25% of main memory, including the indexes from each active $HYPERTABLE.
-For example, if you write approximately 2 GB of data per day to a database with 64 GB of
-memory, set `chunk_interval` to 1 week. If you write approximately 10 GB of data per day
-on the same machine, set the time interval to 1 day.
+
+<ChunkInterval />
+
 
 For a detailed analysis of how to optimize your chunk sizes, see the [blog post on chunk time intervals][blog-chunk-time]. To learn how
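
A minimal SQL sketch of the two paths the new partial describes may help alongside this change. It assumes a hypothetical `conditions` table partitioned on a `time` column and TimescaleDB 2.13 or later for the `by_range` form, and it uses the long-standing `create_hypertable` and `set_chunk_time_interval` functions rather than the newer `create_table` flow that the partial links to:

```sql
-- Hypothetical example table; any time-partitioned table works the same way.
CREATE TABLE conditions (
    time        TIMESTAMPTZ      NOT NULL,
    location    TEXT             NOT NULL,
    temperature DOUBLE PRECISION
);

-- Create the hypertable with a 1-day chunk interval, suitable when index growth
-- is roughly 10 GB per day on a machine with 64 GB of memory (the 25% guideline).
SELECT create_hypertable('conditions', by_range('time', INTERVAL '1 day'));

-- If index growth turns out to be closer to 2 GB per day, widen the interval.
-- This only affects chunks created after the call; existing chunks keep their size.
SELECT set_chunk_time_interval('conditions', INTERVAL '7 days');
```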