IO - DataFileRead
The DataFileRead event occurs when a connection waits on a backend process to read a required page from storage because the page isn't available in shared memory.
All queries and data manipulation language (DML) operations access pages in the buffer pool. Statements that can induce reads include SELECT, UPDATE, and DELETE. For example, an UPDATE can read pages from tables or indexes. If the page being requested or updated isn't in the shared buffer pool, this read can lead to the DataFileRead event.
Because the shared buffer pool is finite, it can fill up. In this case, requests for pages that aren’t in memory force the database to read blocks from disk. If the
DataFileRead event occurs frequently, your shared buffer pool might be too small to accommodate your workload. This problem is acute for
SELECT queries that read a large number of rows that don’t fit in the buffer pool. For more information about the buffer pool, see shared buffer area.
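To gauge whether the shared buffer pool is keeping up with your workload, you can compare logical hits against physical reads. The following query is a sketch using the standard pg_stat_database view; the database name myapp is a placeholder:

```sql
-- Approximate buffer cache hit ratio for one database.
-- blks_hit counts pages found in shared buffers;
-- blks_read counts pages that had to be read from storage.
SELECT datname,
       blks_hit,
       blks_read,
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
         AS cache_hit_pct
FROM pg_stat_database
WHERE datname = 'myapp';  -- placeholder database name
```

A ratio that stays well below roughly 99 percent for a read-heavy workload often points to a buffer pool that is too small for the working set.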
Common causes for the DataFileRead event include the following:

Connection spikes

You might find multiple connections generating the same number of DataFileRead wait events. In this case, a spike (sudden and large increase) in DataFileRead events can occur.
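One way to see whether many sessions are currently stuck on the same wait event is to group the active backends in pg_stat_activity (a sketch; the wait_event columns exist in PostgreSQL 10 and later):

```sql
-- Count active backends by wait event to spot a DataFileRead spike.
SELECT wait_event_type,
       wait_event,
       count(*) AS sessions
FROM pg_stat_activity
WHERE state = 'active'
GROUP BY wait_event_type, wait_event
ORDER BY sessions DESC;
```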
SELECT and DML statements performing sequential scans
Your application might be performing a new operation. Or an existing operation might change because of a new execution plan. In such cases, look for tables (particularly large tables) that have a greater seq_scan value. Find them by querying pg_stat_user_tables. To track queries that are generating more read operations, use the pg_stat_statements extension.
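For example, a query along these lines surfaces the tables with the heaviest sequential-scan activity (a sketch against the standard pg_stat_user_tables view):

```sql
-- Tables ordered by number of sequential scans,
-- with the rows those scans fetched and the index scan count for contrast.
SELECT relname,
       seq_scan,
       seq_tup_read,
       idx_scan
FROM pg_stat_user_tables
ORDER BY seq_scan DESC
LIMIT 10;
```

Comparing snapshots of this query over time shows which tables are accumulating new sequential scans.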
CTAS and CREATE INDEX for large data sets
A CTAS is a CREATE TABLE AS SELECT statement. If you run a CTAS using a large data set as a source, or create an index on a large table, the DataFileRead event can occur. When you create an index, the database might need to read the entire object using a sequential scan. A CTAS generates DataFileRead events when pages aren't in memory.
Ingesting large amounts of data
When your application ingests large amounts of data, ANALYZE operations might occur more often. The ANALYZE process can be triggered by an autovacuum launcher or invoked manually. The ANALYZE operation reads a subset of the table. The number of pages that must be scanned is calculated by multiplying 30 by the default_statistics_target value. For more information, see the PostgreSQL documentation. The default_statistics_target parameter accepts values between 1 and 10,000, where the default is 100.
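Because the sampling effort scales with the statistics target, you can lower the target for individual columns on tables that you bulk-load frequently, instead of changing the global setting. This is a sketch; the table and column names are placeholders:

```sql
-- Check the instance-wide sampling target (default is 100).
SHOW default_statistics_target;

-- Reduce the per-column target on a frequently ingested table
-- so ANALYZE samples fewer pages. Table and column names are placeholders.
ALTER TABLE events ALTER COLUMN payload SET STATISTICS 10;

-- A subsequent ANALYZE honors the per-column setting.
ANALYZE events;
```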
Starving resources

If instance network bandwidth or CPU is saturated, the DataFileRead event might occur more frequently.
We recommend different actions depending on the causes of your wait event.
Assume that you identify specific queries that are generating
DataFileRead wait events. You might identify them using the following techniques:
- Catalog views such as the one provided by the pg_stat_statements extension
- The catalog view pg_stat_all_tables, if it periodically shows an increased number of physical reads
- The pg_statio_all_tables view, if it shows that *_read counters are increasing
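For instance, the following sketch lists the tables with the highest physical-read counters in pg_statio_user_tables; comparing snapshots of its output over time shows which counters are growing:

```sql
-- Physical block reads per table (heap, index, TOAST).
SELECT relname,
       heap_blks_read,
       idx_blks_read,
       coalesce(toast_blks_read, 0) AS toast_blks_read
FROM pg_statio_user_tables
ORDER BY heap_blks_read + idx_blks_read DESC
LIMIT 10;
```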
We recommend that you determine which filters are used in the predicate (
WHERE clause) of these queries. Follow these guidelines:
Examine your queries with the EXPLAIN command. In the output, identify which types of scans are used. A sequential scan doesn't necessarily indicate a problem. Queries that use sequential scans naturally produce more DataFileRead events when compared to queries that use filters.
Find out whether the column listed in the WHERE clause is indexed. If not, consider creating an index for this column. This approach avoids the sequential scans and reduces the DataFileRead events. If a query has restrictive filters and still produces sequential scans, evaluate whether the proper indexes are being used.
Find out whether the query is accessing a very large table. In some cases, partitioning a table can improve performance, allowing the query to only read necessary partitions.
Examine the cardinality (total number of rows) from your join operations. Note how restrictive the values are that you're passing in the filters of your WHERE clause. If possible, tune your query to reduce the number of rows that are passed in each step of the plan.
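The first two guidelines can be combined in practice. This sketch assumes a hypothetical orders table filtered on customer_id; EXPLAIN (ANALYZE, BUFFERS) also reports how many pages were read from storage rather than found in shared memory:

```sql
-- Inspect the plan and buffer usage; "Buffers: shared read=N"
-- in the output counts pages that were not in the shared buffer pool.
EXPLAIN (ANALYZE, BUFFERS)
SELECT order_id, total
FROM orders                      -- hypothetical table
WHERE customer_id = 4242;        -- hypothetical filter column

-- If the plan shows a sequential scan despite a restrictive filter,
-- an index on the filter column can avoid it.
CREATE INDEX IF NOT EXISTS orders_customer_id_idx
    ON orders (customer_id);
```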
Maintenance operations such as
ANALYZE are important. We recommend that you don’t turn them off because you find
DataFileRead wait events related to these maintenance operations. The following approaches can minimize the effect of these operations:
- Run maintenance operations manually during off-peak hours. This technique prevents the database from reaching the threshold for automatic operations.
- For very large tables, consider partitioning the table. This technique reduces the overhead of maintenance operations. The database only accesses the partitions that require maintenance.
- When you ingest large amounts of data, consider disabling the autoanalyze feature.
The autoanalyze feature is automatically triggered for a table when the following formula is true.
pg_stat_user_tables.n_tup_ins > (pg_class.reltuples x autovacuum_analyze_scale_factor) + autovacuum_analyze_threshold

The view pg_stat_user_tables and the catalog pg_class have multiple rows. One row can correspond to one table in your database. This formula assumes that the reltuples value is for a specific table. The parameters autovacuum_analyze_scale_factor (0.1 by default) and autovacuum_analyze_threshold (50 tuples by default) are usually set globally for the whole instance. However, you can set different values for a specific table.
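For example, on a table that receives constant bulk inserts, you might raise the per-table thresholds so that autoanalyze runs less often, and run ANALYZE manually during off-peak hours instead (a sketch; the table name is a placeholder):

```sql
-- Make autoanalyze less aggressive for one large, bulk-loaded table.
-- These storage parameters override the instance-wide defaults.
ALTER TABLE staging_events SET (
    autovacuum_analyze_scale_factor = 0.5,   -- default is 0.1
    autovacuum_analyze_threshold    = 100000 -- default is 50
);

-- Then analyze manually during a maintenance window.
ANALYZE staging_events;
```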
When you monitor database health status, you might find that the database connections metric spikes. This increase indicates an increased number of connections to your database. We recommend the following approach:
Limit the number of connections that the application can open with each instance. If your application has an embedded connection pool feature, set a reasonable number of connections. Base the number on what the CPUs in your instance can parallelize effectively.
Whenever possible, take advantage of hot standby nodes for PostgreSQL. When your application runs a read-only operation, send these requests to the read-only endpoint. This technique spreads application requests across all reader nodes, reducing the I/O pressure on the writer node.
Consider scaling up your DB instance. A higher-capacity instance class gives more memory, which gives PostgreSQL a larger shared buffer pool to hold pages. The larger size also gives the DB instance more CPUs to handle connections. More CPUs are particularly helpful when the operations that are generating
DataFileRead wait events are writes.