Redrock Postgres Documentation

Will DROP/TRUNCATE table suck for large values of shared buffers?

Description

If a large number of DROP TABLE or TRUNCATE TABLE statements are executed on a database server with a large shared_buffers setting, will system performance be affected?

Answer

Even when a large number of DROP TABLE or TRUNCATE TABLE statements are executed on a database server with a large shared_buffers setting, there is no significant degradation in system performance.

When Redrock Postgres drops or truncates a table, it does not immediately remove the table's physical files or reclaim their disk space. Instead, the database renames the table and places it in the recycle bin along with any associated objects.

Redrock Postgres purges the recycle bin automatically on a regular basis, traversing the cached data pages in the shared buffer pool and discarding the buffers that belong to the tables being cleaned up.
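As a minimal illustration of this behavior, the sketch below uses the standard pg_relation_filepath() function to record where a table's data file lives before the table is dropped; the table name is arbitrary. In Redrock Postgres the file reported there is not unlinked at DROP time, and its space is reclaimed later when the recycle bin is purged.

CREATE TABLE demo_drop(id int);
INSERT INTO demo_drop SELECT generate_series(1, 10000);

-- Note the on-disk location of the table's data file.
SELECT pg_relation_filepath('demo_drop');

-- The file noted above is not unlinked immediately; the table is renamed
-- into the recycle bin and its disk space is reclaimed during a later purge.
DROP TABLE demo_drop;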

Example

Test script for pgbench (saved as run.sql):

-- Avoid waiting for the WAL flush at each commit.
SET synchronous_commit TO off;
BEGIN;
CREATE TABLE x(id int);
INSERT INTO x VALUES (1);
DROP TABLE x;
COMMIT;

The test is repeated with various settings for shared_buffers:

#!/bin/sh

DB=postgres

# Restart the server with each shared_buffers setting and run the
# DROP TABLE workload for 10 seconds (PGDATA must point at the data directory).
for x in '8 MB' '32 MB' '128 MB' '1 GB' '8 GB'
do
  pg_ctl -o "--shared_buffers='$x'" start
  sleep 1
  echo tps for $x
  psql -c "SHOW shared_buffers" $DB
  pgbench --file=run.sql -j 1 -c 1 -T 10 -P 2 $DB 2>/dev/null
  pg_ctl stop
  sleep 1
done

The database was initialized directly with initdb, without any further parameter adjustments. These are the test results:

$ ./test.sh | grep tps
tps for 8 MB
tps = 1677.356617 (including connections establishing)
tps = 1677.639309 (excluding connections establishing)
tps for 32 MB
tps = 1822.183495 (including connections establishing)
tps = 1822.540181 (excluding connections establishing)
tps for 128 MB
tps = 1822.571589 (including connections establishing)
tps = 1823.012897 (excluding connections establishing)
tps for 1 GB
tps = 1580.872282 (including connections establishing)
tps = 1581.187091 (excluding connections establishing)
tps for 8 GB
tps = 1398.936176 (including connections establishing)
tps = 1399.232128 (excluding connections establishing)

Running the test again with the order shuffled so that the 8 GB case falls in the middle gives:

$ ./test.sh | grep tps
tps for 8 MB
tps = 1806.202465 (including connections establishing)
tps = 1806.542239 (excluding connections establishing)
tps for 32 MB
tps = 1816.756097 (including connections establishing)
tps = 1817.082547 (excluding connections establishing)
tps for 8GB
tps = 1688.839596 (including connections establishing)
tps = 1689.206157 (excluding connections establishing)
tps for 128 MB
tps = 1639.884082 (including connections establishing)
tps = 1640.174407 (excluding connections establishing)
tps for 1 GB
tps = 1559.705001 (including connections establishing)
tps = 1560.020075 (excluding connections establishing)

The results vary between runs, but there is no direct relationship between the TPS figures and shared_buffers. Normal performance tests show similar deviations, caused mainly by checkpoints and the background I/O related configuration.
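For reference, run-to-run variance of this kind is usually governed by the checkpoint and background writer settings. The sketch below simply inspects the standard PostgreSQL parameters involved; no particular values are implied.

SHOW checkpoint_timeout;            -- how often time-based checkpoints run
SHOW max_wal_size;                  -- WAL volume that forces a checkpoint
SHOW checkpoint_completion_target;  -- how checkpoint writes are spread out
SHOW bgwriter_delay;                -- sleep between background writer rounds
SHOW bgwriter_lru_maxpages;         -- max buffers written per round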