Redrock Postgres Documentation

Online Migration

PostgreSQL provides logical replication, which we can use to perform a zero-downtime online migration. Online migration can be applied in the following scenarios:

  • Upgrade the version of the database software;
  • Upgrade the version of the operating system software, or change the operating system type, such as from Windows to Linux;
  • Upgrade the hardware specifications, or change the hardware architecture, such as from x86 to ARM;
  • Adjust the database deployment schema, such as splitting a large database into multiple smaller databases;
  • Migrate the database deployment environment, such as from a server in one public cloud to another.

If your database is large, the initial copy may take some time to transfer over the network. Note that online migration based on logical replication supports only database-level or table-level migrations. We will cover a database-level migration in the following steps, performing an online migration of the lrtest database.

Initial Checks

If any table that you plan to replicate lacks a primary key, create one. The following query lists the tables that have no primary key; it should return 0 rows before you start the migration.

SELECT n.nspname AS schema, c.relname AS table
  FROM pg_class c JOIN pg_namespace n
    ON n.oid = c.relnamespace
  WHERE c.relkind = 'r'
    AND NOT EXISTS (
      SELECT 1 FROM pg_constraint con
        WHERE con.conrelid = c.oid AND con.contype = 'p')
    AND n.nspname NOT IN ('pg_catalog', 'pg_recyclebin', 'information_schema');

If the query returns any rows, create a primary key on each listed table so that it can be replicated.
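If a table has no suitable existing column, one option is to add a surrogate identity key. This is a sketch only; orders and id below are hypothetical names, so substitute your own table and column:

```sql
-- Hypothetical table lacking a primary key; adapt the names to your schema.
ALTER TABLE orders ADD COLUMN id bigint GENERATED ALWAYS AS IDENTITY;
ALTER TABLE orders ADD PRIMARY KEY (id);
```

If the table already has a unique, non-null column, you can instead promote that column directly with ALTER TABLE ... ADD PRIMARY KEY.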

Configuration Changes

An important thing to note is that if you have adjusted postgresql.conf for memory, the maximum number of connections, and so on, these parameters are not copied by logical replication. Before migrating, execute the following query on both the original and target databases and compare the results in detail.

SELECT name, setting, unit, boot_val FROM pg_settings;

The pg_settings view exposes the parameters of the running database server. If you find differences between the original and target databases during the comparison, you may need to change the corresponding parameters on the target database and restart it so that the configuration changes take effect.
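One convenient way to compare the two result sets is to export each to a file and diff them. A sketch using the hosts from this section (the file paths are arbitrary); an ORDER BY is added so the outputs line up:

```shell
psql -h 192.168.1.50 -d lrtest -U postgres -At \
  -c "SELECT name, setting, unit, boot_val FROM pg_settings ORDER BY name" > /tmp/orig_settings.txt
psql -h 192.168.1.51 -d lrtest -U postgres -At \
  -c "SELECT name, setting, unit, boot_val FROM pg_settings ORDER BY name" > /tmp/target_settings.txt
diff /tmp/orig_settings.txt /tmp/target_settings.txt
```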

Since logical replication requires a replication user, execute the following query on the original database to create a user with the REPLICATION privilege.

CREATE USER logicalrep REPLICATION PASSWORD 'pgpass';

After creating the replication user, change the pg_hba.conf configuration file on the original database so that the target database is allowed to connect to the original database for replication. Because the database server reads pg_hba.conf from top to bottom and uses the first matching rule, place this line before any broader rule that would otherwise match the connection:

host   lrtest  logicalrep  192.168.1.51/32  md5

After changing the pg_hba.conf, we need to set the wal_level parameter of the original database to logical.

ALTER SYSTEM SET wal_level TO logical;

After we have changed the configuration above, we need to restart the original database for the configuration changes to take effect.
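After the restart, you can confirm on the original database that the new setting is in effect:

```sql
SHOW wal_level;
-- expected: logical
```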

Schema Migration

Dump the schema of the original database with pg_dump. This will allow us to restore the schema in the target database.

pg_dump -Fc -s -h 192.168.1.50 -d lrtest -U postgres -f /tmp/lrtest_schema.dmp

Use pg_restore to import the schema dump into the target server. Because the -C option recreates the lrtest database from the dump, connect to the existing postgres maintenance database:

pg_restore -C -h 192.168.1.51 -d postgres -U postgres /tmp/lrtest_schema.dmp

Set up Record and Publish DDL Operations on the original database and Subscribe and Synchronize DDL Operations on the target database. Note that if you are going to migrate all table data in the database in the steps below, you do not need to publish the audit.ddl_history table on the original database, nor subscribe to it on the target database.

Migrate table data

Now that the schema has been imported into the target database, we can create a publication by running the following command on the original database. As mentioned above, we are migrating all the tables in the database; if you only want to replicate certain tables, use the FOR TABLE clause of the CREATE PUBLICATION command.

CREATE PUBLICATION lrtest_migrate FOR ALL TABLES;
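For comparison, a table-limited publication would look like the following sketch, where lrtest_partial, t1, and t2 are hypothetical names:

```sql
-- Hypothetical: publish only selected tables instead of the whole database.
CREATE PUBLICATION lrtest_partial FOR TABLE t1, t2;
```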

Run the following command on the target database to subscribe to the table data and subsequent changes on the original database:

CREATE SUBSCRIPTION lrtest_migrate CONNECTION
  'host=192.168.1.50 dbname=lrtest user=logicalrep password=pgpass'
  PUBLICATION lrtest_migrate;
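The initial table synchronization can take a while on a large database. You can watch its progress on the target database through the pg_subscription_rel catalog; a table is fully caught up once it reaches state r (ready):

```sql
SELECT srrelid::regclass AS table_name, srsubstate
  FROM pg_subscription_rel;
-- common states: i = initialize, d = data copy, s = synchronized, r = ready
```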

Verification

Next, we need to verify that the data in the original and target databases are consistent. Run the following commands on the original database to create an aggregate function that computes a checksum of table data.

CREATE FUNCTION md5_agg_sfunc(text, text) RETURNS text
  AS 'SELECT md5($1 || $2)' LANGUAGE sql;

CREATE AGGREGATE md5_agg (
  BASETYPE = text,
  STYPE = text,
  SFUNC = md5_agg_sfunc,
  INITCOND = ''
);

Since DDL synchronization was already set up in the schema migration steps above, functions created on the original database are automatically synchronized to the target database, so there is no need to run the commands above on the target database.

Then, for each table in the database, execute the following query on the original and target databases separately and confirm that the results on both sides match. Here t_table stands for the table name and table_key for its primary key column.

SELECT md5_agg(t::text) AS checksum FROM
  (SELECT * FROM t_table ORDER BY table_key) AS t;

If you find a difference between the original and target databases for a table, you can execute the following query on both databases to compare the table's data in batches.

SELECT md5_agg(t::text) AS checksum, max(table_key) AS maxval FROM (
  SELECT * FROM t_table
    WHERE table_key > key_val
    ORDER BY table_key LIMIT 100000
) AS t;

In the query above, table_key is the primary key column of t_table and key_val is a primary key value. For the first batch, set key_val to a value smaller than the minimum primary key value in the table; for each subsequent batch, set key_val to the maxval returned by the previous query.

Clean up

At this point the data from the original database has been copied to the target database and verified to be consistent, and logical replication is still running; you can leave it in place until you are ready to switch over. Once all your data has been copied and you have completed the cutover, delete the subscription.

DROP SUBSCRIPTION lrtest_migrate;
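If the original database will stay online for a while, you may also want to drop the publication and the replication user that were created there for the migration:

```sql
-- Run on the original database once the subscription has been dropped.
DROP PUBLICATION lrtest_migrate;
DROP USER logicalrep;
```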

Now that the online migration has completed successfully, you can remove the original database if you no longer need it.