Table of Contents
- Migration Stages
- Check Postgres pg_extension (Repack and other extensions)
- NEWDB: Create Roles (per cluster level, not DB level)
- NEWDB: Create empty DB
- OLDDB: Backup (dump) data into a folder structure
- Transfer backup folder from OLDDB to NEWDB using SCP
- NEWDB: Restore Data in 3 stages (pre-data / data / post-data)
- NEWDB: Warm data to memory (Optional)
- NEWDB: Complete the final restore stage
- NEWDB: Rebuild planner statistics
- Timing notes
This guide assumes the new PostgreSQL version is 18 and the database is called mastodon_production.
Migration Stages
Check Postgres pg_extension (Repack and other extensions)
You can check which extensions are in use by running this SQL on the old DB.
psql -U postgres -d mastodon_production -c "SELECT extname, extversion FROM pg_extension ORDER BY extname;"

# Install the Postgres repack extension if you've used it on the previous server.
# FYI: pg_stat_statements and plpgsql are commonly installed by default.
## pg_restore: error: could not execute query: ERROR: extension "pg_repack" is not available
## HINT: The extension must first be installed on the system where PostgreSQL is running.
apt install postgresql-18-repack
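To confirm the extension actually became available on the new server, you can query pg_available_extensions before restoring anything:

# NEWDB: verify pg_repack can be created before you restore
psql -U postgres -c "SELECT name, default_version FROM pg_available_extensions WHERE name = 'pg_repack';"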
NEWDB: Create Roles (per cluster level, not DB level)
You’ll need to generate the password hash first.
Note: these permissions are overkill. We will restrict them later.
psql -U postgres -c "
CREATE ROLE mastodon;
ALTER ROLE mastodon WITH SUPERUSER INHERIT CREATEROLE CREATEDB LOGIN NOREPLICATION NOBYPASSRLS PASSWORD 'SCRAM-SHA-256$4096:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX';
"
NEWDB: Create empty DB
# Create the database on the new location
psql -U postgres -c "CREATE DATABASE mastodon_production OWNER mastodon ENCODING 'UTF8' LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8' TEMPLATE template0;"
OLDDB: Backup (dump) data into a folder structure
# -j 12 is the number of CPU cores minus a few for the OS. I have 16 cores, so I set it to 12.
pg_dump -U postgres -Fd -j 12 --compress=0 -d mastodon_production -f /mnt/dbdata/backup/AS
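Since --compress=0 writes the dump uncompressed, make sure the backup folder has roughly the on-disk size of the database free. A rough check:

# OLDDB: estimate the space the dump directory will need
psql -U postgres -c "SELECT pg_size_pretty(pg_database_size('mastodon_production'));"
df -h /mnt/dbdata/backup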
Transfer backup folder from OLDDB to NEWDB using SCP
I considered using warpgate, croc, or wusp for an endpoint-to-endpoint transfer without SSH, but they are slower. croc allows local relays for LAN traffic, but I figured scp is the boring way to move hundreds of GBs safely and quickly.

# If you get SSH's "UNPROTECTED PRIVATE KEY FILE" warning, chmod 600 the private key first
# scp the folder from local to remote
scp -i key.key -r /mnt/dbdata/backup/AS/ shlee@x.x.x.x:/data/backup/AS/
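A cheap way to sanity-check the transfer is to compare file counts and total size on both ends (paths as above):

# OLDDB: count files and total size of the dump directory
find /mnt/dbdata/backup/AS -type f | wc -l && du -sh /mnt/dbdata/backup/AS
# NEWDB: the numbers should match
find /data/backup/AS/AS -type f | wc -l && du -sh /data/backup/AS/AS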
NEWDB: Restore Data in 3 stages (pre-data / data / post-data)
# Restore pre-data (schema)
pg_restore -U postgres \
-j 12 \
--section=pre-data \
-d mastodon_production \
/data/backup/AS/AS
# Restore data
pg_restore -U postgres \
-j 12 \
--section=data \
-d mastodon_production \
/data/backup/AS/AS
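While the data stage runs, you can watch the parallel COPY workers from another session via pg_stat_progress_copy (Postgres 14+):

# NEWDB: monitor per-table COPY progress during the data stage
psql -U postgres -d mastodon_production -c "SELECT pid, relid::regclass AS tbl, pg_size_pretty(bytes_processed) AS done, tuples_processed FROM pg_stat_progress_copy;"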
NEWDB: Warm data to memory (Optional)
If you have a lot of memory, preload the largest tables.
# Warm everything
psql -U postgres -d mastodon_production << 'EOF'
\echo 'Warming statuses...'
SELECT count(*) FROM statuses;
\echo 'Warming conversations...'
SELECT count(*) FROM conversations;
\echo 'Warming statuses_tags...'
SELECT count(*) FROM statuses_tags;
\echo 'Warming mentions...'
SELECT count(*) FROM mentions;
EOF
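Note that count(*) can be satisfied by an index-only scan, which warms the index rather than the table heap. If you want a guarantee, the pg_prewarm contrib extension loads a relation straight into shared_buffers; a sketch for the two largest tables:

# NEWDB: optional alternative using pg_prewarm
psql -U postgres -d mastodon_production -c "
CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('statuses');
SELECT pg_prewarm('conversations');"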
# Check cache hit ratio
psql -U postgres -d mastodon_production -c "
SELECT relname,
round(100.0 * heap_blks_hit / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS cache_hit_pct
FROM pg_statio_user_tables
WHERE relname IN ('statuses','conversations','accounts','media_attachments',
'mentions','status_stats','tombstones','notifications',
'favourites','preview_cards','statuses_tags')
ORDER BY cache_hit_pct ASC NULLS LAST;"
NEWDB: Complete the final restore stage
# Restore post-data (indexes, constraints)
pg_restore -U postgres \
-j 12 \
--section=post-data \
-d mastodon_production \
/data/backup/AS/AS
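The post-data stage is dominated by index builds, which you can watch from another session via pg_stat_progress_create_index (Postgres 12+):

# NEWDB: monitor index builds during the post-data stage
psql -U postgres -d mastodon_production -c "SELECT pid, relid::regclass AS tbl, phase, blocks_done, blocks_total FROM pg_stat_progress_create_index;"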
NEWDB: Rebuild planner statistics
# Mandatory before go-live — rebuilds planner statistics on fresh data
vacuumdb -U postgres \
-d mastodon_production \
--analyze-only \
--jobs=12
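If you want the database usable sooner, vacuumdb also has --analyze-in-stages, which produces minimal statistics quickly and then refines them in later passes:

# Alternative: fast minimal stats first, fuller stats afterwards
vacuumdb -U postgres -d mastodon_production --analyze-in-stages --jobs=12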
Timing notes
I have a 380GB database on OLDDB….
- pg_dump took 23 min 9 sec
- scp took 22 min 54 sec
- pre-data took 1 sec
- data took 53 min 30 sec
- warming took 33 min 17 sec
- post-data took 98 min 30 sec
- Planner statistics took 1 sec