This article serves as a reference for point release updates.
To perform a point release update, also called a minor release update, for example from PostgreSQL 11.1 to PostgreSQL 11.2, there is no need to use `pg_upgrade` or `pg_dump` and `pg_restore`. The only requirement is that you restart the `postgres` service through init scripts or `systemd` units, whichever method was used to start the service.
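On a `systemd`-based system, the restart is a single command. This is a sketch only: the unit name depends on your packaging (for example, `postgresql-11` for the PGDG packages on RHEL/CentOS, or `postgresql@11-main` on Debian/Ubuntu), so adjust it to match your installation.

```shell
# Restart the PostgreSQL service after the new binaries are installed.
# The unit name below (postgresql-11) is the PGDG convention on
# RHEL-like systems; substitute yours as needed.
sudo systemctl restart postgresql-11

# Confirm the service came back up.
sudo systemctl status postgresql-11
```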
You should update all your replicas first, and the primary last. If you have cascading replicas, start with the replicas at the end of the cascade, then upgrade the node they stream from, and so on. Do not update a node until all the replicas that stream from it have been upgraded.
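To work out the upgrade order, you can ask each node which replicas are streaming from it. A minimal sketch, assuming you can connect as the `postgres` superuser; the `pg_stat_replication` view and these columns exist in PostgreSQL 10 and later.

```shell
# Run on each node: lists the replicas currently streaming from it.
# A node is safe to upgrade only once every address shown here has
# already been upgraded.
psql -U postgres -c "SELECT client_addr, state, replay_lag FROM pg_stat_replication;"
```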
The steps we normally recommend are the following, to be performed on every node of the cluster, starting with the last replicas in a cascading replication and moving up until you finally upgrade the primary:

- Download the packages (`yum` and `apt` have special options to download only).
- If this is the primary server, run a `CHECKPOINT` on the server so all dirty pages are flushed to disk. Skip this step if it's a replica that's being upgraded.
- Stop `postgres`.
- Install the new binaries (this would normally mean running `yum` or `apt` again to get the packages upgraded).
- Start `postgres` again.
The whole process on one node shouldn't take more than five minutes. Make sure your application is not pointing at the node you are about to upgrade, so you don't get connection errors while the service is coming back up.
IMPORTANT: Remember to check the release notes for all the minor versions between the one installed on your system and the latest. They sometimes describe extraordinary maintenance operations to be performed on the updated database.
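To know which range of release notes to read, check the minor version each node is currently running, for example:

```shell
# Print the server version (e.g. "11.1") in unaligned, tuples-only
# form; compare it against the latest minor release for your major
# version and read every release-note entry in between.
psql -U postgres -Atc "SHOW server_version;"
```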
In general terms, PostgreSQL updates are performed together with security updates of the system, through package managers such as `yum` and `apt`.
IMPORTANT: It is always good practice to keep systems updated. These are controlled downtimes that greatly reduce the risk of severe downtime in the mid to long term.
If you follow this approach, we encourage you to set up symmetric, identical QA/staging environments and regularly perform system-wide updates on those servers first, before proceeding in production. You can reduce the downtime even further by promoting an already upgraded replica and pointing the application at it, leaving the old primary released from its duty and ready for the upgrade to be performed on it. You can later rejoin it to the cluster.
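Promoting an upgraded replica can be sketched as follows. The paths are assumptions (PGDG layout on a RHEL-like system, data directory `/var/lib/pgsql/11/data`); adjust them to your installation, and note that rejoining the old primary safely usually involves tools such as `pg_rewind` or a fresh base backup.

```shell
# On the already-upgraded replica: promote it to primary.
sudo -u postgres /usr/pgsql-11/bin/pg_ctl promote -D /var/lib/pgsql/11/data

# Verify it has left recovery ("f" means it is now a primary);
# once confirmed, repoint the application at this node.
psql -U postgres -Atc "SELECT pg_is_in_recovery();"
```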