Recipe - Hot logical backup of all databases

Gabriele Bartolini

A short recipe, extracted from our award-winning Administration Cookbook, on hot logical backup of all databases in a PostgreSQL instance.

If you have more than one database in your PostgreSQL server, you may want to back up all the databases together.

How to do it ...

My recommendation is to repeat, for each database in your cluster, exactly what you did for a single database. You can run those individual dumps in parallel if you want to speed things up.
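If you want to script that, here is a minimal sketch, assuming a bash shell, default connection settings, and a hypothetical /backup directory; each database is dumped in pg_dump's custom format, with all dumps running in parallel:

# List every non-template database, then dump each one in custom
# format (-Fc) in the background; wait for all dumps to finish.
# Adapt connection options (-h, -p, -U) to your environment.
for DB in $(psql -At -c "SELECT datname FROM pg_database WHERE NOT datistemplate"); do
    pg_dump -Fc -f "/backup/${DB}.dump" "$DB" &
done
wait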

Once this is complete, also dump the global information (roles and tablespaces) using the following:

pg_dumpall -g
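In practice, you will want to redirect that output to a file; the file name here is just an illustration:

pg_dumpall -g > globals.sql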

How it works ...

To back up all databases, you may be told you need to use the pg_dumpall utility. I have four reasons why you shouldn't do that, which are as follows:

  • If you use pg_dumpall, then the only output produced is a script file. Script files can't use the parallel restore feature of pg_restore, so by taking your backup in this way, you will be forcing the restore to be slower than it needs to be (see the example after this list).
  • The pg_dumpall utility produces dumps of each database one after another. This means that pg_dumpall is slower than running multiple pg_dump tasks in parallel, one against each database.
  • The dumps of the individual databases are not mutually consistent to a single point in time. As we pointed out in a previous recipe (NOTE: in the book), if you start the dump at 04:00 and it ends at 07:00, then you cannot be sure exactly what time the dump relates to; it could be any time between 04:00 and 07:00.
  • The options of pg_dumpall are similar in many ways to those of pg_dump, though not all of them are available, so some things aren't possible.
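To illustrate the first point above: a custom-format dump produced by pg_dump can be restored with several parallel jobs, something a plain script file can never benefit from. A minimal sketch, with hypothetical file and database names:

# Restore the custom-format dump into database mydb using 4 parallel jobs
pg_restore -j 4 -d mydb /backup/mydb.dump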

See also

If you are taking a logical backup of all your databases for disaster recovery purposes, you should look at hot physical backup options instead.
