How to tell if a logical replication slot is in use

Swapnil Suryawanshi

The purpose of this article is to explain how to tell whether a replication slot is still in use, and how and when to drop a slot that some discontinued node or service has left behind.

Why have slots?

So what's the slot doing, why is it there?

In general terms, replication slots are a handle to make a server retain resources it would otherwise discard - either for logical replication or for physical block-level replication.

Physical slots mainly exist to stop the server from discarding WAL, so that streaming replication standbys can be sure the WAL they still need will be available. Logical slots also have to make the server retain WAL, because they need to read and decode that WAL on the upstream side to generate the logical data stream for the downstream. Both kinds of slot force WAL to be retained from the slot's pg_replication_slots.restart_lsn position; for a logical slot this is typically earlier than its confirmed_flush_lsn. (That's because a logical slot must be able to restart decoding from the xid assignment of the oldest transaction not yet committed or aborted as of the master's state at the confirmed_flush_lsn position, in order to do transaction reordering and buffering.)

The master isn't (yet) smart enough to know how to fetch such WAL back from an archive even if it has an archive_command set, so it has to keep it in its pg_wal / pg_xlog, which can fill up and bring the server down.
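
To see how much WAL each slot is currently pinning, compare the slot's restart_lsn with the server's current WAL position. A minimal sketch, assuming PostgreSQL 10 or newer (on 9.x the equivalent functions are pg_xlog_location_diff and pg_current_xlog_location):

select slot_name, slot_type, active, restart_lsn, confirmed_flush_lsn,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) as retained_wal
from pg_replication_slots
order by restart_lsn;

On a standby, use pg_last_wal_replay_lsn() instead of pg_current_wal_lsn(), which raises an error while the server is in recovery.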

Both physical and logical slots may also force the master to retain dead rows - old versions of rows that have since been UPDATEd or DELETEd. Physical slots can hold down the global xmin for all tables (pg_replication_slots.xmin) on behalf of streaming replication hot standbys, to avoid unwanted query cancels even if the network connection is transiently lost. Logical slots usually do this only for the system catalogues (pg_replication_slots.catalog_xmin), so it usually only matters for sites that use tons of temp tables etc., but they may also set a regular xmin during initialization of a new logical replica or while re-syncing tables on an existing logical replica.
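
A quick way to see whether (and how badly) a slot is holding back vacuum is to look at the age, in transactions, of its xmin and catalog_xmin. For example:

select slot_name, slot_type, xmin, age(xmin) as xmin_age,
       catalog_xmin, age(catalog_xmin) as catalog_xmin_age
from pg_replication_slots;

A large and steadily growing age on either column means the slot is pinning old row versions.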

Important: Do not drop a replication slot, especially a logical replication slot, unless you have definitively identified what it is (or was) for. Even if it's currently inactive that does not mean it is unwanted or unused; dropping it may permanently destroy an important replication link or capability.

Do not drop a slot just because it's not marked active right now.

The only time it's definitely safe to drop a slot without careful checking is when:

  • The slot was created by a node-join / subscription-create etc. operation that you know failed or was cancelled
  • The slot was associated with a peer node/subscriber you know has been permanently taken offline/failed/crashed
  • The slot was associated with a peer node/subscriber the customer has split off into its own independent node so they never want to have it receive more changes from its old peer
  • You created the slot for a purpose you know it's no longer needed for

But in all those cases you should prefer to use the product-specific options to part/drop a node before resorting to manual slot drop.

Why drop a slot?

Why would you want to drop a replication slot anyway? Usually because the resources it's forcing the master to retain are becoming an operational problem, causing performance problems and/or risk of an outage; or because you know for sure it's no longer needed and you want to avoid such issues in future.

A stale/abandoned slot can be an issue because:

  • The slot is causing the server to retain excess WAL in pg_wal / pg_xlog because its restart_lsn is not advancing. The server must retain all WAL from that LSN up to the current insert position.
  • The slot is causing system catalogue bloat because its catalog_xmin is not advancing, so (auto)vacuum cannot remove dead tuples from the system catalogues. This is generally only a problem when the customer uses lots of temp tables, since temp tables still modify the system catalogues.
  • The slot has an xmin set that's holding back vacuum for all tables in all databases, causing serious bloat. Physical slots used by hot standby streaming replicas may set an xmin for hot_standby_feedback; this persists even when the standby is not currently connected. Logical slots generally aren't an issue for xmin (as opposed to catalog_xmin) because they only hold an xmin immediately after they're created, for use during an initial dump and reload operation.

Does inactive (active=f) mean the slot is no longer needed?

Definitely not!

pg_replication_slots.active = 'f' just means that no client is currently connected to the slot (via a walsender or, occasionally, the SQL-level logical decoding functions).
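
To see which slots do have a client attached right now, and what that client is, you can join pg_replication_slots against pg_stat_replication on the walsender pid. A minimal sketch, assuming PostgreSQL 9.5 or newer (when active_pid was added):

select s.slot_name, s.active, s.active_pid,
       r.application_name, r.client_addr, r.state
from pg_replication_slots s
left join pg_stat_replication r on r.pid = s.active_pid;

Rows with a null active_pid are the inactive slots this section is about; a client using the SQL-level decoding functions shows up in pg_stat_activity rather than pg_stat_replication.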

The client might still need the slot, but just:

  • be down for maintenance or because of a transient fault
  • have temporarily lost its network connection
  • have hit an error when receiving from the slot that caused it to disconnect (think out-of-disk, bdr/pglogical apply errors, etc.)
  • be using the slot for a purpose where it doesn't need to remain connected all the time, like BDR3's cluster-wide recovery position slots or pglogical3's standby-failover placeholder slots
  • ...

Do not drop a slot just because it's not marked active right now.

How can a slot be abandoned, lost, or stale?

So if a slot that's inactive might still be needed, how do you tell if a slot is abandoned, stale, lost, or otherwise no longer required?

You need to understand the client deployment and ask questions. The best thing to do is to conclusively identify what each replication slot is used for and also monitor the slot to see if it occasionally advances even if it's not usually active.

There's no 100% reliable recipe for this as the client cannot be relied upon to fully understand their own deployment, system topology, etc., and relay that correctly to you. So you just have to do your best.

First, identify what product and peer the slot is associated with.
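
A good first step is to list every slot with its type, output plugin, and target database; the plugin column usually tells you which product created the slot (bdr for BDR1/2, pglogical_output for pglogical/BDR3, pgoutput for built-in logical replication, and physical slots have no plugin at all). For example:

select slot_name, slot_type, plugin, database, active
from pg_replication_slots
order by slot_type, slot_name;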

Logical slots

In general, check with the user to find out about any failed node join/creation attempts, nodes they tore down without parting, etc. Make sure you learn what the node-network topology of their deployment should be.

Note that the utilities bdr_init_physical (BDR3), pglogical_create_subscriber (pglogical) and bdr_init_copy (bdr1/2) create slots quite early on, and won't remove them on failure.

  • Is this BDR1/2?
  • SELECT bdr.bdr_version() and SELECT * FROM bdr.bdr_get_local_nodeid()
  • The slot pg_replication_slots.plugin will be bdr
  • Slots are named bdr_* and follow the name pattern bdr_[local_dboid]_[remote_sysid]_[remote_timeline_id]_[remote_dboid]_.
  • You can map the slot name to a node identity tuple using the bdr.bdr_node_slots view. That view uses the bdr.bdr_parse_slot_name function to join on the local bdr.bdr_nodes. Use that to check if the BDR node group recognises the node ID. If not, it's probably a leftover from a failed join, uncleanly parted node, etc. If possible, use bdr.part_by_node_names to force the absent nodes into a parted state before dropping the slot.
  • Is this BDR3?
  • SELECT bdr.bdr_version() and SELECT * FROM bdr.local_node_summary;
  • The slot pg_replication_slots.plugin will be pglogical_output
  • Slot names for individual nodes follow the pattern bdr_[dbname]_[nodegroup]_[subscriber_name]; any component may be abbreviated with a short hash.
  • Slot names for group slots follow the pattern bdr_[dbname]_[nodegroup] with abbreviation
  • BDR3 nodes know about each other so you can ask them about slots. Check the bdr.subscription_summary view.
  • Warning: BDR3 also maintains replication slots it uses for sync recovery after (possibly unplanned) node loss or node part. These "group slots" can be identified by the name pattern above. As of 3.5 there is currently no SQL-level function or view showing anything about these slots and they don't appear in bdr.subscription_summary. (TODO)
  • Is this a pglogical provider?
  • SELECT pglogical.pglogical_version() and SELECT * FROM pglogical.tables
  • The slot pg_replication_slots.plugin will be pglogical_output (but this is also true of bdr3)
  • pglogical slots follow the name pattern pgl_[dbname]_[provider_nodename]_[subscription_name] where any component may be abbreviated with a short hash. pglogical slot names may be overridden in the pglogical.subscription.slot_name field on the subscriber, so they can theoretically be anything but this requires post-subscription-creation fiddling and is generally only used by pglogical3 plugins like bdr3.
  • pglogical publishers don't keep an inventory of subscribers so you'll have to ask the client for more info
  • Is this a streaming physical replication standby of a pglogical3 or bdr3 node running PostgreSQL 10 or newer, or of a pglogical2 node running 2ndQPostgres 9.6?
  • SELECT pg_is_in_recovery() and check above info for pglogical3/bdr3 info
  • Unused slots are probably slots intended for use by the pglogical3/bdr3 peers and are maintained by pglogical for that purpose. pglogical should be in shared_preload_libraries since it maintains the slot.
  • 2ndQPostgres 9.6 with pglogical2 (only) also maintains such mirrored slots using the failover slots support in that version. Their functionality is the same, just how they're updated differs.
  • Remember that BDR3's node-loss-recovery "group slots" will appear on physical standbys too
  • Postgres masters don't keep an inventory of streaming replication standbys so you'll have to ask the user for more information
  • Is this a PostgreSQL 10+ publisher for built-in logical replication?
  • SELECT * FROM pg_catalog.pg_publication
  • The slot pg_replication_slots.plugin will be pgoutput
  • The default slot name is the same as the subscription name with no decoration
  • Check if the slot may be used by a PostgreSQL 10+ built-in logical replication subscriber
  • Postgres logical replication publishers don't keep an inventory of their subscribers (subscriptions are catalogued only on the subscriber side, in pg_subscription), so you'll have to ask the user for more information
  • Failing that, check if the customer may use other tools like manually running pg_recvlogical to connect to a slot with a custom output plugin like test_decoding or wal2json, and if so whether they still use it.

Physical slots

Physical slots are generally used for:

  • Streaming replication hot standbys, via primary_slot_name in recovery.conf (or in postgresql.conf on PostgreSQL 12 and later); see the example after this list
  • pg_basebackup when run with the -S / --slot option;
  • PgBarman (version 2.0+) when configured with the slot_name option;
  • repmgr, when it is configured to use replication slots
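
For reference, this is roughly what the standby side looks like when a streaming replica uses a slot; the connection string and slot name below are placeholders, not values from any real deployment:

# recovery.conf on the standby (on PostgreSQL 12+ these settings move to
# postgresql.conf and a standby.signal file replaces standby_mode)
standby_mode = 'on'
primary_conninfo = 'host=primary.example.com user=replicator'
primary_slot_name = 'standby1_slot'

# pg_basebackup can also be told to stream via a slot:
pg_basebackup -D /var/lib/postgresql/standby -X stream -S standby1_slot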

The master doesn't keep track of the users of its slots or care what's using them. So you'll have to ask the site operator for details. Physical replication slot names are generally arbitrary and selected by the user so there are no clues there, and there is no plugin name to give you a hint either.
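
If you can reach a suspected standby, you can also check from that side which slot (if any) its WAL receiver is consuming; PostgreSQL 9.6 and newer expose this in pg_stat_wal_receiver, while on the primary pg_stat_replication shows who is streaming right now. On the standby:

select status, slot_name, conninfo from pg_stat_wal_receiver;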

It's uncommon for physical slots not to advance, but it can happen - for example, if a streaming physical replica falls back from streaming to replaying from WAL archives, it may take a while before it retries streaming and re-activates the slot.

Checking for changes on slots that aren't always active

It's already been established that active='f' isn't a sufficient criterion to say a slot isn't needed anymore because there are plenty of reasons a slot could be inactive but still needed. So how do you tell?

If you can't confirm that the slot is in use via the steps above, you should watch it to see whether it's actually being used. Use something like the psql commands

\t
select clock_timestamp(), slot_name, restart_lsn, xmin, catalog_xmin, confirmed_flush_lsn from pg_replication_slots;
\watch 60

which will print the slot positions every 60 seconds. Or just run the query a few times, minutes or hours apart.
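
If you'd rather keep a record than watch a terminal, you can snapshot the slot positions into a table and compare later. A rough sketch - the table name slot_position_snapshots is just illustrative:

create table slot_position_snapshots as
  select clock_timestamp() as captured_at, slot_name, restart_lsn,
         confirmed_flush_lsn, xmin, catalog_xmin
  from pg_replication_slots;

-- run again later (hours or days apart):
insert into slot_position_snapshots
  select clock_timestamp(), slot_name, restart_lsn,
         confirmed_flush_lsn, xmin, catalog_xmin
  from pg_replication_slots;

-- slots whose positions never change across snapshots are the suspicious ones:
select slot_name,
       count(distinct restart_lsn) > 1 as restart_lsn_moved,
       count(distinct confirmed_flush_lsn) > 1 as flush_lsn_moved
from slot_position_snapshots
group by slot_name;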

Can I drop it?

If you can't track the slot down to a product and node and identify the state and fate of the node, and the slot isn't doing anything, and the customer doesn't know anything about it, then you can probably drop it.
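
When you do decide to drop a slot, the command itself is a one-liner; the slot name below is just an example. pg_drop_replication_slot refuses to drop a slot that is currently active, so you may first need to stop the consumer or terminate its walsender with pg_terminate_backend(active_pid):

select pg_drop_replication_slot('leftover_slot_name');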
