Prepared transactions are a key feature of PostgreSQL. Understanding what this feature offers and handling its potential pitfalls is critical to maintaining a reliable system. So let’s take a dive into what prepared transactions are all about.
A Little Bit About Transactions
In a database system, a transaction is a way of executing all or none of the statements in a block that generally contains more than one statement. The results of the statements in the block are not visible to other transactions until the entire block is committed. If the transaction fails, or is rolled back, it has no effect on the database at all.
Transactions are attached to a session. However, there are cases where you need to perform a transaction that is session independent (and has other benefits as well). That’s where “prepared transactions” come in.
A prepared transaction is a session-independent, crash-resistant transaction whose state is maintained on disk. This allows the database server to reinstate the transaction even after restarting from a crash. A prepared transaction persists until a commit or a rollback action is performed on it.
The PostgreSQL documentation states that a prepared transaction may be created by issuing a PREPARE TRANSACTION 'transaction_id' command within an existing transaction block. It further states that this process prepares the transaction for a two-phase commit.
Furthermore, it is recommended that prepared transactions not be used by applications or interactive sessions. Ideally, an external transaction manager should use them to perform atomic global transactions across homogeneous or heterogeneous database resources.
In PostgreSQL, max_prepared_transactions is set to 0 in the postgresql.conf file by default; i.e. the feature is disabled. If you want to use prepared transactions, it is recommended to set this value to at least max_connections. On a standby server in a streaming replication setup, however, you’d want to be more generous and set the value a little higher than the primary’s max_connections, as otherwise you risk the standby not accepting any queries.
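As a minimal sketch of enabling the feature (the value 100 is illustrative, not a recommendation), the parameter can be changed with ALTER SYSTEM; it only takes effect after a server restart:

```sql
-- Illustrative value; pick one at least as large as max_connections.
ALTER SYSTEM SET max_prepared_transactions = 100;
-- The change takes effect only after a server restart; verify afterwards with:
SHOW max_prepared_transactions;
```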
At any given time, you can view the active prepared transactions by simply querying the pg_prepared_xacts view. The view shows five columns:
- transaction (xid): the transaction ID of the prepared transaction
- gid (text): a user-defined name for the prepared transaction
- prepared (timestamp with time zone): the time at which the transaction was prepared
- owner (name): the user who created the prepared transaction
- database (name): the name of the database
This view gives sufficient information about who owns each transaction, which database it belongs to, and when it was created. This is rather useful, as we’ll see later in this blog.
Creating a Prepared Transaction
So now that we know what a prepared transaction is, let’s see how we can create one. Creating a prepared transaction is a four step process:

- Begin a transaction
- Perform the required operations
- Prepare the transaction with PREPARE TRANSACTION
- Complete it with COMMIT PREPARED or ROLLBACK PREPARED

Each of PREPARE TRANSACTION, COMMIT PREPARED and ROLLBACK PREPARED is followed by a GID that uniquely identifies the prepared transaction.
The following code block shows SQL statements and their outputs, along with the contents of the pg_prepared_xacts view, as the transaction is prepared and then committed.
postgres=# BEGIN;
BEGIN
postgres=# CREATE TABLE foo(firstCol INT);
CREATE TABLE
postgres=# INSERT INTO foo VALUES(27);
INSERT 0 1
postgres=# PREPARE TRANSACTION 'foo_insert';
PREPARE TRANSACTION
postgres=# SELECT * FROM pg_prepared_xacts;
 transaction |    gid     |           prepared            | owner  | database
-------------+------------+-------------------------------+--------+----------
       48219 | foo_insert | 2020-01-27 11:36:18.522511+00 | highgo | postgres
(1 row)

postgres=# COMMIT PREPARED 'foo_insert';
COMMIT PREPARED
postgres=# SELECT * FROM pg_prepared_xacts;
 transaction | gid | prepared | owner | database
-------------+-----+----------+-------+----------
(0 rows)
When a server is stopped (or crashes) with one or more active prepared transactions, it creates one file for every active prepared transaction under the pg_twophase folder within the data directory. Earlier in the blog, we created the following prepared transaction.
postgres=# SELECT * FROM pg_prepared_xacts;
 transaction |    gid     |           prepared            | owner  | database
-------------+------------+-------------------------------+--------+----------
       48219 | foo_insert | 2020-01-27 12:03:59.271897+00 | highgo | postgres
(1 row)
So I stopped the server without committing the transaction. The server created a file named 0000BC5B, which corresponds to the transaction ID of the prepared transaction.
[highgo@e7dc5351c7c7 bin]$ ls -l /tmp/data/pg_twophase/
total 4
-rw------- 1 highgo users 220 Jan 27 12:11 0000BC5B
The hexadecimal file name 0000BC5B equates to a decimal value of 48219, our transaction ID. When the server was restarted, it emitted the following messages in the startup log:
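If you want to verify this mapping from within SQL, a bit-string cast does the hex-to-decimal conversion (a small illustrative query; the server itself does not need this):

```sql
-- The two-phase file name is the transaction ID in hexadecimal;
-- casting the 32-bit bit string to int recovers the decimal xid.
SELECT x'0000BC5B'::int AS xid;   -- 48219
```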
2020-01-27 12:16:56.025 UTC  LOG: redo starts at 0/40000028
2020-01-27 12:16:56.026 UTC  LOG: consistent recovery state reached at 0/40000100
2020-01-27 12:16:56.026 UTC  LOG: redo done at 0/40000100
2020-01-27 12:16:56.065 UTC  LOG: recovering prepared transaction 48219 from shared memory
If you don’t wish to recover a prepared transaction, you may simply delete the corresponding file under the pg_twophase folder while the server is stopped.
That’s rather simple, no? So why shouldn’t we be using this more regularly? After all, it gives a higher probability of a commit operation being successful. If only things were this simple!
What Can Go Wrong with a Prepared Transaction?
A prepared transaction can be left unfinished (neither committed nor rolled back) if the client has disappeared. This can happen for various reasons, including a client crash, or a server crash that terminates the client’s connection after which the client never reconnects. You are really relying on a transaction manager to ensure that there are no orphaned prepared transactions.
Besides crashes, there is another way a prepared transaction can be left unfinished. If a backup is restored that includes the preparation steps, but not the steps closing the transaction, you may still end up with an orphaned prepared transaction.
Or perhaps a DBA created a prepared transaction and forgot about closing it.
So what’s the big deal if a prepared transaction is left unfinished?
The Real Issue
The real issue is that an orphaned prepared transaction continues to hold key system resources. These may include locks, or a transaction ID that is kept alive and holds back vacuum from cleaning up dead tuples that are no longer visible to any transaction other than the orphaned one.
Consider the transaction we created earlier in this blog. After the transaction is prepared, and before it is committed, any other transaction that attempts to alter the table fails to acquire the required locks and hangs until the prepared transaction is resolved, either committed or rolled back. The ALTER command below hung indefinitely; eventually, I had to issue a CTRL + C to stop it.
postgres=# ALTER TABLE foo ADD COLUMN b INT;
^C
2020-01-27 12:52:15.809 UTC  ERROR: canceling statement due to user request
2020-01-27 12:52:15.809 UTC  STATEMENT: ALTER TABLE foo ADD COLUMN b INT;
Cancel request sent
ERROR: canceling statement due to user request
postgres=# SELECT c.oid, c.relname, l.locktype, l.relation, l.mode
           FROM pg_class c
           INNER JOIN pg_locks l ON c.oid = l.relation
           WHERE c.relname = 'foo';
  oid   | relname | locktype | relation |       mode
--------+---------+----------+----------+------------------
 111197 | foo     | relation | 111197   | RowExclusiveLock
(1 row)
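To trace such a hang back to its culprit, the locks held by prepared transactions can be linked to their GIDs through pg_locks. This is a sketch that relies on the fact that lock entries belonging to prepared transactions carry a NULL pid and a virtual transaction of the form '-1/<xid>':

```sql
-- Sketch: list locks held by prepared transactions, joined back to their GID.
-- Assumption: pg_locks reports these entries with pid = NULL and a
-- virtualtransaction of '-1/' followed by the prepared transaction's xid.
SELECT p.gid, p.prepared, l.locktype, l.relation::regclass AS relation, l.mode
FROM pg_prepared_xacts p
JOIN pg_locks l ON l.virtualtransaction = '-1/' || p.transaction
WHERE l.pid IS NULL;
```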
The challenge with vacuum can be a little more serious: in extreme cases, an orphaned prepared transaction may lead to a database shutdown, as its old transaction ID holds back the point up to which vacuum can freeze tuples, eventually risking transaction ID wraparound.
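One way to keep an eye on this risk is to watch how far each prepared transaction’s ID has fallen behind; age() reports the number of transactions that have started since the given xid:

```sql
-- age(transaction) grows with every new transaction in the cluster; a value
-- approaching autovacuum_freeze_max_age (default 200 million) is a red flag.
SELECT gid, prepared, age(transaction) AS xid_age
FROM pg_prepared_xacts
ORDER BY xid_age DESC;
```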
Discovering and Notifying
Although the general expectation is that a prepared transaction will be completed within a few seconds, that is not always the case. A prepared transaction may persist for minutes, hours, or even days.
Maintaining metadata for these transactions may be a challenge on its own. However, I would suggest setting up a naming convention that encodes the maximum age a prepared transaction can have. For example, consider the following prepared transaction:
postgres=# BEGIN;
BEGIN
postgres=# INSERT INTO foo VALUES(85);
INSERT 0 1
postgres=# PREPARE TRANSACTION 'foo_insert 1m';
PREPARE TRANSACTION
Or consider the following transaction:
postgres=# BEGIN;
BEGIN
postgres=# INSERT INTO foo VALUES(2385);
INSERT 0 1
postgres=# PREPARE TRANSACTION 'foo_insert 1d';
PREPARE TRANSACTION
In these transaction names, the last part defines the maximum age of the transaction. Any transactions that are past their age can easily be identified with the following simple SQL query:
postgres=# SELECT gid, prepared, REGEXP_REPLACE(gid, '.* ', '') AS age
           FROM pg_prepared_xacts
           WHERE prepared + CAST(REGEXP_REPLACE(gid, '.* ', '') AS INTERVAL) < NOW();
      gid      |           prepared            | age
---------------+-------------------------------+-----
 foo_insert 1m | 2020-01-27 13:17:33.549918+00 | 1m
(1 row)
This clearly shows a transaction that should no longer be around. An external agent or a cron job can easily look up these transactions and either notify an administrator or roll them back. This, in my opinion, is a simple and easy way of ensuring that orphaned transactions are managed in your environment, even if the transaction manager has failed or a DBA has accidentally left one behind.
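Such a cleanup job can be sketched almost entirely in SQL. Since ROLLBACK PREPARED cannot run inside a function or DO block, a practical route is to generate the commands and let psql execute each generated row with \gexec:

```sql
-- Sketch: emit one ROLLBACK PREPARED statement per expired transaction,
-- based on the age suffix embedded in the GID (e.g. '1m', '1d').
-- In psql, follow this query with \gexec to run the generated commands.
SELECT format('ROLLBACK PREPARED %L', gid)
FROM pg_prepared_xacts
WHERE prepared + CAST(REGEXP_REPLACE(gid, '.* ', '') AS INTERVAL) < NOW();
```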
Prepared transactions are obviously a really important feature, but they require careful setup of the environment, with a fallback notifier or cleanup agent that ensures these transactions are not holding key resources unnecessarily and the system remains in good shape. The handling of orphaned prepared transactions is still being discussed in the PostgreSQL community. Whether it becomes part of the server core is yet to be seen. In the meantime, we need to use external tools to manage these transactions, or hack our way out of the problem. There’s always a better solution around the corner!
Hamid has more than 19 years of professional software development experience and has been involved with PostgreSQL for more than 8 years now. He was part of EnterpriseDB and managed PostgreSQL official installers for Windows, Linux and MacOS. Hamid has recently joined HighGo Software Inc. as Senior DevOps and Database Architect.