On Upgrading and Databases

PostgreSQL hates itself. I’m convinced of this, and I’ve considered the idea frequently over the years, rolling it around in my mouth just to savor the taste and hoping the flavor eventually changes. A couple of things have advanced, though not quite as far as one might hope.

Parallel Restore

With the introduction of PostgreSQL 8.4, the core developers have finally succumbed to the reality of multi-CPU systems and added parallel restore. But what does this mean, exactly? A custom-format dump file contains many independent tables and other database objects, so by its very nature it is segmented and perfect for simultaneous processing. One dump file read by several parallel jobs can restore multiple tables and other database objects at the same time.

Have a huge table that’s taking forever to restore? Adding indexes to it? Tired of waiting? Well, another job is busily restoring a different table right now, so you can stop fretting. Since triggers, foreign keys, and other constraints are only applied after the data is loaded (the restore assumes you’re working from a semantically consistent source), the pg_restore command can simply cram data into the server as fast as PostgreSQL can swallow it.

The thing that got me, however, is that this functionality is backward compatible. I tested this to confirm, and indeed, you can do this:

/opt/postgres-8.3/bin/pg_dump -h foo.db -Fc -f backup.bin production
/opt/postgres-8.4/bin/pg_restore -Fc -j4 -C -d postgres backup.bin

The -Fc switch says “use the custom (compressed) dump format,” which matters both because a plain SQL dump of most VLDB installations would be impractically large and because parallel restore only works against this archive format. The -C flag recreates the production database itself, which is why pg_restore connects to the stock postgres database to issue it. The real juicy bit here is the -j4 parameter. As used, this says “restore using four parallel jobs.” Look familiar? The make command has had this functionality for ages, but unlike our venerable compiler launcher, you wouldn’t want to set the concurrency higher than the number of CPUs.
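
A reasonable starting point is to match the job count to the machine’s core count. Here’s a minimal sketch for Linux, reusing the hypothetical paths and file names from the example above; adjust both for your own environment:

CORES=$(grep -c ^processor /proc/cpuinfo)
/opt/postgres-8.4/bin/pg_restore -Fc -j"$CORES" -C -d postgres backup.bin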

Why? Because restoring a database is a computationally intensive operation. It’s inevitable that at least one of those jobs will hit an index build, and that pins a CPU at 99% while PostgreSQL chews. Toward the end of the restore, most of the activity is index creation, so raising the concurrency further will simply inflate the system load average and cause unnecessary context switching as the task scheduler valiantly attempts to farm out precious CPU cycles.
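
If you want to watch that index-building tail end as it happens, peeking at pg_stat_activity on the target server shows what each restore job is chewing on. This is only an illustrative query using the 8.4-era column names (procpid and current_query were renamed in later releases):

/opt/postgres-8.4/bin/psql -d postgres -c \
    "SELECT procpid, current_query FROM pg_stat_activity WHERE current_query <> '<IDLE>';"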

That said, parallel restore does not scale linearly. A large table necessarily ties up a single job, and if one job is assigned too many of them, it will stall the entire process, still grinding away while every other job finished long ago. This is one possible argument for raising the job count above the CPU count, as that lowers the chance any single job gets overloaded with large tables. It may take some experimentation.
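
One low-effort way to experiment is to restore the same dump into a scratch database at a few different job counts and compare wall-clock times. A rough sketch, reusing the backup.bin file from above and a hypothetical throwaway database named scratch:

for JOBS in 2 4 6 8; do
    /opt/postgres-8.4/bin/dropdb scratch 2>/dev/null
    /opt/postgres-8.4/bin/createdb scratch
    echo "restoring with $JOBS jobs"
    time /opt/postgres-8.4/bin/pg_restore -j"$JOBS" -d scratch backup.bin
done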

Either way, it’s an exciting development.

pg_migrator

Again, with the introduction of PostgreSQL 8.4, Bruce Momjian, a significant core developer, contributed a tool that can actually upgrade an entire database cluster in place. The time required is essentially only that necessary to copy the data files from the old installation to the new one. On a quick RAID system, this can be an order of magnitude faster than a dump/restore, even when aided by parallel restore.

Using this utility requires a significant amount of preparation. To do it safely, you must install two separate source trees. If you use RPMs, you’ll also have to alter the system RPM spec file and compile your own packages, since the official ones don’t set the right configure flags. Then pg_migrator itself should be installed into the source tree of the target database version.

It’s tricky, but I documented the entire process on my company’s wiki, so it’s almost a cut-and-paste affair. And really, once the initial setup is complete, the entire migration is a simple handful of commands and easily scriptable. Best of all, the process can be performed over and over again with no danger to the source data, and with very little system load, all things considered.
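
For a sense of what those commands look like, the heart of the migration is a single invocation pointing the tool at the old and new binaries and data directories. The option names below follow pg_upgrade, which is what pg_migrator became when it was later folded into core, and the binary location and data directory paths are placeholders, so treat this purely as a sketch and consult the tool’s own documentation:

# with both clusters shut down (option names and paths assumed, per above):
/opt/postgres-8.4/bin/pg_migrator \
    -b /opt/postgres-8.3/bin -B /opt/postgres-8.4/bin \
    -d /path/to/8.3/data -D /path/to/8.4/data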

Again, a huge win for PostgreSQL.

A Rant about Time

But why the wait, guys? Production systems in the terabyte range have existed for years, and some companies have even been afraid to use PostgreSQL at all because the prospect of dumping/restoring that much data—rightly so—sends them scampering for the hills. These enhancements, pg_migrator especially, have been necessary for the last decade. To be honest, I would have rather had this than autovacuum; that’s how much I abhor upgrading a PostgreSQL installation.

I’m not complaining as such. 8.4 is a freaking godsend for DBAs everywhere. I’m just innately curious as to why quick migration was assigned such low priority in the dev community. It’s a significant usability issue, and a major reason so many older, semi-deprecated versions are still clinging to life around the world. With table partitioning, bitmap index scans, synchronized (“piggyback”) sequential scans, and all the other things PostgreSQL brings to the table, it’s an outright shame that some admins are literally afraid to use it when faced with the prospect of the upgrade process.

PostgreSQL 8.4 is a huge step forward in that regard. I can’t stress enough just how many barriers it breaks that have otherwise persisted since the beginning. I can honestly see much more adoption of PostgreSQL thanks to this, and the devs deserve a round of applause for their efforts.

Here’s hoping 8.5 is just as groundbreaking!