PG Phriday: Alien Incursion

Foreign tables have been a headline feature of Postgres ever since the release of version 9.1. Combined with extensions, they’re the secret sauce that allows Postgres to pull data from other database engines, flat files, REST interfaces, and possibly every gas station, residence, warehouse, farmhouse, hen house, outhouse, and doghouse in the area.

Postgres all the things

But that kind of power comes at a significant cost. Since the remote data comes from what is essentially a black box, there are many performance optimizations Postgres simply can’t apply. Foreign data wrappers are also still somewhat in their infancy; even postgres_fdw, the wrapper for Postgres itself, can’t push both a join and a sort to the remote server in the same query.

That makes data federation incredibly painful and could reduce the feature to little more than an amusing toy. It’s functional but slow, since most data processing happens locally, potentially after retrieving the entire contents of the remote table. Of course, this also means several disparate sources can all be handled within the confines of our Postgres database, since it has access to all of the data… eventually.
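
Once the foreign table we build below is in place, the cost is easy to demonstrate. On the Postgres versions discussed here, the plan for a query like this will typically show a local Sort node sitting on top of the Foreign Scan, meaning every matching row crosses the network before any ordering or limiting happens:

EXPLAIN
SELECT location, reading, reading_date
  FROM ext_info.sensor_log
 ORDER BY reading_date DESC
 LIMIT 10;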

Is there a better way? With the introduction of materialized views in 9.3, it became possible to trivially take a snapshot of a remote table, or some subsection of its contents, and reuse it. Unlike the transient retrieval Postgres performs when a query touches a foreign object, materialized views allow remote data to persist locally.

With that in mind, let’s set up a very basic sensor database that merely accumulates readings.

CREATE TABLE sensor_log (
  id            INT PRIMARY KEY,
  location      VARCHAR NOT NULL,
  reading       BIGINT NOT NULL,
  reading_date  TIMESTAMP NOT NULL
);

INSERT INTO sensor_log (id, location, reading, reading_date)
SELECT s.id, s.id % 1000, s.id % 100,
       CURRENT_DATE - ((s.id * 10) || 's')::INTERVAL
  FROM generate_series(1, 5000000) s(id);

CREATE INDEX idx_sensor_log_location ON sensor_log (location);
CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);

ANALYZE sensor_log;

It’s a table we use frequently in examples, and there’s no reason it can’t be used here. If we connect to any other Postgres database, we can execute the following SQL to produce a standard foreign table.

For the purposes of this example, let’s just assume the database is on the same server and the server is somewhat trusting with connections in that context. Properly securing a foreign server is a topic for a different day.

CREATE SCHEMA ext_info;

CREATE EXTENSION postgres_fdw;

CREATE SERVER sys_sensors
       FOREIGN DATA WRAPPER postgres_fdw
       OPTIONS (dbname 'sensors', host 'sensor-host', port '5432');

CREATE USER MAPPING FOR postgres
       SERVER sys_sensors
       OPTIONS (user 'postgres');

IMPORT FOREIGN SCHEMA public
 LIMIT TO (sensor_log)
  FROM SERVER sys_sensors
  INTO ext_info;

ANALYZE ext_info.sensor_log;

SELECT count(*) FROM ext_info.sensor_log;

  count  
---------
 5000000

Take note that we’ve leveraged the IMPORT FOREIGN SCHEMA feature added in Postgres 9.5. This makes it much easier to reflect a remote data source since we don’t necessarily need to know the structure of the objects we’re importing. Either way, it worked and we were able to query the contents. So far there are no surprises here.
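
For comparison, on versions older than 9.5 we would have had to declare the foreign table by hand, duplicating the remote structure ourselves. A minimal sketch of that manual equivalent looks like this:

CREATE FOREIGN TABLE ext_info.sensor_log (
  id            INT,
  location      VARCHAR NOT NULL,
  reading       BIGINT NOT NULL,
  reading_date  TIMESTAMP NOT NULL
)
SERVER sys_sensors
OPTIONS (schema_name 'public', table_name 'sensor_log');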

Now let’s wrap our foreign table in a very simple materialized view that fetches the entire contents of the remote table.

CREATE SCHEMA ext_materialized;

CREATE MATERIALIZED VIEW ext_materialized.sensor_log AS
SELECT * FROM ext_info.sensor_log;

CREATE UNIQUE INDEX pk_ext_sensor_log
    ON ext_materialized.sensor_log (id);

CREATE INDEX idx_ext_sensor_log_reading_date
    ON ext_materialized.sensor_log (reading_date);

ANALYZE ext_materialized.sensor_log;

EXPLAIN
SELECT *
  FROM ext_materialized.sensor_log
 WHERE reading_date >= CURRENT_DATE - INTERVAL '1 day';

                            QUERY PLAN                         
-----------------------------------------------------------------
 Index Scan using idx_ext_sensor_log_reading_date on sensor_log
       (cost=0.44..304.87 rows=8596 width=23)
   Index Cond: (reading_date >= 
       (('now'::cstring)::date - '1 day'::interval))

We added an index on the reading_date column and analyzed the contents to illustrate that materialized views act just like regular tables in very important ways. Postgres can collect statistics, index, and build views on top of materialized views as if they were regular tables. This means we can treat them like a local cache of remote data, with all of the ancillary benefits the Postgres planner provides for our own data.
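
As a quick illustration of that last point, nothing stops us from layering an ordinary view on top of the snapshot. The view name here is just an example:

CREATE VIEW ext_materialized.sensor_log_recent AS
SELECT location, reading, reading_date
  FROM ext_materialized.sensor_log
 WHERE reading_date >= CURRENT_DATE - INTERVAL '1 day';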

When the remote data changes, we must rebuild the materialized view to capture the new data. Here’s how that works:

-- In the sensors database:

INSERT INTO sensor_log (id, location, reading, reading_date)
SELECT s.id, s.id % 1000, s.id % 100,
       CURRENT_DATE + ((s.id * 10) || 's')::INTERVAL
  FROM generate_series(5000001, 5001000) s(id);

-- In the database where we have the materialized view:

\timing on

REFRESH MATERIALIZED VIEW CONCURRENTLY ext_materialized.sensor_log;

Time: 117734.945 ms

ANALYZE ext_materialized.sensor_log;

Just… ouch. Why did that take so long? We supplied the CONCURRENTLY keyword to avoid locking the view during the refresh. This means Postgres fetches the data and compares it to the existing view contents, inserting and deleting only the data that changed since the last refresh. That’s a pretty intensive procedure, and for higher data volumes, extremely inefficient. Yet we added the “missing” content without preventing other users from accessing the materialized view, which is certainly advantageous if we don’t mind the refresh delay.
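
For contrast, if blocking readers is acceptable, dropping the CONCURRENTLY keyword skips the comparison step entirely; Postgres rebuilds the contents wholesale under an exclusive lock, which is generally much faster for large change sets:

-- Faster rebuild, but readers are locked out while it runs.
REFRESH MATERIALIZED VIEW ext_materialized.sensor_log;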

We can improve this process in a number of ways. If we consider our data and how we use it, we might realize we don’t need the entire contents of the remote table. In an analysis context, maybe we only need the most recent day of readings to compare with information we’ve gathered from other sources. Since postgres_fdw won’t push predicates built on locally evaluated functions like CURRENT_DATE down to the remote server, we also need to leverage the ability to create foreign tables on top of remote views.

Here’s how that would look:

-- In the sensors database:

CREATE VIEW sensor_log_today AS
SELECT *
  FROM sensor_log
 WHERE reading_date >= CURRENT_DATE;

-- In the database where we have the materialized views:

IMPORT FOREIGN SCHEMA public
 LIMIT TO (sensor_log_today)
  FROM SERVER sys_sensors
  INTO ext_info;

CREATE MATERIALIZED VIEW ext_materialized.sensor_log_today AS
SELECT * FROM ext_info.sensor_log_today;

CREATE UNIQUE INDEX pk_ext_sensor_log_today
    ON ext_materialized.sensor_log_today (id);

CREATE INDEX idx_ext_sensor_log_today_reading_date
    ON ext_materialized.sensor_log_today (reading_date);

ANALYZE ext_materialized.sensor_log_today;

\timing on

REFRESH MATERIALIZED VIEW CONCURRENTLY ext_materialized.sensor_log_today;

Time: 17.531 ms

That’s a bit of an improvement. This kind of quick snapshot system is a solid foundation for a virtual database that contains none of its own data and exists only for analytics. If we need to retain data for longer periods of time, we can build our own data loading process that references the smaller snapshot.

Something like this:

-- Bootstrap the giant repo of sensor readings.

CREATE SCHEMA pile;

CREATE TABLE pile.sensor_log (LIKE ext_info.sensor_log);

INSERT INTO pile.sensor_log
SELECT * FROM ext_info.sensor_log;

CREATE UNIQUE INDEX pk_pile_sensor_log
    ON pile.sensor_log (id);

CREATE INDEX idx_pile_sensor_log_reading_date
    ON pile.sensor_log (reading_date);

ANALYZE pile.sensor_log;

-- Now a "refresh" looks like this:

REFRESH MATERIALIZED VIEW CONCURRENTLY ext_materialized.sensor_log_today;

ANALYZE ext_materialized.sensor_log_today;

INSERT INTO pile.sensor_log
SELECT *
  FROM ext_materialized.sensor_log_today
 WHERE id > (SELECT max(id) FROM pile.sensor_log);

We only need to execute the bootstrapping section once, at any point in the past. The refresh, on the other hand, lets us “top up” our local cache with the remote data as often as we want. The remote table might even be regularly truncated, but that doesn’t matter to us; we always see the most recent information and can handle it as we please.

All of these tables and materialized views are driven by remote data. If we were working with several customers, we might perform aggregate analysis across all of them in this virtual staging area using cumulative snapshot updates. We could decide to retain this information for a year or two and cycle anything older, or any other retention period that matches our reporting needs.
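
As a sketch of what that analysis and retention might look like against our example schema, with the two-year cutoff as an arbitrary stand-in:

-- Daily per-location rollup across everything we've accumulated.
SELECT location,
       date_trunc('day', reading_date) AS reading_day,
       count(*) AS readings,
       round(avg(reading), 2) AS avg_reading
  FROM pile.sensor_log
 GROUP BY location, date_trunc('day', reading_date);

-- Cycle out anything older than our chosen retention period.
DELETE FROM pile.sensor_log
 WHERE reading_date < CURRENT_DATE - INTERVAL '2 years';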

This works the same way across environments. Perhaps we want to combine data from a production and user acceptance context, but only for the last work week. Though this kind of transient architecture isn’t exactly trivial, at least the possibility exists.
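
Here is a hypothetical sketch of that scenario, with all of the names invented: a second foreign server for the user acceptance system, its own import schema, and a view that unions the two snapshots into one query surface.

CREATE SCHEMA ext_info_uat;

CREATE SERVER sys_sensors_uat
       FOREIGN DATA WRAPPER postgres_fdw
       OPTIONS (dbname 'sensors', host 'sensor-uat-host', port '5432');

CREATE USER MAPPING FOR postgres
       SERVER sys_sensors_uat
       OPTIONS (user 'postgres');

IMPORT FOREIGN SCHEMA public
 LIMIT TO (sensor_log_today)
  FROM SERVER sys_sensors_uat
  INTO ext_info_uat;

-- One query surface over both environments.
CREATE VIEW combined_sensor_log_today AS
SELECT 'prod' AS environment, * FROM ext_info.sensor_log_today
UNION ALL
SELECT 'uat' AS environment, * FROM ext_info_uat.sensor_log_today;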

The primary caveat to relying on foreign structures so extensively is that we might not have direct control over the remote objects. If a column type is changed, for example, our foreign table would stop working. Depending on how deeply nested our materialized views are, rebuilding in the face of remote modifications can incur significant maintenance costs. Even if we do control the infrastructure and software across the entire stack, we’ve contributed a sprawling, decoupled dependency chain to the mix.
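
To make that concrete: if the remote sensor_log changed a column type, one recovery path is to drop and re-import the affected foreign table. Note that CASCADE takes every dependent materialized view down with it, which is exactly the maintenance cost described above.

-- CASCADE also drops any materialized views built on this table.
DROP FOREIGN TABLE ext_info.sensor_log CASCADE;

IMPORT FOREIGN SCHEMA public
 LIMIT TO (sensor_log)
  FROM SERVER sys_sensors
  INTO ext_info;

-- Every dependent materialized view must now be recreated by hand.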

But that’s a small price to pay for the freedom to have our hand in every cookie jar.

cookie!