For all of those warehouse queries that never seem to complete before the heat death of the universe, there’s often a faster version. Sometimes this is due to a fundamental misunderstanding of how queries work, or how Postgres specifically functions. The trick is knowing when to back away slowly from an ugly but efficient query, and when to inject a flurry of predicates to fully illustrate the original intent of the query so the planner makes better decisions. When partitions are involved, this can be quite an adventure.

Sometimes optimizing partitioned plans is an art. Other times, things are a little easier to isolate. Imagine we have a sensor log partitioned by month, and our app is fairly new, so there are only three months of data available. Anything illustrated here would only get worse as time progresses.

Just to make things simple, there will be no trigger logic or anything to actually move rows around. We’re just looking at query performance, not the feasibility of the structure itself.

CREATE TABLE sensor_log (
  sensor_id     INT NOT NULL,
  location      VARCHAR NOT NULL,
  reading       INT NOT NULL,
  reading_date  TIMESTAMPTZ NOT NULL DEFAULT now()
);
 
CREATE TABLE sensor_log_201607 () INHERITS (sensor_log);
CREATE TABLE sensor_log_201608 () INHERITS (sensor_log);
CREATE TABLE sensor_log_201609 () INHERITS (sensor_log);
 
INSERT INTO sensor_log_201607 (sensor_id, location, reading, reading_date)
SELECT s.id % 1000, (random() * 100)::INT, s.id % 100,
       '2016-08-01'::TIMESTAMPTZ - (s.id || 's')::INTERVAL
  FROM generate_series(1, 3600*24*30) s(id);
 
INSERT INTO sensor_log_201608 (sensor_id, location, reading, reading_date)
SELECT s.id % 1000, (random() * 100)::INT, s.id % 100,
       '2016-09-01'::TIMESTAMPTZ - (s.id || 's')::INTERVAL
  FROM generate_series(1, 3600*24*30) s(id);
 
INSERT INTO sensor_log_201609 (sensor_id, location, reading, reading_date)
SELECT s.id % 1000, (random() * 100)::INT, s.id % 100,
       '2016-10-01'::TIMESTAMPTZ - (s.id || 's')::INTERVAL
  FROM generate_series(1, 3600*24*30) s(id);
 
ALTER TABLE sensor_log_201607
  ADD CONSTRAINT ck_201607_part_reading_date
CHECK (reading_date >= '2016-07-01' AND reading_date < '2016-08-01');
 
ALTER TABLE sensor_log_201608
  ADD CONSTRAINT ck_201608_part_reading_date
CHECK (reading_date >= '2016-08-01' AND reading_date < '2016-09-01');
 
ALTER TABLE sensor_log_201609
  ADD CONSTRAINT ck_201609_part_reading_date
CHECK (reading_date >= '2016-09-01' AND reading_date < '2016-10-01');
 
CREATE INDEX idx_sensor_log_201607_sensor_id
    ON sensor_log_201607 (sensor_id);
CREATE INDEX idx_sensor_log_201607_reading_date
    ON sensor_log_201607 (reading_date DESC);
 
CREATE INDEX idx_sensor_log_201608_sensor_id
    ON sensor_log_201608 (sensor_id);
CREATE INDEX idx_sensor_log_201608_reading_date
    ON sensor_log_201608 (reading_date DESC);
 
CREATE INDEX idx_sensor_log_201609_sensor_id
    ON sensor_log_201609 (sensor_id);
CREATE INDEX idx_sensor_log_201609_reading_date
    ON sensor_log_201609 (reading_date DESC);
 
ANALYZE sensor_log_201607;
ANALYZE sensor_log_201608;
ANALYZE sensor_log_201609;

With these three (not including the root) tables in place, a query can target the base table and operate as if the partitioning didn’t exist. There isn’t anything new in that regard, but it’s the first place where a business intelligence operative might stumble. The base table can obscure potentially hundreds of other segments and the associated overhead.
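Under the hood, the planner relies on the CHECK constraints above to prune partitions, governed by the constraint_exclusion setting, which defaults to 'partition' and applies exclusion to inheritance children. A quick sanity check that pruning is working might look like this (a sketch; the exact plan output will vary by system):

```sql
-- Confirm constraint exclusion is active for inheritance children.
SHOW constraint_exclusion;

-- With a literal date predicate, the plan should touch only the
-- September partition (plus the always-empty root table).
EXPLAIN
SELECT count(*)
  FROM sensor_log
 WHERE reading_date >= '2016-09-01'
   AND reading_date < '2016-10-01';
```

If the plan lists sequential or index scans against sensor_log_201607 or sensor_log_201608 here, pruning isn't happening and everything that follows gets worse.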

This is especially concerning in cumulative warehouse archives where there are no primary keys. In these contexts, data is unique per key and date combination. The date isn't simply a means of isolating specific ranges for comparative aggregates.
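If uniqueness really is per key and date, nothing stops us from enforcing it at the partition level. A sketch of what that might look like for one month (the index name is hypothetical, and each child table needs its own copy since the parent can't enforce uniqueness across inheritance children):

```sql
-- Hypothetical per-partition uniqueness on (sensor_id, reading_date).
CREATE UNIQUE INDEX udx_sensor_log_201609_sensor_id_reading_date
    ON sensor_log_201609 (sensor_id, reading_date);
```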

An operator might understand that smaller result sets make for fewer index matches and faster execution times, so they try to isolate data step by step with temporary tables or CTEs. Using that as a foundation, they produce this query:

EXPLAIN
WITH source AS (
    SELECT *
      FROM sensor_log
     WHERE reading_date >= '2016-09-11'
       AND reading_date < '2016-09-18'
       AND location = '5'
)
SELECT source.*, target.*
  FROM source
  LEFT JOIN sensor_log target ON (
         source.sensor_id = target.sensor_id AND
         target.location = '10'
       );
 
 Merge Left Join  (cost=178118.79..212683.46 rows=2302397 width=66)
   Merge Cond: (source.sensor_id = target.sensor_id)
   CTE source
     ->  Append  (cost=0.00..24384.21 rows=5743 width=18)
           ->  Seq Scan on sensor_log  (cost=0.00..0.00 rows=1 width=48)
                 Filter: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone) AND ((location)::text = '5'::text))
           ->  Index Scan using idx_sensor_log_201609_reading_date on sensor_log_201609  (cost=0.43..24384.21 rows=5742 width=18)
                 Index Cond: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone))
                 Filter: ((location)::text = '5'::text)
   ->  Sort  (cost=473.44..487.80 rows=5743 width=48)
         Sort Key: source.sensor_id
         ->  CTE Scan on source  (cost=0.00..114.86 rows=5743 width=48)
   ->  Sort  (cost=153261.13..153461.58 rows=80181 width=18)
         Sort Key: target.sensor_id
         ->  Append  (cost=0.00..146730.00 rows=80181 width=18)
               ->  Seq Scan on sensor_log target  (cost=0.00..0.00 rows=1 width=48)
                     Filter: ((location)::text = '10'::text)
               ->  Seq Scan on sensor_log_201607 target_1  (cost=0.00..48910.00 rows=26352 width=18)
                     Filter: ((location)::text = '10'::text)
               ->  Seq Scan on sensor_log_201608 target_2  (cost=0.00..48910.00 rows=27562 width=18)
                     Filter: ((location)::text = '10'::text)
               ->  Seq Scan on sensor_log_201609 target_3  (cost=0.00..48910.00 rows=26266 width=18)
                     Filter: ((location)::text = '10'::text)

In this example, the CTE simulates just some other random table joining into our partitioned sample. Notice those last three elements where the planner decides to perform a sequential scan on all of the partitions? It’s not entirely naive to think date constraints are transitive when working with tables that are commonly unique. Nor is it strictly common knowledge that CTEs act as optimization fences that completely isolate the internal query. But in large archives that trend toward accumulation, those assumptions can be extremely dangerous.
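As an aside, newer Postgres releases (12 and later) soften this fence: a non-recursive CTE referenced only once may be inlined automatically, and the behavior can be forced either way with explicit keywords. A sketch of that syntax, assuming a version that supports it:

```sql
-- PostgreSQL 12+ only: NOT MATERIALIZED asks the planner to inline the
-- CTE so outer predicates can reach inside it; MATERIALIZED restores
-- the old fence behavior.
EXPLAIN
WITH src AS NOT MATERIALIZED (
    SELECT *
      FROM sensor_log
     WHERE reading_date >= '2016-09-11'
       AND reading_date < '2016-09-18'
       AND location = '5'
)
SELECT src.*, target.*
  FROM src
  LEFT JOIN sensor_log target ON (
         src.sensor_id = target.sensor_id AND
         target.location = '10'
       );
```

Even with inlining, only the source side of the join benefits; the target side still has no date constraint, so the lesson below stands regardless of version.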

Let’s say we know about the optimization fence and rewrite the query, retaining our naive assumption that date constraints are shared by the join:

EXPLAIN
SELECT source.*, target.*
  FROM sensor_log source
  LEFT JOIN sensor_log target ON (
         source.sensor_id = target.sensor_id AND
         target.location = '10'
       )
 WHERE source.reading_date >= '2016-09-11'
   AND source.reading_date < '2016-09-18'
   AND source.location = '5';
 
 Merge Left Join  (cost=178003.93..212568.60 rows=2302397 width=36)
   Merge Cond: (source.sensor_id = target.sensor_id)
   ->  Sort  (cost=24742.79..24757.15 rows=5743 width=18)
         Sort Key: source.sensor_id
         ->  Append  (cost=0.00..24384.21 rows=5743 width=18)
               ->  Seq Scan on sensor_log source  (cost=0.00..0.00 rows=1 width=48)
                     Filter: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone) AND ((location)::text = '5'::text))
               ->  Index Scan using idx_sensor_log_201609_reading_date on sensor_log_201609 source_1  (cost=0.43..24384.21 rows=5742 width=18)
                     Index Cond: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone))
                     Filter: ((location)::text = '5'::text)
   ->  Sort  (cost=153261.13..153461.58 rows=80181 width=18)
         Sort Key: target.sensor_id
         ->  Append  (cost=0.00..146730.00 rows=80181 width=18)
               ->  Seq Scan on sensor_log target  (cost=0.00..0.00 rows=1 width=48)
                     Filter: ((location)::text = '10'::text)
               ->  Seq Scan on sensor_log_201607 target_1  (cost=0.00..48910.00 rows=26352 width=18)
                     Filter: ((location)::text = '10'::text)
               ->  Seq Scan on sensor_log_201608 target_2  (cost=0.00..48910.00 rows=27562 width=18)
                     Filter: ((location)::text = '10'::text)
               ->  Seq Scan on sensor_log_201609 target_3  (cost=0.00..48910.00 rows=26266 width=18)
                     Filter: ((location)::text = '10'::text)

As expected, the planner properly applied the date range to the source table, but not the target, and we end up with sequential scans on all of the partitions. Obvious, right? Maybe not. It's actually not uncommon to perform cross-data comparisons with current readings using multiple fragments of composite keys. When operating like this, it's easy to forget that the date constraints are equally critical, especially with partitioned tables that can span years and represent several TB of data.

Literally every human being that has touched a warehouse in my presence has made this mistake—including myself. Whether it’s a date, ID, or some other value, conditionals shouldn’t be assumed. Nor are they transitive, even if the database engine supports that kind of predicate migration.

Here’s what the query and its plan should resemble:

EXPLAIN
SELECT source.*, target.*
  FROM sensor_log source
  LEFT JOIN sensor_log target ON (
         source.sensor_id = target.sensor_id AND
         target.location = '10' AND
         target.reading_date >= '2016-09-11' AND
         target.reading_date < '2016-09-18'
       )
 WHERE source.reading_date >= '2016-09-11'
   AND source.reading_date < '2016-09-18'
   AND source.location = '5';
 
 Merge Left Join  (cost=49517.62..52217.26 rows=178062 width=36)
   Merge Cond: (source.sensor_id = target.sensor_id)
   ->  Sort  (cost=24742.79..24757.15 rows=5743 width=18)
         Sort Key: source.sensor_id
         ->  Append  (cost=0.00..24384.21 rows=5743 width=18)
               ->  Seq Scan on sensor_log source  (cost=0.00..0.00 rows=1 width=48)
                     Filter: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone) AND ((location)::text = '5'::text))
               ->  Index Scan using idx_sensor_log_201609_reading_date on sensor_log_201609 source_1  (cost=0.43..24384.21 rows=5742 width=18)
                     Index Cond: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone))
                     Filter: ((location)::text = '5'::text)
   ->  Sort  (cost=24774.82..24790.32 rows=6201 width=18)
         Sort Key: target.sensor_id
         ->  Append  (cost=0.00..24384.21 rows=6201 width=18)
               ->  Seq Scan on sensor_log target  (cost=0.00..0.00 rows=1 width=48)
                     Filter: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone) AND ((location)::text = '10'::text))
               ->  Index Scan using idx_sensor_log_201609_reading_date on sensor_log_201609 target_1  (cost=0.43..24384.21 rows=6200 width=18)
                     Index Cond: ((reading_date >= '2016-09-11 00:00:00-05'::timestamp with time zone) AND (reading_date < '2016-09-18 00:00:00-05'::timestamp with time zone))
                     Filter: ((location)::text = '10'::text)

And there we have it: the partitioned results are restricted to equivalent date ranges. Those date clauses we added to the join must exist for any similarly partitioned table, unless the intent really is to compare against all historical records in every partition. That's not exactly recommended either; that kind of analysis is better served by fact tables and data cubes, which are far better suited to aggregate use than raw results.

Just be cognizant of partitioned tables. Quite a few people claim there are better ways to implement them in Postgres provided some underlying code changes are committed. Until that time, it’s critical to always include the partition column in every query, join, sub-clause, partridge, and pear tree. It’s how Postgres decides which partitions to include in results, and often enough, which indexes to consider.
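For the curious, the "better ways" being discussed look roughly like this in later Postgres releases (declarative partitioning, version 10 and beyond, not the inheritance approach used above). A sketch of the same table in that syntax:

```sql
-- PostgreSQL 10+ declarative partitioning; partition pruning no longer
-- depends on manually maintained CHECK constraints.
CREATE TABLE sensor_log (
  sensor_id     INT NOT NULL,
  location      VARCHAR NOT NULL,
  reading       INT NOT NULL,
  reading_date  TIMESTAMPTZ NOT NULL DEFAULT now()
) PARTITION BY RANGE (reading_date);

CREATE TABLE sensor_log_201609 PARTITION OF sensor_log
    FOR VALUES FROM ('2016-09-01') TO ('2016-10-01');
```

Even then, the planner still needs the partition column in the query to prune anything, so the advice above doesn't change.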

The alternative is having to wait for hours or days while Postgres checks every existing partition of every included table for the search criteria, and nobody wants that.

Well… except attackers who are trying to DoS your database.

PG Phriday: Pesky Partition Plans