PostgreSQL is a pretty good database, and I enjoy working with it. However, there is an implementation detail that not everyone knows about, which can drastically affect table performance. What is this mysterious feature? I am, of course, referring to foreign keys.
Foreign keys are normally part of good database design, and for good reason. They document entity relationships, and they verify, enforce, and maintain those relationships. Yet all of this comes at a cost that might surprise you. In PostgreSQL, every foreign key is maintained through invisible system-level triggers, including at least one on the referenced (source) table. A trigger must go there because any operation that modifies the source data has to be checked to ensure it does not violate the constraint.
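These hidden triggers are not exposed by ordinary trigger listings, but they do appear in the pg_trigger system catalog, flagged as internal. As a quick sketch (run against any database with foreign keys), this query lists them alongside the tables they fire on:

```sql
-- Internal constraint triggers (RI_ConstraintTrigger_*) are created
-- automatically for each foreign key and marked tgisinternal.
SELECT tgrelid::regclass AS table_name,
       tgname
  FROM pg_trigger
 WHERE tgisinternal
 ORDER BY table_name;
```

On a table referenced by several foreign keys, you should see one entry per enforcement action, which is exactly the overhead discussed below.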
This query is an easy way to see how many foreign keys reference each table in an entire PostgreSQL database:
SELECT t.oid::regclass::text AS table_name,
       count(1) AS total
  FROM pg_constraint c
  JOIN pg_class t ON (t.oid = c.confrelid)
 GROUP BY table_name
 ORDER BY total DESC;
With this in mind, consider how much overhead each trigger incurs on the referenced table. We can actually measure this overhead. Consider this function:
CREATE OR REPLACE FUNCTION fnc_check_fk_overhead(key_count INT)
RETURNS VOID AS
$$
DECLARE
  i INT;
BEGIN
  CREATE TABLE test_fk
  (
    id    BIGINT PRIMARY KEY,
    junk  VARCHAR
  );

  INSERT INTO test_fk
  SELECT generate_series(1, 100000), repeat(' ', 20);

  CLUSTER test_fk USING test_fk_pkey;

  FOR i IN 1..key_count LOOP
    EXECUTE 'CREATE TABLE test_fk_ref_' || i ||
            ' (test_fk_id BIGINT REFERENCES test_fk (id))';
  END LOOP;

  FOR i IN 1..100000 LOOP
    UPDATE test_fk SET junk = ' ' WHERE id = i;
  END LOOP;

  DROP TABLE test_fk CASCADE;

  FOR i IN 1..key_count LOOP
    EXECUTE 'DROP TABLE test_fk_ref_' || i;
  END LOOP;
END;
$$ LANGUAGE plpgsql VOLATILE;
The function creates a simple two-column table, fills it with 100,000 records, and tests how long it takes to update every record. This is purely meant to simulate a high-transaction load caused by multiple clients; I know no sane developer would actually update so many records one row at a time.
The only parameter this function accepts is the number of tables it should create that reference the source table. Every referring table is empty and has only one column, just enough for the reference to be valid. After the foreign key tables are created, the function performs those 100,000 updates, and we can measure the runtime with our favorite SQL tool. Here is a quick test using psql:
\timing
SELECT fnc_check_fk_overhead(0);
SELECT fnc_check_fk_overhead(5);
SELECT fnc_check_fk_overhead(10);
SELECT fnc_check_fk_overhead(15);
SELECT fnc_check_fk_overhead(20);
On our system, each of these timings was collected three times and averaged: 2961 ms, 3805 ms, 4606 ms, 5089 ms, and 5785 ms. As we can see, with merely five foreign keys, the performance of our updates drops by 28.5%. By the time we have 20 foreign keys, the updates are 95% slower!
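Those percentages follow directly from the averaged timings. Using psql itself as a calculator:

```sql
-- Slowdown relative to the zero-foreign-key baseline (2961 ms)
SELECT round((3805 - 2961) * 100.0 / 2961, 1) AS pct_slower_5_keys,
       round((5785 - 2961) * 100.0 / 2961, 1) AS pct_slower_20_keys;
-- 28.5 and 95.4, respectively
```

In other words, each additional batch of five foreign keys costs roughly another 800 to 900 ms over the same 100,000 updates.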
I don’t mention this to make you abandon foreign keys. However, if you are in charge of an extremely active OLTP system, you might consider removing any non-critical FK constraints. If the values are merely informative, or will not cause any integrity concerns, a foreign key is not required. Indeed, excessive foreign keys are actually detrimental to the database in a very tangible way.
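If you do decide a constraint is non-critical, removing it is a single statement. A minimal sketch, where the table and constraint names are hypothetical (use \d table_name or pg_constraint to find the real constraint name in your schema):

```sql
-- Drop a purely informative foreign key; the column itself remains.
ALTER TABLE order_log
  DROP CONSTRAINT order_log_user_id_fkey;

-- Optionally record the intended relationship for future readers.
COMMENT ON COLUMN order_log.user_id IS
  'References users.id (relationship not enforced by a constraint)';
```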
I merely ask that you keep this in mind when designing or revising schemas for yourself or the developers you support.