Dynamic alternative to pivot with CASE and GROUP BY (postgresql)

If you have not installed the additional module tablefunc, run this command once per database:

CREATE EXTENSION tablefunc;

Answer to question

A very basic crosstab solution for your case:

SELECT * FROM crosstab(
  'SELECT bar, 1 AS cat, feh
   FROM   tbl_org
   ORDER  BY bar, feh'
   ) AS ct (bar text, val1 int, val2 int, val3 int);  -- more columns?

The special difficulty here is that there is no category (cat) column in the base table. For the basic 1-parameter form we can just provide a dummy column with a dummy value serving as category. The value is ignored anyway.

This is one of the rare cases where the second parameter of the crosstab() function is not needed: by definition of this problem, NULL values only appear in dangling columns to the right, and the order can be determined by the value.

If we had an actual category column with names determining the order of values in the result, we'd need the 2-parameter form of crosstab(). Here I synthesize a category column with the help of the window function row_number() to base crosstab() on:

SELECT * FROM crosstab(
   $$
   SELECT bar, val, feh
   FROM  (
      SELECT *, 'val' || row_number() OVER (PARTITION BY bar ORDER BY feh) AS val
      FROM tbl_org
      ) x
   ORDER BY 1, 2
   $$
 , $$VALUES ('val1'), ('val2'), ('val3')$$  -- more categories?
   ) AS ct (bar text, val1 int, val2 int, val3 int);  -- more columns?

The rest is pretty much run-of-the-mill. Find more explanation and links in these closely related answers.

Basics: read up on the crosstab() function first if you are not familiar with it.
Proper test setup

That's how you should provide a test case to begin with:

CREATE TEMP TABLE tbl_org (id int, feh int, bar text);
INSERT INTO tbl_org (id, feh, bar) VALUES
   (1, 10, 'A')
 , (2, 20, 'A')
 , (3,  3, 'B')
 , (4,  4, 'B')
 , (5,  5, 'C')
 , (6,  6, 'D')
 , (7,  7, 'D')
 , (8,  8, 'D');

Dynamic crosstab?

Not very dynamic yet, as @Clodoaldo commented. Dynamic return types are hard to achieve with plpgsql. But there are ways around it, with some limitations.

So as not to further complicate the rest, I demonstrate with a simpler test case:

CREATE TEMP TABLE tbl (row_name text, attrib text, val int);
INSERT INTO tbl (row_name, attrib, val) VALUES
   ('A', 'val1', 10)
 , ('A', 'val2', 20)
 , ('B', 'val1', 3)
 , ('B', 'val2', 4)
 , ('C', 'val1', 5)
 , ('D', 'val3', 8)
 , ('D', 'val1', 6)
 , ('D', 'val2', 7);

Call:

SELECT * FROM crosstab('SELECT row_name, attrib, val FROM tbl ORDER BY 1,2')
AS ct (row_name text, val1 int, val2 int, val3 int);

Returns:

 row_name | val1 | val2 | val3
----------+------+------+------
 A        |   10 |   20 |
 B        |    3 |    4 |
 C        |    5 |      |
 D        |    6 |    7 |    8

Built-in feature of tablefunc module

The tablefunc module provides a simple infrastructure for generic crosstab() calls without providing a column definition list. It includes a number of functions written in C (typically very fast):

crosstabN()

crosstab1() through crosstab4() are pre-defined. One minor point: they require and return all text, so we need to cast our integer values. But it simplifies the call:

SELECT * FROM crosstab4('SELECT row_name, attrib, val::text  -- cast!
                         FROM tbl ORDER BY 1,2');

Result:

 row_name | category_1 | category_2 | category_3 | category_4
----------+------------+------------+------------+------------
 A        | 10         | 20         |            |
 B        | 3          | 4          |            |
 C        | 5          |            |            |
 D        | 6          | 7          | 8          |

Custom crosstab() function

For more columns or other data types, we create our own composite type and function (once).
Type:

CREATE TYPE tablefunc_crosstab_int_5 AS (
  row_name text, val1 int, val2 int, val3 int, val4 int, val5 int);

Function:

CREATE OR REPLACE FUNCTION crosstab_int_5(text)
  RETURNS SETOF tablefunc_crosstab_int_5
AS '$libdir/tablefunc', 'crosstab' LANGUAGE C STABLE STRICT;

Call:

SELECT * FROM crosstab_int_5('SELECT row_name, attrib, val   -- no cast!
                              FROM tbl ORDER BY 1,2');

Result:

 row_name | val1 | val2 | val3 | val4 | val5
----------+------+------+------+------+------
 A        |   10 |   20 |      |      |
 B        |    3 |    4 |      |      |
 C        |    5 |      |      |      |
 D        |    6 |    7 |    8 |      |

One polymorphic, dynamic function for all

This goes beyond what's covered by the tablefunc module.
To make the return type dynamic I use a polymorphic type with a technique detailed in this related answer:

1-parameter form:

CREATE OR REPLACE FUNCTION crosstab_n(_qry text, _rowtype anyelement)
  RETURNS SETOF anyelement AS
$func$
BEGIN
   RETURN QUERY EXECUTE
   (SELECT format('SELECT * FROM crosstab(%L) t(%s)'
                , _qry
                , string_agg(quote_ident(attname) || ' ' || atttypid::regtype
                           , ', ' ORDER BY attnum))
    FROM   pg_attribute
    WHERE  attrelid = pg_typeof(_rowtype)::text::regclass
    AND    attnum > 0
    AND    NOT attisdropped);
END
$func$  LANGUAGE plpgsql;

Overload with this variant for the 2-parameter form:

CREATE OR REPLACE FUNCTION crosstab_n(_qry text, _cat_qry text, _rowtype anyelement)
  RETURNS SETOF anyelement AS
$func$
BEGIN
   RETURN QUERY EXECUTE
   (SELECT format('SELECT * FROM crosstab(%L, %L) t(%s)'
                , _qry, _cat_qry
                , string_agg(quote_ident(attname) || ' ' || atttypid::regtype
                           , ', ' ORDER BY attnum))
    FROM   pg_attribute
    WHERE  attrelid = pg_typeof(_rowtype)::text::regclass
    AND    attnum > 0
    AND    NOT attisdropped);
END
$func$  LANGUAGE plpgsql;

pg_typeof(_rowtype)::text::regclass: There is a row type defined for every user-defined composite type, so that attributes (columns) are listed in the system catalog pg_attribute. The fast lane to get it: cast the registered type (regtype) to text and cast this text to regclass.
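A quick sketch of that cast chain in action, assuming the composite type tablefunc_crosstab_int_3 defined in the next section already exists:

```sql
-- the registered type of the NULL placeholder
SELECT pg_typeof(NULL::tablefunc_crosstab_int_3);                  -- tablefunc_crosstab_int_3

-- cast its text representation to regclass to reach the pg_attribute entries
SELECT pg_typeof(NULL::tablefunc_crosstab_int_3)::text::regclass;
```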

Create composite types once:

You need to define every return type you are going to use once:

CREATE TYPE tablefunc_crosstab_int_3 AS (
    row_name text, val1 int, val2 int, val3 int);
CREATE TYPE tablefunc_crosstab_int_4 AS (
    row_name text, val1 int, val2 int, val3 int, val4 int);
...

For ad-hoc calls, you can also just create a temporary table to the same (temporary) effect:

CREATE TEMP TABLE temp_xtype7 (
    row_name text, x1 int, x2 int, x3 int, x4 int, x5 int, x6 int, x7 int);

Or use the type of an existing table, view or materialized view if available.
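For instance (my own sketch, not part of the original answer), a throwaway temp view with the right column layout provides a row type just as well; the name v_x3 is arbitrary:

```sql
-- a (temp) view defines a row type just like a table or composite type does;
-- only the column names and types matter, not the data
CREATE TEMP VIEW v_x3 AS
SELECT ''::text AS row_name, 0 AS val1, 0 AS val2, 0 AS val3;

SELECT * FROM crosstab_n(
   'SELECT row_name, attrib, val FROM tbl ORDER BY 1,2'
 , NULL::v_x3);
```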

Call

Using above row types:

1-parameter form (no missing values):

SELECT * FROM crosstab_n(
   'SELECT row_name, attrib, val FROM tbl ORDER BY 1,2'
 , NULL::tablefunc_crosstab_int_3);

2-parameter form (some values can be missing):

SELECT * FROM crosstab_n(
   'SELECT row_name, attrib, val FROM tbl ORDER BY 1'
 , $$VALUES ('val1'), ('val2'), ('val3')$$
 , NULL::tablefunc_crosstab_int_3);

This one function works for all return types, while the crosstabN() framework provided by the tablefunc module needs a separate function for each.
If you have named your types in sequence as demonstrated above, you only have to replace the number in the type name. To find the maximum number of categories in the base table:

SELECT max(count(*)) OVER () FROM tbl  -- returns 3
GROUP  BY row_name
LIMIT  1;

That's about as dynamic as this gets if you want individual columns. Arrays like demonstrated by @Clodoaldo, a simple text representation, or the result wrapped in a document type like json or hstore can work for any number of categories dynamically.
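For comparison, the array variant mentioned above can be sketched in a single query; it stays fully dynamic because all values per group land in one array column:

```sql
-- any number of values per row_name, no column definition list needed
SELECT row_name, array_agg(val ORDER BY attrib) AS vals
FROM   tbl
GROUP  BY row_name
ORDER  BY row_name;
```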

Disclaimer:
It's always potentially dangerous when user input is converted to code. Make sure this cannot be used for SQL injection. Don't accept input from untrusted users (directly).

Call for original question:

SELECT * FROM crosstab_n('SELECT bar, 1, feh FROM tbl_org ORDER BY 1,2'
                       , NULL::tablefunc_crosstab_int_3);


Although this is an old question, I would like to add another solution made possible by recent improvements in PostgreSQL. This solution achieves the same goal of returning a structured result from a dynamic data set without using the crosstab function at all. In other words, this is a good example of re-examining unintentional and implicit assumptions that prevent us from discovering new solutions to old problems. ;)

To illustrate, you asked for a method to transpose data with the following structure:

id    feh    bar
1     10     A
2     20     A
3      3     B
4      4     B
5      5     C
6      6     D
7      7     D
8      8     D

into this format:

bar  val1   val2   val3
A     10     20
B      3      4
C      5
D      6      7      8

The conventional solution is a clever (and incredibly knowledgeable) approach to creating dynamic crosstab queries that is explained in exquisite detail in Erwin Brandstetter's answer.

However, if your particular use case is flexible enough to accept a slightly different result format, then another solution is possible that handles dynamic pivots beautifully. This technique, which I learned of on Andrew Bender's blog, uses PostgreSQL's json_object_agg function to construct pivoted data on the fly in the form of a JSON object.

I will use Mr. Brandstetter's "simpler test case" to illustrate:

CREATE TEMP TABLE tbl (row_name text, attrib text, val int);
INSERT INTO tbl (row_name, attrib, val) VALUES
   ('A', 'val1', 10)
 , ('A', 'val2', 20)
 , ('B', 'val1', 3)
 , ('B', 'val2', 4)
 , ('C', 'val1', 5)
 , ('D', 'val3', 8)
 , ('D', 'val1', 6)
 , ('D', 'val2', 7);

Using the json_object_agg function, we can create the required pivoted result set with this pithy beauty:

SELECT
  row_name AS bar,
  json_object_agg(attrib, val) AS data
FROM tbl
GROUP BY row_name
ORDER BY row_name;

Which outputs:

 bar |                  data
-----+----------------------------------------
 A   | { "val1" : 10, "val2" : 20 }
 B   | { "val1" : 3, "val2" : 4 }
 C   | { "val1" : 5 }
 D   | { "val3" : 8, "val1" : 6, "val2" : 7 }

As you can see, this function works by creating key/value pairs in the JSON object from the attrib and val columns in the sample data, all grouped by row_name.

Although this result set obviously looks different, I believe it will actually satisfy many (if not most) real-world use cases, especially those where the data requires a dynamically generated pivot, or where the resulting data is consumed by a parent application (e.g., needs to be re-formatted for transmission in an HTTP response).

Benefits of this approach:

  • Cleaner syntax. I think everyone would agree that the syntax for this approach is far cleaner and easier to understand than even the most basic crosstab examples.

  • Completely dynamic. No information about the underlying data need be specified beforehand. Neither the column names nor their data types need be known ahead of time.

  • Handles large numbers of columns. Since the pivoted data is saved as a single jsonb column, you will not run up against PostgreSQL's column limit (≤1,600 columns, I believe). There is still a limit, but I believe it is the same as for text fields: 1 GB per JSON object created (please correct me if I am wrong). That's a lot of key/value pairs!

  • Simplified data handling. I believe that the creation of JSON data in the DB will simplify (and likely speed up) the data conversion process in parent applications. (You will note that the integer data in our sample test case was correctly stored as such in the resulting JSON objects. PostgreSQL handles this by automatically converting its intrinsic data types to JSON in accordance with the JSON specification.) This will effectively eliminate the need to manually cast data passed to parent applications: it can all be delegated to the application's native JSON parser.

Differences (and possible drawbacks):

  • It looks different. There's no denying that the results of this approach look different. The JSON object is not as pretty as the crosstab result set; however, the differences are purely cosmetic. The same information is produced, and in a format that is probably more friendly for consumption by parent applications.

  • Missing keys. Missing values in the crosstab approach are filled in with nulls, while the JSON objects are simply missing the applicable keys. You will have to decide for yourself if this is an acceptable trade-off for your use case. It seems to me that any attempt to address this problem in PostgreSQL will greatly complicate the process and likely involve some introspection in the form of additional queries.

  • Key order is not preserved. I don't know if this can be addressed in PostgreSQL, but this issue is mostly cosmetic also, since any parent applications are either unlikely to rely on key order, or have the ability to determine proper key order by other means. The worst case will probably only require an additional query of the database.

Conclusion

I am very curious to hear the opinions of others (especially @ErwinBrandstetter's) on this approach, especially as it pertains to performance. When I discovered this approach on Andrew Bender's blog, it was like getting hit in the side of the head. What a beautiful way to take a fresh approach to a difficult problem in PostgreSQL. It solved my use case perfectly, and I believe it will likewise serve many others as well.


This is to complete @Damian's good answer. I had already suggested the JSON approach in other answers before 9.4's handy json_object_agg function. It just takes more work with the previous tool set.

Two of the cited possible drawbacks are not really drawbacks. The random key order is trivially corrected if necessary. Missing keys, if relevant, take an almost trivial amount of code to address:

select
    row_name as bar,
    json_object_agg(attrib, val order by attrib) as data
from
    tbl
    right join
    (
        (select distinct row_name from tbl) a
        cross join
        (select distinct attrib from tbl) b
    ) c using (row_name, attrib)
group by row_name
order by row_name;

 bar |                     data
-----+----------------------------------------------
 A   | { "val1" : 10, "val2" : 20, "val3" : null }
 B   | { "val1" : 3, "val2" : 4, "val3" : null }
 C   | { "val1" : 5, "val2" : null, "val3" : null }
 D   | { "val1" : 6, "val2" : 7, "val3" : 8 }

For a final query consumer which understands JSON there are no drawbacks. The only limitation is that it cannot be consumed as a table source.
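If individual columns are needed after all, and the key names are known, the JSON object can still be unwrapped in an outer query. A sketch of my own, not part of the original answers:

```sql
-- expand known keys back into typed columns;
-- unknown keys would still require dynamic SQL
SELECT bar
     , (data ->> 'val1')::int AS val1
     , (data ->> 'val2')::int AS val2
     , (data ->> 'val3')::int AS val3
FROM  (
   SELECT row_name AS bar, json_object_agg(attrib, val) AS data
   FROM   tbl
   GROUP  BY row_name
   ) sub
ORDER  BY bar;
```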