
Run a query with a LIMIT/OFFSET and also get the total number of rows


Yes. With a simple window function:

SELECT *, count(*) OVER() AS full_count
FROM   tbl
WHERE  /* whatever */
ORDER  BY col1
OFFSET ?
LIMIT  ?

Be aware that the cost will be substantially higher than without the total number, but typically still cheaper than two separate queries. Postgres has to actually count all rows either way, which imposes a cost depending on the total number of qualifying rows.
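
For reference, this is the two-query alternative that cost is being compared against (a sketch, using the same placeholder style and /* whatever */ predicate as above):

SELECT count(*)
FROM   tbl
WHERE  /* whatever */;

SELECT *
FROM   tbl
WHERE  /* whatever */
ORDER  BY col1
OFFSET ?
LIMIT  ?;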

However, as Dani pointed out, when OFFSET is at least as great as the number of rows returned from the base query, no rows are returned. So we also don't get full_count.
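
To illustrate (a hypothetical example, assuming tbl has fewer than 100 qualifying rows): the same query with a too-large OFFSET returns zero rows, and the window count disappears with them:

SELECT *, count(*) OVER() AS full_count
FROM   tbl
WHERE  /* whatever */
ORDER  BY col1
OFFSET 100   -- past the end of the result set
LIMIT  10;   -- no rows returned, hence no full_count either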

If that's not acceptable, a possible workaround to always return the full count would be with a CTE and an OUTER JOIN:

WITH cte AS (
   SELECT *
   FROM   tbl
   WHERE  /* whatever */
   )
SELECT *
FROM  (
   TABLE  cte
   ORDER  BY col1
   LIMIT  ?
   OFFSET ?
   ) sub
RIGHT  JOIN (SELECT count(*) FROM cte) c(full_count) ON true;

You get one row of NULL values with the full_count appended if OFFSET is too big. Otherwise, full_count is appended to every row, as in the first query.

If a row with all NULL values is a possible valid result, you have to check offset >= full_count to disambiguate the origin of the empty row.
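
One way to do that check in SQL (a sketch, not from the original answer; it binds the offset a second time as an extra parameter):

WITH cte AS (
   SELECT *
   FROM   tbl
   WHERE  /* whatever */
   )
SELECT sub.*
     , c.full_count
     , (? >= c.full_count) AS past_the_end  -- true only for the padding row from the OUTER JOIN
FROM  (
   TABLE  cte
   ORDER  BY col1
   LIMIT  ?
   OFFSET ?
   ) sub
RIGHT  JOIN (SELECT count(*) FROM cte) c(full_count) ON true;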

This still executes the base query only once. But it adds more overhead to the query, which only pays off if that overhead is less than the cost of repeating the base query for the count.

If indexes supporting the final sort order are available, it might pay to include the ORDER BY in the CTE (redundantly).
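
A sketch of that variant (same names as above; the inner ORDER BY is redundant for correctness, but it can let the planner use an index on col1):

WITH cte AS (
   SELECT *
   FROM   tbl
   WHERE  /* whatever */
   ORDER  BY col1          -- redundant, but may enable an index scan
   )
SELECT *
FROM  (
   TABLE  cte
   ORDER  BY col1
   LIMIT  ?
   OFFSET ?
   ) sub
RIGHT  JOIN (SELECT count(*) FROM cte) c(full_count) ON true;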


Edit: this answer is valid when retrieving the unfiltered table. I'll leave it here in case it helps someone, but it might not exactly answer the initial question.

Erwin Brandstetter's answer is perfect if you need an accurate value. However, on large tables you often only need a reasonably good approximation. Postgres gives you just that, and it is much faster because it does not need to evaluate each row:

SELECT *
FROM (
    SELECT *
    FROM tbl
    WHERE /* something */
    ORDER BY /* something */
    OFFSET ?
    LIMIT ?
    ) data
RIGHT JOIN (SELECT reltuples FROM pg_class WHERE relname = 'tbl') pg_count(total_count) ON true;
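
Keep in mind that pg_class.reltuples is only an estimate; it is refreshed by VACUUM, ANALYZE and autovacuum, so it can lag behind reality on tables with heavy churn. A minimal sketch to refresh and inspect it (table name tbl assumed as above):

ANALYZE tbl;   -- updates the reltuples estimate

SELECT reltuples::bigint AS estimated_rows
FROM   pg_class
WHERE  relname = 'tbl';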

I'm actually not sure whether there is an advantage to keeping the RIGHT JOIN in an outer query level rather than writing it as a plain query. It deserves some testing.
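
One way to test it (a sketch; EXPLAIN cannot take ? placeholders directly, so concrete values stand in for the parameters, and the /* something */ placeholders still need to be filled in):

EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM (
    SELECT *
    FROM tbl
    WHERE /* something */
    ORDER BY /* something */
    OFFSET 100
    LIMIT 20
    ) data
RIGHT JOIN (SELECT reltuples FROM pg_class WHERE relname = 'tbl') pg_count(total_count) ON true;

-- then run the same EXPLAIN on the single-level variant below and compare plans and timings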

SELECT t.*, pgc.reltuples AS total_count
FROM tbl as t
RIGHT JOIN pg_class pgc ON pgc.relname = 'tbl'
WHERE /* something */
ORDER BY /* something */
OFFSET ?
LIMIT ?


While Erwin Brandstetter's answer works like a charm, it returns the total count of rows appended to every row, like the following:

col1 - col2 - col3 - total
--------------------------
aaaa - aaaa - aaaa - count
bbbb - bbbb - bbbb - count
cccc - cccc - cccc - count

You may want to consider using an approach that returns total count only once, like the following:

total - rows
------------
count - [{col1: 'aaaa', col2: 'aaaa', col3: 'aaaa'},
         {col1: 'bbbb', col2: 'bbbb', col3: 'bbbb'},
         {col1: 'cccc', col2: 'cccc', col3: 'cccc'}]

SQL query:

SELECT
    (SELECT COUNT(*) FROM tbl) AS count,
    (SELECT json_agg(t.*) FROM (
        SELECT * FROM tbl
        WHERE /* whatever */
        ORDER BY col1
        OFFSET ?
        LIMIT ?
    ) AS t) AS rows
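
If OFFSET is past the end, json_agg() over zero rows yields NULL for rows; wrapping it in COALESCE returns an empty JSON array instead (a sketch in the same placeholder style):

SELECT
    (SELECT COUNT(*) FROM tbl) AS count,
    (SELECT COALESCE(json_agg(t.*), '[]'::json) FROM (
        SELECT * FROM tbl
        WHERE /* whatever */
        ORDER BY col1
        OFFSET ?
        LIMIT ?
    ) AS t) AS rows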