
Get execution time of PostgreSQL query


If you mean in psql, rather than some program you are writing, use \? for the help, and see:

\timing [on|off]       toggle timing of commands (currently off)

And then you get output like:

# \timing on
Timing is on.
# select 1234;
 ?column?
----------
     1234
(1 row)

Time: 0.203 ms


There are various ways to measure execution time, each with pros and cons. But whatever you do, some degree of the observer effect applies: the measurement itself may distort the result.

1. EXPLAIN ANALYZE

You can prepend EXPLAIN ANALYZE, which reports the whole query plan with estimated costs and actually measured times. The query is actually executed (with all side effects, if any!). Works for SELECT, INSERT, UPDATE, DELETE.

Check whether my adapted version of your query is, in fact, faster:

EXPLAIN ANALYZE
SELECT DISTINCT born_on.name
FROM   born_on b
WHERE  date '2012-01-30' - b.dob <= (
   SELECT max(d1.dod - b1.dob)
   FROM   born_on b1
   JOIN   died_on d1 USING (name)  -- name must be unique!
   )
AND NOT EXISTS (
   SELECT FROM died_on d2
   WHERE  d2.name = b.name
   );

Execute it a couple of times to get more comparable times with a warm cache. Several options are available to adjust the level of detail.
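For example, a sketch combining some of those options (ANALYZE, VERBOSE, and BUFFERS are all standard EXPLAIN options; the query is just a stand-in):

EXPLAIN (ANALYZE, VERBOSE, BUFFERS)
SELECT count(*) FROM born_on;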

If you are mainly interested in total execution time, make it:

EXPLAIN (ANALYZE, COSTS OFF, TIMING OFF)

Mostly, TIMING matters - the manual:

TIMING

Include actual startup time and time spent in each node in the output. The overhead of repeatedly reading the system clock can slow down the query significantly on some systems, so it may be useful to set this parameter to FALSE when only actual row counts, and not exact times, are needed. Run time of the entire statement is always measured, even when node-level timing is turned off with this option. [...]

EXPLAIN ANALYZE measures on the server, using server time from the server OS, excluding network latency. But EXPLAIN adds some overhead to also output the query plan.

2. psql with \timing

Or use \timing in psql, like Peter demonstrates.

The manual:

\timing [ on | off ]

With a parameter, turns displaying of how long each SQL statement takes on or off. Without a parameter, toggles the display between on and off. The display is in milliseconds; intervals longer than 1 second are also shown in minutes:seconds format, with hours and days fields added if needed.

Important difference: psql measures on the client, using local time from the local OS, so the time includes network latency. This difference can be negligible or huge, depending on the connection and the volume of returned data.
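To illustrate, a minimal psql session sketch (the table is a stand-in; the client prints a Time: line after each statement):

\timing on
SELECT count(*) FROM born_on;
\timing off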

3. Enable log_duration

This has probably the least overhead per measurement and produces the least distorted timings. But it's a little heavy-handed: you have to be a superuser, you have to adjust the server configuration, you cannot target the execution of a single query, and you have to read the server logs (unless you redirect to stdout).

The manual:

log_duration (boolean)

Causes the duration of every completed statement to be logged. The default is off. Only superusers can change this setting.

For clients using extended query protocol, durations of the Parse, Bind, and Execute steps are logged independently.

There are related settings like log_min_duration_statement.
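A minimal sketch of both settings (log_duration and log_min_duration_statement are the actual parameter names; the 250 ms threshold is an arbitrary choice):

-- per session (superuser): log the duration of every completed statement
SET log_duration = on;

-- or server-wide: log only statements that run longer than 250 ms
ALTER SYSTEM SET log_min_duration_statement = '250ms';
SELECT pg_reload_conf();  -- apply without restart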

4. Precise manual measurement with clock_timestamp()

The manual:

clock_timestamp() returns the actual current time, and therefore its value changes even within a single SQL command.
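Unlike now(), which stays frozen at the start of the transaction, clock_timestamp() keeps advancing. A quick way to observe the difference:

SELECT now()             AS tx_start
     , clock_timestamp() AS right_now
     , clock_timestamp() - now() AS elapsed_in_tx;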

To get execution times for ad-hoc queries as exact as possible, the best I can think of is based on what filiprem provided, which is fine as is.
I refined it some more to filter out measuring overhead, which can matter for cheap queries (but typically not for expensive ones), especially if the underlying OS makes timing calls expensive:

DO
$do$
DECLARE
   _start_ts1 timestamptz;
   _start_ts2 timestamptz;
   _start_ts  timestamptz;
   _end_ts1   timestamptz;
   _end_ts2   timestamptz;
   _end_ts    timestamptz;
   _overhead  numeric;     -- in ms
   _timing    numeric;     -- in ms
BEGIN
   _start_ts1 := clock_timestamp();
   _end_ts1   := clock_timestamp();
   _start_ts2 := clock_timestamp();
   _end_ts2   := clock_timestamp();
   -- take the minimum as conservative estimate
   _overhead  := 1000 * extract(epoch FROM LEAST(_end_ts1 - _start_ts1
                                               , _end_ts2 - _start_ts2));

   _start_ts := clock_timestamp();
   PERFORM 1;  -- your query here, replacing the outer SELECT with PERFORM
   _end_ts   := clock_timestamp();

   _timing := 1000 * (extract(epoch FROM _end_ts - _start_ts));

-- RAISE NOTICE 'Timing in ms = %'         , _timing;  -- optional info
-- RAISE NOTICE 'Timing overhead in ms = %', _overhead;
   RAISE NOTICE 'Execution time in ms = %' , _timing - _overhead;
END
$do$

The cost of timing itself varies wildly, depending on the underlying OS. To get a handle on this, take start and end times a couple of times and use the minimum interval as a conservative estimate of the timing overhead. Also, executing the function a couple of times should warm it up (if needed).

After measuring the execution time of the payload query, subtract that estimated overhead to get as close to the actual time as possible.

Of course, for cheap queries it's more meaningful to loop 100000 times, or to execute the query on a table with 100000 rows if you can, to make distracting noise insignificant.
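A minimal sketch of that looping approach, built on the same clock_timestamp() technique (PERFORM 1 is a placeholder for your cheap query; adjust the loop count to taste):

DO
$do$
DECLARE
   _start_ts timestamptz;
   _end_ts   timestamptz;
BEGIN
   _start_ts := clock_timestamp();
   FOR i IN 1 .. 100000 LOOP
      PERFORM 1;  -- your (cheap) query here
   END LOOP;
   _end_ts := clock_timestamp();
   RAISE NOTICE 'Avg execution time in ms = %'
              , 1000 * extract(epoch FROM _end_ts - _start_ts) / 100000;
END
$do$;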


PostgreSQL is not Transact-SQL. These are two slightly different things.

In PostgreSQL, this would be something along the lines of

DO $proc$
DECLARE
  StartTime timestamptz;
  EndTime timestamptz;
  Delta double precision;
BEGIN
  StartTime := clock_timestamp();
  PERFORM foo FROM bar; /* Put your query here, replacing SELECT with PERFORM */
  EndTime := clock_timestamp();
  Delta := 1000 * ( extract(epoch from EndTime) - extract(epoch from StartTime) );
  RAISE NOTICE 'Duration in millisecs=%', Delta;
END;
$proc$;

On the other hand, measuring query time does not have to be this complicated. There are better ways:

  1. In the psql command-line client there is a \timing feature which measures query time on the client side (similar to the duration shown in the bottom-right corner of SQL Server Management Studio).

  2. It's possible to record query duration in the server log (for every query, or only when it lasts longer than X milliseconds).

  3. It's possible to collect server-side timing for any single statement using the EXPLAIN command:

    EXPLAIN (ANALYZE, BUFFERS) YOUR QUERY HERE;
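
    For instance, applied to the stand-in table from the DO block above (BUFFERS adds block I/O statistics to each plan node):

    EXPLAIN (ANALYZE, BUFFERS) SELECT foo FROM bar;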