How big is too big for a PostgreSQL table?



Rows per table won't be an issue on their own.

So roughly speaking, 1 million rows a day for 90 days is 90 million rows. Without knowing all the details of what you are doing, I see no reason Postgres can't handle that.

Depending on your data distribution, you can use a mix of indexes, partial (filtered) indexes, and some form of table partitioning to speed things up once you see which performance issues you actually have. The problem would be the same on any other RDBMS I know of.

If you only need 3 months' worth of data, design a process to prune off the data you no longer need. That way you keep a consistent volume of data in the table. You're lucky to know in advance how much data will exist, so test at your expected volume and see what you get. Testing one table with 90 million rows can be as easy as:

create table t as
select x, 1 as c2, 2 as c3
from generate_series(1, 90000000) x;
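The pruning process mentioned above can be as simple as a scheduled delete. A minimal sketch, assuming a hypothetical table `events` with a `timestamptz` column `created_at` and a 90-day retention window:

```sql
-- Run nightly (e.g. via cron or pg_cron) to keep a rolling 90-day window.
delete from events
where created_at < now() - interval '90 days';

-- Reclaim dead tuples after large deletes.
vacuum analyze events;
```

If you partition the table by date range instead, dropping an old partition replaces the delete entirely and is far cheaper, since it is a metadata operation rather than a row-by-row delete.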

https://wiki.postgresql.org/wiki/FAQ

Limit                       Value
Maximum Database Size       Unlimited
Maximum Table Size          32 TB
Maximum Row Size            1.6 TB
Maximum Field Size          1 GB
Maximum Rows per Table      Unlimited
Maximum Columns per Table   250 - 1600 depending on column types
Maximum Indexes per Table   Unlimited


Another way to speed up your queries significantly on a table with more than 100 million rows is to CLUSTER the table, during off hours, on the index most often used in your queries. We have a table with more than 218 million rows and have seen 30X improvements.
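As a sketch, assuming a hypothetical table `events` with an index `events_created_at_idx` on its most-queried column:

```sql
-- Physically reorder the table to match the index. Takes an ACCESS
-- EXCLUSIVE lock for the duration, hence the off-hours recommendation.
cluster events using events_created_at_idx;

-- Refresh planner statistics after the reorder.
analyze events;
```

Note that CLUSTER is a one-time operation: new rows are not kept in index order, so it needs to be re-run periodically to maintain the benefit.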

Also, for a very large table, it's a good idea to create an index on your foreign keys.
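PostgreSQL indexes referenced primary keys automatically, but not the referencing foreign-key columns. A sketch, assuming a hypothetical `orders` table whose `customer_id` references `customers`:

```sql
-- CONCURRENTLY avoids blocking writes while the index builds,
-- which matters on a very large table.
create index concurrently orders_customer_id_idx
    on orders (customer_id);
```

Without this index, joins on the foreign key and cascading deletes from the parent table both fall back to sequential scans of the large table.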