Setting Oracle size of row fetches higher makes my app slower?

oracle


Possible explanations:

  1. Java is doing nothing, while Oracle is computing the first 1000 rows instead of the first 10.

  2. Oracle is doing nothing, while Java is processing the last 1000 rows instead of the last 10.

  3. Communication protocols (e.g. TCP/IP) sit idle longer and then must handle more data at once, but the peak transfer rate is capped by hardware limits. Working against that is the protocol's per-round-trip overhead, so there should be an optimal fetch size, and anything less or more would be slower ;))

  4. It gets worse if fetching is synchronous with the rest of the Java code, so that Java asks for more rows only after processing the previous batch, and Oracle does nothing in the meantime (see the JDBC sketch after this list).

    Imagine there are 3 people:

    • 1st one folds A4 paper in half
    • 2nd one brings stacks of folded paper from one room to another
    • 3rd one cuts a shape from the folded paper.

    How big should the stacks be, if the 1st one has to wait until the 2nd one returns and the 2nd one has to wait until the 3rd one finishes their job?

    Stacks of 1000 will not be better than stacks of 10, I guess ;))
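
To make #4 concrete, here is a minimal JDBC sketch of that synchronous pattern; the connection string, credentials, table, and column names are all placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class SyncFetchDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", "scott", "tiger");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT id, payload FROM some_table")) {
            ps.setFetchSize(1000); // rows buffered per round trip

            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // Every 1000th call to next() blocks on a new round trip:
                    // Oracle sits idle while process() runs, and the JVM sits
                    // idle while the next batch is on the wire.
                    process(rs.getLong("id"), rs.getString("payload"));
                }
            }
        }
    }

    private static void process(long id, String payload) {
        // stand-in for the per-row work (the 3rd person with the scissors)
    }
}
```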


As with everything, there is no FAST=TRUE setting. The JDBC default fetch size of 10 is not ideal for your situation, but it is fine for a "typical" OLTP application, and it seems it really isn't that bad for your case either. Apparently a very large fetch size is not ideal either, but again, fetching 1000 at a time isn't that bad.

The other factor you haven't mentioned is how WIDE the rows being pulled are. The chunk of data you pull from the database server across the network to the app server is roughly WIDTH * ROWS bytes per fetch. If your rows are 5000 bytes across and you're pulling 1000 at a time, then each fetch brings in 5 MB of data. If instead your rows are "skinny" at only 100 bytes across, fetching 1000 of those only moves about 100 KB per round trip.

Because only YOU can know what the data will look like coming back, the recommendation is to set the fetch size system-wide for the "general" case, then adjust the oddball queries individually as needed.
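
For example, with the Oracle thin driver you can set the documented connection property defaultRowPrefetch as the system-wide default and override it per statement with setFetchSize. A sketch, with placeholder connection details and table names:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

public class FetchSizeDefaults {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "scott");     // placeholder credentials
        props.setProperty("password", "tiger");
        // "general case" default for every statement on this connection
        props.setProperty("defaultRowPrefetch", "100");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/ORCL", props);
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT id FROM wide_table")) {
            ps.setFetchSize(10); // per-statement override for an oddball wide-row query
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    // handle each row
                }
            }
        }
    }
}
```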

In general, I too have found 100 to be a better setting for large data processes. That's not a recommendation, just an observation.


BTW, at least with Oracle you need to be careful with the fetch size, because the driver allocates buffers for the maximum possible size of each row, not the actual data size. So if you have a fat table, your memory footprint can suffer.

Take a look here - http://www.oracle.com/technetwork/database/enterprise-edition/memory.pdf

In Oracle you can find the maximum possible space taken by a column in the user_tab_columns metadata view (the data_length column). It can be used to determine the fetch size.

In rough tests, I found that 4 * 1024 * 1024 / sum(data_length for all columns of your table) is a reasonable fetch size.
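
A rough sketch of that heuristic in JDBC (the helper name and the fallback value are mine; the 4 MB budget is the figure above):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public final class FetchSizeEstimator {
    /** Estimates a fetch size as ~4 MB divided by the maximum possible row width. */
    static int estimateFetchSize(Connection conn, String tableName) throws SQLException {
        String sql = "SELECT SUM(data_length) FROM user_tab_columns WHERE table_name = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            // Oracle stores unquoted identifiers in upper case
            ps.setString(1, tableName.toUpperCase());
            try (ResultSet rs = ps.executeQuery()) {
                long maxRowBytes = rs.next() ? rs.getLong(1) : 0;
                if (maxRowBytes <= 0) {
                    return 10; // unknown table: fall back to the driver default
                }
                return (int) Math.max(1, 4L * 1024 * 1024 / maxRowBytes);
            }
        }
    }
}
```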