
Optimizing batch inserts, SQLite


I'm a bit hazy on the Java API, but I think you should start a transaction first; otherwise calling commit() is pointless. Do it with conn.setAutoCommit(false). Without it, SQLite journals each individual insert/update, and every journal write requires syncing the file to disk, which contributes heavily to the slowness.
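A minimal sketch of what that looks like with plain JDBC, assuming a SQLite JDBC driver (e.g. Xerial's sqlite-jdbc) and a hypothetical table users(id, name) standing in for the questioner's schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransactionalInsert {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db")) {
                // Disable auto-commit so all inserts share one transaction;
                // otherwise SQLite journals and syncs the file per statement.
                conn.setAutoCommit(false);
                try (PreparedStatement prep = conn.prepareStatement(
                        "INSERT INTO users(id, name) VALUES (?, ?)")) {
                    for (int i = 0; i < 100_000; i++) {
                        prep.setInt(1, i);
                        prep.setString(2, "user" + i);
                        prep.addBatch();
                    }
                    prep.executeBatch();
                    conn.commit(); // a single journal sync for the whole run
                } catch (SQLException e) {
                    conn.rollback(); // undo the partial batch on failure
                    throw e;
                }
            }
        }
    }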

EDIT: The questioner has updated to say that auto-commit is already disabled. In that case:

That is a lot of data, and that length of time doesn't sound out of this world. The best you can do is run tests with different batch sizes: there is a trade-off between the per-batch overhead when batches are too small and virtual memory kicking in when they are too large. For this reason, you shouldn't try to put everything into one buffer at once. Split the inserts into your own batches and measure, as in the sketch below.
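A minimal sketch of such a test, again assuming the hypothetical users(id, name) table; the candidate sizes are illustrative, not recommendations:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class BatchSizeTest {
        static final int ROWS = 100_000;

        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db")) {
                conn.setAutoCommit(false);
                for (int batchSize : new int[] {100, 1_000, 10_000, 100_000}) {
                    try (Statement st = conn.createStatement()) {
                        st.executeUpdate("DELETE FROM users"); // start each run clean
                    }
                    long ms = timeInserts(conn, batchSize);
                    System.out.printf("batch size %,d: %,d ms%n", batchSize, ms);
                }
            }
        }

        static long timeInserts(Connection conn, int batchSize) throws SQLException {
            long start = System.nanoTime();
            try (PreparedStatement prep = conn.prepareStatement(
                    "INSERT INTO users(id, name) VALUES (?, ?)")) {
                for (int i = 0; i < ROWS; i++) {
                    prep.setInt(1, i);
                    prep.setString(2, "user" + i);
                    prep.addBatch();
                    if ((i + 1) % batchSize == 0) {
                        prep.executeBatch(); // flush one chunk
                    }
                }
                prep.executeBatch(); // flush any remainder
            }
            conn.commit();
            return (System.nanoTime() - start) / 1_000_000;
        }
    }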


You are only executing executeBatch once, which means all 10 million statements are sent to the database in that single executeBatch call. That is far too much for the database to handle at once. You should additionally execute int[] updateCounts = prep.executeBatch(); inside your loop, say every 1,000 rows: just add an if statement that tests counter % 1000 == 0. Then the database can already start working asynchronously on the data you have sent.
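A minimal sketch of that change, with the hypothetical users(id, name) table standing in for the questioner's schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class ChunkedBatchInsert {
        public static void main(String[] args) throws SQLException {
            try (Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db");
                 PreparedStatement prep = conn.prepareStatement(
                         "INSERT INTO users(id, name) VALUES (?, ?)")) {
                conn.setAutoCommit(false);
                int counter = 0;
                for (int i = 0; i < 10_000_000; i++) {
                    prep.setInt(1, i);
                    prep.setString(2, "user" + i);
                    prep.addBatch();
                    counter++;
                    if (counter % 1000 == 0) {
                        // Ship the accumulated 1,000 statements now instead of
                        // holding all 10 million for one enormous flush.
                        int[] updateCounts = prep.executeBatch();
                    }
                }
                prep.executeBatch(); // flush the rows left over after the loop
                conn.commit();
            }
        }
    }

Note that executeBatch resets the statement's batch to empty once it returns, so memory use stays bounded by the chunk size rather than growing with the total row count.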