Ormlite Android bulk inserts
I've had the same problem, and found a reasonable workaround. This took insert time from 2 seconds to 150ms:
final OrmLiteSqliteOpenHelper myDbHelper = ...;
final SQLiteDatabase db = myDbHelper.getWritableDatabase();
db.beginTransaction();
try {
    // do ormlite stuff as usual, no callBatchTasks() needed
    db.setTransactionSuccessful();
} finally {
    db.endTransaction();
}
Update:
Just tested this on an Xperia M2 Aqua (Android 4.4/ARM), and callBatchTasks()
is actually faster there: 90 ms vs 120 ms. So I think more details are in order.
We have three tables/classes/DAOs: Parent, ChildWrapper, Child.
Relations: Parent to ChildWrapper is 1-to-n; ChildWrapper to Child is n-to-1.
Code goes like this:
void saveData(xml) {
    for (parents in xml) {
        parentDao.createOrUpdate(parent);
        for (children in parentXml) {
            childDao.createOrUpdate(child);
            childWrapperDao.createOrUpdate(generateWrapper(parent, child));
        }
    }
}
I got the original speed-up on a specific Android 4.2/MIPS set-top box (STB). callBatchTasks()
was the first option because that's what we use throughout the code, and it works well.
parentDao.callBatchTasks(
    // ...
    saveData();
    // ...
);
But inserts were slow, so we tried nesting callBatchTasks()
for every DAO involved, turning autocommit off, startThreadConnection(), and probably something else I don't remember at the moment. To no avail.
From my own experience and other similar posts, the problem seems to occur when several tables/DAOs are involved, and it appears to have something to do with implementation specifics of Android (or SQLite) on particular devices.
Unfortunately, this may be "expected". I get similar performance when I do that number of inserts under my emulator as well. The batch-tasks and turning off auto-commit don't seem to help.
If you are looking to load a large amount of data into a database, you might consider replaying a database dump instead. See here:
My guess would be that you are slowed down somewhat because you are doing two I/O tasks at once (at least in the code shown above): you are reading from a file and writing to a database (which is also a file). Also, from what I understand, transactions should be a reasonable size; 1600 seems like a very high number. I would start with 100 and play around with the size.
So essentially I suggest you "chunk" your reads and inserts.
Read 100 lines into a temporary array, then insert those 100. Then read the next 100, insert, and so on.
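The chunking idea above can be sketched in plain Java. This is a hypothetical illustration, not ORMLite API: the class and method names (ChunkedInsert, chunk, CHUNK_SIZE) are made up, and the insert step inside the loop is a placeholder comment for your real per-batch transaction.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the "read 100, insert 100" pattern.
public class ChunkedInsert {

    // Suggested starting batch size; tune it for your device.
    static final int CHUNK_SIZE = 100;

    // Split the full input into fixed-size chunks (last chunk may be shorter).
    static List<List<String>> chunk(List<String> lines, int size) {
        List<List<String>> chunks = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += size) {
            chunks.add(new ArrayList<>(lines.subList(i, Math.min(i + size, lines.size()))));
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Stand-in for the lines you would read from your input file.
        List<String> lines = new ArrayList<>();
        for (int i = 0; i < 1600; i++) {
            lines.add("row " + i);
        }
        for (List<String> batch : chunk(lines, CHUNK_SIZE)) {
            // Insert `batch` here inside one transaction,
            // e.g. beginTransaction()/setTransactionSuccessful()/endTransaction().
        }
        // 1600 rows at CHUNK_SIZE 100 -> 16 transactions instead of 1600 commits.
        System.out.println(chunk(lines, CHUNK_SIZE).size());
    }
}
```

The point of the split is that each batch commits once, so the per-commit overhead is paid 16 times rather than 1600, while no single transaction grows unreasonably large.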