
Database deadlocks


  1. Make sure all transactions insert/update rows in the same order (see the sketch after this list).
  2. For deletes, identify the records to be deleted outside a transaction, then attempt the deletion in the smallest possible transaction, e.g. looking up by the primary key or similar identifier found during the lookup stage (also sketched below).
  3. Keep transactions small in general.
  4. Use indexing and other performance tuning both to speed up transactions and to promote index lookups over table scans.
  5. Avoid 'hot tables', e.g. one table with incrementing counters for other tables' primary keys. Any other 'switchboard'-type configuration is risky.
  6. Especially if not using Oracle, learn the locking behaviour of the target RDBMS in detail (optimistic/pessimistic, isolation levels, etc.). Ensure you do not allow row locks to escalate to table locks, as some RDBMSes will.
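To make the first two points concrete, here is a minimal Python sketch using the DB-API. The connection object, the `%s` paramstyle, and the `accounts`/`sessions` tables are all assumptions standing in for your own driver and schema:

```python
def transfer(conn, amounts):
    """Point 1: touch rows in a fixed (sorted) key order, so two
    concurrent transfers can never hold locks in opposite orders."""
    with conn:  # commit on success, roll back on exception
        cur = conn.cursor()
        for acct_id in sorted(amounts):
            cur.execute(
                "UPDATE accounts SET balance = balance + %s WHERE id = %s",
                (amounts[acct_id], acct_id),
            )

def purge_expired_sessions(conn):
    """Point 2: identify the victims outside the deleting transaction,
    then delete each one by primary key in the smallest possible
    transaction."""
    with conn:  # a separate, read-only transaction for the lookup
        cur = conn.cursor()
        cur.execute("SELECT id FROM sessions WHERE expires_at < now()")
        stale_ids = [row[0] for row in cur.fetchall()]
    for stale_id in stale_ids:
        with conn:  # one tiny, index-backed transaction per row
            conn.cursor().execute(
                "DELETE FROM sessions WHERE id = %s", (stale_id,)
            )
```

The point of the delete pattern is that the expensive predicate (`expires_at < now()`) runs outside the writing transaction, so the write itself holds only one primary-key lock for a moment.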


Deadlocks are no biggie. Just be prepared to retry your transactions on failure.
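For example, a retry wrapper along these lines (a sketch in Python; `is_deadlock` is a hypothetical helper you would adapt to your driver's error codes):

```python
import random
import time

MAX_ATTEMPTS = 5

def is_deadlock(exc):
    # Hypothetical predicate: adapt to your driver. psycopg2, for
    # example, exposes SQLSTATE '40P01' (deadlock_detected) as pgcode;
    # MySQL drivers report error 1213.
    return getattr(exc, "pgcode", None) == "40P01"

def run_with_retry(conn, work):
    """Run work(conn) in a transaction, retrying if the RDBMS picks
    this transaction as the deadlock victim."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            with conn:  # commit on success, roll back on exception
                return work(conn)
        except Exception as exc:
            if not is_deadlock(exc) or attempt == MAX_ATTEMPTS:
                raise
            # Brief, jittered backoff so the colliding transactions
            # don't simply collide again on the retry.
            time.sleep(random.uniform(0, 0.05 * attempt))
```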

And keep them short. Short transactions consisting of queries that touch very few records (via the magic of indexing) are ideal to minimize deadlocks - fewer rows are locked, and for a shorter period of time.
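A hedged illustration of why the indexing matters (table and column names are invented, and exact SQL syntax varies by engine):

```python
def create_reference_index(conn):
    # Without this index, the UPDATE below must scan the table, and on
    # engines that lock rows as they scan, that can mean locking far
    # more rows, for far longer, than the one you actually change.
    with conn:
        conn.cursor().execute(
            "CREATE INDEX IF NOT EXISTS idx_orders_reference "
            "ON orders (reference)"
        )

def mark_shipped(conn, reference):
    with conn:  # short transaction: one indexed lookup, one row locked
        conn.cursor().execute(
            "UPDATE orders SET status = 'shipped' WHERE reference = %s",
            (reference,),
        )
```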

You need to know that modern database engines don't lock tables; they lock rows, so deadlocks are a bit less likely.

You can also avoid locking by using MVCC and consistent (snapshot) reads, e.g. the REPEATABLE READ transaction isolation level: instead of locking, some transactions will just see stale data.
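For instance (a sketch; `SET TRANSACTION ISOLATION LEVEL` is standard SQL, though some engines and drivers expose this as a connection option instead):

```python
def read_balances_snapshot(conn):
    """Under MVCC engines (e.g. PostgreSQL, InnoDB), a REPEATABLE READ
    transaction reads from a consistent snapshot rather than taking
    shared locks, so this reader never blocks writers. It may simply
    return data that is stale relative to concurrent commits."""
    with conn:
        cur = conn.cursor()
        # Must be the first statement of the transaction.
        cur.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
        cur.execute("SELECT id, balance FROM accounts")
        return cur.fetchall()
```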


  1. Carefully design your database processes to eliminate, as far as possible, transactions that involve multiple tables. When I've had database design control, there has never been a case of deadlock for which I could not design out the condition that caused it. That's not to say such cases don't exist, and perhaps they abound in situations outside my limited experience; but I've had no shortage of opportunities to improve designs causing these kinds of problems. One obvious strategy is to start with a chronological, write-only table for insertion of new complete atomic transactions with no interdependencies, then apply their effects in an orderly asynchronous process (sketched after this list).

  2. Always use the database's default isolation levels and locking settings unless you are absolutely sure what risks they incur, and have proven it by testing. Redesign your process first if at all possible. Then impose the least increase in protection required to eliminate the risk (and test to prove it). Don't increase restrictiveness "just in case" - this often leads to unintended consequences, sometimes of the type you intended to avoid.

  3. To repeat the point from another direction, most of what you will read on this and other sites advocating the alteration of database settings to deal with transaction risks and locking problems is misleading and/or false, as demonstrated by how they conflict with each other so regularly. Sadly, especially for SQL Server, I have found no source of documentation that isn't hopelessly confusing and inadequate.
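For what it's worth, the write-only journal idea from point 1 might look like this in outline (a sketch only; the `journal` table, its columns, and the single-applier assumption are all illustrative):

```python
def record_event(conn, payload):
    # Writers only ever append to the journal, so they never contend
    # with each other, or with the applier, for row locks.
    with conn:
        conn.cursor().execute(
            "INSERT INTO journal (created_at, payload, applied) "
            "VALUES (now(), %s, FALSE)",
            (payload,),
        )

def apply_pending(conn, apply_one):
    # A single asynchronous worker drains the journal in chronological
    # order, applying each self-contained event to the real tables.
    # With exactly one writer to those tables, the deadlock condition
    # is designed out rather than handled.
    with conn:
        cur = conn.cursor()
        cur.execute(
            "SELECT id, payload FROM journal WHERE NOT applied ORDER BY id"
        )
        for event_id, payload in cur.fetchall():
            apply_one(conn, payload)
            cur.execute(
                "UPDATE journal SET applied = TRUE WHERE id = %s",
                (event_id,),
            )
```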