
Delete a large amount of data in SQL Server


If you need to restrict which rows you delete rather than emptying the whole table, or you can't use TRUNCATE TABLE (e.g. the table is referenced by a FK constraint, or included in an indexed view), then you can do the delete in chunks:

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1

WHILE (@RowsDeleted > 0)
BEGIN
    -- delete 10,000 rows at a time
    DELETE TOP (10000) FROM MyTable [WHERE .....] -- WHERE is optional
    SET @RowsDeleted = @@ROWCOUNT
END

Generally, TRUNCATE is the best way and I'd use that if possible. But it cannot be used in all scenarios. Also, note that TRUNCATE will reset the IDENTITY value for the table if there is one.
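To see the IDENTITY reset in action, here is a minimal sketch (MyTable is a placeholder with an IDENTITY column seeded at 1):

TRUNCATE TABLE MyTable
-- the next INSERT gets identity value 1 again (the seed);
-- after DELETE FROM MyTable, the counter would have continued from its previous value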

If you are using SQL Server 2000 or earlier, TOP is not available in DELETE statements, so you can use SET ROWCOUNT instead.

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1
SET ROWCOUNT 10000 -- delete 10,000 rows at a time

WHILE (@RowsDeleted > 0)
BEGIN
    DELETE FROM MyTable [WHERE .....] -- WHERE is optional
    SET @RowsDeleted = @@ROWCOUNT
END

SET ROWCOUNT 0 -- reset, so later statements in the session are not limited


If you have that many records in your table and you want to delete them all, you should consider TRUNCATE TABLE <table> instead of DELETE FROM <table>. It will be much faster, but be aware that it cannot activate a trigger.
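As a rough sketch of the difference (the table name is a placeholder):

TRUNCATE TABLE MyTable -- deallocates whole data pages, minimally logged, no DELETE triggers fire
DELETE FROM MyTable    -- logs every deleted row and fires DELETE triggers, so it is much slower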

See the SQL Server 2000 documentation for more details: http://msdn.microsoft.com/en-us/library/aa260621%28SQL.80%29.aspx

Deleting rows from the application one at a time will take a very long time, because the DBMS cannot optimize anything: it doesn't know in advance that you are going to delete everything.
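To illustrate the difference (a minimal sketch; MyTable is a placeholder):

-- row-by-row anti-pattern: one statement per row, nothing for the engine to optimize
WHILE EXISTS (SELECT 1 FROM MyTable)
BEGIN
    DELETE TOP (1) FROM MyTable
END

-- set-based alternative: one statement whose full scope the engine knows up front
DELETE FROM MyTable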


The first has clearly better performance.

When you specify DELETE [MyTable], it will simply erase everything without checking individual IDs. The second approach wastes time and disk operations locating each record before deleting it.

It gets even worse because every time a record disappears from the middle of the table, the engine may need to condense data on disk, wasting time and work again.

Maybe a better idea would be to delete data based on clustered index columns in descending order. Then the table will basically be truncated from the end at every delete operation.
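A hedged sketch of that idea, assuming MyTable has a clustered index on an Id column (both names are placeholders):

DECLARE @RowsDeleted INTEGER
SET @RowsDeleted = 1

WHILE (@RowsDeleted > 0)
BEGIN
    -- DELETE TOP (n) does not honour ORDER BY, so pick the highest keys in a subquery
    DELETE FROM MyTable
    WHERE Id IN (SELECT TOP (10000) Id
                 FROM MyTable
                 ORDER BY Id DESC)
    SET @RowsDeleted = @@ROWCOUNT
END

Each pass removes rows from the tail of the clustered index, so pages at the end of the table are freed rather than leaving gaps in the middle.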