Same query - different execution plans

Tags: sql-server



Query 2 uses a variable.

At the time the batch is compiled, SQL Server does not know the value of the variable, so it falls back to heuristics very similar to OPTIMIZE FOR (UNKNOWN).

For > it will assume that 30% of the rows will match (3,000 rows in your example data). This can be seen in the execution plan image below. That is far above the 12 rows (0.12%) which are the tipping point for this query between a clustered index scan and a nonclustered index seek with key lookups.
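You can confirm the 30% guess by capturing the actual plan and comparing estimated versus actual rows. A minimal sketch, assuming the #Sale table created by the script below:

```sql
SET STATISTICS XML ON;  -- returns the actual execution plan alongside the results

DECLARE @LastVersion rowversion = 0x000000000001C310;

-- In the returned plan XML, EstimateRows will be about 3000 (30% of 10,000),
-- while the ActualRows count reflects the rows the predicate really matched.
SELECT *
FROM #Sale
WHERE RowVersion > @LastVersion;

SET STATISTICS XML OFF;
```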

You would need to use OPTION (RECOMPILE) to get it to take account of the actual variable value as shown in the third plan below.

Execution Plans Image

Script

CREATE TABLE #Sale
(
    SaleId INT IDENTITY(1, 1)
        CONSTRAINT PK_Sale PRIMARY KEY,
    Test1 VARCHAR(10) NULL,
    RowVersion rowversion NOT NULL
        CONSTRAINT UQ_Sale_RowVersion UNIQUE
)

/*A better way of populating the table!*/
INSERT INTO #Sale (Test1)
SELECT TOP 10000 NULL
FROM master..spt_values v1, master..spt_values v2

GO

SELECT *
FROM #Sale
WHERE RowVersion > 0x000000000001C310
-- Query #1

DECLARE @LastVersion rowversion = 0x000000000001C310

SELECT *
FROM #Sale
WHERE RowVersion > @LastVersion
-- Query #2

SELECT *
FROM #Sale
WHERE RowVersion > @LastVersion
OPTION (RECOMPILE)
-- Query #3

DROP TABLE #Sale


Try creating a covering index for the columns you actually need to retrieve, and avoid SELECT *. Depending on the data in your table, that is the only sure way to stop SQL Server from tipping over into a scan.

A covering index is one whose key columns match the search predicate (in the same order) and which includes every column the query outputs, so the query can be answered from the index alone without key lookups.
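As a sketch against the #Sale table from the script above (the index name and the narrowed select list are illustrative, not from the original answer):

```sql
-- Hypothetical covering index: RowVersion is the seek key, and Test1 is
-- carried as an included column. SaleId is the clustered primary key,
-- so it is present in every nonclustered index automatically.
CREATE NONCLUSTERED INDEX IX_Sale_RowVersion_Covering
ON #Sale (RowVersion)
INCLUDE (Test1);

-- Covered as long as the select list stays within the index's columns:
SELECT SaleId, Test1, RowVersion
FROM #Sale
WHERE RowVersion > @LastVersion;
```

With the query fully covered, the seek-plus-lookup cost model that drives the tipping point no longer applies, so the plan stays a seek regardless of the row estimate.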

Also, since we're dealing with parameterization, it's worth checking whether OPTIMIZE FOR (UNKNOWN) has any impact on the execution plan here.
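A minimal sketch of the hint, reusing the variable from the script above:

```sql
DECLARE @LastVersion rowversion = 0x000000000001C310;

-- Tells the optimizer to ignore the variable's sniffed/known value and
-- use average density statistics instead. For a local variable in a plain
-- batch this should reproduce the same 30% guess as the un-hinted query,
-- which is why the hint may show no difference in this particular case.
SELECT *
FROM #Sale
WHERE RowVersion > @LastVersion
OPTION (OPTIMIZE FOR UNKNOWN);
```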