SQL Server silently truncates varchar's in stored procedures

sql-server


It just is.

I've never noticed a problem, though, because one of my checks is to ensure that my parameter lengths match my table column lengths, and the same in the client code. Personally, I'd expect SQL Server never to see data that is too long. If I did see truncated data, it'd be bleeding obvious what caused it.
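
For reference, a minimal sketch of the behaviour under discussion (the table and procedure names are hypothetical): the over-long value is silently cut down to the parameter's declared length on assignment, and no error is raised.

CREATE TABLE dbo.Widget (Name varchar(10) NOT NULL);
GO
CREATE PROCEDURE dbo.InsertWidget
    @Name varchar(10)   -- matches the column length
AS
    INSERT INTO dbo.Widget (Name) VALUES (@Name);
GO
-- The 26-character argument is silently truncated to 'ABCDEFGHIJ' when it is
-- assigned to @Name; the INSERT then succeeds without any warning.
EXEC dbo.InsertWidget @Name = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
SELECT Name, LEN(Name) FROM dbo.Widget;   -- 'ABCDEFGHIJ', 10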

If you do feel the need for varchar(max), beware of a massive performance issue caused by datatype precedence: varchar(max) has higher precedence than varchar(n) (longer is higher). In this type of query you'll get a scan rather than a seek, because every varchar(100) value is CAST to varchar(max):

UPDATE ... WHERE varchar100column = @varcharmaxvalue
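
A sketch of that pattern, and of the obvious workaround of matching the variable's declared length to the column's (the table, column, and index names are hypothetical):

-- Hypothetical table with an indexed varchar(100) column.
CREATE TABLE dbo.SomeTable (varchar100column varchar(100) NOT NULL, SomeFlag bit NOT NULL);
CREATE INDEX IX_SomeTable_varchar100column ON dbo.SomeTable (varchar100column);
GO
DECLARE @varcharmaxvalue varchar(max) = 'some value';
-- Per the answer above, datatype precedence widens the varchar(100) values to
-- varchar(max) for the comparison, producing a scan rather than an index seek.
UPDATE dbo.SomeTable SET SomeFlag = 1 WHERE varchar100column = @varcharmaxvalue;

DECLARE @varchar100value varchar(100) = 'some value';
-- With the variable declared at the column's length, no widening is needed
-- and an index seek is possible.
UPDATE dbo.SomeTable SET SomeFlag = 1 WHERE varchar100column = @varchar100value;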

Edit:

There is an open Microsoft Connect item regarding this issue.

And it's probably worthy of inclusion in Erland Sommarskog's Strict settings (and matching Connect item).

Edit 2, after Martin's comment:

DECLARE @sql VARCHAR(MAX), @nsql nVARCHAR(MAX);
SELECT @sql = 'B', @nsql = 'B';

-- Length in characters vs. storage in bytes for the two variables.
SELECT
    LEN(@sql),
    LEN(@nsql),
    DATALENGTH(@sql),
    DATALENGTH(@nsql);

DECLARE @t table(c varchar(8000));
INSERT INTO @t values (replicate('A', 7500));
SELECT LEN(c) from @t;

-- Concatenating the (n)varchar(max) variables with the varchar(8000) column.
SELECT
    LEN(@sql + c),
    LEN(@nsql + c),
    DATALENGTH(@sql + c),
    DATALENGTH(@nsql + c)
FROM @t;


Thanks, as always, to StackOverflow for eliciting this kind of in-depth discussion. I have recently been scouring through my stored procedures to make them more robust, using a standard approach to transactions and try/catch blocks. I disagree with Joe Stefanelli that "My suggestion would be to make the application side responsible", and fully agree with Jez: "Having SQL Server verify the string length would be much preferable".

The whole point, for me, of using stored procedures is that they are written in a language native to the database and should act as a last line of defence. On the application side the difference between 255 and 256 is just a meaningless number, but within the database environment a field with a maximum size of 255 will simply not accept 256 characters. The application's validation mechanisms should reflect the back-end database as closely as they can, but maintenance is hard, so I want the database to give me good feedback if the application mistakenly allows unsuitable data. That's why I'm using a database instead of a bunch of text files with CSV or JSON or whatever.

I was puzzled why one of my SPs threw the 8152 error and another silently truncated. I finally twigged: the SP which threw the 8152 error had a parameter which allowed one character more than the related table column. The table column was set to nvarchar(255) but the parameter was nvarchar(256). So, wouldn't my "mistake" address gbn's concern about a "massive performance issue"? Instead of using max, perhaps we could consistently set the table column size to, say, 255 and the SP parameter to just one character longer, say 256. This solves the silent truncation problem and doesn't incur any performance penalty. Presumably there is some other disadvantage that I haven't thought of, but it seems a good compromise to me.
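
A minimal sketch of that arrangement (the table and procedure names are hypothetical; note the update below, where this turned out not to behave consistently in further testing):

CREATE TABLE dbo.Customer (Surname nvarchar(255) NOT NULL);
GO
CREATE PROCEDURE dbo.InsertCustomer
    @Surname nvarchar(256)   -- one character longer than the column
AS
    INSERT INTO dbo.Customer (Surname) VALUES (@Surname);
GO
-- A 256-character value survives the parameter assignment intact, so the
-- INSERT fails with error 8152 instead of truncating silently.
EXEC dbo.InsertCustomer @Surname = REPLICATE(N'X', 256);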

Update: I'm afraid this technique is not consistent. Further testing reveals that I can sometimes trigger the 8152 error and sometimes the data is silently truncated. I would be very grateful if someone could help me find a more reliable way of dealing with this.

Update 2: Please see Pyitoechito's answer on this page.


The same behavior can be seen here:

declare @testStringField [nvarchar](5)
set @testStringField = N'string which is too long'
select @testStringField   -- returns 'strin': silently truncated to 5 characters

My suggestion would be to make the application side responsible for validating the input before calling the stored procedure.