Why should I use int instead of a byte or short in C#


Performance-wise, an int is faster in almost all cases. The CPU is designed to work efficiently with 32-bit values.

Shorter values are complicated to deal with. To read a single byte, say, the CPU has to read the 32-bit block that contains it, and then mask out the upper 24 bits.

To write a byte, it has to read the destination 32-bit block, overwrite the lower 8 bits with the desired byte value, and write the entire 32-bit block back again.
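
As a rough sketch of that shift-and-mask work, written out by hand in C# purely for illustration (the word value and byte position here are made up):

uint word = 0x11223344;                  // one 32-bit block of memory

// "Reading" byte 1 of the block: shift it down, mask off the other 24 bits.
byte value = (byte)((word >> 8) & 0xFF); // 0x33

// "Writing" a byte back is a read-modify-write of the whole block:
uint cleared = word & ~(0xFFu << 8);     // clear the target byte
word = cleared | ((uint)0xAB << 8);      // merge in the new value -> 0x1122AB44

System.Console.WriteLine($"{value:X2} {word:X8}");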

Space-wise, of course, you save a few bytes by using smaller datatypes, so if you're building a table with a few million rows, shorter datatypes may be worth considering. (The same reasoning applies to using smaller datatypes in your database.)

And correctness-wise, an int doesn't overflow easily. What if you think your value is going to fit within a byte, and then at some point in the future some harmless-looking change to the code means larger values get stored into it?
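
For example (a contrived sketch; note that ordinary C# arithmetic runs unchecked by default, so the narrow type fails silently rather than loudly):

byte counter = 200;
counter += 100;                        // wraps around to 44 -- no error, no exception
System.Console.WriteLine(counter);     // 44

try
{
    checked
    {
        byte strict = 200;
        strict += 100;                 // here the overflow is detected instead
    }
}
catch (System.OverflowException)
{
    System.Console.WriteLine("overflow caught");
}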

Those are some of the reasons why int should be your default datatype for all integral data. Only use byte if you actually want to store machine bytes. Only use short if you're dealing with a file format or protocol or similar that actually specifies 16-bit integer values. If you're just dealing with integers in general, make them ints.


I am only 6 years late but maybe I can help someone else.

Here are some guidelines I would use:

  • If there is a possibility the data will not fit in the future, then use the larger int type.
  • If the variable is used as a struct/class field, then by default it will be aligned and padded to take up a full 32 bits anyway, so using byte/int16 will not save memory.
  • If the variable is short-lived (e.g. a local inside a function), then the smaller data types will not help much.
  • "byte" or "char" can sometimes describe the data better, and give you compile-time checking that larger values are not assigned by accident. For example, if you store the day of the month (1-31) in a byte and try to assign 1000 to it, it causes a compile error (see the sketch after this list).
  • If the variable is used in an array of roughly 100 elements or more, I would use the smaller data type, as long as it makes sense.
  • byte and int16 arrays are not as thread-safe as an int (a native word-sized primitive).
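
To illustrate the compile-time range check from the "byte"/"char" bullet (variable names are mine, purely for illustration):

// The next line would not compile -- error CS0031:
// "Constant value '1000' cannot be converted to a 'byte'"
// byte dayOfMonth = 1000;

byte dayOfMonth = 31;        // fine: 1-31 fits comfortably in a byte
int looseDay = 1000;         // compiles without complaint, even though no month has a day 1000
System.Console.WriteLine($"{dayOfMonth} {looseDay}");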

One topic that no one brought up is the limited CPU cache. Smaller programs execute faster than larger ones because the CPU can fit more of the program in the fast L1/L2/L3 caches.

Using the int type can result in fewer CPU instructions, but it will also force a higher percentage of the data to miss the CPU cache. Instructions are cheap to execute: modern CPU cores can execute 3-7 instructions per clock cycle. A single cache miss, on the other hand, can cost hundreds of clock cycles, because the read has to go all the way out to RAM.

Conserving memory also makes the rest of the application perform better, because its data is not squeezed out of the cache.
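
As a back-of-the-envelope illustration (the 10M element count matches the test below, and the 12 MB L3 figure is that of the i7-3930K used there):

const int N = 10000000;                                             // element count used in the test below
System.Console.WriteLine("byte[]: " + (1L * N / 1000000) + " MB");  // ~10 MB: close to fitting in a 12 MB L3
System.Console.WriteLine("int[]:  " + (4L * N / 1000000) + " MB");  // ~40 MB: far bigger than any cache level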

I did a quick sum test, accessing random data in random order, using both a byte array and an int array.

using System;
using System.Linq;

Random r = new Random();   // r was used but never declared in the original snippet
const int SIZE = 10000000, LOOPS = 80000;
byte[] array = Enumerable.Repeat(0, SIZE).Select(i => (byte)r.Next(10)).ToArray();
int[] visitOrder = Enumerable.Repeat(0, LOOPS).Select(i => r.Next(SIZE)).ToArray();

System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
sw.Start();
int sum = 0;
foreach (int v in visitOrder)
    sum += array[v];
sw.Stop();

Here are the results in time (ticks): (x86, release mode, without debugger, .NET 4.5, i7-3930K; smaller is better)

Array size:    10   100    1K   10K  100K    1M   10M
byte:         549   559   552   552   568   632  3041
int:          549   566   552   562   590  1803  4206
  • Accessing 1M items randomly, the byte array was roughly 2.85x faster on my CPU (632 ticks vs. 1803)!
  • For anything under 10,000 items, the difference was hardly noticeable.
  • int was never faster than byte in this basic sum test.
  • These values will vary with different CPUs with different cache sizes.

One final note: sometimes I look at the now open-source .NET Framework to see what Microsoft's experts do. The framework uses byte/int16 surprisingly little; in fact, I could hardly find any uses at all.


You would have to be dealing with a few BILLION rows before this makes any significant difference in terms of storage capacity. Let's say you have three columns, and instead of using a byte-equivalent database type, you use an int-equivalent.

That gives us 3 (columns) x 3 (extra bytes) per row, or 9 extra bytes per row.

This means, for "a few million rows" (let's say three million), you are consuming a whole extra 27 megabytes of disk space! Fortunately, as we're no longer living in the 1970s, you shouldn't have to worry about this :)
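
The same arithmetic as a quick sanity check (three million rows, three columns, as above):

const long rows = 3000000;          // "a few million rows"
const int extraBytesPerRow = 3 * 3; // 3 columns x 3 extra bytes each
System.Console.WriteLine(rows * extraBytesPerRow / 1000000.0 + " MB");  // prints "27 MB"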

As said above, stop micro-optimising - the performance hit of converting to/from different integer-like numeric types is going to hit you much, much harder than the bandwidth/disk-space costs, unless you are dealing with very, very, very large datasets.