Can someone explain this: 0.2 + 0.1 = 0.30000000000000004? [duplicate]

python-3.x



That's because .1 cannot be represented exactly in binary floating point. If you try

>>> .1

Python will respond with 0.1, because it prints only a short decimal form of the value it actually stores; that stored value already carries a small round-off error. The same happens with .3, but when you issue

>>> .2 + .1
0.30000000000000004

then the round-off errors in .2 and .1 accumulate. Also note:

>>> .2 + .1 == .3
False
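
If you want to see the stored values exactly, one way (a sketch, not part of the original answer) is to pass the floats to decimal.Decimal, which prints the exact value a float actually holds:

>>> from decimal import Decimal
>>> Decimal(.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')
>>> Decimal(.2 + .1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(.3)
Decimal('0.299999999999999988897769753748434595763683319091796875')

The sum and the literal .3 land on two different nearby doubles, which is why the comparison above is False.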


Not every decimal number can be represented exactly by a finite binary floating point value. Neither 0.1 nor 0.2 is exactly representable in binary floating point, and neither is 0.3.

A number is exactly representable if it is of the form a/b, where a and b are integers and b is a power of 2. The data type's significand also needs to be wide enough to hold the numerator.
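
As a rough illustration (not part of the original answer), float.as_integer_ratio() reveals the exact fraction a/b that a given float stores: an exactly representable value gives back the fraction you expect, while 0.1 comes back as the nearest fraction whose denominator is a power of 2:

>>> (0.75).as_integer_ratio()   # 3/4: denominator is a power of 2, so exact
(3, 4)
>>> (0.1).as_integer_ratio()    # 0.1 is stored as the nearest such fraction
(3602879701896397, 36028797018963968)
>>> 36028797018963968 == 2**55
True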

I recommend Rob Kennedy's useful webpage as a nice tool to explore representability.
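
If you would rather explore representability from the interpreter itself, float.hex() is a quick sketch of the same idea: it prints the significand's exact hexadecimal digits, so a terminating binary fraction looks clean while 0.1 shows the repeating pattern that had to be rounded off.

>>> (0.5).hex()    # terminating binary fraction: exactly representable
'0x1.0000000000000p-1'
>>> (0.1).hex()    # repeating 999... pattern that was cut off and rounded
'0x1.999999999999ap-4'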