
Double vs. BigDecimal?


A BigDecimal is an exact way of representing numbers. A Double has only a limited, fixed precision. Working with doubles of very different magnitudes (say d1 = 1e16 and d2 = 0.001) can result in the 0.001 being dropped altogether when summing, because the difference in magnitude is so large. With BigDecimal this would not happen.
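
For example, here is a minimal sketch of that magnitude problem (1e16 is chosen purely for illustration; any sufficiently large gap in magnitude triggers it):

import java.math.BigDecimal;

public class MagnitudeDemo {
    public static void main(String[] args) {
        double d1 = 1e16;    // a very large value
        double d2 = 0.001;   // a very small value

        // A double has only about 15-16 significant decimal digits,
        // so the small addend is lost entirely.
        System.out.println(d1 + d2 == d1);   // prints true

        // BigDecimal keeps every digit of the sum.
        BigDecimal b1 = new BigDecimal("1E+16");
        BigDecimal b2 = new BigDecimal("0.001");
        System.out.println(b1.add(b2));      // prints 10000000000000000.001
    }
}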

The disadvantage of BigDecimal is that it's slower, and it's a bit more awkward to program algorithms with it, since +, -, * and / are not overloaded and you have to call methods such as add() and subtract() instead.

If you are dealing with money, or if precision is a must, use BigDecimal. Otherwise doubles tend to be good enough.
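
As a rough sketch of the money case (the 2-decimal scale and HALF_UP rounding below are illustrative choices, not a universal rule):

import java.math.BigDecimal;
import java.math.RoundingMode;

public class MoneyDemo {
    public static void main(String[] args) {
        // Apply a 7.5% discount to a price of 19.99 and round to whole cents.
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal discount = new BigDecimal("0.075");

        BigDecimal total = price.subtract(price.multiply(discount))
                                .setScale(2, RoundingMode.HALF_UP);
        System.out.println(total);   // prints 18.49

        // The same calculation with double carries a binary rounding artifact,
        // because neither 19.99 nor 0.075 is exactly representable in base 2.
        System.out.println(19.99 - 19.99 * 0.075);
    }
}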

I do recommend reading the javadoc of BigDecimal, as it explains these things better than I do here :)


My English is not good, so I'll just write a simple example here.

import java.math.BigDecimal;

double a = 0.02;
double b = 0.03;
double c = b - a;
System.out.println(c);

BigDecimal _a = new BigDecimal("0.02");
BigDecimal _b = new BigDecimal("0.03");
BigDecimal _c = _b.subtract(_a);
System.out.println(_c);

Program output:

0.009999999999999998
0.01

Does anyone still want to use double? ;)


There are two main differences from double:

  • Arbitrary precision: similarly to BigInteger, it can hold numbers of arbitrary precision and size (whereas a double has a fixed number of bits).
  • Base 10 instead of base 2: a BigDecimal is n * 10^-scale, where n is an arbitrarily large signed integer and scale can be thought of as the number of digits to move the decimal point left or right (see the sketch after this list).
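
A minimal sketch of that unscaled-value/scale representation (the numbers are arbitrary examples):

import java.math.BigDecimal;
import java.math.BigInteger;

public class ScaleDemo {
    public static void main(String[] args) {
        // 123.45 is stored as the unscaled integer 12345 with scale 2,
        // i.e. 12345 * 10^-2.
        BigDecimal b = new BigDecimal("123.45");
        System.out.println(b.unscaledValue());   // prints 12345
        System.out.println(b.scale());           // prints 2

        // A negative scale moves the decimal point the other way: 12 * 10^3.
        BigDecimal thousands = new BigDecimal(BigInteger.valueOf(12), -3);
        System.out.println(thousands);           // prints 1.2E+4
    }
}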

It is still not true to say that BigDecimal can represent any number. But two reasons you should use BigDecimal for monetary calculations are:

  • It can represent all numbers that can be represented in decimal notation, which includes virtually all numbers in the monetary world (you never transfer $1/3 to someone).
  • The precision can be controlled to avoid accumulated errors. With a double, as the magnitude of the value increases, its precision decreases, and this can introduce significant error into the result (see the sketch below this list).
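
For instance, here is a minimal sketch of error accumulation, adding one cent a thousand times (the loop count is arbitrary):

import java.math.BigDecimal;

public class AccumulationDemo {
    public static void main(String[] args) {
        double doubleSum = 0.0;
        BigDecimal decimalSum = BigDecimal.ZERO;
        BigDecimal cent = new BigDecimal("0.01");

        for (int i = 0; i < 1000; i++) {
            doubleSum += 0.01;                  // each addition rounds in base 2
            decimalSum = decimalSum.add(cent);  // exact decimal addition
        }

        // The double total drifts slightly away from 10.0; the BigDecimal
        // total is exactly 10.00.
        System.out.println(doubleSum);
        System.out.println(decimalSum);
    }
}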