
The real difference between float32 and float64


import numpy as np

a = np.array([0.123456789121212, 2, 3], dtype=np.float16)
print("16bit: ", a[0])
a = np.array([0.123456789121212, 2, 3], dtype=np.float32)
print("32bit: ", a[0])
b = np.array([0.123456789121212121212, 2, 3], dtype=np.float64)
print("64bit: ", b[0])
  • 16bit: 0.1235
  • 32bit: 0.12345679
  • 64bit: 0.12345678912121212


float32 is a 32-bit number, while float64 uses 64 bits.

That means float64s take up twice as much memory, and operations on them may be much slower on some machine architectures.
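You can see the memory cost directly from the array's nbytes attribute; the one-million-element arrays below are just an arbitrary illustration:

import numpy as np

a32 = np.zeros(1_000_000, dtype=np.float32)
a64 = np.zeros(1_000_000, dtype=np.float64)
print(a32.nbytes)  # 4000000 bytes: 4 bytes per element
print(a64.nbytes)  # 8000000 bytes: 8 bytes per element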

However, float64s can represent numbers much more accurately than 32-bit floats.

They also allow much larger numbers to be stored.
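Both limits can be queried with np.finfo; the values below are the standard IEEE single- and double-precision figures:

import numpy as np

print(np.finfo(np.float32).eps)  # ~1.19e-07, roughly 7 significant decimal digits
print(np.finfo(np.float64).eps)  # ~2.22e-16, roughly 15-16 significant decimal digits
print(np.finfo(np.float32).max)  # ~3.4e+38
print(np.finfo(np.float64).max)  # ~1.8e+308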

For your Python/NumPy project, I'm sure you know the input variables and their nature.

To make a decision, we as programmers need to ask ourselves:

  1. What kind of precision does my output need?
  2. Is speed an issue at all?
  3. What precision is needed, in parts per million?

A naive example: suppose I store the weather data of my city as [12.3, 14.5, 11.1, 9.9, 12.2, 8.2].

The predicted output for the next day could be 11.5 or 11.5164374.

Do you think storing them as float64 rather than float32 would be necessary?
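A small sketch of that toy case (the temps32/temps64 names are just for illustration): even float32 keeps far more significant digits than the thermometer readings have, at half the memory.

import numpy as np

temps32 = np.array([12.3, 14.5, 11.1, 9.9, 12.2, 8.2], dtype=np.float32)
temps64 = np.array([12.3, 14.5, 11.1, 9.9, 12.2, 8.2], dtype=np.float64)
print(temps32.mean())  # ~11.366667, already more precision than the measurements justify
print(temps64.mean())  # ~11.366666666666667
print(temps32.nbytes, temps64.nbytes)  # 24 vs 48 bytes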