
Is it possible to force exponent or significand of a float to match another float (Python)?


It depends what you mean by "mantissa."

Internally, floats are stored using scientific notation in base 2. So if you mean the base 2 mantissa, it is actually very easy: Just multiply or divide by powers of two (not powers of 10), and the mantissa will stay the same (provided the exponent doesn't get out of range; if it does, you'll get clamped to infinity or zero, or possibly go into denormal numbers depending on architectural details). It's important to understand that the decimal expansions will not match up when you rescale on powers of two. It's the binary expansion that's preserved with this method.
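
For instance, here is a minimal sketch (using math.frexp from the standard library, which splits a float into a base-2 significand and exponent) illustrating that power-of-two scaling leaves the significand untouched:

import math

x = 3.7
for k in (-2, 0, 5):
    y = x * 2.0**k        # rescale by a power of two
    print(math.frexp(y))  # the significand 0.925 is the same each time; only the exponent shifts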

But if you mean the base 10 mantissa, no, it's not possible with floats, because the rescaled value may not be exactly representable. For example, 1.1 cannot be represented exactly in base 2 (with a finite number of digits) in much the same way as 1/3 cannot be represented in base 10 (with a finite number of digits). So rescaling 11 down by 1/10 cannot be done perfectly accurately:

>>> print("%1.29f" % (11 * 0.1))
1.10000000000000008881784197001

You can, however, do the latter with decimals. Decimals work in base 10, and will behave as expected in terms of base 10 rescaling. They also provide a fairly large amount of specialized functionality to detect and handle various kinds of loss of precision. But decimals don't benefit from NumPy speedups, so if you have a very large volume of data to work with, they may not be efficient enough for your use case. Since NumPy depends on hardware support for floating point, and most (all?) modern architectures provide no hardware support for base 10, this is not easily remedied.
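
As a minimal sketch with the decimal module, the 11-times-0.1 rescaling above is exact because the arithmetic is carried out in base 10:

from decimal import Decimal

print(Decimal(11) * Decimal("0.1"))   # 1.1 exactly
print(Decimal("1.1") * 100)           # 110.0 -- power-of-ten rescaling is exact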


Try replacing the second line by

f2 = f2*np.max(f1) + (1.0-f2)*np.min(f1)

Explanation: There are 2 places where the difference could creep in:

Step 1) f2 = (f2-np.min(f2))/(np.max(f2)-np.min(f2))

When you inspect np.min(f2) and np.max(f2), do you get exactly 0 and 1 or something like 1.0000003?

Step 2) f2 = f2*(np.max(f1)-np.min(f1)) + np.min(f1)

An expression like (a-b)+b doesn't always produce exactly a, due to rounding error. The suggested expression is slightly more stable.
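
As a minimal illustration, using 32-bit floats and the same endpoints as the example in the answer below (the exact value shown assumes standard IEEE-754 single precision):

import numpy as np

a = np.float32(0.001)     # plays the role of np.max(f1)
b = np.float32(-1000.0)   # plays the role of np.min(f1)
print((a - b) + b)        # 0.0009765625 -- intermediate rounding loses the low bits of a
print((a - b) + b == a)   # False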

For a very detailed explanation, please see What Every Computer Scientist Should Know About Floating-Point Arithmetic by David Goldberg.


TL;DR

Use

f2 = f2*np.max(f1)-np.min(f1)*(f2-1)  # f2 is now between min(f1) and max(f1)

and make sure you're using double precision, compare floating point numbers by looking at absolute or relative differences, avoid rounding for adjusting (or comparing) floating point numbers, and don't set the underlying components of floating point numbers manually.

Details

This isn't a very easy error to reproduce, as you have discovered. However, working with floating point numbers is subject to error. E.g., adding together 1 000 000 000 + 0.000 000 000 1 gives 1 000 000 000.000 000 000 1, but this is too many significant figures even for double precision (which supports around 15 significant figures), so the trailing decimal is dropped. Moreover, some "short" numbers can't be represented exactly, as noted in @Kevin's answer. See, e.g., here, for more. (Search for something like "floating point truncation roundoff error" for even more.)
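
A quick interactive sketch of that loss of precision in standard double precision:

>>> 1e9 + 1e-10 == 1e9
True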

Here's an example which does demonstrate a problem:

import numpy as np
np.set_printoptions(precision=16)
dtype = np.float32
f1 = np.linspace(-1000, 0.001, 3, dtype=dtype)
f2 = np.linspace(0, 1, 3, dtype=dtype)
f2 = (f2-np.min(f2))/(np.max(f2)-np.min(f2))  # f2 is now between 0.0 and 1.0
f2 = f2*(np.max(f1)-np.min(f1)) + np.min(f1)  # f2 is now between min(f1) and max(f1)
print(f1)
print(f2)

output

[ -1.0000000000000000e+03  -4.9999951171875000e+02   1.0000000474974513e-03]
[ -1.0000000000000000e+03  -4.9999951171875000e+02   9.7656250000000000e-04]

Following @Mark Dickinson's comment, I have used 32-bit floating point. This is consistent with the error you reported, a relative error of around 10^-7, around the 7th significant figure.

In:  (5.0230602 - 5.0230593) / 5.0230593
Out: 1.791736760621852e-07

Going to dtype=np.float64 makes things better but it still isn't perfect. The program above then gives

[ -1.0000000000000000e+03  -4.9999950000000001e+02   1.0000000000000000e-03]
[ -1.0000000000000000e+03  -4.9999950000000001e+02   9.9999999997635314e-04]

This isn't perfect, but is generally close enough. When comparing floating point numbers you almost never want to use strict equality because of the possibility of small errors as noted above. Instead subtract one number from the other and check the absolute difference is less than some tolerance, and/or look at the relative error. See, e.g., numpy.isclose.
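
For example, a minimal sketch with numpy.isclose and the two numbers from the relative-error calculation above (the tolerances here are just illustrative choices):

import numpy as np

a, b = 5.0230602, 5.0230593
print(np.isclose(a, b, rtol=1e-6, atol=0.0))  # True: the relative difference is about 1.8e-7
print(np.isclose(a, b, rtol=1e-8, atol=0.0))  # False: outside the tighter tolerance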

Returning to your problem, it seems like it should be possible to do better. After all, f2 has the range 0 to 1, so you should be able to replicate the maximum in f1. The problem comes in the line

f2 = f2*(np.max(f1)-np.min(f1)) + np.min(f1)  # f2 is now between min(f1) and max(f1)

because when an element of f2 is 1 you're doing a lot more to it than just multiplying 1 by the max of f1, leading to the possibility of floating point arithmetic errors occurring. Notice that you can multiply out the brackets f2*(np.max(f1)-np.min(f1)) to f2*np.max(f1) - f2*np.min(f1), and then factorize the resulting - f2*np.min(f1) + np.min(f1) to - np.min(f1)*(f2-1), giving

f2 = f2*np.max(f1)-np.min(f1)*(f2-1)  # f2 is now between min(f1) and max(f1)

So when an element of f2 is 1, we have 1*np.max(f1) - np.min(f1)*0, which is exactly np.max(f1). Conversely, when an element of f2 is 0, we have 0*np.max(f1) - np.min(f1)*(-1), which is exactly np.min(f1). The numbers 1 and 0 can be exactly represented, so there should be no errors.

The modified program outputs

[ -1.0000000000000000e+03  -4.9999950000000001e+02   1.0000000000000000e-03]
[ -1.0000000000000000e+03  -4.9999950000000001e+02   1.0000000000000000e-03]

i.e. as desired.

Nevertheless I would still strongly recommend only using inexact floating point comparison (with tight bounds if you need) unless you have a very good reason not to do so. There are all sorts of subtle errors that can occur in floating point arithmetic and the easiest way to avoid them is never to use exact comparison.

An alternative approach to that given above, which might be preferable, would be to rescale both arrays to between 0 and 1. This might be the most suitable form to use within the program. (And both arrays could be multiplied by a scaling factor, such as the original range of f1, if necessary.)
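
A minimal sketch of that alternative (rescale01 is just a hypothetical helper name; the endpoints come out exactly 0 and 1 because (x - min) is exactly 0 at the minimum and (max - min)/(max - min) is exactly 1 at the maximum):

import numpy as np

def rescale01(a):
    # linearly map an array onto [0, 1]; assumes np.max(a) > np.min(a)
    return (a - np.min(a)) / (np.max(a) - np.min(a))

f1 = np.linspace(-1000, 0.001, 3)
f2 = np.linspace(0, 1, 3)
f1s, f2s = rescale01(f1), rescale01(f2)
print(f1s[0], f1s[-1])  # 0.0 1.0 -- endpoints are exact
print(f2s[0], f2s[-1])  # 0.0 1.0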

Re using rounding to solve your problem, I would not recommend this. The problem with rounding -- apart from the fact that it unnecessarily reduces the accuracy of your data -- is that numbers that are very close can round in different directions. E.g.

import numpy as np

f1 = np.array([1.000049])
f2 = np.array([1.000051])
print(f1)
print(f2)
scale = 10**(-np.floor(np.log10(np.max(f1))) + 4)
f2 = np.round(f2*scale)/scale
f1 = np.round(f1*scale)/scale
print(f1)
print(f2)

Output

[ 1.000049]
[ 1.000051]
[ 1.]
[ 1.0001]

This is related to the fact that, although it's common to talk about numbers matching to so many significant figures, that isn't how they are actually compared in the computer: you calculate the difference and then divide by the reference value (giving a relative error).

Re mantissas and exponents, see math.frexp and math.ldexp, documented here. I would not recommend setting these yourself, however (consider two numbers that are very close but have different exponents, for example -- do you really want to set the mantissa?). Much better to just directly set the maximum of f2 explicitly to the maximum of f1, if you want to ensure the numbers are exactly the same (and similarly for the minimum).
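
For completeness, a minimal sketch of those two functions (frexp decomposes, ldexp recombines, and the round trip is exact):

import math

x = 5.0230602
m, e = math.frexp(x)          # x == m * 2**e, with 0.5 <= abs(m) < 1
print(m, e)                   # roughly 0.628 and 3
print(math.ldexp(m, e) == x)  # True -- recombining is exact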