Storing large numbers into a numpy array

As Nils Werner already mentioned, numpy's native fixed-width integer types cannot store numbers that large, but Python itself can, since its int objects use an arbitrary-precision implementation. So what you can do is tell numpy not to convert the numbers to its native types but to keep the Python objects instead, by passing dtype=object. This will be slower, but it will work.

In [14]: x = np.array([18, 30, 31, 31, 15], dtype=object)

In [15]: 150**x
Out[15]:
array([1477891880035400390625000000000000000000L,
       191751059232884086668491363525390625000000000000000000000000000000L,
       28762658884932613000273704528808593750000000000000000000000000000000L,
       28762658884932613000273704528808593750000000000000000000000000000000L,
       437893890380859375000000000000000L], dtype=object)

In this case the numpy array does not store the numbers themselves but references to the corresponding int objects. When you perform arithmetic operations, they aren't executed as vectorized numpy operations; they are dispatched to the Python objects behind those references.
I think you can still use most numpy functions with this workaround, but they will definitely be a lot slower than usual.
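
For instance, here is a minimal sketch (my own illustration, not from the original answer) showing that a reduction like np.sum still works on an object array, with each element-wise addition falling back to Python's arbitrary-precision integers:

import numpy as np

x = np.array([18, 30, 31, 31, 15], dtype=object)
powers = 150 ** x       # every entry is a plain Python int, so nothing overflows
total = np.sum(powers)  # np.sum falls back to the objects' __add__ element by element
print(total)            # exact, arbitrary-precision result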

But that's what you get when you're dealing with numbers that large :D
Maybe somewhere out there is a library that can deal with this issue a little better.

Just for completeness, if precision is not an issue, you can also use floats:

In [19]: x = np.array([18, 30, 31, 31, 15], dtype=np.float64)

In [20]: 150**x
Out[20]:
array([  1.47789188e+39,   1.91751059e+65,   2.87626589e+67,
         2.87626589e+67,   4.37893890e+32])
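
One caveat worth spelling out (my own addition, not from the original answer): float64 carries only about 15-16 significant decimal digits, so everything beyond the leading digits of these results is lost. A quick comparison against the exact integer value from above:

import numpy as np

exact = 150 ** 18               # exact Python int: 1477891880035400390625000000000000000000
approx = np.float64(150) ** 18  # float64 approximation of the same value
print(f'{approx:.10e}')         # ~1.4778918800e+39; only the leading ~16 digits are meaningful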


150 ** 28 is way beyond what an int64 variable can represent (it's in the ballpark of 8e60, while the maximum possible value of an unsigned int64 is roughly 1.8e19).
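
Both figures are easy to verify (a quick check of my own, not part of the original answer), since plain Python ints are arbitrary-precision and numpy exposes its integer limits via np.iinfo:

import numpy as np

print(150 ** 28)                # exact Python int, roughly 8.5e60
print(np.iinfo(np.int64).max)   # 9223372036854775807, roughly 9.2e18
print(np.iinfo(np.uint64).max)  # 18446744073709551615, roughly 1.8e19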

Python may use an arbitrary-precision integer implementation, but NumPy doesn't.

As you correctly deduced, the negative numbers are a symptom of an integer overflow.
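
To see the symptom directly, here is a minimal sketch (my own example) using a fixed-width dtype; the values wrap around modulo 2**64 and get reinterpreted as signed integers, which is where the negative numbers come from:

import numpy as np

x = np.array([18, 30, 31, 31, 15], dtype=np.int64)
print(150 ** x)  # silently wraps modulo 2**64: meaningless, often negative values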