What is the problem with my implementation of the cross-entropy function?
You're not that far off at all, but remember you are taking the average of N per-sample sums, where N = 2 in this case. So your code could read:
    import numpy as np

    def cross_entropy(predictions, targets, epsilon=1e-12):
        """
        Computes cross entropy between targets (encoded as one-hot vectors)
        and predictions.
        Input: predictions (N, k) ndarray
               targets (N, k) ndarray
        Returns: scalar
        """
        predictions = np.clip(predictions, epsilon, 1. - epsilon)
        N = predictions.shape[0]
        ce = -np.sum(targets * np.log(predictions + 1e-9)) / N
        return ce

    predictions = np.array([[0.25, 0.25, 0.25, 0.25],
                            [0.01, 0.01, 0.01, 0.96]])
    targets = np.array([[0, 0, 0, 1],
                        [0, 0, 0, 1]])
    ans = 0.71355817782  # Correct answer
    x = cross_entropy(predictions, targets)
    print(np.isclose(x, ans))
Here, I think it's a little clearer if you stick with np.sum(). Also, I added 1e-9 inside the np.log() to avoid the possibility of a log(0) in your computation. Hope this helps!
NOTE: As per @Peter's comment, the offset of 1e-9 is indeed redundant if your epsilon value is greater than 0.
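If you drop that offset, a minimal sketch of the same function relying on the epsilon clipping alone might look like this (the clipping already keeps np.log() away from zero):

    import numpy as np

    def cross_entropy(predictions, targets, epsilon=1e-12):
        """Mean cross entropy; epsilon clipping alone guards against log(0)."""
        # Clip predictions into [epsilon, 1 - epsilon] so np.log() never sees 0.
        predictions = np.clip(predictions, epsilon, 1. - epsilon)
        N = predictions.shape[0]
        return -np.sum(targets * np.log(predictions)) / N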
For the more general case of computing cross entropy between two arbitrary (not necessarily one-hot) distributions:

    import numpy as np
    from scipy.stats import entropy, truncnorm

    def cross_entropy(x, y):
        """
        Computes cross entropy between two distributions.
        Input: x: iterable of N non-negative values
               y: iterable of N non-negative values
        Returns: scalar
        """
        x = np.array(x, dtype=float)
        y = np.array(y, dtype=float)
        if np.any(x < 0) or np.any(y < 0):
            raise ValueError('Negative values exist.')

        # Force to proper probability mass functions.
        x /= np.sum(x)
        y /= np.sum(y)

        # Ignore zero 'y' elements.
        mask = y > 0
        x = x[mask]
        y = y[mask]
        ce = -np.sum(x * np.log(y))
        return ce

    def cross_entropy_via_scipy(x, y):
        """SEE: https://en.wikipedia.org/wiki/Cross_entropy"""
        return entropy(x) + entropy(x, y)

    x = truncnorm.rvs(0.1, 2, size=100)
    y = truncnorm.rvs(0.1, 2, size=100)
    print(np.isclose(cross_entropy(x, y), cross_entropy_via_scipy(x, y)))
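Here cross_entropy_via_scipy() uses the identity H(p, q) = H(p) + D_KL(p || q): scipy.stats.entropy() returns the Shannon entropy when called with one argument and the Kullback-Leibler divergence when called with two, so their sum is the cross entropy. SciPy also normalizes its inputs to probability distributions, matching the explicit normalization in the handwritten version.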