
Easy way of counting precision, recall and F1-score in R


Using the caret package:

library(caret)

y <- ...            # factor of positive / negative cases
predictions <- ...  # factor of predictions

precision <- posPredValue(predictions, y, positive = "1")
recall <- sensitivity(predictions, y, positive = "1")

F1 <- (2 * precision * recall) / (precision + recall)
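For instance, on a small set of made-up labels (the values below are purely illustrative), the snippet gives precision, recall and F1 of 0.75 each:

library(caret)

# Made-up reference labels and predictions, just to illustrate the calls above
y <- factor(c("1", "1", "0", "1", "0", "0", "1", "0"), levels = c("0", "1"))
predictions <- factor(c("1", "0", "0", "1", "0", "1", "1", "0"), levels = c("0", "1"))

precision <- posPredValue(predictions, y, positive = "1")  # 3 of 4 predicted positives are correct -> 0.75
recall <- sensitivity(predictions, y, positive = "1")      # 3 of 4 actual positives are found -> 0.75
(2 * precision * recall) / (precision + recall)            # F1 = 0.75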

A generic function that works for binary and multi-class classification without using any package is:

f1_score <- function(predicted, expected, positive.class = "1") {
    # Make sure both vectors are factors sharing the expected levels,
    # so the confusion matrix rows and columns line up
    expected  <- as.factor(expected)
    predicted <- factor(as.character(predicted), levels = levels(expected))
    cm <- as.matrix(table(expected, predicted))

    precision <- diag(cm) / colSums(cm)
    recall <- diag(cm) / rowSums(cm)
    f1 <- ifelse(precision + recall == 0, 0, 2 * precision * recall / (precision + recall))

    # Assuming that F1 is zero when it's not possible to compute it
    f1[is.na(f1)] <- 0

    # Binary F1 or multi-class macro-averaged F1
    ifelse(nlevels(expected) == 2, f1[positive.class], mean(f1))
}

Some comments about the function:

  • An F1 of NA is assumed to be zero
  • positive.class is used only for the binary F1
  • For multi-class problems, the macro-averaged F1 is computed
  • If predicted and expected have different levels, predicted will receive the expected levels
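
As an illustration on made-up data (three classes, values chosen arbitrarily):

# Purely illustrative multi-class example
expected  <- c("a", "a", "b", "b", "c", "c")
predicted <- c("a", "b", "b", "b", "c", "a")
f1_score(predicted, expected)   # macro-averaged F1 over the three classes, about 0.66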


The ROCR library calculates all these and more (see also http://rocr.bioinf.mpi-sb.mpg.de):

library(ROCR)
...
y <- ...            # logical array of positive / negative cases
predictions <- ...  # array of predictions

pred <- prediction(predictions, y)

# Recall-Precision curve
RP.perf <- performance(pred, "prec", "rec")
plot(RP.perf)

# ROC curve
ROC.perf <- performance(pred, "tpr", "fpr")
plot(ROC.perf)

# ROC area under the curve
auc.tmp <- performance(pred, "auc")
auc <- as.numeric(auc.tmp@y.values)
...
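
A minimal self-contained run, with made-up scores and labels just to show the calls end to end:

library(ROCR)

# Purely illustrative scores and labels
scores <- c(0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1)
labels <- c(TRUE, TRUE, FALSE, TRUE, TRUE, FALSE, FALSE, FALSE)

pred <- prediction(scores, labels)
plot(performance(pred, "prec", "rec"))           # precision-recall curve
as.numeric(performance(pred, "auc")@y.values)    # area under the ROC curve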


Just to update this as I came across this thread now: the confusionMatrix function in caret computes all of these things for you automatically.

cm <- confusionMatrix(prediction, reference = test_set$label)

# extract F1 score for all classes
cm[["byClass"]][ , "F1"]   # for multiclass classification problems

You can substitute any of the following for "F1" to extract the relevant values as well:

"Sensitivity", "Specificity", "Pos Pred Value", "Neg Pred Value", "Precision", "Recall", "F1", "Prevalence", "Detection", "Rate", "Detection Prevalence", "Balanced Accuracy"

I think this behaves slightly differently when you're only doing a binary classification problem, but in both cases, all of these values are computed for you when you look inside the confusionMatrix object, under $byClass.
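
For instance, in the two-class case $byClass is a named numeric vector rather than a matrix, so (assuming the same prediction and test_set$label objects as above) you index it with a single name:

# Binary case: byClass is a named vector, so index by name
cm <- confusionMatrix(prediction, reference = test_set$label)
cm[["byClass"]]["F1"]
cm[["byClass"]][c("Precision", "Recall")]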