
Check whether all elements of a list are equal in R


How about

allSame <- function(x) length(unique(x)) == 1
allSame(test_true)
# [1] TRUE
allSame(test_false)
# [1] FALSE
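
(The test lists are defined in the question, not shown here; something along these lines reproduces the outputs in the answers, with test_true holding identical elements and test_false not.)

# Hypothetical test data matching the outputs shown below:
test_true  <- list(1:3, 1:3, 1:3)   # all elements identical
test_false <- list(1:3, 1:3, 3:1)   # last element differs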

As @JoshuaUlrich pointed out below, unique may be slow on lists. Also, identical and unique may use different criteria. Reduce is a function I recently learned about for extending pairwise operations:

identicalValue <- function(x, y) if (identical(x, y)) x else FALSE
Reduce(identicalValue, test_true)
# [1] 1 2 3
Reduce(identicalValue, test_false)
# [1] FALSE

This inefficiently keeps making comparisons after the first non-match is found. A crude fix would be to write else break instead of else FALSE, which stops at the first mismatch by throwing an error (break is not valid outside a loop). A cleaner early-exit version is sketched below.
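
For reference, here is one way to stop at the first mismatch without raising an error (my own sketch, not from the original answer; the name allIdentical is made up):

# Early-exit sketch: return FALSE at the first element not identical to the first one.
allIdentical <- function(x) {
  if (length(x) < 2L) return(TRUE)  # empty or single-element lists are trivially equal
  for (i in 2:length(x)) {
    if (!identical(x[[1]], x[[i]])) return(FALSE)
  }
  TRUE
}
allIdentical(list(1:3, 1:3, 1:3))
# [1] TRUE
allIdentical(list(1:3, 1:3, 3:1))
# [1] FALSE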


To summarize the solutions so far. Data for the tests:

x1 <- as.list(as.data.frame(replicate(1000, 1:100)))
x2 <- as.list(as.data.frame(replicate(1000, sample(1:100, 100))))

Solutions:

# unique-based: TRUE if the list contains only one unique element
comp_list1 <- function(x) length(unique.default(x)) == 1L
# identical-based: compare every other element to the first with identical()
comp_list2 <- function(x) all(vapply(x[-1], identical, logical(1L), x = x[[1]]))
# element-wise ==: compare every other element to the first, value by value
comp_list3 <- function(x) all(vapply(x[-1], function(x2) all(x[[1]] == x2), logical(1L)))
# duplicated-based: all elements beyond the first must be duplicates
comp_list4 <- function(x) sum(duplicated.default(x)) == length(x) - 1L

Test on the data:

for (i in 1:4) cat(match.fun(paste0("comp_list", i))(x1), " ")
#> TRUE  TRUE  TRUE  TRUE
for (i in 1:4) cat(match.fun(paste0("comp_list", i))(x2), " ")
#> FALSE  FALSE  FALSE  FALSE

Benchmarks:

library(microbenchmark)
microbenchmark(comp_list1(x1), comp_list2(x1), comp_list3(x1), comp_list4(x1))
#> Unit: microseconds
#>            expr      min        lq      mean   median        uq      max neval cld
#>  comp_list1(x1)  138.327  148.5955  171.9481  162.013  188.9315  269.342   100 a
#>  comp_list2(x1) 1023.932 1125.2210 1387.6268 1255.985 1403.1885 3458.597   100  b
#>  comp_list3(x1) 1130.275 1275.9940 1511.7916 1378.789 1550.8240 3254.292   100   c
#>  comp_list4(x1)  138.075  144.8635  169.7833  159.954  185.1515  298.282   100 a
microbenchmark(comp_list1(x2), comp_list2(x2), comp_list3(x2), comp_list4(x2))
#> Unit: microseconds
#>            expr     min        lq      mean   median        uq      max neval cld
#>  comp_list1(x2) 139.492  140.3540  147.7695  145.380  149.6495  218.800   100  a
#>  comp_list2(x2) 995.373 1030.4325 1179.2274 1054.711 1136.5050 3763.506   100   b
#>  comp_list3(x2) 977.805 1029.7310 1134.3650 1049.684 1086.0730 2846.592   100   b
#>  comp_list4(x2) 135.516  136.4685  150.7185  139.030  146.7170  345.985   100  a

As we can see, the most efficient solutions are the ones based on the duplicated and unique functions.
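
If you want to reuse the fastest approach, a small wrapper (my own sketch, not part of the benchmark; the name all_elements_equal is made up) that also treats empty and single-element lists as trivially equal could look like this:

# Hypothetical convenience wrapper around the duplicated-based check
all_elements_equal <- function(x) {
  length(x) < 2L || sum(duplicated.default(x)) == length(x) - 1L
}
all_elements_equal(x1)
#> TRUE
all_elements_equal(x2)
#> FALSE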


I would do:

all.identical <- function(l) all(mapply(identical, head(l, 1), tail(l, -1)))
all.identical(test_true)
# [1] TRUE
all.identical(test_false)
# [1] FALSE