Question
I have the following table:
FN LN LN1 LN2 LN3 LN4 LN5
a b b x x x x
a c b d e NA NA
a d c a b x x
a e b c d x e
I'm filtering records for which LN is present in LN1 to LN5.
The code I used:
testFilter = filter(test, LN %in% c(LN1, LN2, LN3, LN4, LN5))
The result is not what I expect:
  FN LN LN1 LN2 LN3 LN4 LN5
1 a b b x x x x
2 a c b d e <NA> <NA>
3 a d c a b x x
4 a e b c d x e
I understand that c(LN1, LN2, LN3, LN4, LN5) gives:
"b" "b" "c" "b" "x" "d" "a" "c" "x" "e" "b" "d" "x" NA "x" "x" "x" NA "x" "e"
and I know this is where the mistake lies.
Ideally, I want to return only the 1st and 4th record.
FN LN LN1 LN2 LN3 LN4 LN5
a b b x x x x
a e b c d x e
I want to filter them only using column names. This is just a subset of 5.4M records.
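For illustration, here is a minimal sketch (assuming dplyr and the sample data above) of why the original filter keeps every row: c(LN1, ..., LN5) pools all five columns into one long vector, so LN %in% ... tests each LN value against every value in the table, not row-wise.

```r
library(dplyr)

test <- data.frame(
  FN = "a", LN = c("b", "c", "d", "e"),
  LN1 = c("b", "b", "c", "b"), LN2 = c("x", "d", "a", "c"),
  LN3 = c("x", "e", "b", "d"), LN4 = c("x", NA, "x", "x"),
  LN5 = c("x", NA, "x", "e"),
  stringsAsFactors = FALSE
)

# c() concatenates the five columns into one pooled vector ...
pooled <- c(test$LN1, test$LN2, test$LN3, test$LN4, test$LN5)

# ... so every LN value ("b", "c", "d", "e") occurs somewhere in it,
# and filter() keeps all four rows instead of only rows 1 and 4.
nrow(filter(test, LN %in% pooled))  # 4, not 2
```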
Answer 1:
Using apply:
# data
df1 <- read.table(text = "
FN LN LN1 LN2 LN3 LN4 LN5
a b b x x x x
a c b d e NA NA
a d c a b x x
a e b c d x e", header = TRUE, stringsAsFactors = FALSE)
df1[ apply(df1, 1, function(i) i[2] %in% i[3:7]), ]
# FN LN LN1 LN2 LN3 LN4 LN5
# 1 a b b x x x x
# 4 a e b c d x e
Note: For big datasets, consider one of the other solutions below; they can be about 60 times faster than this apply approach.
Answer 2:
There is an alternative approach using data.table and Reduce():
library(data.table)
cols <- paste0("LN", 1:5)
setDT(test)[test[, .I[Reduce(`|`, lapply(.SD, function(x) !is.na(x) & LN == x))],
.SDcols = cols]]
   FN LN LN1 LN2 LN3 LN4 LN5
1:  a  b   b   x   x   x   x
2:  a  e   b   c   d   x   e
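To unpack the pattern: Reduce(`|`, ...) folds a list of logical vectors into a single vector with element-wise OR, which is what turns the five per-column comparisons into one row filter. A minimal base-R sketch:

```r
# three logical vectors, e.g. one per-column comparison each
v1 <- c(TRUE, FALSE, FALSE)
v2 <- c(FALSE, FALSE, TRUE)
v3 <- c(FALSE, FALSE, FALSE)

# fold with element-wise OR: TRUE where any vector is TRUE
Reduce(`|`, list(v1, v2, v3))  # TRUE FALSE TRUE
```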
Data
library(data.table)
test <- fread(
"FN LN LN1 LN2 LN3 LN4 LN5
a b b x x x x
a c b d e NA NA
a d c a b x x
a e b c d x e")
Benchmark
library(data.table)
library(dplyr)
n_row <- 1e6L
set.seed(123L)
DT <- data.table(
FN = "a",
LN = sample(letters, n_row, TRUE))
cols <- paste0("LN", 1:5)
DT[, (cols) := lapply(1:5, function(x) sample(c(letters, NA), n_row, TRUE))]
DT
df1 <- as.data.frame(DT)
bm <- microbenchmark::microbenchmark(
zx8754 = {
df1[ apply(df1, 1, function(i) i[2] %in% i[3:7]), ]
},
eric = {
df1[ which(df1$LN == df1$LN1 |
df1$LN == df1$LN2 |
df1$LN == df1$LN3 |
df1$LN == df1$LN4 |
df1$LN == df1$LN5), ]
},
uwe = {
DT[DT[, .I[Reduce(`|`, lapply(.SD, function(x) !is.na(x) & LN == x))],
.SDcols = cols]]
},
axe = {
filter_at(df1, vars(num_range("LN", 1:5)), any_vars(. == LN))
},
jaap = {df1[!!rowSums(df1$LN == df1[, 3:7], na.rm = TRUE),]},
times = 50L
)
print(bm, "ms")
Unit: milliseconds
   expr        min         lq       mean     median         uq       max neval cld
 zx8754 3120.68925 3330.12289 3508.03001 3460.83459 3589.10255 4552.9070    50   c
   eric   69.74435   79.11995  101.80188   83.78996   98.24054  309.3864    50  a
    uwe   93.26621  115.30266  130.91483  121.64281  131.75704  292.8094    50  a
    axe   69.82137   79.54149   96.70102   81.98631   95.77107  315.3111    50  a
   jaap  362.39318  489.86989  543.39510  544.13079  570.10874 1110.1317    50  b
For 1 M rows, the hard-coded subsetting is the fastest, followed by the data.table/Reduce() and dplyr/filter_at() approaches. Using apply() is about 60 times slower.
library(ggplot2)
ggplot(bm, aes(expr, time)) + geom_violin() + scale_y_log10() + stat_summary(fun.data = mean_cl_boot)
Answer 3:
Not the simplest code, but it works:
df1[ which(df1$LN == df1$LN1 |
df1$LN == df1$LN2 |
df1$LN == df1$LN3 |
df1$LN == df1$LN4 |
df1$LN == df1$LN5), ]
#> FN LN LN1 LN2 LN3 LN4 LN5
#> 1 a b b x x x x
#> 4 a e b c d x e
Answer 4:
A quick and very easy dplyr solution:
filter_at(df1, vars(num_range("LN", 1:5)), any_vars(. == LN))
This is very similar in performance to the hard-coded answer by @EricFail, because internally it simply expands the call to:
filter(df1, (LN1 == LN) | (LN2 == LN) | (LN3 == LN) | (LN4 == LN) | (LN5 == LN))
Instead of num_range(), any other select helper can be used within vars() to easily select many variables based on their names, or column positions can be given directly.
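For example, a minimal sketch (assuming the sample data frame df1 from the question and a dplyr version that still provides filter_at) of two equivalent ways to select the columns:

```r
library(dplyr)

# sample data from the question
df1 <- read.table(text = "
FN LN LN1 LN2 LN3 LN4 LN5
a b b x x x x
a c b d e NA NA
a d c a b x x
a e b c d x e", header = TRUE, stringsAsFactors = FALSE)

# same selection via the matches() regular-expression helper
filter_at(df1, vars(matches("^LN[1-5]$")), any_vars(. == LN))

# or by giving column positions directly
filter_at(df1, 3:7, any_vars(. == LN))
```

Both return only rows 1 and 4, because any_vars() drops rows where every comparison is FALSE or NA.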
Answer 5:
You could also use rowSums:
df1[!!rowSums(df1$LN == df1[, 3:7], na.rm = TRUE),]
which gives:
  FN LN LN1 LN2 LN3 LN4 LN5
1  a  b   b   x   x   x   x
4  a  e   b   c   d   x   e
For a benchmark, see the answer of @Uwe.
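As a side note, a minimal sketch (using the sample data from the question) of what the !! is doing: rowSums() yields a per-row count of matching columns, and double negation coerces those counts to a logical index that is TRUE for any nonzero count.

```r
df1 <- read.table(text = "
FN LN LN1 LN2 LN3 LN4 LN5
a b b x x x x
a c b d e NA NA
a d c a b x x
a e b c d x e", header = TRUE, stringsAsFactors = FALSE)

# count how many of LN1..LN5 equal LN in each row (NA treated as no match)
counts <- rowSums(df1$LN == df1[, 3:7], na.rm = TRUE)
counts      # 1 0 0 1

!counts     # FALSE  TRUE  TRUE FALSE  (zero counts become TRUE)
!!counts    # TRUE  FALSE FALSE  TRUE  (nonzero counts become TRUE)

df1[!!counts, ]  # keeps rows 1 and 4
```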
Source: https://stackoverflow.com/questions/48336563/filtering-rows-in-a-dataset-by-columns