In the following example, why should we favour using f1 over f2? Is it more efficient in some sense? For someone used to base R, it seems more natural to use the "substitute + eval" option.
library(dplyr)

d = data.frame(x = 1:5,
               y = rnorm(5))

# using enquo + !!
f1 = function(mydata, myvar) {
  m = enquo(myvar)
  mydata %>%
    mutate(two_y = 2 * !!m)
}

# using substitute + eval
f2 = function(mydata, myvar) {
  m = substitute(myvar)
  mydata %>%
    mutate(two_y = 2 * eval(m))
}

all.equal(d %>% f1(y), d %>% f2(y))  # TRUE
In other words, and beyond this particular example, my question is: can I get away with programming using dplyr NSE functions with good ol' base R like substitute + eval, or do I really need to learn to love all those rlang functions because there is a benefit to it (speed, clarity, composability, ...)?
I want to give an answer that is independent of dplyr, because there is a very clear advantage to using enquo over substitute. Both look in the calling environment of a function to identify the expression that was given to that function. The difference is that substitute() does it only once, while !!enquo() will correctly walk up the entire calling stack.
Consider a simple function that uses substitute():

f <- function( myExpr ) {
  eval( substitute(myExpr), list(a=2, b=3) )
}

f(a+b)   # 5
f(a*b)   # 6
This functionality breaks when the call is nested inside another function:
g <- function( myExpr ) {
  val <- f( substitute(myExpr) )
  ## Do some stuff
  val
}

g(a+b)
# myExpr   <-- OOPS: returns the literal symbol, not 5
Now consider the same functions re-written using enquo():
library( rlang )

f2 <- function( myExpr ) {
  eval_tidy( enquo(myExpr), list(a=2, b=3) )
}

g2 <- function( myExpr ) {
  val <- f2( !!enquo(myExpr) )
  val
}

g2( a+b )   # 5
g2( b/a )   # 1.5
And that is why enquo() + !! is preferable to substitute() + eval(). dplyr simply takes full advantage of this property to build a coherent set of NSE functions.
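As a rough sketch (not part of the original answer) of what that composability buys you, a wrapper such as the question's f1() can itself be wrapped again, forwarding the captured expression with !! exactly as g2() does above. The names f1_outer and four_y are purely illustrative:

f1_outer <- function(mydata, myvar) {
  v <- enquo(myvar)
  # forward the captured expression to f1(), then keep piping
  f1(mydata, !!v) %>%
    mutate(four_y = 2 * two_y)
}

d %>% f1_outer(y)   # two_y and four_y are both computed from column y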
enquo() and !! also allow you to program with other dplyr verbs such as group_by and select. I'm not sure if substitute and eval can do that. Take a look at this example, where I modify your data frame a little bit:
library(dplyr)
set.seed(1234)

d = data.frame(x = c(1, 1, 2, 2, 3),
               y = rnorm(5),
               z = runif(5))

# select, group_by & create a new output name based on input supplied
my_summarise <- function(df, group_var, select_var) {
  group_var <- enquo(group_var)
  select_var <- enquo(select_var)

  # create new name
  mean_name <- paste0("mean_", quo_name(select_var))

  df %>%
    select(!!select_var, !!group_var) %>%
    group_by(!!group_var) %>%
    summarise(!!mean_name := mean(!!select_var))
}

my_summarise(d, x, z)
# A tibble: 3 x 2
      x mean_z
  <dbl>  <dbl>
1    1.  0.619
2    2.  0.603
3    3.  0.292
Edit: also, enquos & !!! make it easier to capture a list of variables.
# example
grouping_vars <- quos(x, y)

d %>%
  group_by(!!!grouping_vars) %>%
  summarise(mean_z = mean(z))
# A tibble: 5 x 3
# Groups:   x [?]
      x      y mean_z
  <dbl>  <dbl>  <dbl>
1    1. -1.21   0.694
2    1.  0.277  0.545
3    2. -2.35   0.923
4    2.  1.08   0.283
5    3.  0.429  0.292
# in a function
my_summarise2 <- function(df, select_var, ...) {
  group_var <- enquos(...)
  select_var <- enquo(select_var)

  # create new name
  mean_name <- paste0("mean_", quo_name(select_var))

  df %>%
    select(!!select_var, !!!group_var) %>%
    group_by(!!!group_var) %>%
    summarise(!!mean_name := mean(!!select_var))
}
my_summarise2(d, z, x, y)
# A tibble: 5 x 3
# Groups:   x [?]
      x      y mean_z
  <dbl>  <dbl>  <dbl>
1    1. -1.21   0.694
2    1.  0.277  0.545
3    2. -2.35   0.923
4    2.  1.08   0.283
5    3.  0.429  0.292
Credit: Programming with dplyr
Imagine there is a different x, a variable in the calling environment, that you want to multiply by:
> x <- 3
> f1(d, !!x)
  x            y two_y
1 1 -2.488894875     6
2 2 -1.133517746     6
3 3 -1.024834108     6
4 4  0.730537366     6
5 5 -1.325431756     6
versus without the !!:
> f1(d, x)
  x            y two_y
1 1 -2.488894875     2
2 2 -1.133517746     4
3 3 -1.024834108     6
4 4  0.730537366     8
5 5 -1.325431756    10
!! gives you more control over scoping than substitute: with substitute, only the second behaviour (looking the name up in the data frame) comes easily.
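To get the first behaviour (using the caller's x rather than the column x) with base R, a rough sketch not taken from the original answers would be to evaluate the captured expression explicitly in the calling frame before it ever reaches the data mask; the name f2_env is purely illustrative:

f2_env <- function(mydata, myvar) {
  # evaluate the expression in the caller's environment, bypassing the data mask
  val <- eval(substitute(myvar), parent.frame())
  mydata %>%
    mutate(two_y = 2 * val)
}

x <- 3
f2_env(d, x)   # two_y is 6 in every row, like f1(d, !!x)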
Source: https://stackoverflow.com/questions/49700912/why-is-enquo-preferable-to-substitute-eval