robust and clustered standard error in R for probit and logit regression

Asked by 温柔的废话 on 2020-12-06 07:36

Originally, I mainly wanted to run a probit/logit model with clustered standard errors in R, which is quite intuitive in Stata. I came across the answer here: Logistic regre

1 Answer
  • 2020-12-06 07:46

    I prefer the sandwich package for computing robust standard errors. One reason is its excellent documentation: see vignette("sandwich"), which clearly shows all available defaults and options, and the corresponding article, which explains how you can use sandwich with custom bread and meat for special cases (see ?sandwich).

    We can use sandwich to figure out the difference between the options you posted. The difference is most likely the degrees-of-freedom correction. Here is a comparison for a simple linear regression:

    library(rms)
    library(sandwich)
    
    fitlm <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)
    
    # Your blog post's manual computation:
    X <- model.matrix(fitlm)
    n <- dim(X)[1]; k <- dim(X)[2]; dfc <- n / (n - k)
    u <- matrix(resid(fitlm))
    # meat: X' diag(u_i^2) X
    meat1 <- t(X) %*% diag(as.vector(u^2)) %*% X
    Blog <- sqrt(dfc * diag(solve(crossprod(X)) %*% meat1 %*% solve(crossprod(X))))
    
    # rms fits:
    fitols <- ols(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width,
                  x = TRUE, y = TRUE, data = iris)
    Harrell <- sqrt(diag(robcov(fitols, method = "huber")$var))
    Harrell_2 <- sqrt(diag(robcov(fitols, method = "efron")$var))
    
    # Variations available in sandwich:    
    variations <- c("const", "HC0", "HC1", "HC2","HC3", "HC4", "HC4m", "HC5")
    Zeileis <- t(sapply(variations, function(x) sqrt(diag(vcovHC(fitlm, type = x)))))
    rbind(Zeileis, Harrell, Harrell_2, Blog)
    
              (Intercept) Sepal.Width Petal.Length Petal.Width
    const       0.2507771  0.06664739   0.05671929   0.1275479
    HC0         0.2228915  0.05965267   0.06134461   0.1421440
    HC1         0.2259241  0.06046431   0.06217926   0.1440781
    HC2         0.2275785  0.06087143   0.06277905   0.1454783
    HC3         0.2324199  0.06212735   0.06426019   0.1489170
    HC4         0.2323253  0.06196108   0.06430852   0.1488708
    HC4m        0.2339698  0.06253635   0.06482791   0.1502751
    HC5         0.2274557  0.06077326   0.06279005   0.1454329
    Harrell     0.2228915  0.05965267   0.06134461   0.1421440
    Harrell_2   0.2324199  0.06212735   0.06426019   0.1489170
    Blog        0.2259241  0.06046431   0.06217926   0.1440781
    
    1. The result from the blog entry is equivalent to HC1. So if the blog entry matches your Stata output, Stata is using HC1.
    2. Frank Harrell's function reproduces HC0. As far as I understand, this was the first proposed solution; if you look through vignette("sandwich") or the articles referenced in ?sandwich::vcovHC, the later methods have slightly better properties. They differ in their degrees-of-freedom adjustments. Also note that robcov(., method = "efron") matches HC3 here.

    In any case, if you want identical output, use HC1 or adjust the variance-covariance matrix appropriately. As vignette("sandwich") shows, you only need to rescale by a constant to get from HC1 to HC0, which is not difficult. That said, HC3 or HC4 are typically preferred because of their better small-sample properties and their behavior in the presence of influential observations, so you may actually want to change the defaults in Stata instead.
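    As a quick check of that rescaling claim (a minimal sketch; sandwich defines HC1 as HC0 scaled by n/(n-k) on the variance scale):

    ```r
    library(sandwich)

    fitlm <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)
    n <- nobs(fitlm)
    k <- length(coef(fitlm))

    # On the variance scale, HC1 = n/(n - k) * HC0
    hc0 <- vcovHC(fitlm, type = "HC0")
    hc1 <- vcovHC(fitlm, type = "HC1")
    all.equal(hc1, hc0 * n / (n - k))  # TRUE
    ```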

    You can use these variance-covariance matrices by supplying them to appropriate functions, such as lmtest::coeftest or car::linearHypothesis. For instance:

    library(lmtest)
    coeftest(fitlm, vcov=vcovHC(fitlm, "HC1"))
    
    t test of coefficients:
    
                  Estimate Std. Error t value  Pr(>|t|)    
    (Intercept)   1.855997   0.225924  8.2151 1.038e-13 ***
    Sepal.Width   0.650837   0.060464 10.7640 < 2.2e-16 ***
    Petal.Length  0.709132   0.062179 11.4046 < 2.2e-16 ***
    Petal.Width  -0.556483   0.144078 -3.8624 0.0001683 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
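
    For joint hypotheses, the same covariance matrix can be passed to car::linearHypothesis via its vcov. argument (a sketch; the two restrictions tested here are arbitrary examples):

    ```r
    library(car)
    library(sandwich)

    fitlm <- lm(Sepal.Length ~ Sepal.Width + Petal.Length + Petal.Width, data = iris)

    # Joint Wald test of two restrictions using the HC1 covariance matrix
    jt <- linearHypothesis(fitlm,
                           c("Petal.Length = 0", "Petal.Width = 0"),
                           vcov. = vcovHC(fitlm, type = "HC1"))
    jt
    ```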
    

    For cluster-robust standard errors, you'll have to adjust the meat of the sandwich (see ?sandwich) or look for a function that does this for you. Several sources already explain in excruciating detail how to do it, with code and functions; there is no reason for me to reinvent the wheel here, so I skip it.
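
    That said, since the original question was about probit/logit with clustered standard errors: recent versions of sandwich ship vcovCL(), which performs the clustered meat adjustment for you (a minimal sketch; the mtcars probit and clustering on cyl are purely illustrative):

    ```r
    library(sandwich)
    library(lmtest)

    # Probit fit; data set and cluster variable are illustrative
    fitp <- glm(am ~ hp + wt, family = binomial(link = "probit"), data = mtcars)

    # Cluster-robust covariance: scores are summed within clusters before
    # forming the meat, with a small-sample cluster adjustment by default
    vc <- vcovCL(fitp, cluster = ~ cyl)
    coeftest(fitp, vcov = vc)
    ```

    If you need to match Stata's vce(cluster ...) numbers exactly, check the type and cadjust arguments of vcovCL, which control the finite-sample corrections.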

    There is also a relatively new and convenient package that computes cluster-robust standard errors for linear models and generalized linear models. See here.
