standard-error

Different Robust Standard Errors of Logit Regression in Stata and R

雨燕双飞 submitted on 2020-01-09 06:36:12

Question: I am trying to replicate a logit regression from Stata in R. In Stata I use the option "robust" to get robust (heteroscedasticity-consistent) standard errors. I can replicate exactly the same coefficients as Stata, but I cannot get the same robust standard errors with the "sandwich" package. I have tried some OLS linear regression examples; the sandwich estimators in R and Stata give me the same robust standard errors for OLS. Does anybody know …
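A common explanation for this mismatch is that `vcovHC()`'s default is HC3, whereas Stata's `robust` option for maximum-likelihood models applies the plain sandwich estimator with a finite-sample factor of n/(n-1). A minimal sketch (using `mtcars` as stand-in data, since the original dataset is not shown):

```r
library(sandwich)
library(lmtest)

# Example logit fit; mtcars stands in for the questioner's data
fit <- glm(am ~ mpg + wt, data = mtcars, family = binomial)

# Stata's -robust- for ML models = plain sandwich * n/(n-1);
# vcovHC()'s default (HC3) is a different correction, hence the gap
vcov_stata <- sandwich(fit) * nobs(fit) / (nobs(fit) - 1)
coeftest(fit, vcov = vcov_stata)
```

Comparing this against Stata's `logit ..., robust` output on the same data should reproduce the standard errors to printing precision.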

How can I get confidence intervals for a one-tailed, bootstrapped Pearson correlation in R?

半腔热情 submitted on 2020-01-06 09:54:07

Question: I want to calculate 95% bootstrap confidence intervals for a one-tailed, nonparametric bootstrapped Pearson correlation test in R. However, boot.ci only gives two-tailed CIs. How can I calculate one-tailed bootstrap CIs? Here's my code for a one-tailed, bootstrapped Pearson correlation test using cor.test. (It includes boot.ci at the end, which returns the two-tailed CI, not the desired one-tailed CI. The output is included as comments (#) for comparison.) # Load boot package library(boot) # Make …
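The usual trick is that a one-sided 95% interval is one endpoint of a two-sided 90% interval, so you can still use `boot.ci` and keep just one limit. A sketch with simulated data (the original data are not shown):

```r
library(boot)

set.seed(1)
d <- data.frame(x = rnorm(50))
d$y <- 0.5 * d$x + rnorm(50)

cor_fun <- function(data, i) cor(data$x[i], data$y[i])
b <- boot(d, cor_fun, R = 2000)

# One-sided 95% CI = one endpoint of a two-sided 90% interval.
# For H1: rho > 0, keep the lower limit and use 1 as the upper bound.
ci90 <- boot.ci(b, conf = 0.90, type = "perc")
lower <- ci90$percent[4]   # positions 4 and 5 hold lower/upper limits
c(lower, 1)
</imports>
```

For H1: rho < 0, take the upper limit instead (`ci90$percent[5]`) and pair it with -1.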

robust standard errors in ggplot2

拥有回忆 submitted on 2019-12-30 07:09:12

Question: I would like to plot a model with ggplot2. I have estimated a robust variance-covariance matrix which I would like to use when estimating the confidence interval. Can I tell ggplot2 to use my VCOV, or, alternatively, can I somehow force predict.lm to use my VCOV matrix? A dummy example: source("http://people.su.se/~ma/clmclx.R") df <- data.frame(x1 = rnorm(100), x2 = rnorm(100), y = rnorm(100), group = as.factor(sample(1:10, 100, replace=T))) lm1 <- lm(y ~ x1 + x2, data = df) coeftest(lm1) ## …
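Since neither ggplot2 nor `predict.lm` accepts a custom VCOV, one workaround is to build the prediction standard errors by hand from se = sqrt(diag(X V X')) and pass them to `geom_ribbon`. A sketch (using `vcovHC` where the question would use its clustered matrix):

```r
library(ggplot2)
library(sandwich)

set.seed(1)
df <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
df$y <- 1 + 0.5 * df$x1 + rnorm(100)
lm1 <- lm(y ~ x1 + x2, data = df)

# Any robust VCOV works here; substitute your clustered matrix
V <- vcovHC(lm1, type = "HC1")

# Prediction SEs by hand: se = sqrt(diag(X V X'))
newd <- data.frame(x1 = seq(-2, 2, length.out = 50), x2 = 0)
X <- model.matrix(~ x1 + x2, newd)
newd$fit <- drop(X %*% coef(lm1))
newd$se  <- sqrt(diag(X %*% V %*% t(X)))

ggplot(newd, aes(x1, fit)) +
  geom_ribbon(aes(ymin = fit - 1.96 * se, ymax = fit + 1.96 * se),
              alpha = 0.3) +
  geom_line()
```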

plotting barplots with standard errors using R

ぐ巨炮叔叔 submitted on 2019-12-29 01:58:30

Question: I am trying to plot a simple barplot with standard errors and it's driving me crazy. I looked up some examples and got as far as this: rt5 <- data.frame(rtgrp=c(37.2,38.0,38.3,38.5,38.9), mort=c(35,11,16,8,4), se=c(0.08,0.01,0.005,0.01,0.02)) rt5 xvals=with(rt5, barplot(mort,names.arg=rtgrp, xlab="PTEMP_R group mean",ylab="%",ylim=c(0,max(mort+10+se)))) I am trying to get through the last line of the script but have been on it for quite a while: with(rt5, arrows(xvals,mort,xvals,mort+se,length …
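The truncated `arrows()` call is only missing the arguments that turn arrows into error-bar caps: `angle = 90` flattens the arrowhead and `code = 3` draws it at both ends. A completed version of the question's own script:

```r
rt5 <- data.frame(rtgrp = c(37.2, 38.0, 38.3, 38.5, 38.9),
                  mort  = c(35, 11, 16, 8, 4),
                  se    = c(0.08, 0.01, 0.005, 0.01, 0.02))

# barplot() returns the bar midpoints, which anchor the error bars
xvals <- with(rt5, barplot(mort, names.arg = rtgrp,
                           xlab = "PTEMP_R group mean", ylab = "%",
                           ylim = c(0, max(mort + 10 + se))))

# angle = 90 and code = 3 give flat caps at both ends of each bar
with(rt5, arrows(xvals, mort - se, xvals, mort + se,
                 angle = 90, code = 3, length = 0.05))
```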

R: No way to get double-clustered standard errors for an object of class “c('pmg', 'panelmodel')”?

一世执手 submitted on 2019-12-24 13:22:29

Question: I am estimating a Fama-MacBeth regression. I have taken the code from this site: fpmg <- pmg(Mumbo~Jumbo, test, index=c("year","firmid")) summary(fpmg) Mean Groups model Call: pmg(formula = Mumbo ~ Jumbo, data = superfdf, index = c("day","Firm")) Residuals Min. 1st Qu. Median Mean 3rd Qu. Max. -0.142200 -0.006930 0.000000 0.000000 0.006093 0.142900 Coefficients Estimate Std. Error z-value Pr(>|z|) (Intercept) -3.0114e-03 3.7080e-03 -0.8121 0.4167 Jumbo 4.9434e-05 3.4309e-04 0.1441 0.8854 Total …
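It helps to see what `pmg` is doing under the hood: Fama-MacBeth runs one cross-sectional regression per period and reports the mean of the period coefficients, with standard errors from their time-series variation. A hand-rolled sketch on simulated data (variable names mirror the question):

```r
set.seed(1)
d <- data.frame(year = rep(1:10, each = 20))
d$Jumbo <- rnorm(200)
d$Mumbo <- 0.1 * d$Jumbo + rnorm(200)

# Fama-MacBeth by hand: one cross-sectional OLS per period,
# then average the period coefficients
betas <- sapply(split(d, d$year),
                function(s) coef(lm(Mumbo ~ Jumbo, data = s)))

rowMeans(betas)                          # FM point estimates
apply(betas, 1, sd) / sqrt(ncol(betas))  # FM standard errors
```

Because the FM standard errors already come from the time dimension, "double clustering" on top of a `pmg` object is not well defined; working from the per-period coefficients directly, as above, is usually the more tractable route.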

Calculating Standard Error of Coefficients for Logistic Regression in Spark

亡梦爱人 submitted on 2019-12-22 18:19:01

Question: I know this question has been asked here previously, but I couldn't find the correct answer. The answer in the previous post suggests using Statistics.chiSqTest(data), which provides a goodness-of-fit test (Pearson's chi-square test), not the Wald chi-square tests for the significance of coefficients. I was trying to build the parameter estimate table for logistic regression in Spark. I was able to get the coefficients and intercepts, but I couldn't find the Spark API to get the …
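When the framework does not expose coefficient standard errors, they can be computed from the inverse Fisher information: SE = sqrt(diag((X'WX)^-1)) with W = diag(p(1-p)). A sketch in R that verifies the formula against `summary(glm)` — the same arithmetic can be applied to a design matrix and fitted probabilities pulled out of Spark:

```r
fit <- glm(am ~ mpg + wt, data = mtcars, family = binomial)

# Wald SEs from the inverse Fisher information (X' W X)^-1,
# where W = diag(p * (1 - p)) at the fitted probabilities
X <- model.matrix(fit)
p <- fitted(fit)
W <- diag(p * (1 - p))
se <- sqrt(diag(solve(t(X) %*% W %*% X)))

# Matches what summary(glm) reports
cbind(manual = se, glm = summary(fit)$coefficients[, "Std. Error"])
```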

Problem placing error bars at the center of the columns in ggplot()

守給你的承諾、 submitted on 2019-12-21 17:54:32

Question: I am having a problem with my bar chart: the error bars appear at the corners of the grouping variable's columns rather than centred on them. The code I am using is this: a <- data.frame (Cond = c("In", "In", "Out", "Out"), Temp = c("Hot", "Cool", "Hot", "Cool"), Score = c(.03, -.15, 0.84, 0.25), SE = c(.02, .08, .14, .12)) a.bar <- ggplot (data = a, aes(x = Cond, y = Score, fill = Temp)) + theme_bw() + theme(panel.grid = element_blank ()) + coord_cartesian (ylim = c(-0 …
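Off-center error bars on a grouped bar chart almost always mean the bars are dodged but the error bars are not. Giving `geom_errorbar` the same `position_dodge` width as the bars centers them, as in this sketch built from the question's own data:

```r
library(ggplot2)

a <- data.frame(Cond  = c("In", "In", "Out", "Out"),
                Temp  = c("Hot", "Cool", "Hot", "Cool"),
                Score = c(0.03, -0.15, 0.84, 0.25),
                SE    = c(0.02, 0.08, 0.14, 0.12))

# Dodged bars default to width 0.9; reuse that width for the
# error bars so each one sits on the middle of its column
ggplot(a, aes(Cond, Score, fill = Temp)) +
  geom_col(position = position_dodge(width = 0.9)) +
  geom_errorbar(aes(ymin = Score - SE, ymax = Score + SE),
                position = position_dodge(width = 0.9), width = 0.2) +
  theme_bw() +
  theme(panel.grid = element_blank())
```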

Clustered Standard Errors with data containing NAs

匆匆过客 submitted on 2019-12-20 23:28:26

Question: I'm unable to cluster standard errors using R and the guidance from this post. The cl function returns the error: Error in tapply(x, cluster1, sum) : arguments must have same length. After reading up on tapply, I'm still not sure why my cluster argument has the wrong length or what is causing this error. Here is a link to the data set I'm using: https://www.dropbox.com/s/y2od7um9pp4vn0s/Ec%201820%20-%20DD%20Data%20with%20Controls.csv Here is the R code: # read in data charter<-read.csv …
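That `tapply` length error typically means `lm` silently dropped NA rows, so the cluster vector no longer lines up with the residuals. Dropping incomplete rows before fitting keeps the two in sync. A sketch with simulated data standing in for the linked CSV:

```r
library(sandwich)
library(lmtest)

set.seed(1)
charter <- data.frame(y = rnorm(100), x = rnorm(100),
                      district = sample(letters[1:5], 100, replace = TRUE))
charter$x[c(3, 7)] <- NA   # missing values, as in the question

# Drop NA rows up front so the cluster vector matches the fit row-for-row
charter_cc <- na.omit(charter)
m <- lm(y ~ x, data = charter_cc)

coeftest(m, vcov = vcovCL(m, cluster = charter_cc$district))
```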

Newey-West standard errors with Mean Groups/Fama-MacBeth estimator

半城伤御伤魂 submitted on 2019-12-20 09:52:14

Question: I'm trying to get Newey-West standard errors to work with the output of pmg() (the Mean Groups/Fama-MacBeth estimator) from the plm package. Following the example from here: require(foreign) require(plm) require(lmtest) test <- read.dta("http://www.kellogg.northwestern.edu/faculty/petersen/htm/papers/se/test_data.dta") fpmg <- pmg(y~x, test, index=c("firmid", "year")) # Time index in second position, unlike the example I can use coeftest directly just fine to get the Fama-MacBeth standard errors: …
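One approach is to bypass the `pmg` object and apply Newey-West to the time series of per-period coefficients that `pmg` stores (the `indcoef` component, if my reading of the plm internals is right): regress each coefficient series on a constant and use a HAC VCOV on that regression. A sketch on plm's bundled Grunfeld data:

```r
library(plm)
library(sandwich)
library(lmtest)

data("Grunfeld", package = "plm")
# Time index first -> one cross-sectional regression per year (FM)
fpmg <- pmg(inv ~ value, Grunfeld, index = c("year", "firm"))

# $indcoef holds the per-period coefficient estimates (assumption:
# rows = coefficients, columns = periods)
betas <- t(fpmg$indcoef)
for (j in colnames(betas)) {
  fm <- lm(betas[, j] ~ 1)   # mean of the series = the FM estimate
  print(coeftest(fm, vcov = NeweyWest(fm, lag = 3, prewhite = FALSE)))
}
```

The lag length (3 here) is a hypothetical choice; pick it to match the serial correlation you expect in the coefficient series.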

Double clustered standard errors for panel data

戏子无情 submitted on 2019-12-17 18:33:33

Question: I have a panel data set in R (time and cross section) and would like to compute standard errors clustered along two dimensions, because my residuals are correlated both ways. Googling around, I found http://thetarzan.wordpress.com/2011/06/11/clustered-standard-errors-in-r/ which provides a function to do this. It seems a bit ad hoc, so I wanted to know whether there is a package that has been tested and does this. I know sandwich does HAC standard errors, but it doesn't do double clustering (i …
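Later versions of sandwich (2.4-0 and up) added exactly this: `vcovCL` accepts multiple cluster dimensions and implements Cameron-Gelbach-Miller multiway clustering, so the ad-hoc blog function is no longer needed. A sketch on a simulated panel:

```r
library(sandwich)   # needs >= 2.4-0 for multiway vcovCL
library(lmtest)

set.seed(1)
panel <- data.frame(firmid = rep(1:30, each = 10),
                    year   = rep(1:10, times = 30))
panel$x <- rnorm(300)
panel$y <- 0.4 * panel$x + rnorm(300)

fit <- lm(y ~ x, data = panel)

# Two-way (firm and year) clustering via the one-liner formula interface
coeftest(fit, vcov = vcovCL(fit, cluster = ~ firmid + year))
```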