glm

LC50 / LD50 confidence intervals from multiple regression glm with interaction

混江龙づ霸主, submitted on 2019-12-03 16:34:46
I have a quasibinomial glm with two continuous explanatory variables (let's say "LogPesticide" and "LogFood") and an interaction. I would like to calculate the LC50 of the pesticide, with confidence intervals, at different amounts of food (e.g. the minimum and maximum food values). How can this be achieved? Example: First I generate a data set. mydata <- data.frame( LogPesticide = rep(log(c(0, 0.1, 0.2, 0.4, 0.8, 1.6) + 0.05), 4), LogFood = rep(log(c(1, 2, 4, 8)), each = 6) ) set.seed(seed = 16) growth <- function(x, a = 1, K = 1, r = 1) { # Logistic growth function. a = position of turning point …
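One possible approach (a sketch, not part of the original question): with a model of the form cbind(alive, dead) ~ LogPesticide * LogFood, the LC50 at a fixed LogFood solves the linear predictor for zero, LC50 = -(b0 + b2*LogFood) / (b1 + b3*LogFood), and a delta-method interval can be built from vcov(fit). The simulated data and the lc50_at_food() helper below are illustrative stand-ins, not the asker's code.

```r
# Delta-method CI for LC50 at a fixed LogFood, assuming the coefficient
# order (Intercept), LogPesticide, LogFood, LogPesticide:LogFood.
lc50_at_food <- function(fit, logfood, level = 0.95) {
  b <- coef(fit)
  V <- vcov(fit)
  num <- -(b[1] + b[3] * logfood)
  den <-   b[2] + b[4] * logfood
  est <- as.numeric(num / den)
  # gradient of LC50 with respect to the four coefficients
  g  <- c(-1, -est, -logfood, -est * logfood) / den
  se <- sqrt(as.numeric(t(g) %*% V %*% g))
  z  <- qnorm(1 - (1 - level) / 2)
  c(lower = est - z * se, estimate = est, upper = est + z * se)
}

# Illustrative simulated data (the "true" relationship here is made up)
set.seed(16)
d <- data.frame(
  LogPesticide = rep(log(c(0, 0.1, 0.2, 0.4, 0.8, 1.6) + 0.05), 4),
  LogFood      = rep(log(c(1, 2, 4, 8)), each = 6)
)
p <- plogis(-(d$LogPesticide - 0.5 * d$LogFood))
d$alive <- rbinom(nrow(d), 20, p)
d$dead  <- 20 - d$alive

fit <- glm(cbind(alive, dead) ~ LogPesticide * LogFood,
           family = quasibinomial, data = d)
lc50_at_food(fit, logfood = log(1))   # LC50 at the minimum food level
lc50_at_food(fit, logfood = log(8))   # LC50 at the maximum food level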

Confidence Intervals for Lethal Dose (LD) for Logistic Regression in R

青春壹個敷衍的年華, submitted on 2019-12-03 15:16:57
Question: I want to find the lethal dose (LD50) with its confidence interval in R. Other software like Minitab, SPSS, and SAS provide three different versions of such confidence intervals. I could not find such intervals in any R package (I also used the findFn function from the sos package). How can I find such intervals? I coded one type of interval based on the delta method (as I am not sure about its correctness), but would like to use an established function from an R package. Thanks. MWE: dose <- c(10.2, 7.7, 5.1, …
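For the standard single-predictor case, MASS::dose.p() returns the LD50 together with a delta-method standard error, from which a Wald-type 95% CI follows. A sketch using the finney71-style data from the question's MWE (note that dose-response fits are often done on a log-dose scale; that transformation is omitted here):

```r
library(MASS)  # for dose.p()

dose     <- c(10.2, 7.7, 5.1, 3.8, 2.6, 0)
total    <- c(50, 49, 46, 48, 50, 49)
affected <- c(44, 42, 24, 16, 6, 0)

fit <- glm(cbind(affected, total - affected) ~ dose, family = binomial)

# dose.p() returns the dose at p = 0.5 plus its SE (delta method)
ld50 <- dose.p(fit, p = 0.5)
est  <- as.numeric(ld50)
se   <- as.numeric(attr(ld50, "SE"))

# Wald-type 95% CI on the dose scale
c(lower = est - 1.96 * se, estimate = est, upper = est + 1.96 * se)
```

This is the same delta-method construction the asker coded by hand; dose.p() is the established packaged version of it.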

glmer - predict with binomial data (cbind count data)

感情迁移, submitted on 2019-12-03 15:03:23
I am trying to predict values over time (Day on the x axis) for a glmer model fitted to my binomial data. Total.Alive and Total.Dead are count data. This is my model, with the corresponding steps below. full.model.dredge <- glmer(cbind(Total.Alive, Total.Dead) ~ (CO2.Treatment + Lime.Treatment + Day)^3 + (Day|Container) + (1|index), data = Survival.data, family = "binomial") We have accounted for overdispersion, as you can see in the code ((1|index) is an observation-level random effect). We then use the dredge command to determine the best-fitting models with the main effects (CO2.Treatment, Lime.Treatment, Day) and their corresponding interactions.
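The usual pattern for this (a sketch with simulated stand-in data, since Survival.data and full.model.dredge exist only in the question) is to build a newdata grid over Day and call predict() with re.form = NA so the random effects are dropped, giving population-level predicted proportions:

```r
library(lme4)

# Simulated stand-in for the asker's Survival.data
set.seed(1)
d <- expand.grid(Day = 1:10, Container = factor(1:8))
d$Total.Alive <- rbinom(nrow(d), 20, plogis(1 - 0.1 * d$Day))
d$Total.Dead  <- 20 - d$Total.Alive

m <- glmer(cbind(Total.Alive, Total.Dead) ~ Day + (1 | Container),
           data = d, family = binomial)

# re.form = NA ignores the random effects: population-level predictions
newdat <- data.frame(Day = seq(1, 10, length.out = 50))
newdat$phat <- predict(m, newdata = newdat, re.form = NA, type = "response")
```

The same call works on the full dredged model, provided newdata supplies every fixed-effect covariate (CO2.Treatment, Lime.Treatment, Day).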

How to plot interaction effects from extremely large data sets (esp. from rxGlm output)

醉酒当歌, submitted on 2019-12-03 11:17:00
I am currently computing glm models on a huge data set. Both glm and even speedglm take days to compute. I currently have around 3M observations and altogether 400 variables, only some of which are used in the regression. In my regression I use 4 integer independent variables ( iv1 , iv2 , iv3 , iv4 ), 1 binary independent variable as a factor ( iv5 ), and the interaction term ( x * y , where x is an integer and y is a binary dummy variable as a factor). Finally, I have fixed effects along years ff1 and company ids ff2 . I have 15 years and 3000 companies. I have introduced the fixed effects …
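Whatever the fitting engine, the interaction plot itself never needs the 3M raw points: evaluate the fitted model on a small grid of x and y values (holding other covariates fixed) and plot the predictions. A sketch with simulated stand-in data; the names x, y and the glm call are assumptions in place of the asker's rxGlm fit:

```r
set.seed(1)
# Small stand-in: x integer, y binary factor, as described in the question
de <- data.frame(x = sample(1:10, 5000, replace = TRUE),
                 y = factor(rbinom(5000, 1, 0.5)))
de$out <- rbinom(5000, 1, plogis(-1 + 0.1 * de$x + 0.15 * de$x * (de$y == "1")))

fit <- glm(out ~ x * y, data = de, family = binomial)

# Evaluate the fitted interaction on a tiny grid instead of the raw data
grid <- expand.grid(x = 1:10, y = factor(c(0, 1)))
grid$phat <- predict(fit, newdata = grid, type = "response")
interaction.plot(grid$x, grid$y, grid$phat,
                 xlab = "x", ylab = "fitted P(out)", trace.label = "y")
```

For an rxGlm object the analogous step would be scoring the same small grid with its own predict method rather than base predict().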

R probit regression marginal effects

。_饼干妹妹, submitted on 2019-12-03 08:43:10
I am using R to replicate a study and obtain mostly the same results the author reported. At one point, however, I calculate marginal effects that seem unrealistically small. I would greatly appreciate it if you could look at my reasoning and the code below and see whether I am mistaken at some point. My sample contains 24535 observations, the dependent variable "x028bin" is a binary variable taking on the values 0 and 1, and there are furthermore 10 explanatory variables. Nine of those independent variables have numeric levels; the independent variable "f025grouped" is a factor …
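A common sanity check here (a sketch with simulated data, not the study's variables): for a probit model, the average marginal effect of a continuous regressor is mean(dnorm(linear predictor)) times its coefficient. If one instead multiplies by dnorm evaluated at an extreme point, the effects come out implausibly small.

```r
set.seed(1)
d <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
d$y <- rbinom(500, 1, pnorm(0.5 + 0.8 * d$x1 - 0.3 * d$x2))

fit <- glm(y ~ x1 + x2, data = d, family = binomial(link = "probit"))

# Average marginal effects for the continuous regressors:
# average the normal density over the sample's linear predictors
ame <- mean(dnorm(predict(fit, type = "link"))) * coef(fit)[-1]
ame
```

For a factor like f025grouped, the analogous quantity is a discrete change: the average difference in pnorm(xb) between the level of interest and the baseline, not a density-times-coefficient product.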

Extract pvalue from glm

梦想与她, submitted on 2019-12-03 08:14:18
Question: I'm running many regressions and am only interested in the effect on the coefficient and p-value of one particular variable. So, in my script, I'd like to be able to extract just the p-value from the glm summary (getting the coefficient itself is easy). The only way I know of to view the p-value is using summary(myReg). Is there some other way? e.g.: fit <- glm(y ~ x1 + x2, myData) x1Coeff <- fit$coefficients[2] # only returns coefficient, of course x1pValue <- ??? I've tried treating fit …
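The summary object itself is indexable: coef(summary(fit)) is the coefficient matrix, and its fourth column holds the p-values. A sketch with simulated data standing in for myData:

```r
set.seed(1)
myData <- data.frame(x1 = rnorm(100), x2 = rnorm(100))
myData$y <- rbinom(100, 1, plogis(0.5 * myData$x1))

fit <- glm(y ~ x1 + x2, data = myData, family = binomial)

# coef(summary(fit)) is a matrix: Estimate, Std. Error, z value, Pr(>|z|)
x1Coeff  <- coef(summary(fit))["x1", 1]
x1pValue <- coef(summary(fit))["x1", 4]
```

Indexing by row name ("x1") rather than position keeps the extraction correct even if the model formula changes; column 4 is the p-value whether the test column is labelled Pr(>|z|) (binomial) or Pr(>|t|) (gaussian).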

MCMCglmm multinomial model in R

你离开我真会死。, submitted on 2019-12-03 08:10:41
I'm trying to create a model using the MCMCglmm package in R. The data are structured as follows, where dyad, focal, and other are all random effects, predict1-2 are predictor variables, and resp1-5 are outcome variables that capture the number of observed behaviors of different subtypes:

  dyad  focal  other  r    present  village  resp1 resp2 resp3 resp4 resp5
1 10101 14302  0.5    3    1        0        0     4     0     5
2 10405 11301  0.0    5    0        0        0     1     0     1
…

So a model with only one outcome (teaching) is as follows: prior_overdisp_i <- list(R = list(V = diag(2), nu = 0.08, fix = 2), G = list(G1 = list(V = 1, nu = 0.08), G2 = list(V = 1, nu = 0.08), G3 = list(V = 1, nu = 0.08 …
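A heavily hedged sketch (not run, and the prior dimensions in particular should be checked against the MCMCglmm course notes): a five-category multinomial response in MCMCglmm binds the count columns on the left-hand side and uses the "multinomial5" family, with k - 1 = 4 latent traits in the residual structure.

```r
# Sketch only: data object "mydata" and exact prior settings are assumptions
library(MCMCglmm)

prior <- list(
  R = list(V = diag(4), nu = 0.08),        # 4 latent traits for 5 categories
  G = list(G1 = list(V = 1, nu = 0.08),
           G2 = list(V = 1, nu = 0.08),
           G3 = list(V = 1, nu = 0.08))
)

m <- MCMCglmm(cbind(resp1, resp2, resp3, resp4, resp5) ~ trait + predict1,
              random = ~ dyad + focal + other,
              rcov   = ~ us(trait):units,
              family = "multinomial5",
              prior  = prior, data = mydata)
```

The trait keyword lets the fixed and random formulas vary by response category; whether the random effects should also be trait-structured (e.g. idh(trait):dyad) depends on the biological question.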

How to save glm result without data or only with coeffients for prediction?

て烟熏妆下的殇ゞ, submitted on 2019-12-03 07:28:07
When I use the following R code, model_glm <- glm(V1 ~ ., data = xx, family = "binomial"); save(file = "modelfile", model_glm) the size of modelfile will be as large as the data, which is about 1 GB in my case. How can I remove the data part from the result of model_glm, so that I only save a small file? Setting model = FALSE in your call to glm should prevent the model frame from being returned. Also, setting y = FALSE will prevent the response vector from being returned. x = FALSE is the default setting and prevents the model matrix from being returned. This combination should shrink the size of your glm …
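Putting the advice above together (a sketch with simulated data in place of xx; the environment-stripping step is an additional assumption, since a formula can capture the whole calling frame when saved):

```r
# Simulated stand-in for the asker's 1 GB data frame xx
set.seed(1)
xx <- data.frame(V1 = rbinom(200, 1, 0.5), V2 = rnorm(200), V3 = rnorm(200))

# model/x/y = FALSE drop the model frame, model matrix, and response
model_glm <- glm(V1 ~ ., data = xx, family = "binomial",
                 model = FALSE, x = FALSE, y = FALSE)

# Re-point captured environments so save() does not drag the data along
environment(model_glm$formula) <- globalenv()
attr(model_glm$terms, ".Environment") <- globalenv()

save(model_glm, file = "modelfile")

# predict() still works as long as newdata supplies the covariates
predict(model_glm, newdata = data.frame(V2 = rnorm(5), V3 = rnorm(5)),
        type = "response")
```

If only prediction is needed, an even smaller alternative is to save coef(model_glm) alone and compute plogis(X %*% beta) by hand.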

Difference between glmnet() and cv.glmnet() in R?

浪子不回头ぞ, submitted on 2019-12-03 06:05:27
I'm working on a project that would show the potential influence a group of events has on an outcome. I'm using the glmnet() package, specifically its Poisson family. Here's my code: # de <- data imported from sql connection x <- model.matrix(~., data = de[,2:7]) y <- (de[,1]) reg <- cv.glmnet(x, y, family = "poisson", alpha = 1) reg1 <- glmnet(x, y, family = "poisson", alpha = 1) **Co <- coef(?reg or reg1?, s=???)** summ <- summary(Co) c <- data.frame(Name = rownames(Co)[summ$i], Lambda = summ$x) c2 <- c[with(c, order(-Lambda)), ] The beginning imports a large amount of data from my …
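The short answer (a sketch with simulated data in place of the SQL import): glmnet() fits the whole regularization path, while cv.glmnet() additionally cross-validates to pick a lambda. Only the cv object carries the selected values, so extract coefficients from it at s = "lambda.min" (or "lambda.1se"):

```r
library(glmnet)

# Simulated stand-in for the asker's data frame de
set.seed(1)
x <- matrix(rnorm(100 * 5), 100, 5)
y <- rpois(100, exp(0.3 * x[, 1]))

reg  <- cv.glmnet(x, y, family = "poisson", alpha = 1)  # cross-validated
reg1 <- glmnet(x, y, family = "poisson", alpha = 1)     # full path only

# Use the cv object: it knows which lambda the CV selected
Co <- coef(reg, s = "lambda.min")   # or s = "lambda.1se" for a sparser model
```

coef(reg1, s = ...) also works, but then the lambda value must be supplied by hand, which is exactly what cross-validation was run to avoid.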
