regression

NLS Function - Number of Iterations Exceeds max

Submitted by 浪子不回头ぞ on 2019-12-11 07:57:49
Question: I have a dataset that looks like this:

dput(testing1)
structure(list(x = c(0, 426.263081392053, 852.526162784105, 1278.78924417616,
1705.05232556821, 2131.31540696026, 2557.57848835232, 2983.84156974437,
3410.10465113642, 3836.36773252847, 4262.63081392053, 4688.89389531258,
5115.15697670463, 5541.42005809668, 5967.68313948874, 6393.94622088079,
6820.20930227284, 7246.4723836649, 7672.73546505695, 8098.998546449,
8525.26162784105, 8951.52470923311, 9377.78779062516, 9804.05087201721, 10230 …
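The excerpt cuts off before the model call, but the error in the title is `nls()` stopping at its default cap of 50 iterations, which `nls.control(maxiter = ...)` raises. A minimal sketch on made-up data and a made-up saturating model (the question's own formula and start values are not in the excerpt):

```r
# Hypothetical saturating-growth model; stands in for the question's own fit.
set.seed(1)
df <- data.frame(x = seq(0, 10000, length.out = 50))
df$y <- 100 * (1 - exp(-df$x / 2000)) + rnorm(50, sd = 2)

fit <- nls(y ~ a * (1 - exp(-x / b)),
           data    = df,
           start   = list(a = 90, b = 1500),
           control = nls.control(maxiter = 500))  # raise the default cap of 50
coef(fit)
```

If raising `maxiter` only postpones the error, poor start values are usually the real problem.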

Run lm with multiple responses and weights

Submitted by 眉间皱痕 on 2019-12-11 07:39:02
Question: I have to fit a linear model with the same model matrix to multiple responses. This can be done easily in R by specifying the response as a matrix instead of a vector, and computation is very fast that way. Now I would also like to add weights to the model that correspond to the accuracy of the responses, so for each response vector I would need a different weight vector.
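`lm()` accepts a matrix response but only a single weights vector shared across all responses, so one workaround is a weighted fit per response column via `lm.wfit()` on the shared model matrix. A sketch on simulated data (all sizes, names, and weights here are placeholders):

```r
# One weighted least-squares fit per response column, sharing the model matrix X.
set.seed(1)
n <- 100; m <- 3
X <- cbind(1, rnorm(n))                         # shared model matrix (intercept + slope)
Y <- X %*% matrix(rnorm(2 * m), 2, m) +
     matrix(rnorm(n * m), n, m)                 # n x m matrix of responses
W <- matrix(runif(n * m), n, m)                 # one weight vector per response

coefs <- sapply(seq_len(m), function(j)
  lm.wfit(x = X, y = Y[, j], w = W[, j])$coefficients)
coefs                                           # one column of estimates per response
```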

Calculating Multivariate regression using TensorFlow

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-11 07:06:01
Question: I am trying to implement a multivariate regression in TensorFlow, where I have 192 examples with 6 features and one output variable. From my model I get a matrix of shape (192, 6) when it should be (192, 1). Does anybody know what is wrong with my code? I have provided it below.

# Parameters
learning_rate = 0.0001
training_epochs = 50
display_step = 5

train_X = Data_ABX3[0:192, 0:6]
train_Y = Data_ABX3[0:192, [24]]

# placeholders for a tensor that will be always fed.
X = tf.placeholder('float', shape …
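The excerpt stops before the weights and prediction are defined, but a (192, 6) output typically means the weights were broadcast element-wise instead of matrix-multiplied. A hedged TensorFlow 1.x sketch with random stand-in data (Data_ABX3 is not available here): with W shaped (6, 1), `tf.matmul(X, W)` yields the expected (192, 1).

```python
import numpy as np
import tensorflow as tf

# Random arrays stand in for Data_ABX3.
train_X = np.random.rand(192, 6).astype(np.float32)
train_Y = np.random.rand(192, 1).astype(np.float32)

X = tf.placeholder(tf.float32, shape=[None, 6])
Y = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.zeros([6, 1]))
b = tf.Variable(tf.zeros([1]))

pred = tf.matmul(X, W) + b                      # (None, 1), not (None, 6)
cost = tf.reduce_mean(tf.square(pred - Y))
step = tf.train.GradientDescentOptimizer(0.0001).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(50):                          # training_epochs
        sess.run(step, feed_dict={X: train_X, Y: train_Y})
    print(sess.run(pred, feed_dict={X: train_X}).shape)  # (192, 1)
```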

How to match a data frame of variable names and another with data for a regression?

Submitted by 与世无争的帅哥 on 2019-12-11 07:05:10
Question: I have two data frames:

x = data.frame(Var1 = c("A", "B", "C", "D", "E"),
               Var2 = c("F", "G", "H", "I", "J"),
               Value = c(11, 12, 13, 14, 18))
y = data.frame(A = c(11, 12, 13, 14, 18), B = c(15, 16, 17, 14, 18),
               C = c(17, 22, 23, 24, 18), D = c(11, 12, 13, 34, 18),
               E = c(11, 5, 13, 55, 18),  F = c(8, 12, 13, 14, 18),
               G = c(7, 5, 13, 14, 18),   H = c(8, 12, 13, 14, 18),
               I = c(9, 5, 13, 14, 18),   J = c(11, 12, 13, 14, 18))
Var3 <- rep("time", each = length(x$Var1))
x = cbind(x, Var3)
time = seq(1:length(y[,1]))
y = cbind(y, time)

> x
  Var1 Var2 …
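The excerpt ends before the regression itself is specified, so the following is one plausible reading only: look up the columns of y named in each row of x and fit one model per pair. A sketch, assuming each Var1 series is regressed on its Var2 partner:

```r
# Frames copied from the question; the per-pair model is an assumption.
x <- data.frame(Var1 = c("A", "B", "C", "D", "E"),
                Var2 = c("F", "G", "H", "I", "J"),
                Value = c(11, 12, 13, 14, 18))
y <- data.frame(A = c(11, 12, 13, 14, 18), B = c(15, 16, 17, 14, 18),
                C = c(17, 22, 23, 24, 18), D = c(11, 12, 13, 34, 18),
                E = c(11, 5, 13, 55, 18),  F = c(8, 12, 13, 14, 18),
                G = c(7, 5, 13, 14, 18),   H = c(8, 12, 13, 14, 18),
                I = c(9, 5, 13, 14, 18),   J = c(11, 12, 13, 14, 18))

# One lm() per (Var1, Var2) pair, looked up by column name.
fits <- Map(function(v1, v2) lm(y[[v1]] ~ y[[v2]]),
            as.character(x$Var1), as.character(x$Var2))
lapply(fits, coef)
```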

Total Least squares regression without intercept, with R

Submitted by 别来无恙 on 2019-12-11 07:00:56
Question: I need to calculate the beta of a regression between two prices with:

- no intercept
- a total least squares estimate

In R there is the function prcomp to perform it. After that, how can I extract the beta? The code is:

library(quantmod)
# how to get closes
getCloses <- function(sym) {
  ohlc <- getSymbols(sym, from = "2009-01-01", to = "2011-01-01",
                     auto.assign = FALSE, return.class = "zoo")
  Cl(ohlc)
}
# how to import data (2 assets)
closes <- merge(IWM = getCloses("IWM"),
                VXZ = getCloses("VXZ"), all = FALSE) …
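For reference, a sketch of the usual way to pull a no-intercept TLS slope out of `prcomp()`: with `center = FALSE` the first principal axis passes through the origin, and the slope is the ratio of its loadings. Simulated prices stand in for the IWM/VXZ closes:

```r
set.seed(1)
p1 <- 50 + cumsum(rnorm(250, sd = 0.5))   # stand-in price series
p2 <- 1.3 * p1 + rnorm(250, sd = 0.8)

# center = FALSE keeps the first principal axis through the origin,
# i.e. a TLS line with no intercept; the slope is the loading ratio.
pca  <- prcomp(cbind(p1, p2), center = FALSE)
beta <- pca$rotation[2, 1] / pca$rotation[1, 1]  # TLS slope of p2 on p1
beta
```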

scikit-learn Decision trees Regression: retrieve all samples for leaf (not mean)

Submitted by 末鹿安然 on 2019-12-11 06:51:58
Question: I have started using scikit-learn decision trees, and so far they are working out quite well, but one thing I need to do is retrieve the set of sample Y values for the leaf node, especially when running a prediction. That is, given an input feature vector X, I want to know the set of corresponding Y values at the leaf node instead of just the regression value, which is the mean (or median) of those values. Of course one would want the sample mean to have a small variance, but I do want to extract the …
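One standard approach (not confirmed as the asker's eventual solution): `DecisionTreeRegressor.apply()` returns the leaf index of each sample, so grouping the training targets by leaf recovers every y value behind a prediction. A sketch on synthetic data:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X_train = rng.rand(200, 3)                       # synthetic stand-in data
y_train = 5 * X_train[:, 0] + 0.1 * rng.randn(200)

tree = DecisionTreeRegressor(min_samples_leaf=5).fit(X_train, y_train)

# Map each leaf id to all training y values that fell into it.
train_leaves = tree.apply(X_train)
leaf_values = {leaf: y_train[train_leaves == leaf]
               for leaf in np.unique(train_leaves)}

x_new = rng.rand(1, 3)
leaf = tree.apply(x_new)[0]
print(leaf_values[leaf])         # the full sample set at that leaf
print(tree.predict(x_new)[0])    # its mean, the usual regression output
```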

polr(..) ordinal logistic regression in R

Submitted by 荒凉一梦 on 2019-12-11 06:44:40
Question: I'm experiencing some trouble when using the polr function. Here is a subset of the data I have:

# response variable
rep = factor(c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04,
               0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03,
               0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06,
               0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02,
               0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06, 0.03))
# "rep" is a discrete variable which represents a proportion, so that it …
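The excerpt ends mid-comment, but a frequent cause of `polr()` trouble is that the response must be an ordered factor, not a factor of raw proportions. A sketch using the question's response values, with illustrative cut points and a made-up predictor x:

```r
library(MASS)

# The question's response values (proportions).
rep_vals <- c(0.00, 0.04, 0.06, 0.13, 0.15, 0.05, 0.07, 0.00, 0.06, 0.04,
              0.05, 0.00, 0.92, 0.95, 0.95, 1, 0.97, 0.06, 0.06, 0.03,
              0.03, 0.08, 0.07, 0.04, 0.08, 0.03, 0.07, 0.05, 0.05, 0.06,
              0.04, 0.04, 0.08, 0.04, 0.04, 0.04, 0.97, 0.03, 0.04, 0.02,
              0.04, 0.01, 0.06, 0.06, 0.07, 0.08, 0.05, 0.03, 0.06, 0.03)
set.seed(1)
x <- rnorm(length(rep_vals))   # stand-in predictor; none survives the excerpt

# polr() needs an ordered factor; these cut points are purely illustrative.
resp <- cut(rep_vals, breaks = c(-Inf, 0.05, 0.5, Inf),
            labels = c("low", "mid", "high"), ordered_result = TRUE)

fit <- polr(resp ~ x, Hess = TRUE)
summary(fit)
```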

decreasing coefficients in R's coefplot?

Submitted by 和自甴很熟 on 2019-12-11 06:39:14
Question: coefplot from library(coefplot) has an argument decreasing which, when set to TRUE, should plot the coefficients in descending order. But when I run a toy example:

data(tips, package = "reshape2")
mod1 <- lm(tip ~ day + sex + smoker, data = tips)
coefplot.glm(mod1, decreasing = TRUE)

the coefficients aren't in descending order. What am I missing?

EDIT: I was missing sort = "magnitude". However, this doesn't work with multiplot:

data(tips, package = "reshape2")
mod1 <- lm(tip ~ day + sex …
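Putting the question's own EDIT to work: `sort = "magnitude"` is what actually orders the coefficients, and `decreasing` then sets the direction. A minimal sketch with the same toy model (whether `multiplot()` supports this is the unresolved part of the question):

```r
library(coefplot)

data(tips, package = "reshape2")
mod1 <- lm(tip ~ day + sex + smoker, data = tips)

# sort = "magnitude" does the ordering; decreasing controls its direction.
coefplot(mod1, sort = "magnitude", decreasing = TRUE)
```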

Calculate and compare coefficient estimates from a regression interaction for each group

Submitted by 为君一笑 on 2019-12-11 06:37:19
Question: A) I am interested in the effects of a continuous variable (Var1) on a continuous dependent variable (DV), conditional on four different groups defined by two binary variables (Dummy1 and Dummy2). I thus run a three-way interaction:

Var1 <- sample(0:10, 100, replace = T)
Dummy1 <- sample(c(0,1), 100, replace = T)
Dummy2 <- sample(c(0,1), 100, replace = T)
DV <- 2*Var1 + Var1*Dummy1 + 2*Var1*Dummy2 + 10*Var1*Dummy1*Dummy2 + rnorm(100)
fit <- lm(DV ~ Var1*Dummy1*Dummy2)

I …
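A sketch of one way to get and compare the four group-specific slopes: each is a linear combination of the interaction coefficients, and `emtrends()` from the emmeans package (an assumption; the package is not mentioned in the excerpt) computes and contrasts them directly.

```r
# Assumes the emmeans package; the data-generating code mirrors the question.
library(emmeans)

set.seed(1)
d <- data.frame(Var1   = sample(0:10, 100, replace = TRUE),
                Dummy1 = sample(c(0, 1), 100, replace = TRUE),
                Dummy2 = sample(c(0, 1), 100, replace = TRUE))
d$DV <- with(d, 2*Var1 + Var1*Dummy1 + 2*Var1*Dummy2 +
                10*Var1*Dummy1*Dummy2 + rnorm(100))
d$Dummy1 <- factor(d$Dummy1)   # factors so emtrends can split by group
d$Dummy2 <- factor(d$Dummy2)

fit    <- lm(DV ~ Var1 * Dummy1 * Dummy2, data = d)
trends <- emtrends(fit, ~ Dummy1 * Dummy2, var = "Var1")
trends          # slope of Var1 in each of the four groups
pairs(trends)   # pairwise differences between those slopes
```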

Loop to implement Leave-One-Out observation and run glm, one variable at a time

Submitted by 你离开我真会死。 on 2019-12-11 05:50:11
Question: I have a data frame with 96 observations and 1106 variables. I would like to run logistic regression on the observations, leaving one out at a time. (So the first set of observations would have 95 total with the first observation removed, the second set would have 95 total with the second observation removed, and so forth, giving 96 sets of observations that each have one observation left out.) In addition, I would like to run each set of these …
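A sketch of the double loop the question describes, leave-one-out by observation crossed with one predictor at a time, on a small synthetic frame (96 × 1106 works the same way, just slower; `outcome` is an assumed binary response column):

```r
set.seed(1)
n <- 96; p <- 10                     # p = 1106 in the real data
dat <- as.data.frame(matrix(rnorm(n * p), n, p))
dat$outcome <- rbinom(n, 1, 0.5)     # assumed binary response

# For each predictor, fit a univariate logistic regression n times,
# each time leaving one observation out.
vars <- setdiff(names(dat), "outcome")
fits <- setNames(lapply(vars, function(v)
  lapply(seq_len(n), function(i)
    glm(reformulate(v, response = "outcome"),
        data = dat[-i, ], family = binomial))), vars)

coef(fits[["V1"]][[3]])   # e.g. V1's model with observation 3 left out
```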