regression

Speed-up inverse calculation of weighted least squares mean estimate in R

Submitted by 末鹿安然 on 2019-12-24 21:09:08
Question: I need to speed up the calculation of the mean estimate of beta in a WLS in R. I was able to speed up the covariance calculation thanks to SO, and now I am wondering whether there is another trick to speed up the mean calculation as well (or whether what I am doing is already efficient enough).

n = 10000
y = rnorm(n, 3, 0.4)
X = matrix(c(rnorm(n, 1, 2), sample(c(1, -1), n, replace = TRUE), rnorm(n, 2, 0.5)), nrow = n, ncol = 3)
Q = diag(rnorm(n, 1.5, 0.3))
wls.cov.matrix = crossprod(X / sqrt(diag(Q)))
Q.inv =
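The mean estimate solves the weighted normal equations, and the same trick used for the covariance applies: weight each row by 1/sqrt(q) and solve the system directly instead of inverting anything. A sketch in Python/NumPy (simulated data mirroring the question's setup; this is an illustration, not the asker's exact code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
y = rng.normal(3, 0.4, n)
X = np.column_stack([rng.normal(1, 2, n),
                     rng.choice([1.0, -1.0], n),
                     rng.normal(2, 0.5, n)])
q = rng.normal(1.5, 0.3, n)          # diagonal of Q; never build the n x n matrix

# Weight rows by 1/sqrt(q) so that Xw' Xw = X' Q^{-1} X and Xw' yw = X' Q^{-1} y
Xw = X / np.sqrt(q)[:, None]
yw = y / np.sqrt(q)

# Solve the normal equations; solve() is cheaper and more stable than inv()
beta = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)
```

The key savings are avoiding `diag(Q)` as a dense n-by-n matrix and avoiding an explicit matrix inverse.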

How to find and plot the local maxima of a polynomial regression curve in R?

Submitted by 梦想与她 on 2019-12-24 16:56:07
Question: I have yield data for a farm (the dependent variable) and various nutrients which serve as predictors. I have performed univariate (cubic) linear regression using lm(y ~ poly(x, 3)). Then I plotted the predictor variable (P) against the yield and added a best-fit curve (Figure 1). How do I then find the local maxima of this curve, and add a point to my plot which includes the value of this fitted yield (Figure 2)? I have looked into the findPeaks() function from the quantmod package, but have
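A cubic fit has a closed-form derivative, so the local maxima can be found analytically: they are the real roots of the first derivative at which the second derivative is negative. A sketch in Python/NumPy (simulated data standing in for the yield/nutrient values):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 60)
y = -(x - 3) * (x - 6) * (x - 9) + rng.normal(0, 2, 60)  # cubic with one interior max

# Fit a cubic: y = c3*x^3 + c2*x^2 + c1*x + c0
p = np.poly1d(np.polyfit(x, y, 3))

# Critical points: real roots of p'; keep those where p'' < 0 (local maxima)
crit = p.deriv().roots
crit = crit[np.isreal(crit)].real
maxima = [r for r in crit if p.deriv(2)(r) < 0]

peak_x = max(maxima, key=p)          # x-location of the highest local maximum
peak_y = p(peak_x)                   # fitted yield at the peak, ready to plot
```

The point (peak_x, peak_y) can then be added to the plot directly; no peak-search over a grid (as findPeaks would do) is needed for a polynomial fit.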

Neural Network for regression

Submitted by 好久不见. on 2019-12-24 16:37:06
Question: The way I understand regression for neural networks is that weights are applied to each x-input from the dataset. I want something slightly different: I want weights added to the function that computes each x-input; we'll call these s-inputs. The function that computes the x-inputs is a summation over all s-inputs, and I want each s-input to have its own weight. I say regression because I want the end result to be a beautiful continuous function for the mapping x -> y ...but that is
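What the question describes — each x-input computed as a weighted sum of s-inputs, with each s-input carrying its own weight — is exactly an extra linear layer placed in front of the network. A minimal NumPy sketch (sizes and names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_s, n_x = 5, 3                      # 5 s-inputs combined into 3 x-inputs

# Layer 1: each x-input is a weighted sum of ALL s-inputs, with its own column
# of weights -- this is the "function that computes each x-input"
W1 = rng.normal(size=(n_s, n_x))

# Layer 2: an ordinary regression head on the x-inputs
w2 = rng.normal(size=n_x)

def forward(s):
    x = s @ W1                       # weighted summation of s-inputs
    return x @ w2                    # continuous mapping s -> x -> y

s_batch = rng.normal(size=(10, n_s))
y_hat = forward(s_batch)
```

One caveat worth noting: without a nonlinearity between the two layers, the composition collapses into a single linear map (s @ (W1 @ w2)), so in practice an activation such as tanh or ReLU goes between them to make the extra layer do real work.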

R squared and adjusted R squared with one predictor

Submitted by 梦想与她 on 2019-12-24 15:43:02
Question: Using the following to estimate the coefficient of determination in MATLAB:

load hospital
y = hospital.BloodPressure(:,1);
X = double(hospital(:,2:5));
X2 = X(:,3);
mdl = fitlm(X2,y);

Estimated Coefficients:

               Estimate    SE          tStat     pValue
(Intercept)    116.72      3.9389      29.633    1.0298e-50
x1             0.039357    0.025208    1.5613    0.12168

Number of observations: 100, Error degrees of freedom: 98
Root Mean Squared Error: 6.66
R-squared: 0.0243, Adjusted R-Squared: 0.0143
F-statistic vs.
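The two numbers in the output are linked by the standard adjustment formula, adjusted R² = 1 − (1 − R²)(n − 1)/(n − p − 1), where n is the number of observations and p the number of predictors. Plugging in the values reported above reproduces MATLAB's figure:

```python
n, p = 100, 1            # observations and predictors, from the fitlm output
r2 = 0.0243              # reported R-squared

# Adjusted R-squared penalises R-squared for the degrees of freedom used
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
print(round(adj_r2, 4))  # → 0.0143, matching the reported Adjusted R-Squared
```

With a single predictor the penalty factor (n − 1)/(n − 2) is close to 1, which is why the two values differ only slightly here.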

R: No way to get double-clustered standard errors for an object of class “c('pmg', 'panelmodel')”?

Submitted by 一世执手 on 2019-12-24 13:22:29
Question: I am estimating a Fama-MacBeth regression. I have taken the code from this site:

fpmg <- pmg(Mumbo ~ Jumbo, test, index = c("year", "firmid"))
summary(fpmg)

Mean Groups model

Call: pmg(formula = Mumbo ~ Jumbo, data = superfdf, index = c("day", "Firm"))

Residuals
      Min.    1st Qu.     Median       Mean    3rd Qu.       Max.
 -0.142200  -0.006930   0.000000   0.000000   0.006093   0.142900

Coefficients
               Estimate    Std. Error   z-value   Pr(>|z|)
(Intercept)  -3.0114e-03   3.7080e-03   -0.8121     0.4167
Jumbo         4.9434e-05   3.4309e-04    0.1441     0.8854

Total
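For reference, the mean-groups estimator that pmg computes is the Fama-MacBeth two-step: run a cross-sectional regression per period, then average the per-period slopes and take the standard error of that time-series mean. A bare-bones Python/NumPy sketch on a simulated panel (names echo the question; double-clustered standard errors would need a different variance estimator layered on top of this, which is exactly what pmg does not expose):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 20, 50                                     # periods, firms
betas = []
for t in range(T):
    jumbo = rng.normal(size=N)                    # regressor in period t
    mumbo = 0.5 * jumbo + rng.normal(size=N)      # outcome in period t
    X = np.column_stack([np.ones(N), jumbo])
    b, *_ = np.linalg.lstsq(X, mumbo, rcond=None)  # cross-sectional OLS
    betas.append(b)

betas = np.array(betas)                           # T x 2 per-period estimates
coef = betas.mean(axis=0)                         # Fama-MacBeth point estimate
se = betas.std(axis=0, ddof=1) / np.sqrt(T)       # SE of the time-series mean
```

The Fama-MacBeth SE already absorbs cross-sectional correlation within each period; what it does not handle is serial correlation within firms, which is where double clustering comes in.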

Linear Regression in R: “Error in eval(expr, envir, enclos) : object not found”

Submitted by 末鹿安然 on 2019-12-24 10:50:03
Question: I'm trying to do a simple least-squares regression in R and have been getting errors constantly. This is really frustrating; can anyone point out what I am doing wrong? First I attach the dataset (17 variables, 440 observations, each observation on a single line, no column titles). Here, I get a "masked" message. From what I've read, the "masked" message appears when object names overlap. However, I am not using any packages beyond the defaults, and I loaded a new workspace image before this. Not sure
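The usual fix for this class of error is to stop relying on attach() putting column names into the search path, and instead give the headerless data explicit column names and reference every variable through the data object. The same habit, sketched in Python with hypothetical two-column data:

```python
import csv
import io

# Headerless data: name the columns explicitly rather than depending on
# names being "attached" to the global environment -- the attach()/masking
# pattern behind R's "object not found" error
raw = "1.2,3.4\n2.2,4.1\n3.1,5.0\n"
cols = ["x", "y"]
rows = [dict(zip(cols, map(float, r))) for r in csv.reader(io.StringIO(raw))]
x = [r["x"] for r in rows]
y = [r["y"] for r in rows]

# Simple least-squares slope computed from the named columns
xm, ym = sum(x) / len(x), sum(y) / len(y)
slope = sum((a - xm) * (b - ym) for a, b in zip(x, y)) / sum((a - xm) ** 2 for a in x)
```

In R terms the equivalent habit is `df <- read.table(file, col.names = ...)` followed by `lm(y ~ x, data = df)`, with no attach() at all.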

Linear regression with `lm()`: prediction interval for aggregated predicted values

Submitted by 故事扮演 on 2019-12-24 10:47:21
Question: I'm using predict.lm(fit, newdata = newdata, interval = "prediction") to get predictions and their prediction intervals (PIs) for new observations. Now I would like to aggregate (sum and mean) these predictions and their PIs based on an additional variable (i.e. a spatial aggregation at the zip-code level of predictions for single households). I learned from StackExchange that you cannot aggregate the prediction intervals of single predictions just by aggregating the limits of the prediction
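The reason the limits cannot simply be summed: for a sum of m new predictions, the coefficient uncertainty is shared across all m observations (full covariance term), while the m error terms add independently — so the variance of the sum is 1ᵀXₙΣ_βXₙᵀ1 + mσ², not the sum of squared half-widths. A Python/NumPy sketch on simulated data (a normal quantile stands in for the exact t quantile):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, n)

# OLS fit with the usual variance estimates
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta
s2 = resid @ resid / (n - X.shape[1])            # error variance estimate
cov_beta = s2 * np.linalg.inv(X.T @ X)           # coefficient covariance

# m new observations to aggregate (e.g. one zip code of households)
Xn = np.column_stack([np.ones(m), rng.normal(size=m)])
pred_sum = (Xn @ beta).sum()

# Variance of the SUM: shared coefficient uncertainty + m independent errors
ones = np.ones(m)
var_sum = ones @ Xn @ cov_beta @ Xn.T @ ones + m * s2
half = 1.96 * np.sqrt(var_sum)                   # normal approx to the t quantile
lo, hi = pred_sum - half, pred_sum + half
```

For the mean instead of the sum, divide pred_sum by m and var_sum by m²; either way the interval is narrower than what naive limit-summing would give, because the independent error terms partially average out.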

Mini Batch Gradient Descent, adam and epochs

Submitted by 大憨熊 on 2019-12-24 10:44:51
Question: I am taking a course on Deep Learning in Python and I am stuck on the following lines of an example:

regressor.compile(optimizer = 'adam', loss = 'mean_squared_error')
regressor.fit(X_train, y_train, epochs = 100, batch_size = 32)

From the definitions I know, 1 epoch = going through all training examples once to do one weight update. batch_size is used by the optimizer to divide the training examples into mini-batches; each mini-batch is of size batch_size. I am not familiar with adam
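Putting the two terms together: with batch_size = 32 the optimizer performs one weight update per mini-batch, so one epoch means ceil(N/32) updates rather than one. A quick check of that arithmetic (N = 1000 is an assumed training-set size, not from the course), plus one textbook Adam update for a single weight:

```python
import math

n_train, batch_size, epochs = 1000, 32, 100           # n_train assumed for illustration
updates_per_epoch = math.ceil(n_train / batch_size)   # 32 updates; the last batch has 8 samples
total_updates = epochs * updates_per_epoch            # 3200 weight updates overall

# One Adam update for a weight w given its gradient g (textbook form)
lr, b1, b2, eps = 0.001, 0.9, 0.999, 1e-8
w, m, v, t, g = 0.5, 0.0, 0.0, 1, 0.2
m = b1 * m + (1 - b1) * g                             # running mean of gradients
v = b2 * v + (1 - b2) * g * g                         # running mean of squared gradients
m_hat, v_hat = m / (1 - b1 ** t), v / (1 - b2 ** t)   # bias correction for the zero init
w -= lr * m_hat / (math.sqrt(v_hat) + eps)            # the actual step
```

So over the whole fit() call this configuration performs 3200 Adam steps, each computed from the gradient of the mean squared error on one mini-batch.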