regression

Stock price predictions of keras multilayer LSTM model converge to a constant value

流过昼夜 submitted on 2019-12-01 01:17:10
I've made a multilayer LSTM model that uses regression to predict the next frame's values of the data. The model finishes training after 20 epochs. I then get some predictions and compare them to my ground-truth values. As you can see in the picture above, the predictions converge to a constant value. I don't know why this happens. Here is my model so far:

    from keras.models import Sequential
    from keras.layers.core import Dense, Activation, Dropout
    from keras.layers import LSTM, BatchNormalization
    from tensorflow.python.keras.initializers import RandomUniform

    init = RandomUniform(minval=-0.05, maxval=0.05)
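
The excerpt above does not show enough of the network to diagnose it, but a very common cause of predictions collapsing to a single value is that, under a mean-squared-error loss, a model that cannot extract signal from its inputs does best by outputting the mean of the training targets. A small NumPy sketch of that fact (synthetic data, nothing from the question itself):

```python
import numpy as np

# Illustrative only: under MSE, the best single constant to predict for
# every frame is the mean of the training targets.
rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=10_000)   # made-up targets

# Evaluate the MSE of predicting one constant c everywhere:
candidates = np.linspace(0.0, 10.0, 1001)
losses = [np.mean((y - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(losses))]

print(best)   # lands next to y.mean(), i.e. near 5.0
```

If the flat line in such a plot sits at the mean of the training series, that points to the inputs carrying no usable signal (or the network under-training), not to a bug in the prediction code.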

Logistic Regression in R (SAS-like output)

≯℡__Kan透↙ submitted on 2019-12-01 00:53:10
I have a problem at hand which I'd think is fairly common amongst groups where R is being adopted for analytics in place of SAS. Users would like to obtain results for logistic regression in R that they have become accustomed to in SAS. Towards this end, I was able to propose the Design package in R, which contains many functions to extract the various metrics that SAS reports. If you have suggestions pertaining to other packages, or sample code that replicates some of the SAS outputs for logistic regression, I would be glad to hear of them. Some of the requirements are: Stepwise variable…
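
For readers without the Design package at hand, here is a hypothetical plain-NumPy sketch of the core numbers SAS prints per term: the estimate, its standard error, and the Wald chi-square. All data below are simulated; nothing comes from the original question.

```python
import numpy as np

# Hypothetical sketch: logistic regression by Newton-Raphson, then a
# SAS-style table of estimate / std. error / Wald chi-square per term.
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one predictor
true_beta = np.array([-0.5, 1.2])
p_true = 1.0 / (1.0 + np.exp(-X @ true_beta))
y = rng.binomial(1, p_true)

beta = np.zeros(2)
for _ in range(25):                        # Newton-Raphson iterations
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    W = p * (1.0 - p)                      # IRLS weights
    grad = X.T @ (y - p)
    hess = X.T @ (X * W[:, None])          # Fisher information X'WX
    beta = beta + np.linalg.solve(hess, grad)

se = np.sqrt(np.diag(np.linalg.inv(hess)))
wald = (beta / se) ** 2                    # the Wald chi-square SAS reports
for name, b, s, w in zip(["Intercept", "x1"], beta, se, wald):
    print(f"{name:>9}  {b:8.4f}  {s:7.4f}  {w:8.2f}")
```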

Weighted Least Square

谁说我不能喝 submitted on 2019-11-30 21:13:56
I want to do a regression of y ~ x (just one dependent and one independent variable), but I have heteroskedasticity: the variability of y increases as x increases. To deal with it, I would like to use weighted least squares through the gls() function in R. But I have to admit that I don't understand how to use it. I have to apply a variance function to the "weights" argument of the gls function, but I don't know which one to choose or how to use it.

Here's an example of taking care of Poisson-count-like data, where the variation will be proportional to the mean (which it sounds like you have).

    fit = lm…
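
The same weighting idea can also be sketched outside of gls(): when Var(y) grows proportionally with x, weighted least squares with weights 1/x down-weights the noisy high-x points. A hypothetical NumPy illustration (simulated data, not the asker's):

```python
import numpy as np

# Hypothetical sketch of weighted least squares for heteroskedastic data
# where Var(y) is proportional to x, so the weights are 1/x.
rng = np.random.default_rng(2)
x = rng.uniform(1.0, 10.0, size=400)
y = 2.0 + 3.0 * x + rng.normal(scale=np.sqrt(x))   # noise grows with x

X = np.column_stack([np.ones_like(x), x])
w = 1.0 / x                                        # inverse-variance weights
XtW = X.T * w                                      # X'W (row-wise scaling)
beta = np.linalg.solve(XtW @ X, XtW @ y)           # (X'WX)^{-1} X'Wy
print(beta)   # roughly [2, 3], the true intercept and slope
```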

Regression (logistic) in R: Finding x value (predictor) for a particular y value (outcome)

杀马特。学长 韩版系。学妹 submitted on 2019-11-30 19:39:16
I've fitted a logistic regression model that predicts the binary outcome vs from mpg (mtcars dataset). The plot is shown below. How can I determine the mpg value for any particular vs value? For example, I'm interested in finding out what the mpg value is when the probability of vs is 0.50. Appreciate any help anyone can provide!

    model <- glm(vs ~ mpg, data = mtcars, family = binomial)
    ggplot(mtcars, aes(mpg, vs)) +
      geom_point() +
      stat_smooth(method = "glm", method.args = list(family = "binomial"), se = FALSE)

The easiest way to calculate predicted values from your model is with the predict…
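
Independent of predict(), the inversion can be done by hand: solving p = 1/(1 + exp(-(b0 + b1*x))) for x gives x = (logit(p) - b0)/b1. A small Python sketch (the coefficients below are illustrative stand-ins, not the actual mtcars fit):

```python
import numpy as np

# Hypothetical sketch: invert a fitted logistic curve to find the
# predictor value at which the predicted probability equals p.
def x_at_probability(b0, b1, p):
    """Solve p = 1/(1+exp(-(b0 + b1*x))) for x."""
    return (np.log(p / (1.0 - p)) - b0) / b1

# Illustrative intercept and slope (NOT the real mtcars coefficients):
b0, b1 = -8.83, 0.43
print(x_at_probability(b0, b1, 0.5))   # at p = 0.5 the logit is 0, so x = -b0/b1
```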

Python threading error - must be an iterable, not int

◇◆丶佛笑我妖孽 submitted on 2019-11-30 18:27:53
I'm trying to calculate the rolling r-squared of regressions between the first column and each other column in a dataframe (first column and second, first column and third, etc.). But when I try threading, it keeps giving me the error: TypeError: ParallelRegression() argument after * must be an iterable, not int. I'm wondering how I can fix this? Thanks very much!

    import threading
    totalThreads = 3  # three different colors

    def ParallelRegression(threadnum):
        for i in range(threadnum):
            res[:, i] = sm.OLS(df.iloc[:…
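
The error itself is reproducible in isolation: Thread unpacks its args with *, so args must be a tuple (or another iterable), not a bare int. A minimal sketch (the regression body is replaced with a stand-in):

```python
import threading

# Minimal reproduction: threading.Thread calls target(*args), so `args`
# must be an iterable such as a tuple, never a bare int.
results = {}

def regress(col):
    results[col] = col * 2   # stand-in for the per-column regression

# Wrong: threading.Thread(target=regress, args=1) raises
#   TypeError: regress() argument after * must be an iterable, not int
threads = [threading.Thread(target=regress, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)   # {0: 0, 1: 2, 2: 4}
```

The one-element tuple (i,) — note the trailing comma — is the usual fix.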

Scale back linear regression coefficients in R from scaled and centered data

♀尐吖头ヾ submitted on 2019-11-30 17:20:16
I'm fitting a linear model using OLS and have scaled my regressors with the function scale in R because of the different units of measure between the variables. Then I fit the model using the lm command and get the coefficients of the fitted model. As far as I know, the coefficients of the fitted model are not in the same units as the original regressor variables and therefore must be scaled back before they can be interpreted. I have been searching for a direct way to do it but couldn't find…
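
The back-transformation itself is mechanical: if the model was fit on z_j = (x_j - mean_j)/sd_j, the original-scale slope is b_j/sd_j and the original-scale intercept is a - sum_j b_j*mean_j/sd_j. A hypothetical NumPy sketch with simulated data:

```python
import numpy as np

# Hypothetical sketch: fit OLS on centered-and-scaled regressors, then
# transform the coefficients back to the original units.
rng = np.random.default_rng(3)
X = rng.normal(loc=[10.0, 200.0], scale=[2.0, 30.0], size=(300, 2))
y = 1.5 + 0.7 * X[:, 0] - 0.02 * X[:, 1] + rng.normal(scale=0.5, size=300)

mu, sd = X.mean(axis=0), X.std(axis=0)
Z = (X - mu) / sd                 # what scale() does in R (up to the n-1 sd)
A = np.column_stack([np.ones(len(Z)), Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef[0], coef[1:]

slope_orig = b / sd                          # back to original units
intercept_orig = a - np.sum(b * mu / sd)
print(intercept_orig, slope_orig)   # matches a fit on the raw X exactly
```

Any consistent centering and scaling gives the same round trip; whether sd uses n or n-1 in the denominator does not matter as long as the same values are used both ways.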

how does sklearn do Linear regression when p >n?

落花浮王杯 submitted on 2019-11-30 16:10:06
It's known that when the number of variables (p) is larger than the number of samples (n), the least-squares estimator is not defined. In sklearn I get these values:

    In [30]: lm = LinearRegression().fit(xx, y_train)
    In [31]: lm.coef_
    Out[31]: array([[ 0.20092363, -0.14378298, -0.33504391, ..., -0.40695124, 0.08619906, -0.08108713]])
    In [32]: xx.shape
    Out[32]: (1097, 3419)

Call [30] should return an error. How does sklearn work when p > n, as in this case? EDIT: It seems that the matrix is filled with some values if n > m:

    # need to extend b matrix as it will be filled with
    # a larger solution…
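
A plausible explanation, sketched with NumPy: SVD-based least-squares solvers do not error out when p > n; they return the minimum-norm solution among the infinitely many coefficient vectors that fit the training data exactly. (Illustrative random data; the shapes are smaller than the question's 1097 x 3419.)

```python
import numpy as np

# Hypothetical sketch: with more columns than rows, an SVD-based solver
# returns the minimum-norm least-squares solution instead of raising.
rng = np.random.default_rng(4)
n, p = 20, 50                        # more columns than rows
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

coef, residuals, rank, sv = np.linalg.lstsq(X, y, rcond=None)
print(rank)                          # 20, not 50: the system is rank-deficient
print(np.allclose(X @ coef, y))      # True: it interpolates the training data
```

That interpolation is why such a fit reports zero training error while generalizing poorly; the coefficients are defined only up to the null space of X.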

Looping through covariates in regression using R

核能气质少年 submitted on 2019-11-30 15:41:24
I'm trying to run 96 regressions and save the results as 96 different objects. To complicate things, I want the subscript on one of the covariates in the model to also change 96 times. I've almost solved the problem, but I've unfortunately hit a wall. The code so far is:

    for(i in 1:96){
      assign(paste("z.out", i, sep=""),
             lm(rMonExp_EGM ~ TE_i + Month2 + Month3 + Month4 + Month5 + Month6 +
                Month7 + Month8 + Month9 + Month10 + Month11 + Month12 +
                Yrs_minus_2004 + as.factor(LGA), data=Pokies))
    }

This works on the object…
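
As a side note on structure, a container keyed by name avoids stamping out 96 numbered objects; here is a hypothetical Python analogue of the loop (simulated data, 4 covariates standing in for 96):

```python
import numpy as np

# Hypothetical Python analogue: fit one regression per swapped-in
# covariate and keep every result in a dict keyed by its name.
rng = np.random.default_rng(5)
n, k = 200, 4                        # 4 candidate covariates instead of 96
TE = rng.normal(size=(n, k))         # TE_1 ... TE_4, one column each
y = 2.0 + 1.0 * TE[:, 0] + rng.normal(scale=0.1, size=n)

fits = {}
for i in range(k):
    A = np.column_stack([np.ones(n), TE[:, i]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    fits[f"z.out{i + 1}"] = coef     # mirrors the paste("z.out", i) naming

print(sorted(fits))                  # ['z.out1', 'z.out2', 'z.out3', 'z.out4']
print(fits["z.out1"][1])             # slope on TE_1, close to 1.0
```

The R equivalent of this pattern is a named list filled inside the loop, which later code can index instead of reconstructing object names with paste().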

Manually build logistic regression model for prediction in R

余生颓废 submitted on 2019-11-30 15:34:16
I'm attempting to test a logistic regression model (e.g. 3 coefficients for 3 predictor variables, X1, X2, X3) on a dataset. I'm aware of how to test a model after I create the model object using, for example,

    mymodel <- glm(Outcome ~ X1 + X2 + X3, family = binomial, data = trainDat)

and then test the data:

    prob <- predict(mymodel, type = "response", newdata = test)

But I want to, now, create a logistic model using coefficients and an intercept that I already have, and then test this model on data. Basically I…
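
The scoring step itself needs no fitted model object: with known coefficients, the predicted probability is just the inverse logit of the linear predictor. A hypothetical Python sketch (illustrative coefficients, not fitted to any data):

```python
import numpy as np

# Hypothetical sketch: score new rows with coefficients you already
# have, without refitting anything.
def predict_manual(intercept, coefs, X):
    """P(outcome = 1) for each row of X, given known coefficients."""
    eta = intercept + X @ np.asarray(coefs)   # linear predictor
    return 1.0 / (1.0 + np.exp(-eta))         # inverse logit

# Illustrative intercept and coefficients for X1, X2, X3:
b0, b = -1.0, [0.5, -0.25, 2.0]
X_test = np.array([[1.0, 2.0, 0.0],
                   [0.0, 0.0, 1.0]])
print(predict_manual(b0, b, X_test))   # one probability per row
```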