regression

Using Keras ImageDataGenerator in a regression model

强颜欢笑 (submitted 2019-12-03 06:15:48)
I want to use the flow_from_directory method of ImageDataGenerator to generate training data for a regression model, where the target value can be any float between -1 and 1. flow_from_directory has a "class_mode" parameter with the description:

    class_mode: one of "categorical", "binary", "sparse" or None.
    Default: "categorical". Determines the type of label arrays that are
    returned: "categorical" will be 2D one-hot encoded labels, "binary"
    will be 1D binary labels, "sparse" will be 1D integer labels.

Which of these values should I take? None of them seems to really fit... At this…
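A common workaround is to pass class_mode=None and pair the images with float targets yourself (Keras' flow_from_dataframe also accepts class_mode="raw" for float labels). The generator below is a minimal pure-NumPy sketch of that pairing idea; the array shapes, batch size, and target values are illustrative assumptions, not part of the original question.

```python
import numpy as np

def regression_flow(images, targets, batch_size=2, seed=0):
    """Yield (image_batch, float_target_batch) pairs indefinitely,
    mimicking what a class_mode-based generator cannot do for regression."""
    rng = np.random.default_rng(seed)
    n = len(images)
    while True:
        idx = rng.choice(n, size=batch_size, replace=False)
        yield images[idx], targets[idx]

# toy data: four 8x8 single-channel "images" with float targets in [-1, 1]
images = np.random.rand(4, 8, 8, 1).astype("float32")
targets = np.array([-0.5, 0.1, 0.9, -1.0], dtype="float32")

gen = regression_flow(images, targets)
xb, yb = next(gen)
```

A Keras model compiled with a regression loss (e.g. mean squared error) could then be trained on batches from such a generator.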

Fit a no-intercept model in caret

守給你的承諾、 (submitted 2019-12-03 05:28:12)
In R, I specify a model with no intercept as follows:

    data(iris)
    lmFit <- lm(Sepal.Length ~ 0 + Petal.Length + Petal.Width, data=iris)
    > round(coef(lmFit), 2)
    Petal.Length  Petal.Width
            2.86        -4.48

However, if I fit the same model with caret, the resulting model includes an intercept:

    library(caret)
    caret_lmFit <- train(Sepal.Length ~ 0 + Petal.Length + Petal.Width, data=iris, "lm")
    > round(coef(caret_lmFit$finalModel), 2)
    (Intercept)  Petal.Length  Petal.Width
           4.19          0.54         -0.32

How do I tell caret::train to exclude the intercept term? As discussed in a linked SO question https://stackoverflow.com/a/41731117 …
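In caret, the usual fix (if memory serves) is to tune the intercept explicitly, e.g. train(..., method="lm", tuneGrid = data.frame(intercept = FALSE)), since caret's formula handling rebuilds the design matrix. The underlying contrast, a design matrix with versus without a column of ones, can be sketched in Python with NumPy; the data here are synthetic stand-ins for iris:

```python
import numpy as np

# synthetic data generated exactly through the origin: y = 2*x1 - 1*x2
rng = np.random.default_rng(42)
x1 = rng.random(50)
x2 = rng.random(50)
y = 2.0 * x1 - 1.0 * x2

X_no_int = np.column_stack([x1, x2])             # no intercept column
X_int = np.column_stack([np.ones(50), x1, x2])   # intercept column of ones

coef_no_int, *_ = np.linalg.lstsq(X_no_int, y, rcond=None)
coef_int, *_ = np.linalg.lstsq(X_int, y, rcond=None)
```

Because the data pass through the origin, both fits recover the true slopes and the fitted intercept is essentially zero; on real data the two fits generally differ, which is exactly the asker's complaint.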

Plot the results of a multivariate logistic regression model in R

孤街醉人 (submitted 2019-12-03 04:38:57)
Question: I would like to plot the results of a multivariate logistic regression analysis (GLM) for a specific independent variable's adjusted (i.e. independent of the confounders included in the model) relationship with the binary outcome. I have seen posts that recommend the following method using the predict command followed by curve; here's an example:

    x <- data.frame(binary.outcome, cont.exposure)
    model <- glm(binary.outcome ~ cont.exposure, family=binomial, data=x)
    plot(cont.exposure, binary…
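A common recipe for an "adjusted" curve is to predict over a grid of the exposure while holding each confounder at its mean, then plot the predicted probabilities against the sorted grid. A NumPy sketch of that idea; the coefficient values and the confounder mean are hypothetical assumptions for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical fitted GLM coefficients: intercept, exposure, confounder
b0, b_exp, b_conf = -2.0, 0.8, 0.5
conf_mean = 1.2  # confounder held at its mean -> "adjusted" relationship

exposure_grid = np.linspace(0, 5, 50)  # a sorted grid avoids zig-zag lines
p = sigmoid(b0 + b_exp * exposure_grid + b_conf * conf_mean)
```

In R the same thing is done by calling predict(model, newdata=..., type="response") on a data frame whose confounder columns are set to their means.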

Extract only coefficients whose p values are significant from a logistic model

这一生的挚爱 (submitted 2019-12-03 04:28:42)
Question: I have run a logistic regression, the summary of which I name "score". Accordingly, summary(score) gives me the following:

    Deviance Residuals:
        Min      1Q  Median      3Q     Max
    -1.3616 -0.9806 -0.7876  1.2563  1.9246

    Coefficients:
                     Estimate  Std. Error    z value    Pr(>|z|)
    (Intercept)  -4.188286233  1.94605597 -2.1521921 0.031382230 *
    Overall      -0.013407201  0.06158168 -0.2177141 0.827651866
    RTN          -0.052959314  0.05015013 -1.0560154 0.290961160
    Recorded      0.162863294  0.07290053  2.2340482 0.025479900 *
    PV           -0.086743611  0.02950620 -2…
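In R this is typically a one-liner on the coefficient matrix, something like coef(summary(score))[coef(summary(score))[, 4] < 0.05, ], where column 4 holds the p-values. The same filtering idea, sketched in Python with the estimates and p-values transcribed from the summary above (the truncated PV row is omitted):

```python
# (estimate, p-value) pairs transcribed from the summary(score) output
coefs = {
    "(Intercept)": (-4.188286233, 0.031382230),
    "Overall":     (-0.013407201, 0.827651866),
    "RTN":         (-0.052959314, 0.290961160),
    "Recorded":    ( 0.162863294, 0.025479900),
}

# keep only coefficients whose p-value clears the 0.05 threshold
significant = {name: est for name, (est, p) in coefs.items() if p < 0.05}
```

With these numbers, only the intercept and "Recorded" survive the filter.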

Calculate confidence band of least-square fit

蓝咒 (submitted 2019-12-03 04:28:16)
I have a question that I have fought with for days now. How do I calculate the (95%) confidence band of a fit? Fitting curves to data is the everyday job of every physicist, so I think this should be implemented somewhere, but I can't find an implementation, nor do I know how to do this mathematically. The only thing I found is seaborn, which does a nice job for a linear least-squares fit:

    import numpy as np
    from matplotlib import pyplot as plt
    import seaborn as sns
    import pandas as pd

    x = np.linspace(0, 10)
    y = 3*np.random.randn(50) + x
    data = {'x': x, 'y': y}
    frame = pd.DataFrame(data, …
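For an ordinary least-squares line, the 95% confidence band for the mean response at x is yhat +/- t(0.975, n-2) * s * sqrt(1/n + (x - xbar)^2 / Sxx), where s is the residual standard error and Sxx the sum of squared deviations of x. A NumPy sketch; the t critical value is hard-coded as an approximation for df = 48 rather than taken from a distribution function:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = x + 3 * rng.standard_normal(50)

# ordinary least-squares line
n = len(x)
xbar = x.mean()
Sxx = np.sum((x - xbar) ** 2)
slope = np.sum((x - xbar) * (y - y.mean())) / Sxx
intercept = y.mean() - slope * xbar
yhat = intercept + slope * x

# 95% confidence band for the mean response
resid = y - yhat
s = np.sqrt(np.sum(resid ** 2) / (n - 2))   # residual standard error
t_crit = 2.0106                             # approx. t(0.975, df=48)
half_width = t_crit * s * np.sqrt(1.0 / n + (x - xbar) ** 2 / Sxx)
lower, upper = yhat - half_width, yhat + half_width
```

The band is narrowest near the mean of x and flares out at the ends, which is what seaborn's regplot shading shows.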

Performing lm() and segmented() on multiple columns in R

亡梦爱人 (submitted 2019-12-03 04:05:42)
I am trying to perform lm() and segmented() in R using the same independent variable (x) and multiple dependent response variables (Curve1, Curve2, etc.) one by one. I wish to extract the estimated break point and the model coefficients for each response variable. I include an example of my data below.

              x  Curve1  Curve2   Curve3
    1 -0.236422 98.8169 95.6828 101.7910
    2 -0.198083 98.3260 95.4185 101.5170
    3 -0.121406 97.3442 94.8899 100.9690
    4  0.875399 84.5815 88.0176  93.8424
    5  0.913738 84.1139 87.7533  93.5683
    6  1.795530 73.3582 78.1278  82.9956
    7  1.833870 72.8905 77.7093  82.7039
    8  1.872200 72.4229 77.3505 …
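segmented() estimates the break point iteratively; a crude but transparent alternative is a grid search over candidate breaks, fitting two straight lines and keeping the break with the lowest total squared error. A Python sketch that loops over multiple response columns, as the question asks; the data are synthetic with a known break at x = 5, and the column names are illustrative:

```python
import numpy as np

def fit_breakpoint(x, y, candidates):
    """Brute-force segmented fit: for each candidate break, fit two
    straight lines and keep the break with the smallest total SSE."""
    best = None
    for c in candidates:
        left, right = x <= c, x > c
        if left.sum() < 2 or right.sum() < 2:
            continue  # need at least two points per segment
        sse = 0.0
        for mask in (left, right):
            A = np.column_stack([np.ones(mask.sum()), x[mask]])
            coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
            sse += np.sum((y[mask] - A @ coef) ** 2)
        if best is None or sse < best[1]:
            best = (c, sse)
    return best[0]

# synthetic response columns, both kinked at x = 5
x = np.linspace(0, 10, 101)
curves = {
    "Curve1": np.where(x < 5, 2 * x, 10 - 1 * (x - 5)),
    "Curve2": np.where(x < 5, -x, -5 + 3 * (x - 5)),
}
breaks = {name: fit_breakpoint(x, y, x[5:-5]) for name, y in curves.items()}
```

In R the analogous loop would call segmented(lm(y ~ x), seg.Z = ~x) for each column and collect the psi estimates.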

How can I do a maximum likelihood regression using scipy.optimize.minimize?

丶灬走出姿态 (submitted 2019-12-03 04:05:31)
How can I do a maximum likelihood regression using scipy.optimize.minimize? I specifically want to use the minimize function here, because I have a complex model and need to add some constraints. I am currently trying a simple example using the following:

    from scipy.optimize import minimize

    def lik(parameters):
        m = parameters[0]
        b = parameters[1]
        sigma = parameters[2]
        for i in np.arange(0, len(x)):
            y_exp = m * x + b
        L = sum(np.log(sigma) + 0.5 * np.log(2 * np.pi)
                + (y - y_exp) ** 2 / (2 * sigma ** 2))
        return L

    x = [1, 2, 3, 4, 5]
    y = [2, 3, 4, 5, 6]
    lik_model = minimize(lik, np.array([1, 1, 1]), method…
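The loop in the snippet above recomputes the whole y_exp vector on every iteration without ever using i; the negative log-likelihood can be written vectorized. A corrected, runnable sketch: optimizing log(sigma) keeps the scale parameter positive without an explicit constraint, and the toy data here include a little noise (an assumption, unlike the question's exact line) so the sigma estimate stays away from zero:

```python
import numpy as np
from scipy.optimize import minimize

x = np.asarray([1, 2, 3, 4, 5], dtype=float)
y = np.asarray([2.1, 2.9, 4.2, 4.8, 6.0])   # roughly y = x + 1 plus noise

def neg_log_lik(params):
    m, b, log_sigma = params
    sigma = np.exp(log_sigma)                # reparameterize: sigma > 0
    y_exp = m * x + b                        # vectorized, no loop needed
    return np.sum(np.log(sigma) + 0.5 * np.log(2 * np.pi)
                  + (y - y_exp) ** 2 / (2 * sigma ** 2))

result = minimize(neg_log_lik, x0=np.array([0.0, 0.0, 0.0]),
                  method="Nelder-Mead")
m_hat, b_hat = result.x[0], result.x[1]
sigma_hat = np.exp(result.x[2])
```

For Gaussian errors the ML slope and intercept coincide with ordinary least squares, which gives a handy sanity check on the optimizer's output.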

R: Bootstrapped binary mixed-model logistic regression using bootMer() of the new lme4 package

╄→гoц情女王★ (submitted 2019-12-03 03:57:19)
I want to use the new bootMer() feature of the lme4 package (currently the developer version). I am new to R and don't know what function I should write for its FUN argument. The docs say it must return a numerical vector, but I have no idea what that function is supposed to do. I have a mixed-model formula that is passed to bootMer(), along with a number of replicates. So what does that external function do? Is it supposed to be a template for bootstrapping methods? Aren't bootstrapping methods already implemented in bootMer? So why do they need an external "statistic of interest"? And…
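bootMer() handles the resampling and refitting itself; FUN is just the statistic you want a bootstrap distribution for, applied to each refitted model, e.g. fixef to collect the fixed-effect estimates. The division of labor can be sketched in Python with a pairs bootstrap of an OLS slope (plain OLS stands in for the mixed model here, and bootMer actually simulates from the fitted model rather than resampling pairs):

```python
import numpy as np

def fit_slope(x, y):
    """The 'statistic of interest', playing the role of FUN / fixef."""
    xc = x - x.mean()
    return np.sum(xc * (y - y.mean())) / np.sum(xc ** 2)

def bootstrap(x, y, stat, n_boot=200, seed=1):
    """Resample, refit, and collect the statistic each time,
    the same division of labor as bootMer(model, FUN, nsim)."""
    rng = np.random.default_rng(seed)
    n = len(x)
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample rows with replacement
        stats[i] = stat(x[idx], y[idx])
    return stats

x = np.linspace(0, 1, 40)
y = 2 * x + 0.1 * np.random.default_rng(0).standard_normal(40)
slopes = bootstrap(x, y, fit_slope)
ci = np.percentile(slopes, [2.5, 97.5])
```

So the machinery is generic; only the "what do I summarize from each refit" part is yours to supply, which is why bootMer asks for it as a function.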

Time series prediction using R

不羁岁月 (submitted 2019-12-03 03:18:44)
I have the following R code:

    library(forecast)
    value <- c(1.2, 1.7, 1.6, 1.2, 1.6, 1.3, 1.5, 1.9, 5.4, 4.2, 5.5, 6,
               5.6, 6.2, 6.8, 7.1, 7.1, 5.8, 0, 5.2, 4.6, 3.6, 3, 3.8,
               3.1, 3.4, 2, 3.1, 3.2, 1.6, 0.6, 3.3, 4.9, 6.5, 5.3, 3.5,
               5.3, 7.2, 7.4, 7.3, 7.2, 4, 6.1, 4.3, 4, 2.4, 0.4, 2.4)
    sensor <- ts(value, frequency=24)
    fit <- auto.arima(sensor)
    LH.pred <- predict(fit, n.ahead=24)
    plot(sensor, ylim=c(0,10), xlim=c(0,5), type="o", lwd="1")
    lines(LH.pred$pred, col="red", type="o", lwd="1")
    grid()

The resulting graph is [plot omitted]. But I am not satisfied with the prediction. Is there any way to make the prediction look…
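With only two full cycles of a frequency-24 series, auto.arima has little seasonal structure to work with, which can make the forecast look flat. A simple seasonal baseline to compare any model against is the position-by-position mean of the observed cycles; a NumPy sketch using the question's own data (the baseline method is the editor's suggestion, not the asker's code):

```python
import numpy as np

value = np.array([1.2, 1.7, 1.6, 1.2, 1.6, 1.3, 1.5, 1.9, 5.4, 4.2, 5.5, 6.0,
                  5.6, 6.2, 6.8, 7.1, 7.1, 5.8, 0.0, 5.2, 4.6, 3.6, 3.0, 3.8,
                  3.1, 3.4, 2.0, 3.1, 3.2, 1.6, 0.6, 3.3, 4.9, 6.5, 5.3, 3.5,
                  5.3, 7.2, 7.4, 7.3, 7.2, 4.0, 6.1, 4.3, 4.0, 2.4, 0.4, 2.4])
period = 24

# average the two observed 24-step cycles position by position
profile = value.reshape(-1, period).mean(axis=0)
forecast = profile  # seasonal-mean prediction for the next 24 steps
```

In R, forcing the seasonal structure explicitly (e.g. stlf() or auto.arima with D=1) is the usual next thing to try when the default fit ignores the cycle.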

Plot logistic regression curve in R

廉价感情. (submitted 2019-12-03 03:16:44)
I want to plot a logistic regression curve of my data, but whenever I try, my plot produces multiple curves. Here's a picture of my last attempt: [image: last attempt]. Here's the relevant code I am using:

    fit = glm(output ~ maxhr, data=heart, family=binomial)
    predicted = predict(fit, newdata=heart, type="response")
    plot(output ~ maxhr, data=heart, col="red4")
    lines(heart$maxhr, predicted, col="green4", lwd=2)

My professor uses the following code, but when I try to run it I get an error on the last line saying that the x and y lengths do not match:

    # fit logistic regression model
    fit = glm(output ~ …
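The multiple curves appear because lines() connects points in data order, not x order; sorting by the predictor first yields a single curve (in R: ord <- order(heart$maxhr); lines(heart$maxhr[ord], predicted[ord], ...)). The same fix sketched in Python; the coefficients and the predictor range are hypothetical stand-ins for the heart data:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# unsorted predictor, as it typically comes out of a data frame
rng = np.random.default_rng(3)
maxhr = rng.uniform(100, 200, size=30)
predicted = sigmoid(-10 + 0.06 * maxhr)   # hypothetical fitted curve

# sorting by x before drawing turns the zig-zag into one smooth curve
order = np.argsort(maxhr)
x_sorted, y_sorted = maxhr[order], predicted[order]
```

Plotting x_sorted against y_sorted with a line plot then traces the sigmoid once, left to right.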