regression

3D coordinates as the output of a Neural Network

与世无争的帅哥 submitted on 2019-12-03 23:05:30
Neural networks are mostly used for classification, where the activation of a neuron in the output layer indicates the class of whatever you are classifying. Is it possible (and correct) to design a NN whose output is a set of 3D coordinates? That is, three output neurons, each taking values in a range such as [-1000.0, 1000.0]?

Yes. You can use a neural network to perform linear regression, and more complicated types of regression, where the output layer has multiple nodes that can be interpreted as a 3-D coordinate (or a much higher-dimensional tuple). To achieve this in TensorFlow, you would create a final…
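A minimal Keras sketch of that idea (my own illustration, not the answerer's exact code): a small dense network whose final layer has three linear units, trained with mean squared error.

    import numpy as np
    import tensorflow as tf

    # Toy data: map 5 input features to a 3-D coordinate target.
    X = np.random.rand(1000, 5).astype("float32")
    Y = np.random.uniform(-1000.0, 1000.0, size=(1000, 3)).astype("float32")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(5,)),
        tf.keras.layers.Dense(64, activation="relu"),
        # Three output units with linear (identity) activation: one per coordinate.
        tf.keras.layers.Dense(3, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, Y, epochs=10, batch_size=32)

    pred = model.predict(X[:1])  # shape (1, 3): an (x, y, z) estimate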

Multivariate LSTM Forecast Loss and evaluation

こ雲淡風輕ζ submitted on 2019-12-03 23:05:08
Question: I have a CNN-RNN model architecture with bidirectional LSTMs for a time series regression problem. My loss does not converge over 50 epochs; each epoch has 20k samples, and the loss keeps bouncing between 0.001 and 0.01.

    batch_size = 1
    epochs = 50
    model.compile(loss='mean_squared_error', optimizer='adam')
    trainingHistory = model.fit(trainX, trainY, epochs=epochs, batch_size=batch_size, shuffle=False)

I tried to train the model with incorrectly paired X and Y data, for which the loss stays around 0.5. Is it…
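One way to make that sanity check systematic, as a sketch under my own assumptions (build_model is a hypothetical factory returning a freshly compiled copy of the CNN-RNN, and trainX/trainY are the arrays from the question): compare the converged loss against the loss on shuffled targets. If the real pairing is only modestly better, the inputs carry little predictive signal.

    import numpy as np

    rng = np.random.default_rng(0)
    trainY_shuffled = trainY[rng.permutation(len(trainY))]  # break the X/Y pairing

    # build_model() is hypothetical: it should rebuild and recompile the
    # architecture so both runs start from scratch with fresh weights.
    real = build_model().fit(trainX, trainY, epochs=5, batch_size=1, shuffle=False)
    base = build_model().fit(trainX, trainY_shuffled, epochs=5, batch_size=1, shuffle=False)

    print("real-pairing loss:    ", real.history["loss"][-1])
    print("shuffled-pairing loss:", base.history["loss"][-1])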

Equations for 2 variable Linear Regression

a 夏天 submitted on 2019-12-03 22:17:10
We are using a programming language that does not have a built-in linear regression function. We have already implemented the single-variable case, y = Ax + B, calculating the A and B coefficients from the data with a solution similar to this Stack Overflow answer. I know this problem gets geometrically harder as variables are added, but for our purposes we only need to add one more: z = Ax + By + C. Does anyone have the closed-form equations, or code in any language, that can solve for A, B and C given arrays of x, y and z values?

So you have three linear equations: k = aX1 + …
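One closed-form route, sketched in Python/NumPy since the question invites code in any language (my illustration, not the truncated answer): stack the observations into a design matrix and take the least-squares solution.

    import numpy as np

    # Observed data: arrays of x, y and z values (toy numbers).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.0, 1.0, 4.0, 3.0, 5.0])
    z = np.array([5.1, 6.9, 12.2, 13.0, 18.1])

    # Design matrix for z = A*x + B*y + C: columns [x, y, 1].
    M = np.column_stack([x, y, np.ones_like(x)])

    # Least-squares solution of M @ [A, B, C] ~= z.
    (A, B, C), *_ = np.linalg.lstsq(M, z, rcond=None)
    print(A, B, C)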

Fminsearch Matlab (Non Linear Regression )

喜你入骨 submitted on 2019-12-03 21:15:23
Can anyone explain how I can apply non-linear regression to this equation to find k, using the Matlab command window: I = 10^-9 * (exp(38.68*V/k) - 1). I have data values as follows:

    Voltage := [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]:
    Current := [0, 0, 0, 0, 0, 0, 0, 0.07, 0.92, 12.02, 158.29]:

[NEW]: I then used fminsearch as an alternative, and another error message appeared: "Matrix dimensions must agree."

    Error in @(k)sum((I(:)-Imodel(V(:),k)).^2)
    Error in fminsearch (line 189)
    fv(:,1) = funfcn(x,varargin{:});

I used this…
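That error in Matlab usually means Imodel uses matrix operators (*, /, ^) where elementwise ones (.*, ./, .^) are needed, so I(:) and Imodel(V(:),k) come out with different shapes. For comparison, a Python/SciPy sketch of the same one-parameter least-squares fit (my analogue, not the asker's Matlab code; NumPy's operators are elementwise by default):

    import numpy as np
    from scipy.optimize import minimize_scalar

    V = np.array([0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
    I = np.array([0, 0, 0, 0, 0, 0, 0, 0.07, 0.92, 12.02, 158.29])

    def model(V, k):
        # Diode equation from the question; all operations are elementwise.
        return 1e-9 * (np.exp(38.68 * V / k) - 1.0)

    def sse(k):
        # Sum of squared residuals, the same objective as the Matlab anonymous function.
        return np.sum((I - model(V, k)) ** 2)

    res = minimize_scalar(sse, bounds=(0.5, 5.0), method="bounded")
    print("fitted k:", res.x)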

Calculate cross validation for Generalized Linear Model in Matlab

流过昼夜 submitted on 2019-12-03 20:50:51
I am doing a regression using a generalized linear model, and I was caught off guard by the crossval function. My implementation so far:

    x = 'Some dataset, containing the input and the output'
    X = x(:,1:7);
    Y = x(:,8);
    cvpart = cvpartition(Y,'holdout',0.3);
    Xtrain = X(training(cvpart),:);
    Ytrain = Y(training(cvpart),:);
    Xtest = X(test(cvpart),:);
    Ytest = Y(test(cvpart),:);
    mdl = GeneralizedLinearModel.fit(Xtrain,Ytrain,'linear','distr','poisson');
    Ypred = predict(mdl,Xtest);
    res = (Ypred - Ytest);
    RMSE_test = sqrt(mean(res.^2));

The code below is for calculating cross validation for multiple…
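For the k-fold idea itself, a Python/scikit-learn sketch (my analogue of the Matlab holdout code above, assuming a Poisson GLM as in the question): fit on each training fold, score RMSE on the held-out fold, and average.

    import numpy as np
    from sklearn.linear_model import PoissonRegressor
    from sklearn.model_selection import KFold

    rng = np.random.default_rng(0)
    X = rng.random((200, 7))
    Y = rng.poisson(lam=np.exp(X @ rng.random(7)))  # toy Poisson-distributed target

    rmses = []
    for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        mdl = PoissonRegressor().fit(X[train_idx], Y[train_idx])
        res = mdl.predict(X[test_idx]) - Y[test_idx]
        rmses.append(np.sqrt(np.mean(res ** 2)))

    print("per-fold RMSE:", rmses, "mean:", np.mean(rmses))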

Knn Regression in R

倖福魔咒の submitted on 2019-12-03 20:23:32
I am investigating kNN regression methods, and later kernel smoothing, and I wish to demonstrate these methods using plots in R. I generated a data set with the following code:

    x = runif(100,0,pi)
    e = rnorm(100,0,0.1)
    y = sin(x)+e

I have been trying to follow the description of knn.reg in section 9.2 here: https://daviddalpiaz.github.io/r4sl/k-nearest-neighbors.html#regression

    grid2 = data.frame(x)
    knn10 = FNN::knn.reg(train = x, test = grid2, y = y, k = 10)

My predicted values seem reasonable to me, but when I try to plot a line with them on top of my x~y plot I don't get what I'm hoping for…
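A likely culprit is drawing the line over an unsorted x, which zig-zags back and forth across the plot. A Python/scikit-learn sketch of the same demonstration (my analogue, not the asker's R code), predicting on an evenly spaced, sorted grid before plotting:

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    x = rng.uniform(0, np.pi, 100)
    y = np.sin(x) + rng.normal(0, 0.1, 100)

    # Predict on a sorted grid so the fitted line is drawn left to right.
    grid = np.linspace(0, np.pi, 200).reshape(-1, 1)
    knn10 = KNeighborsRegressor(n_neighbors=10).fit(x.reshape(-1, 1), y)
    pred = knn10.predict(grid)

    plt.scatter(x, y, s=10)
    plt.plot(grid, pred, color="red")
    plt.show()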

Can't get aggregate() work for regression by group

馋奶兔 submitted on 2019-12-03 18:12:18
Question: I want to use aggregate with this custom function:

    # linear regression function
    CalculateLinRegrDiff = function(sample) {
      fit <- lm(value ~ date, data = sample)
      diff(range(fit$fitted))
    }
    dataset2 = aggregate(value ~ id + col, dataset, CalculateLinRegrDiff(dataset))

I receive the error:

    Error in get(as.character(FUN), mode = "function", envir = envir) :
      object 'FUN' of mode 'function' was not found

What is wrong?

Answer 1: Your syntax for aggregate is wrong in the first place. Pass the function…
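For reference, the same per-group computation sketched in Python/pandas (my analogue, assuming columns id, col, date and value as in the R code): group, fit a line of value on date within each group, and return the range of the fitted values.

    import numpy as np
    import pandas as pd

    def lin_regr_diff(g):
        # Range of fitted values from a linear fit of value on date within a group.
        t = (g["date"] - g["date"].min()).dt.days  # dates as numeric for the fit
        slope, intercept = np.polyfit(t, g["value"], 1)
        fitted = slope * t + intercept
        return fitted.max() - fitted.min()

    dataset = pd.DataFrame({
        "id": [1, 1, 1, 2, 2, 2],
        "col": ["a"] * 6,
        "date": pd.date_range("2019-01-01", periods=6),
        "value": [1.0, 2.1, 2.9, 4.2, 5.1, 5.8],
    })

    dataset2 = dataset.groupby(["id", "col"]).apply(lin_regr_diff)
    print(dataset2)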

Using Keras ImageDataGenerator in a regression model

孤街浪徒 submitted on 2019-12-03 17:07:30
Question: I want to use the flow_from_directory method of the ImageDataGenerator to generate training data for a regression model, where the target value can be any float value between 1 and -1. flow_from_directory has a class_mode parameter with the description:

    class_mode: one of "categorical", "binary", "sparse" or None. Default: "categorical".
    Determines the type of label arrays that are returned: "categorical" will be 2D
    one-hot encoded labels, "binary" will be 1D binary labels, "sparse" will be…
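Newer Keras versions also offer flow_from_dataframe, whose class_mode='raw' passes float targets through unchanged, which fits a regression setup. A sketch under that assumption (the DataFrame contents and image folder are hypothetical):

    import pandas as pd
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Hypothetical mapping from image filenames to float targets in [-1, 1].
    df = pd.DataFrame({
        "filename": ["img_001.png", "img_002.png", "img_003.png"],
        "target": [0.25, -0.80, 0.10],
    })

    gen = ImageDataGenerator(rescale=1.0 / 255)
    flow = gen.flow_from_dataframe(
        df,
        directory="images/",   # hypothetical image folder
        x_col="filename",
        y_col="target",
        class_mode="raw",      # return y_col values as-is (floats), no encoding
        target_size=(64, 64),
        batch_size=32,
    )
    # model.fit(flow, ...) would then train on (image, float) pairs.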

Why is logistic regression called regression? [closed]

淺唱寂寞╮ submitted on 2019-12-03 15:36:24
According to what I have understood, linear regression predicts an outcome that can take continuous values, whereas logistic regression predicts an outcome that is discrete. Logistic regression therefore looks like a classification problem. So why is it called regression?

There is also a related question: What is the difference between linear regression and logistic regression?

There is a strict link between linear regression and logistic regression. With linear regression you're looking for the parameters k_i:

    h = k_0 + Σ k_i * x_i = k^T x

With logistic regression you…
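One way to see the "regression" in it, as a small scikit-learn sketch (my illustration): the model regresses a continuous probability (equivalently, the log-odds k^T x), and classification is just a threshold on top of that continuous output.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy binary labels

    clf = LogisticRegression().fit(X, y)

    # The regression part: a continuous value in (0, 1) for each sample...
    probs = clf.predict_proba(X[:3])[:, 1]
    # ...and the classification part is just thresholding at 0.5.
    labels = clf.predict(X[:3])
    print(probs, labels)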

Python Multiple Simple Linear Regression

爱⌒轻易说出口 submitted on 2019-12-03 15:30:42
Note this is not a question about multiple regression; it is a question about doing simple (single-variable) regression multiple times in Python/NumPy (2.7). I have two m x n arrays, x and y. The rows correspond to each other, and each pair of rows is the set of (x, y) points for one measurement; that is, plt.plot(x.T, y.T, '.') would plot each of the m datasets/measurements. I'm wondering what the best way to perform the m linear regressions is. Currently I loop over the rows and use scipy.stats.linregress(). (Assume I don't want solutions based on doing linear algebra with the matrices but instead want to…
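One vectorized route, sketched below (an assumption about what is wanted, since the question is cut off): compute each row's slope and intercept from row-wise means, with no Python-level loop. This uses the standard closed form slope = cov(x, y) / var(x), applied along axis 1.

    import numpy as np

    np.random.seed(0)
    m, n = 4, 50
    x = np.random.rand(m, n)
    true_slopes = np.arange(1, m + 1).reshape(-1, 1)
    y = true_slopes * x + 2.0 + 0.01 * np.random.randn(m, n)

    # Row-wise least squares: center each row, then slope = cov(x, y) / var(x).
    xm = x - x.mean(axis=1, keepdims=True)
    ym = y - y.mean(axis=1, keepdims=True)
    slopes = (xm * ym).sum(axis=1) / (xm ** 2).sum(axis=1)
    intercepts = y.mean(axis=1) - slopes * x.mean(axis=1)

    print(slopes)      # ~ [1, 2, 3, 4]
    print(intercepts)  # ~ [2, 2, 2, 2]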