least-squares

How to solve many overdetermined systems of linear equations using vectorized code?

Submitted by 拥有回忆 on 2020-01-11 03:22:12
Question: I need to solve a system of linear equations Lx = b, where x is always a 3x1 vector, L is an Nx3 array, and b is an Nx1 vector. N usually ranges from 4 to about 10. I have no problem solving this with scipy.linalg.lstsq(L, b). However, I need to do it many times (around 200x200 = 40000 times), since x is associated with each pixel of an image. So x is actually stored in a PxQx3 array, where P and Q are around 200-300, and the last dimension '3' refers …
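When the design matrix L is shared by every pixel, the whole batch can be solved at once: compute the pseudoinverse a single time and apply it to all right-hand sides. A minimal sketch, assuming a shared L (the shapes and random data are illustrative, not the asker's actual arrays):

```python
import numpy as np

P, Q, N = 200, 200, 6
rng = np.random.default_rng(0)
L = rng.standard_normal((N, 3))        # shared Nx3 design matrix
b = rng.standard_normal((P, Q, N))     # one N-vector of observations per pixel

# x = pinv(L) @ b for every pixel, done as one matrix product.
pinv = np.linalg.pinv(L)               # 3xN, computed once
x = (b.reshape(-1, N) @ pinv.T).reshape(P, Q, 3)
```

If L also varies per pixel, the same idea works with batched normal equations (`np.linalg.solve` on a stack of 3x3 matrices) instead of a single pseudoinverse.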

Linear Least Squares Fit of Sphere to Points

Submitted by 删除回忆录丶 on 2020-01-10 14:37:42
Question: I'm looking for an algorithm to find the best fit between a cloud of points and a sphere. That is, I want to minimise the sum of squared residuals, where C is the centre of the sphere, r its radius, and each P a point in my set of n points. The variables are obviously Cx, Cy, Cz, and r. In my case, I can obtain a known r beforehand, leaving only the components of C as variables. I really don't want to have to use any kind of iterative minimisation (e.g. Newton's method, Levenberg-Marquardt, etc.) - I'd prefer a set of …
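A non-iterative route is the algebraic fit: expanding ‖P − C‖² = r² gives 2P·C + (r² − ‖C‖²) = ‖P‖², which is linear in Cx, Cy, Cz and the auxiliary unknown k = r² − ‖C‖². A sketch in NumPy (not the asker's code):

```python
import numpy as np

def fit_sphere(pts):
    """Algebraic linear least-squares sphere fit.

    Solves [2x 2y 2z 1] @ [Cx Cy Cz k]^T = x^2 + y^2 + z^2,
    where k = r^2 - |C|^2, then recovers r from k and C.
    """
    A = np.c_[2 * pts, np.ones(len(pts))]
    f = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, f, rcond=None)
    centre = sol[:3]
    radius = np.sqrt(sol[3] + centre @ centre)
    return centre, radius
```

With a known r the problem is genuinely nonlinear in C alone, but subtracting pairs of the expanded equations eliminates the ‖C‖² term and again leaves a linear system for C.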

Least square optimization (of matrices) in R

Submitted by 北城余情 on 2020-01-07 06:49:12
Question: Yesterday I asked a question about least-squares optimization in R, and it turned out that the lm function was what I was looking for. Now I have another least-squares optimization question, and I am wondering whether lm could also solve this problem or, if not, how it can be handled in R. I have fixed matrices B (of dimension n x m) and V (of dimension n x n), and I am looking for an m-long vector u such that sum( ( V - ( B %*% diag(u) %*% t(B)) )^2 ) is minimized. Answer 1: 1) lm …
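The objective is linear in u once vectorised: vec(B diag(u) t(B)) = M u, where column j of M is vec(b_j b_j^T), so ordinary least squares on M against vec(V) recovers u - which is what makes an lm-style solver applicable. A sketch of that reformulation in NumPy, since the other examples on this page use Python:

```python
import numpy as np

def fit_u(B, V):
    """Find u minimising ||V - B diag(u) B^T||_F^2 by ordinary least squares.

    vec(B diag(u) B^T) = M @ u, where column j of M is vec(b_j b_j^T).
    """
    n, m = B.shape
    M = np.stack([np.outer(B[:, j], B[:, j]).ravel() for j in range(m)],
                 axis=1)                       # (n*n) x m
    u, *_ = np.linalg.lstsq(M, V.ravel(), rcond=None)
    return u
```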

Multiple solution with scipy.optimize.nnls

Submitted by 守給你的承諾、 on 2020-01-05 05:36:08
Question: I am using scipy.optimize.nnls to compute a non-negative least-squares fit with the coefficients constrained to sum to 1. I always get the same solution when I run the computation. This is the code I am using:

```python
#! /usr/bin/env python3
import numpy as np
import scipy.optimize as soptimize

if __name__ == '__main__':
    C = np.array([[112.771820, 174.429720, 312.175750, 97.348620],
                  [112.857010, 174.208300, 312.185270, 93.467580],
                  [114.897210, 175.661850, 314.275100, 99.015480]])
    d = np.array([[112.7718, 174.4297, …
```
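nnls itself has no equality constraints, but a common workaround is to append a heavily weighted row of ones to C and the value of the weight to d, which enforces sum(x) ≈ 1 in the least-squares sense. A sketch (the weight value is an arbitrary choice, not part of the question):

```python
import numpy as np
from scipy.optimize import nnls

def nnls_sum_to_one(C, d, weight=1e6):
    """Non-negative least squares with a soft sum-to-one constraint.

    Appends the row weight * [1, 1, ..., 1] to C and the value `weight`
    to d, so any violation of sum(x) == 1 is penalised heavily.
    """
    A = np.vstack([C, weight * np.ones(C.shape[1])])
    b = np.append(d, weight)
    x, _ = nnls(A, b)
    return x
```

Note that nnls is a deterministic active-set algorithm, so getting the same solution on every run is expected behaviour, not a bug.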

R script - least squares solution to the following [duplicate]

Submitted by 本小妞迷上赌 on 2020-01-04 13:23:21
Question: This question already has answers here: Closed 8 years ago. Possible duplicate: Finding where two linear fits intersect in R. Given some points on a graph (usually only about 6 or 7 points), I need to find a best-fit solution consisting of the following: two straight lines; the lines must intersect; the intersection point (the x value) must lie between two values I specify (such as xLow and xHigh). How would I do this using nls (or something better)? If there are multiple best …
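One way to see the structure (sketched here in Python rather than R): two intersecting lines form the continuous piecewise model y = a + b·x + c·max(0, x − x0), which is linear for any fixed breakpoint x0, so scanning x0 over [xLow, xHigh] and keeping the best least-squares fit handles the interval constraint directly. Function and variable names below are illustrative:

```python
import numpy as np

def fit_two_lines(x, y, x_low, x_high, n_grid=201):
    """Fit y = a + b*x + c*max(0, x - x0) with x0 constrained to
    [x_low, x_high]. For each candidate x0 the model is linear, so
    scan a grid of breakpoints and keep the lowest-SSE fit."""
    best = None
    for x0 in np.linspace(x_low, x_high, n_grid):
        A = np.c_[np.ones_like(x), x, np.maximum(0.0, x - x0)]
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        sse = ((A @ coef - y) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, x0, coef)
    return best[1], best[2]   # breakpoint, (a, b, c)
```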

How to fit a circle to a set of points with a constrained radius?

Submitted by 一曲冷凌霜 on 2020-01-04 05:20:43
Question: I have a set of points that represent a small arc of a circle. The current code fits a circle to these points using linear least squares:

```cpp
void fit_circle(const std::vector<cv::Point2d> &pnts, cv::Point2d &centre, double &radius)
{
    int cols = 3;
    cv::Mat X( static_cast<int>(pnts.size()), cols, CV_64F );
    cv::Mat Y( static_cast<int>(pnts.size()), 1, CV_64F );
    cv::Mat C;
    if (int(pnts.size()) >= 3 ) {
        for (size_t i = 0; i < pnts.size(); i++) {
            X.at<double>(static_cast<int>(i), 0) = 2 * pnts[i].x;
            X …
```
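The snippet appears to be the Kasa algebraic fit: solve [2x 2y 1][cx cy k]^T = x² + y² with k = r² − cx² − cy². The same fit as a NumPy sketch of what the truncated code is doing, before any radius constraint is added:

```python
import numpy as np

def kasa_circle_fit(pts):
    """Kasa algebraic circle fit (unconstrained radius).

    Solves [2x 2y 1] @ [cx cy k]^T = x^2 + y^2,
    where k = r^2 - cx^2 - cy^2, then recovers r.
    """
    A = np.c_[2 * pts, np.ones(len(pts))]
    f = (pts ** 2).sum(axis=1)
    (cx, cy, k), *_ = np.linalg.lstsq(A, f, rcond=None)
    return np.array([cx, cy]), np.sqrt(k + cx ** 2 + cy ** 2)
```

To constrain the radius, one option is to drop r as a free parameter and search only over the centre, e.g. along the normal of the chord through the arc's endpoints; for a short arc this tends to be far better conditioned than the unconstrained fit.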

Fitting SIR model based on least squares

Submitted by ▼魔方 西西 on 2020-01-03 02:46:06
Question: I would like to optimize the fitting of an SIR model. If I fit the SIR model with only 60 data points, I get a "good" result: the fitted model curve is close to the data points up to t = 40. My question is, how can I get a better fit, perhaps based on all the data points? ydata = ['1e-06', '1.49920166169172e-06', '2.24595472686361e-06', '3.36377954575331e-06', '5.03793663882291e-06', '7.54533628058909e-06', '1.13006564683911e-05', '1.69249500601052e-05', '2.53483161761933e-05', '3…
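A standard approach is to integrate the SIR ODEs with odeint and let scipy.optimize.least_squares adjust the transmission and recovery rates against the infected-fraction data. A hedged sketch with synthetic data (the parameter values, initial state, and starting guess are invented for illustration):

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import least_squares

def sir(y, t, beta, gamma):
    """SIR right-hand side: s' = -beta*s*i, i' = beta*s*i - gamma*i, r' = gamma*i."""
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def fit_sir(t, i_data, y0, p0=(0.3, 0.1)):
    """Fit (beta, gamma) to observed infected fractions by least squares."""
    def residuals(p):
        sol = odeint(sir, y0, t, args=tuple(p))
        return sol[:, 1] - i_data
    return least_squares(residuals, p0).x
```

Fitting against all data points rather than the first 60 is then just a matter of passing the full t and i_data arrays.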

Orthogonal regression fitting in scipy least squares method

Submitted by ⅰ亾dé卋堺 on 2020-01-01 04:18:32
Question: The leastsq method in the scipy library fits a curve to data. It assumes that the Y values depend on some X argument, and it minimises the distance between the curve and each data point along the Y axis only (dy). But what if I need to minimise the distance along both axes (dy and dx)? Is there a way to implement this calculation? Here is a sample of code using the one-axis calculation: import numpy as np from scipy.optimize import leastsq xData = [some data...] yData = …
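scipy ships an orthogonal-distance-regression package, scipy.odr, which minimises perpendicular distances (accounting for errors in both x and y) rather than vertical ones. A minimal sketch for a straight line:

```python
import numpy as np
from scipy import odr

def fit_line_odr(x, y):
    """Orthogonal distance regression for y = B[0]*x + B[1] using scipy.odr."""
    model = odr.Model(lambda B, x: B[0] * x + B[1])
    data = odr.Data(x, y)
    output = odr.ODR(data, model, beta0=[1.0, 0.0]).run()
    return output.beta   # (slope, intercept)
```

The same pattern works for any model function of the form f(B, x); only the lambda and the length of beta0 change.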

Python: two-curve gaussian fitting with non-linear least-squares

Submitted by 旧城冷巷雨未停 on 2019-12-29 03:05:08
Question: My knowledge of maths is limited, which is probably why I am stuck. I have a spectrum to which I am trying to fit two Gaussian peaks. I can fit the largest peak, but I cannot fit the smallest one. I understand that I need to sum the Gaussian function over the two peaks, but I do not know where I have gone wrong. An image of my current output is shown: the blue line is my data and the green line is my current fit. There is a shoulder to the left of the main peak in my data, which I am …
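The usual fix is to fit the sum of the two Gaussians as a single model with curve_fit, giving each peak its own initial guesses so the optimiser does not collapse both components onto the main peak. A sketch with synthetic data (the peak parameters and guesses are invented, not the asker's spectrum):

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    """Sum of two Gaussian peaks: amplitudes a, centres m, widths s."""
    g = lambda a, m, s: a * np.exp(-(x - m) ** 2 / (2 * s ** 2))
    return g(a1, m1, s1) + g(a2, m2, s2)

# Synthetic "spectrum": a main peak plus a smaller one to its right.
x = np.linspace(-5.0, 10.0, 300)
y = two_gaussians(x, 3.0, 0.0, 1.0, 1.0, 4.0, 1.5)

# One initial guess per peak keeps the two components separated.
p0 = [2.5, -0.5, 1.2, 0.8, 4.5, 1.0]
popt, _ = curve_fit(two_gaussians, x, y, p0=p0)
```

With noisy data the same call works; only the tolerance on the recovered parameters loosens.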