polynomials

R: Translate a model with orthogonal polynomials to a function using QR decomposition

一世执手 · Submitted on 2019-11-30 16:59:25
I'm using R to create a linear regression model with orthogonal polynomial terms. My model is:

fit <- lm(log(UFB2_BITRATE_REF3) ~ poly(QPB2_REF3, 2) + B2DBSA_REF3, data = UFB)

UFB2_FPS_REF1 = 29.98 27.65 26.30 25.69 24.68 23.07 22.96 22.16 21.51 20.75 20.75 26.15 24.59 22.91 21.02 19.59 18.80 18.21 17.07 16.74 15.98 15.80
QPB2_REF1 = 36 34 32 30 28 26 24 22 20 18 16 36 34 32 30 28 26 24 22 20 18 16
B2DBSA_REF1 = DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DOFFSOFF DONSON DONSON DONSON DONSON DONSON DONSON DONSON DONSON DONSON DONSON DONSON
Levels: DOFFSOFF

Convolution of NumPy arrays of arbitrary dimension for Cauchy product of multivariate power series

∥☆過路亽.° · Submitted on 2019-11-29 18:08:24
I'm trying to implement the idea I suggested here, for the Cauchy product of multivariate finite power series (i.e. polynomials) represented as NumPy ndarrays. numpy.convolve does the job for 1D arrays, but to the best of my knowledge there is no implementation of convolution for arrays of arbitrary dimension. In the link above, I suggested an equation for the convolution of two n-dimensional arrays Phi of shape P=[p1,...,pn] and Psi of shape Q=[q1,...,qn], where the omegas are the elements of the n-dimensional array Omega of shape O=P+Q-1, and <A,B>_F is the generalization of
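One way to sketch the n-dimensional case with plain NumPy is zero-padded FFT multiplication, which implements exactly this convolution for any number of axes. This is an illustrative implementation, not the poster's own; the integer rounding at the end is an assumption (drop it for float coefficients):

```python
import numpy as np

def cauchy_product(phi, psi):
    """Convolve two n-dimensional coefficient arrays: the Cauchy product
    of the polynomials they represent. Output shape is P + Q - 1 per axis."""
    shape = [p + q - 1 for p, q in zip(phi.shape, psi.shape)]
    spectrum = np.fft.rfftn(phi, shape) * np.fft.rfftn(psi, shape)
    out = np.fft.irfftn(spectrum, shape)
    return np.rint(out).astype(int)  # assumes integer coefficients

# axis 0 = powers of x, axis 1 = powers of y: phi represents (1 + x)(1 + y)
phi = np.array([[1, 1], [1, 1]])
print(cauchy_product(phi, phi))  # (1+x)^2 (1+y)^2 -> [[1 2 1], [2 4 2], [1 2 1]]
```

If SciPy is available, scipy.signal.fftconvolve already handles arbitrary-dimensional arrays and does the same job.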

How to calculate sum of two polynomials?

落花浮王杯 · Submitted on 2019-11-29 18:03:34
For instance 3x^4 - 17x^2 - 3x + 5. Each term of the polynomial can be represented as a pair of integers (coefficient, exponent). The polynomial itself is then a list of such pairs, like [(3,4), (-17,2), (-3,1), (5,0)] for the polynomial shown. The zero polynomial, 0, is represented as the empty list [], since it has no terms with nonzero coefficients. I want to write two functions to add and multiply two input polynomials in the same (coefficient, exponent) tuple representation:

addpoly(p1, p2)
multpoly(p1, p2)

Test cases: addpoly([(4,3),(3,0)], [(-4,3),(2,1)]) should give [(2,1), (3,0)]
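A minimal sketch of both functions under that representation, accumulating coefficients by exponent in a dict and dropping terms that cancel to zero (the exponent-descending output order matches the test case but is otherwise an assumption):

```python
from collections import defaultdict

def addpoly(p1, p2):
    # merge coefficients of equal exponents; drop terms that cancel to zero
    acc = defaultdict(int)
    for c, e in p1 + p2:
        acc[e] += c
    return sorted(((c, e) for e, c in acc.items() if c != 0), key=lambda t: -t[1])

def multpoly(p1, p2):
    # distribute every term over every term: coefficients multiply, exponents add
    acc = defaultdict(int)
    for c1, e1 in p1:
        for c2, e2 in p2:
            acc[e1 + e2] += c1 * c2
    return sorted(((c, e) for e, c in acc.items() if c != 0), key=lambda t: -t[1])

print(addpoly([(4, 3), (3, 0)], [(-4, 3), (2, 1)]))  # [(2, 1), (3, 0)]
```

The zero polynomial falls out naturally: if every coefficient cancels, the filtered list is empty, i.e. [].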

Which is the simplest way to make a polynomial regression with sklearn?

前提是你 · Submitted on 2019-11-29 16:13:36
I have some data that doesn't fit a linear regression; in fact it should fit 'exactly' a quadratic function: P = R*I**2. I'm doing this:

model = sklearn.linear_model.LinearRegression()
X = alambres[alambre]['mediciones'][x].reshape(-1, 1)
Y = alambres[alambre]['mediciones'][y].reshape(-1, 1)
model.fit(X, Y)

Is there any chance to solve it by doing something like model.fit([X, X**2], Y)?

You can use numpy's polyfit.

import numpy as np
from matplotlib import pyplot as plt
X = np.linspace(0, 100, 50)
Y = 23.24 + 2.2*X + 0.24*(X**2) + 10*np.random.randn(50)  # added some noise
coefs = np.polyfit(X, Y, 2)
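The model.fit([X, X**2], Y) idea does work once the powers are stacked as columns of one design matrix. A sketch with plain NumPy's least squares (the true coefficients here mirror the polyfit example and are otherwise arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 100, 50)
y = 23.24 + 2.2 * x + 0.24 * x**2 + 10 * rng.standard_normal(50)

# columns [1, x, x^2]: the manual analogue of model.fit([X, X**2], Y)
A = np.column_stack([np.ones_like(x), x, x**2])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
# coefs holds [intercept, linear, quadratic] estimates
```

With sklearn itself, the idiomatic route is a pipeline of sklearn.preprocessing.PolynomialFeatures(degree=2) followed by LinearRegression, which builds the same design matrix for you.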

Function for polynomials of arbitrary order (symbolic method preferred)

≯℡__Kan透↙ · Submitted on 2019-11-28 12:26:00
I've found polynomial coefficients from my data:

R <- c(0.256, 0.512, 0.768, 1.024, 1.28, 1.437, 1.594, 1.72, 1.846, 1.972, 2.098, 2.4029)
Ic <- c(1.78, 1.71, 1.57, 1.44, 1.25, 1.02, 0.87, 0.68, 0.54, 0.38, 0.26, 0.17)
NN <- 3
ft <- lm(Ic ~ poly(R, NN, raw = TRUE))
pc <- coef(ft)

So I can create a polynomial function:

f1 <- function(x) pc[1] + pc[2] * x + pc[3] * x ^ 2 + pc[4] * x ^ 3

and, for example, take a derivative:

g1 <- Deriv(f1)

How can I create a universal function, so that it doesn't have to be rewritten for every new polynomial degree NN? My original answer may not be what you really want, as it was

In R formulas, why do I have to use the I() function on power terms, like y ~ I(x^3)

一曲冷凌霜 · Submitted on 2019-11-27 11:07:05
I'm trying to get my head around the use of the tilde operator and its associated functions. My first question: why does I() need to be used to specify arithmetic operators? For example, these two plots generate different results (the former a straight line, the latter the expected curve):

x <- c(1:100)
y <- seq(0.1, 10, 0.1)
plot(y ~ x^3)
plot(y ~ I(x^3))

Further, both of the following plots generate the expected result:

plot(x^3, y)
plot(I(x^3), y)

My second question: perhaps the examples I've been using are too simple, but I don't understand where ~ should actually be used. The issue

What does the capital letter “I” in R linear regression formula mean?

扶醉桌前 · Submitted on 2019-11-27 07:28:23
I haven't been able to find an answer to this question, largely because googling anything with a standalone letter (like "I") causes issues. What does the "I" do in a model like this?

data(rock)
lm(area ~ I(peri - mean(peri)), data = rock)

Considering that the following does NOT work:

lm(area ~ (peri - mean(peri)), data = rock)

and that this does work:

rock$peri - mean(rock$peri)

Any keywords on how to research this myself would also be very helpful.

I() isolates, or insulates, the contents of I( ... ) from the gaze of R's formula-parsing code. It allows the standard R operators to work as they