interpolation

fastest way to use numpy.interp on a 2-D array

穿精又带淫゛_ submitted on 2019-11-30 15:22:36
I have the following problem. I am trying to find the fastest way to use numpy's interpolation method on a 2-D array of x-coordinates.

```python
import numpy as np
xp = [0.0, 0.25, 0.5, 0.75, 1.0]
np.random.seed(100)
x = np.random.rand(10)
fp = np.random.rand(10, 5)
```

So basically, xp would be the x-coordinates of the data points, x would be an array containing the x-coordinates of the values I want to interpolate, and fp would be a 2-D array containing the y-coordinates of the data points.

```
>>> xp
[0.0, 0.25, 0.5, 0.75, 1.0]
>>> x
array([ 0.54340494, 0.27836939, 0.42451759, 0.84477613, 0.00471886, 0.12156912, 0
```
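Assuming each row of fp pairs with the corresponding element of x (as the loop baseline below does; this pairing is my reading of the excerpt, not stated in it), the per-row np.interp calls can be replaced by a single vectorized pass with searchsorted plus fancy indexing. A sketch:

```python
import numpy as np

xp = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
np.random.seed(100)
x = np.random.rand(10)
fp = np.random.rand(10, 5)

# Loop baseline: one np.interp call per row of fp.
loop = np.array([np.interp(x[i], xp, fp[i]) for i in range(len(x))])

# Vectorized equivalent: locate each x inside xp once, then blend the
# two bracketing columns of fp using fancy indexing.
j = np.clip(np.searchsorted(xp, x) - 1, 0, len(xp) - 2)
w = (x - xp[j]) / (xp[j + 1] - xp[j])   # fractional position within each segment
rows = np.arange(len(x))
vec = (1 - w) * fp[rows, j] + w * fp[rows, j + 1]

assert np.allclose(loop, vec)
```

The vectorized form does one binary search over all queries instead of ten separate interp calls, which is where the speed-up comes from on larger inputs.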

What is the best drop-in replacement for numpy.interp if I want the null interpolation (piecewise constant)?

大城市里の小女人 submitted on 2019-11-30 14:17:51
numpy.interp is very convenient and relatively fast. In certain contexts I'd like to compare its output against a non-interpolated variant where the sparse values are propagated (in the "denser" output) and the result is piecewise constant between the sparse inputs. The function I want could also be called a "sparse -> dense" converter that copies the latest sparse value forward until it finds a later value (a kind of null interpolation, as if no time/distance had elapsed since the earlier value). Unfortunately, it's not easy to tweak the source for numpy.interp because it's just a wrapper around
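A drop-in with np.interp's (x, xp, fp) signature can be sketched with np.searchsorted; interp_previous is my hypothetical name, not an existing NumPy function:

```python
import numpy as np

def interp_previous(x, xp, fp):
    """Piecewise-constant 'previous value' analogue of np.interp:
    each query gets the fp value of the last xp that is <= x."""
    x, xp, fp = np.asarray(x), np.asarray(xp), np.asarray(fp)
    # Index of the last sparse sample at or before each query point;
    # queries before xp[0] clamp to the first sample, like np.interp does.
    idx = np.clip(np.searchsorted(xp, x, side="right") - 1, 0, len(xp) - 1)
    return fp[idx]

xp = np.array([0.0, 1.0, 2.5])
fp = np.array([10.0, 20.0, 30.0])
print(interp_previous([0.5, 1.0, 2.4, 3.0], xp, fp))  # [10. 20. 20. 30.]
```

scipy.interpolate.interp1d with kind='previous' offers the same behavior, but the searchsorted version has no SciPy dependency.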

Array interpolation

会有一股神秘感。 submitted on 2019-11-30 14:14:41
Question: Before I start, please accept my apologies that I'm not a mathematician and don't really know the proper names for what I'm trying to do... ;) Pointers to any plain-English explanations that might help would be most appreciated (as I'm purely Googling at the moment based upon what I think the solution might be). If I have a multi-dimensional array of source values and wanted to upscale that array by a factor of n, I think that what I'd need to use is bicubic interpolation. Certainly the image
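For the 2-D case, bicubic upscaling is available off the shelf; a minimal sketch using scipy.ndimage.zoom (my choice of tool, not mentioned in the excerpt):

```python
import numpy as np
from scipy.ndimage import zoom

src = np.arange(16, dtype=float).reshape(4, 4)  # toy source array
n = 3                                           # upscale factor

# order=3 selects cubic spline interpolation, i.e. "bicubic" in 2-D.
up = zoom(src, n, order=3)
print(src.shape, "->", up.shape)  # (4, 4) -> (12, 12)
```

order=1 gives bilinear and order=0 gives nearest-neighbour, so the same call covers the simpler schemes too.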

How do I integrate two 1-D data arrays in Python?

China☆狼群 submitted on 2019-11-30 13:57:50
Question: I have two tabulated data arrays, x and y, and I don't know the function that generated the data. I want to be able to evaluate the integral of the line produced by the data at any point along the x-axis. Rather than interpolating a piecewise function to the data and then attempting to integrate that, which I am having trouble with, is there something I can use that will simply provide the integral by evaluating the arrays? When searching for solutions, I have seen references to IPython and
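Trapezoidal integration works directly on tabulated arrays, with no explicit fitting step; a sketch using scipy.integrate (cumulative_trapezoid requires SciPy >= 1.6, where it replaced the older cumtrapz name; the sine data is a stand-in for the question's arrays):

```python
import numpy as np
from scipy.integrate import trapezoid, cumulative_trapezoid

x = np.linspace(0.0, np.pi, 200)
y = np.sin(x)  # stand-in for the tabulated data

total = trapezoid(y, x)                            # integral over the full range
running = cumulative_trapezoid(y, x, initial=0.0)  # integral from x[0] to each x[i]

# The integral "at any point" is then an interpolation into `running`:
at_half_pi = np.interp(np.pi / 2, x, running)
print(round(total, 4))       # close to 2.0, the exact integral of sin on [0, pi]
print(round(at_half_pi, 4))  # close to 1.0
```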

Python MemoryError in Scipy Radial Basis Function (scipy.interpolate.rbf)

只谈情不闲聊 submitted on 2019-11-30 13:52:18
I'm trying to interpolate a not-so-large (~10,000 samples) point cloud representing a 2-D surface, using SciPy's Radial Basis Function (Rbf). I got some good results, but with my last datasets I'm consistently getting MemoryError, even though the error appears almost instantly during execution (the RAM is obviously not being eaten up). I decided to hack a copy of the rbf.py file from SciPy, starting by filling it up with some print statements, which have been very useful. By decomposing the _euclidean_norm method line by line, like this:

```python
def _euclidean_norm(self, x1, x2):
    d = x1 - x2
    s = d**2
    su
```
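A likely culprit in this kind of failure is the broadcast intermediate `d = x1 - x2`, which for Rbf's (ndim, n1, n2)-shaped operands must be allocated in one piece before being reduced. A lower-memory sketch computes the distance matrix directly with scipy.spatial.distance.cdist; euclidean_norm_cdist is my hypothetical replacement, not part of SciPy's Rbf:

```python
import numpy as np
from scipy.spatial.distance import cdist

def euclidean_norm_cdist(x1, x2):
    """Distance matrix without the (ndim, n1, n2) broadcast intermediate.

    Rbf-style inputs have shape (ndim, n); cdist expects (n, ndim)."""
    return cdist(x1.T, x2.T)

rng = np.random.default_rng(0)
a = rng.random((2, 5))  # 2-D points, shape (ndim, n)
b = rng.random((2, 7))

# Brute-force broadcast version, for comparison: this is the allocation
# pattern that blows up when n gets large.
brute = np.sqrt(((a[:, :, None] - b[:, None, :]) ** 2).sum(axis=0))
assert np.allclose(euclidean_norm_cdist(a, b), brute)
```

cdist only ever allocates the final (n1, n2) result, which for ~10,000 points is still large but a factor of ndim smaller than the broadcast path.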

What do the different values of the kind argument mean in scipy.interpolate.interp1d?

一个人想着一个人 submitted on 2019-11-30 12:45:51
The SciPy documentation explains that interp1d's kind argument can take the values 'linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic'. The last three are spline orders and 'linear' is self-explanatory. What do 'nearest' and 'zero' do?

'nearest' "snaps" to the nearest data point. 'zero' is a zero-order spline; its value at any point is the last raw value seen. 'linear' performs linear interpolation and 'slinear' uses a first-order spline. They use different code and can produce similar but subtly different results. 'quadratic' uses second-order spline interpolation. 'cubic' uses third
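The difference between these kinds is easy to see side by side; a small sketch with made-up sample data:

```python
import numpy as np
from scipy.interpolate import interp1d

xp = np.array([0.0, 1.0, 2.0, 3.0])
fp = np.array([0.0, 10.0, 20.0, 30.0])
x = np.array([0.4, 0.6, 1.7])

nearest = interp1d(xp, fp, kind="nearest")(x)  # snaps to the closest sample
zero    = interp1d(xp, fp, kind="zero")(x)     # holds the previous sample
lin     = interp1d(xp, fp, kind="slinear")(x)  # first-order spline

print(nearest)  # [ 0. 10. 20.]
print(zero)     # [ 0.  0. 10.]
print(lin)      # [ 4.  6. 17.]
```

Note how 'nearest' switches value at the midpoint between samples, while 'zero' only switches when it passes a sample.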

UIImageView scaling/interpolation

梦想的初衷 submitted on 2019-11-30 12:21:39
I have a small iPhone app that I am working on, and I am displaying an image with a UIImageView. I am scaling it up using the Aspect Fit mode. I would like to have the image scale up with no interpolation/smoothing (I want it to look pixellated). Is there any way I can change this behavior? A slightly more general question would be: can I implement my own scaling algorithm, or are there other built-in ones that I can select?

You would need to set the magnificationFilter property on the view's layer to the nearest-neighbour filter:

```objc
[[view layer] setMagnificationFilter:kCAFilterNearest];
```

Make sure

How can I escape meta-characters when I interpolate a variable in Perl's match operator?

天涯浪子 submitted on 2019-11-30 11:57:17
Suppose I have a file containing lines I'm trying to match against:

```
foo
quux
bar
```

In my code, I have another array:

```
foo
baz
quux
```

Let's say we iterate through the file, calling each element $word, and the internal list we are checking against, @arr.

```perl
if( grep {$_ =~ m/^$word$/i} @arr)
```

This works correctly, but in the somewhat possible case where we have a test case of fo. in the file, the . operates as a wildcard in the regex, and fo. then matches foo, which is not acceptable. This is of course because Perl is interpolating the variable into a regex. The question: How do I force Perl

Better way than if else if else… for linear interpolation

一世执手 submitted on 2019-11-30 11:51:31
The question is easy. Let's say you have the function

```cpp
double interpolate(double x);
```

and you have a table that maps known x -> y values, for example:

```
 5  15
 7  18
10  22
```

(Note: real tables are bigger, of course; this is just an example.) So for 8 you would return 18 + ((8-7)/(10-7))*(22-18) = 19.3333333. One cool way I found is http://www.bnikolic.co.uk/blog/cpp-map-interp.html (long story short, it uses std::map with key = x, value = y for the x -> y data pairs). If somebody asks what the "if else if else" way in the title is, it is basically:

```cpp
if ((x >= 5) && (x <= 7)) {
    // interpolate
} else if ((x >= 7) && (x <= 10)) {
    // interpolate
}
```

So is there a more
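The std::map trick is really just binary search over a sorted table; the same lookup can be sketched compactly (here in Python with bisect for brevity; the C++ std::map::upper_bound idiom is directly analogous):

```python
import bisect

# Sorted (x, y) table from the question.
xs = [5.0, 7.0, 10.0]
ys = [15.0, 18.0, 22.0]

def interpolate(x):
    """Linear interpolation via binary search; clamps outside the table."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, x)  # first table x strictly greater than x
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

print(interpolate(8.0))  # 19.333... as in the worked example
```

Unlike the if/else chain, this stays O(log n) and needs no code changes when rows are added to the table.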

Speeding up an interpolation exercise

风流意气都作罢 submitted on 2019-11-30 10:29:22
I'm running about 45,000 local linear regressions (essentially) on about 1.2 million observations, so I'd appreciate some help trying to speed things up, because I'm impatient. I'm basically trying to construct year-by-position wage contracts, the function wage(experience given firm, year, position), for a bunch of firms. Here's the basic structure of the data set I'm working with:

```
> wages
         firm year position exp salary
      1: 0007 1996        4   1  20029
      2: 0007 1996        4   1  23502
      3: 0007 1996        4   1  22105
      4: 0007 1996        4   2  23124
      5: 0007 1996        4   2  22700
     ---
1175141:  994 2012        5   2  47098
1175142:  994 2012        5   2  45488
1175143
```