interpolation

How to interpolate an Excel file of unordered coordinates to a grid

假装没事ソ submitted on 2019-12-02 05:37:33
I have CSV files of 1200 rows x 3 columns. The number of rows can vary from as few as 500 to as many as 5000, but the number of columns stays the same. I want to create a feature vector from these files that maintains a consistent cell/vector length, and thus helps in finding the distance between these vectors.

FILE_1 (columns A, B, C):
    (267.09669678867186, 6.3664069175720197, 1257325.5809999991),
    (368.24070923984374, 9.0808353424072301, 49603.662999999884),
    (324.21470826328124, 11.489830970764199, 244391.04699999979),
    (514.33452027500005, 7.5162401199340803, 56322.424999999988),
    (386.19673340976561, 9…
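A common way to get fixed-length vectors from point sets of different sizes is to resample every file onto one shared grid. A minimal sketch along those lines, assuming columns A and B are coordinates and C is the value (the grid extent and resolution below are made up):

    import numpy as np
    from scipy.interpolate import griddata

    def to_feature_vector(points, values, nx=32, ny=32):
        # One shared grid for all files, so every file yields a vector
        # of the same length (nx * ny) regardless of its row count.
        gx, gy = np.meshgrid(np.linspace(0, 600, nx), np.linspace(0, 15, ny))
        grid = griddata(points, values, (gx, gy), method="linear", fill_value=0.0)
        return grid.ravel()

    # Hypothetical file: (A, B) as coordinates, C as the value at each point
    data = np.array([[267.10, 6.37, 1257325.58],
                     [368.24, 9.08, 49603.66],
                     [324.21, 11.49, 244391.05],
                     [514.33, 7.52, 56322.42]])
    vec = to_feature_vector(data[:, :2], data[:, 2])
    print(vec.shape)  # (1024,) for every file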

Storing the weights used by scipy griddata for re-use

旧时模样 submitted on 2019-12-02 05:26:32
I am trying to interpolate data from an unstructured mesh M1 to another unstructured mesh M2. For this, scipy.interpolate.griddata seems good. However, I will need to interpolate many times from M1 to M2, changing only the data, not the meshes. I guess that, internally, scipy.interpolate.griddata computes some weight coefficients when interpolating from M1 to M2, and that this may be one of the expensive parts of the computation. Therefore, I would like to avoid recomputing these weights each time. Is there a way to do this, i.e., to interpolate many times from one unstructured mesh to…
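griddata does not expose its internals, but for linear interpolation you can reproduce them and keep the expensive part: triangulate M1 once with scipy.spatial.Delaunay, locate the M2 points, and store vertex indices plus barycentric weights. A sketch for 2-D meshes (points falling outside M1's hull get simplex -1 and would need masking):

    import numpy as np
    from scipy.spatial import Delaunay

    def precompute(m1_xy, m2_xy):
        # Expensive, mesh-only step: do it once.
        tri = Delaunay(m1_xy)
        simplex = tri.find_simplex(m2_xy)
        verts = np.take(tri.simplices, simplex, axis=0)
        temp = np.take(tri.transform, simplex, axis=0)
        delta = m2_xy - temp[:, 2]
        bary = np.einsum("njk,nk->nj", temp[:, :2, :], delta)
        wts = np.hstack([bary, 1.0 - bary.sum(axis=1, keepdims=True)])
        return verts, wts

    def interpolate(values, verts, wts):
        # Cheap, data-only step: reuse for every new data set.
        return np.einsum("nj,nj->n", np.take(values, verts), wts)

    m1 = np.random.rand(1000, 2)
    m2 = np.random.rand(500, 2)
    verts, wts = precompute(m1, m2)
    for _ in range(3):                      # new data, same meshes
        print(interpolate(np.random.rand(1000), verts, wts)[:3])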

How to save and load spline interpolation functions in R?

坚强是说给别人听的谎言 submitted on 2019-12-02 05:08:05
I need to create thousands and thousands of interpolation splines, each based on 5 pairs of (x, y) values. I would like to save them in a database (or a CSV file). How can I export/import them, say in a text format or as an array of real parameters, to rebuild each function when I need it? If you are using the splinefun function from the R base package stats, it is very easy to export its construction information:

    set.seed(0)
    xk <- c(0, 1, 2)
    yk <- round(runif(3), 2)
    f <- splinefun(xk, yk, "natural")  ## natural cubic spline
    construction_info <- environment(f)$z
    str(construction_info)
    # $ method: …
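The same trick is available if the splines are built in Python instead: scipy.interpolate.splrep returns a spline as a plain (knots, coefficients, degree) tuple, which serializes directly. A hedged Python analogue, not part of the original R question:

    import pickle
    import numpy as np
    from scipy.interpolate import splev, splrep

    # 5 (x, y) pairs per spline, as in the question (values made up)
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([0.90, 0.27, 0.37, 0.57, 0.91])

    tck = splrep(x, y)             # (knots, coefficients, degree)
    blob = pickle.dumps(tck)       # store this blob in the database

    restored = pickle.loads(blob)  # rebuild on demand
    print(splev(1.5, restored))    # evaluate the restored spline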

Form a monthly series from a quarterly series

做~自己de王妃 submitted on 2019-12-02 04:57:55
Assume that we have quarterly GDP-change data like the following:

    Country
    1999Q3   0.01
    1999Q4   0.01
    2000Q1   0.02
    2000Q2   0.00
    2000Q3  -0.01

Now, I would like to turn this into a monthly series based on, e.g., the mean of the previous two quarters, as one measure to represent the economic conditions. That is, with the above data I would like to produce the following:

    Country
    2000-01   0.01
    2000-02   0.01
    2000-03   0.01
    2000-04   0.015
    2000-05   0.015
    2000-06   0.015
    2000-07   0.01
    2000-08   0.01
    2000-09   0.01
    2000-10  -0.005
    2000-11  -0.005
    2000-12  -0.005

This is so that I can run regressions with other monthly series.
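The question names no language, but as a sketch, here is one way to produce exactly that series with pandas: average each pair of consecutive quarters, assign the result to the following quarter, and repeat each quarterly value over its three months.

    import pandas as pd

    # Quarterly GDP-change data from the question
    q = pd.Series(
        [0.01, 0.01, 0.02, 0.00, -0.01],
        index=pd.PeriodIndex(["1999Q3", "1999Q4", "2000Q1",
                              "2000Q2", "2000Q3"], freq="Q"),
    )

    # Mean of each pair of consecutive quarters, applied to the *next*
    # quarter: the monthly value in 2000Q1 is mean(1999Q3, 1999Q4), etc.
    two_q = q.rolling(2).mean().dropna()
    two_q.index = two_q.index + 1

    # Spread each quarterly value over its three months.
    months = pd.period_range(two_q.index[0].asfreq("M", how="start"),
                             two_q.index[-1].asfreq("M", how="end"),
                             freq="M")
    monthly = pd.Series(two_q.to_numpy().repeat(3), index=months)
    print(monthly)   # 2000-01 .. 2000-12, matching the desired output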

Time Series Interpolation

妖精的绣舞 submitted on 2019-12-02 03:22:17
I have two series of data (calibration and sample) and am trying to interpolate the calibration data from monthly frequency down to the frequency of the sample, which varies irregularly between one-minute and one-second intervals. I tried this (Interpolating timeseries) and here's my code:

    require(zoo)
    zc <- zoo(calib$MW2, calib$Date)
    zs <- zoo(sample$MW.2, sample$DateMW.2)
    z <- merge(zc, zs)
    zc <- zoo(calib$MW2, calib$Date)
    zs <- zoo(sample$MW.2, sample$DateMW.2)
    # "merge" gets data frames only
    zc <- data.frame(zc)
    zs <- …
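For comparison, the core of this task is a one-liner in Python: convert both sets of timestamps to numbers and linearly interpolate the monthly calibration at the irregular sample times. A sketch with hypothetical stand-ins for the question's calib and sample frames:

    import numpy as np
    import pandas as pd

    calib = pd.DataFrame({
        "Date": pd.to_datetime(["2019-01-31", "2019-02-28", "2019-03-31"]),
        "MW2": [1.0, 1.4, 1.1],
    })
    sample = pd.DataFrame({
        "DateMW.2": pd.to_datetime(["2019-02-03 12:00:05",
                                    "2019-02-03 12:00:06",
                                    "2019-03-10 09:15:00"]),
    })

    # Nanoseconds since epoch make both time axes plain numbers
    t_cal = calib["Date"].astype("int64")
    t_smp = sample["DateMW.2"].astype("int64")
    sample["MW2_interp"] = np.interp(t_smp, t_cal, calib["MW2"])
    print(sample)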

Interpolating 3D Coordinates between known missing time intervals

て烟熏妆下的殇ゞ submitted on 2019-12-02 03:07:38
The data is a path in space. I have 3D location data (x, y, z) and the time each location point was recorded. The x, y, and z coordinates are the point locations of something traveling through 3D space; the time values are the times (beginning at 0) at which each point was recorded:

    x    y    z    time(s)
    0.1  2.2  3.3  0
    2.4  2.4  4.2  0.3
    4.5  2.5  1.8  0.6

I will inevitably miss some recording events (this is known and accepted as true), and the data stream will continue at a different time interval:

    x    y    z    time(s)
    0.1  2.2  3.3  0
    2.4  2.4  4.2  0.3
    // missing x,y,z data point at time 0.6
    // missing x,y,z data point at time…
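Linear interpolation of each coordinate against the time axis recovers the missing samples. A minimal numpy sketch built on the question's points (the 0.9 s sample is invented so the gap has a neighbour on each side):

    import numpy as np

    # Recorded path with a gap at t = 0.6 s
    t = np.array([0.0, 0.3, 0.9])
    xyz = np.array([[0.1, 2.2, 3.3],
                    [2.4, 2.4, 4.2],
                    [4.5, 2.5, 1.8]])

    t_full = np.arange(0.0, 1.2, 0.3)   # includes the missing 0.6 s mark
    xyz_full = np.column_stack(
        [np.interp(t_full, t, xyz[:, i]) for i in range(3)])
    print(np.round(xyz_full, 3))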

Precompute weights for multidimensional linear interpolation

本小妞迷上赌 submitted on 2019-12-01 23:03:06
I have a non-uniform rectangular grid G along D dimensions, a matrix of logical values V on the grid, and a matrix of query data points X. The number of grid points differs across dimensions. I run the interpolation multiple times for the same grid G and query X, but for different values V. The goal is to precompute the indexes and weights for the interpolation and to reuse them, because they are always the same. Here is an example in 2 dimensions, in which I have to compute indexes and values every time within the loop, but I want to compute them only once, before the loop. I keep the data types…
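The 1-D building block is the same in any language: locate each query's left grid index once with a binary search and keep the fractional offset; interpolation then reduces to a weighted gather. A 2-D numpy sketch (all names hypothetical):

    import numpy as np

    def precompute_1d(grid, xq):
        # Lower index and fractional weight for 1-D linear interpolation
        i = np.clip(np.searchsorted(grid, xq) - 1, 0, len(grid) - 2)
        w = (xq - grid[i]) / (grid[i + 1] - grid[i])
        return i, w

    gx = np.array([0.0, 1.0, 3.0, 6.0])   # non-uniform grid, dimension 1
    gy = np.array([0.0, 2.0, 5.0])        # non-uniform grid, dimension 2
    xq = np.array([0.5, 2.0, 4.0])        # query points
    yq = np.array([1.0, 3.0, 4.5])

    ix, wx = precompute_1d(gx, xq)        # once, before the loop
    iy, wy = precompute_1d(gy, yq)

    def interp2(V):                       # inside the loop: only V changes
        return ((1 - wx) * (1 - wy) * V[ix, iy]
                + wx * (1 - wy) * V[ix + 1, iy]
                + (1 - wx) * wy * V[ix, iy + 1]
                + wx * wy * V[ix + 1, iy + 1])

    print(interp2(np.random.rand(len(gx), len(gy))))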

Fast 1D linear np.NaN interpolation over large 3D array

时间秒杀一切 submitted on 2019-12-01 22:12:53
I have a 3D array (z, y, x) with shape=(92, 4800, 4800), where each value along axis 0 represents a different point in time. The acquisition of values in the time domain failed in a few instances, causing some values to be np.NaN. In other instances no values were acquired at all, and all values along z are np.NaN. What is the most efficient way to use linear interpolation to fill np.NaN along axis 0 while disregarding instances where all values are np.NaN? Here is a working example of what I'm doing, which employs pandas' wrapper around scipy.interpolate.interp1d. This takes around 2 seconds per slice on…
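One way to avoid a Python-level loop over pixels is to flatten the two spatial axes and let pandas interpolate every column at once; all-NaN columns have nothing to interpolate from and simply stay NaN. A sketch on a small stand-in array (the full-size stack would need to be processed in column chunks to bound memory):

    import numpy as np
    import pandas as pd

    # Small stand-in for the (92, 4800, 4800) stack
    arr = np.random.rand(92, 50, 50)
    arr[np.random.rand(*arr.shape) < 0.1] = np.nan
    arr[:, 0, 0] = np.nan                  # a pixel with no data at all

    flat = arr.reshape(arr.shape[0], -1)   # time stays on axis 0
    filled = (pd.DataFrame(flat)
              .interpolate(axis=0, limit_direction="both")
              .to_numpy()
              .reshape(arr.shape))

    assert np.isnan(filled[:, 0, 0]).all() # all-NaN pixels left alone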

Spline representation with scipy.interpolate: Poor interpolation for low-amplitude, rapidly oscillating functions

三世轮回 submitted on 2019-12-01 20:05:25
I need to (numerically) calculate the first and second derivatives of a function, for which I've attempted to use both splrep and UnivariateSpline to create splines for the purpose of interpolating the function to take the derivatives. However, there seems to be an inherent problem in the spline representation itself for functions whose magnitude is of order 10^-1 or lower and which oscillate (rapidly). As an example, consider the following code to create a spline representation of the sine function over the interval (0, 6*pi) (so the function oscillates only three times):

    import scipy
    from …
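The usual culprit here is UnivariateSpline's default smoothing factor: with the default s, the permitted residual is large relative to a signal of amplitude 10^-1 or less, so the fit flattens the oscillations away. Forcing an interpolating spline with s=0 is one remedy; a sketch:

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    x = np.linspace(0, 6 * np.pi, 200)
    y = 1e-2 * np.sin(x)                  # low-amplitude, oscillating

    smoothed = UnivariateSpline(x, y)     # default s: may flatten the signal
    exact = UnivariateSpline(x, y, s=0)   # s=0: exact interpolation

    d1, d2 = exact.derivative(1), exact.derivative(2)
    print(d1(0.0), 1e-2 * np.cos(0.0))              # should agree closely
    print(d2(np.pi / 2), -1e-2 * np.sin(np.pi / 2))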

Reduce the size of a large data set by sampling/interpolation to improve chart performance

女生的网名这么多〃 submitted on 2019-12-01 19:06:34
I have a large set (>2000) of time series data that I'd like to display using d3 in the browser. D3 is working great for displaying a subset of the data (~100 points) to the user, but I also want a "context" view (like this) to show the entire data set and allow users to select a subregion to view in detail. However, performance is abysmal when trying to display that many points in d3. I feel like a good solution would be to select a sample of the data and then use some kind of interpolation (spline, polynomial, etc.; this is the part I know how to do) to draw a curve that is reasonably…
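Before fitting a curve, a cheap way to thin the series is bucket averaging: split the samples into as many buckets as the context chart can usefully show and keep one representative point per bucket (the Largest-Triangle-Three-Buckets algorithm is a popular, shape-preserving refinement). A hedged numpy sketch:

    import numpy as np

    def downsample(t, y, n_out):
        # One (mean t, mean y) point per bucket, n_out points in total
        buckets = np.array_split(np.arange(len(t)), n_out)
        return (np.array([t[b].mean() for b in buckets]),
                np.array([y[b].mean() for b in buckets]))

    t = np.linspace(0, 10, 100_000)
    y = np.sin(t) + 0.05 * np.random.randn(t.size)
    tc, yc = downsample(t, y, 500)   # 500 points: plenty for a context view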