scientific-computing

Large plot: ~20 million samples, gigabytes of data

Submitted by 六月ゝ 毕业季﹏ on 2019-11-26 19:25:51
I have a problem: my RAM cannot hold the data I want to plot, although I have sufficient hard-disk space. Is there any way to avoid that "shadowing" of my data set? Concretely, I work in digital signal processing and have to use a high sample rate. My framework (GNU Radio) saves the values in binary to avoid using too much disk space; I unpack them and then need to plot them. The plot must be zoomable and interactive, and that is the issue. Is there any optimization potential here, or other software or a programming language (such as R) that can handle larger data sets?
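One common approach (a sketch, not taken from the original thread) is to keep the full capture on disk with numpy.memmap and pull only a decimated slice of the currently visible range into RAM on each zoom or pan; the file name, dtype, and point budget below are hypothetical.

```python
import numpy as np

def decimate_view(path, dtype, start, stop, max_points=4000):
    """Load a strided subset of samples [start, stop) from a binary file.

    The file stays on disk; memmap only reads the pages actually touched,
    so RAM usage is bounded by max_points, not the file size.
    """
    data = np.memmap(path, dtype=dtype, mode="r")
    stop = min(stop, data.shape[0])
    step = max(1, (stop - start) // max_points)
    # Copy the small strided slice into RAM; the rest never leaves disk.
    return np.asarray(data[start:stop:step])

# Hypothetical usage with a GNU Radio float32 capture:
# view = decimate_view("capture.f32", np.float32, 0, 20_000_000)
# plt.plot(view)  # re-call with new start/stop on every zoom/pan event
```

A screen can only show a few thousand pixels horizontally, so plotting every one of 20 million samples buys nothing visually; min/max envelope decimation is a refinement if peaks must never be dropped.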

Practices for programming in a scientific environment? [closed]

Submitted by 前提是你 on 2019-11-26 17:53:09
Question (closed 2 years ago: needs to be more focused; not currently accepting answers). Background: Last year, I did an internship in a physics research group at a university. In this group, we mostly used LabVIEW to write programs for controlling our setups, doing data acquisition, and analyzing our data. For the first two purposes that works quite well, but for data

binning data in python with scipy/numpy

Submitted by 梦想的初衷 on 2019-11-26 11:35:01
Is there a more efficient way to take the average of an array in prespecified bins? For example, I have an array of numbers and an array of corresponding bin start and end positions, and I want to take the mean within those bins. I have code that does this below, but I am wondering how it can be cut down and improved. Thanks.

from scipy import *
from numpy import *

def get_bin_mean(a, b_start, b_end):
    ind_upper = nonzero(a >= b_start)[0]
    a_upper = a[ind_upper]
    a_range = a_upper[nonzero(a_upper < b_end)[0]]
    mean_val = mean(a_range)
    return mean_val

data = rand(100)
bins = linspace(0,
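A vectorized alternative (a sketch, not the thread's accepted answer) replaces the per-bin Python loop with two numpy.histogram calls: the weighted sum per bin divided by the count per bin gives the bin mean in one pass.

```python
import numpy as np

def bin_means(a, edges):
    """Mean of `a` within each bin defined by consecutive `edges`."""
    counts, _ = np.histogram(a, bins=edges)
    sums, _ = np.histogram(a, bins=edges, weights=a)
    # Empty bins yield NaN instead of raising a division error.
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

# Hypothetical usage (the original `linspace(0,` call is truncated,
# so the edges here are an illustrative guess):
# data = np.random.rand(100)
# edges = np.linspace(0.0, 1.0, 11)
# means = bin_means(data, edges)
```

This does two O(n) passes regardless of the number of bins, whereas the loop version scans the whole array once per bin.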

multithreaded blas in python/numpy

Submitted by 匆匆过客 on 2019-11-26 04:11:15
Question: I am trying to perform a large number of matrix-matrix multiplications in Python. Initially, I assumed that NumPy would automatically use my threaded BLAS libraries, since I built it against those libraries. However, when I look at top (or similar) it seems the code does not use threading at all. Any ideas what is wrong, or what I can do to easily get BLAS performance? Answer 1: Not all of NumPy uses BLAS, only some functions, specifically dot(), vdot(), and innerproduct(), and
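A quick diagnostic (my addition, not part of the original answer): inspect which BLAS NumPy was linked against, then time a dot() call large enough to dispatch to BLAS gemm; with a threaded BLAS (OpenBLAS, MKL, ...) top should show well over 100% CPU during the multiply.

```python
import time
import numpy as np

# Print the BLAS/LAPACK libraries NumPy was built against.
np.show_config()

# dot() on 2-D float64 arrays dispatches to BLAS dgemm; element-wise
# operations like a * b do NOT go through BLAS and stay single-threaded.
n = 1000  # size chosen arbitrarily for a quick check
a = np.random.rand(n, n)
b = np.random.rand(n, n)
t0 = time.perf_counter()
c = a.dot(b)
print(f"{n}x{n} dot took {time.perf_counter() - t0:.3f} s")
```

If the timing does not improve with more cores, environment variables such as OMP_NUM_THREADS (for OpenBLAS/MKL builds) are the usual knobs to check.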
