signal-processing

Designing an FIR notch filter with Python

浪尽此生 submitted on 2019-12-13 08:26:39
Question: I am writing some code in Python using the scipy.signal library to filter electromagnetic data that is mixed with various undesirable signatures I want to remove. For example, I have power-line harmonics at various frequencies (e.g. 60 Hz, 120 Hz, etc.) with a width of only a few Hz that I would like to remove from the data using a notch filter. Is there already an existing function in Python where I can merely tell the code how many data points I wish to use for the filter, the…
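scipy.signal's dedicated notch designer (iirnotch) is IIR; for an FIR notch, the generic firwin designer can build a narrow band-stop directly. Below is a minimal sketch of that approach; the sample rate, number of taps, notch width, and test data are assumptions for illustration, not values from the question.

import numpy as np
from scipy import signal

fs = 1000.0        # sample rate in Hz (assumed)
numtaps = 1001     # filter length; odd, so a band-stop design is valid
notch = 60.0       # power-line component to remove
half_width = 2.0   # half-width of the notch in Hz (assumed "few Hz" wide)

# firwin with two cutoffs and pass_zero=True yields a band-stop FIR filter.
taps = signal.firwin(numtaps, [notch - half_width, notch + half_width],
                     pass_zero=True, fs=fs)

# Placeholder data: noise plus an injected 60 Hz tone.
n = 10000
x = np.random.randn(n) + np.sin(2 * np.pi * notch * np.arange(n) / fs)

# filtfilt applies the FIR filter forward and backward, avoiding group delay.
y = signal.filtfilt(taps, [1.0], x)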

Giving a large number of samples to KissFFT

故事扮演 submitted on 2019-12-13 07:31:26
Question: I wanted to find the 4096-point DFT of an audio signal of duration 10 seconds with a sampling rate of 44100 Hz. Hence there are 441000 input samples, but KissFFT takes only up to 4096 samples as its input size. How do I go about finding the FFT of such a large signal?

Answer 1: The power spectrum of most real-world audio signals (speech, music, etc.) is time-varying, so typically you calculate a series of short-term FFTs using overlapping windows to produce a sequence of power spectra, a.k.a. a spectrogram. I suggest…
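The original question is about KissFFT in C; the sketch below only illustrates the overlapping-window technique from the answer in Python, where the windowing and per-frame FFT are easy to see. The 50% hop size and Hann window are assumptions.

import numpy as np

fs = 44100
n_fft = 4096
hop = n_fft // 2                      # 50% overlap (assumed)
x = np.random.randn(fs * 10)          # stand-in for the 10-second signal
window = np.hanning(n_fft)

power_spectra = []
for start in range(0, len(x) - n_fft + 1, hop):
    frame = x[start:start + n_fft] * window
    spectrum = np.fft.rfft(frame)     # one 4096-point FFT per frame
    power_spectra.append(np.abs(spectrum) ** 2)

spectrogram = np.array(power_spectra)  # shape: (num_frames, n_fft // 2 + 1)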

FFTW on real data sequence

≡放荡痞女 submitted on 2019-12-13 07:25:23
Question: I'm reading a raw sound file and trying to run the FFT on it, with the aim of getting the PSD at the end. I'm just getting started, and I get an error that I can't understand; I hope to get some help here. The code is:

#include <stdio.h>
#include <fftw3.h>

int main() {
    char* fileName = "sound.raw";
    FILE* inp = NULL;
    double* data = NULL;
    int index = 0;
    fftw_plan plan;
    fftw_complex* out;
    double r, i;
    int N = 8192;

    // Allocating the memory for the input data
    data = (double*) fftw_malloc(sizeof(double)…
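The code above is C with FFTW; as a rough point of comparison, here is the same pipeline (raw file of doubles → real-input transform → PSD) sketched in Python. The sample rate and the assumption that the file holds raw 64-bit floats mirror the C buffer, but they are guesses, not facts from the question.

import numpy as np
from scipy import signal

fs = 44100                   # sample rate (assumed; a raw file carries no header)
N = 8192
# Read N raw doubles, mirroring the C code's fftw_malloc'd buffer (format assumed).
data = np.fromfile("sound.raw", dtype=np.float64, count=N)

# Welch's method averages windowed periodograms to estimate the PSD.
freqs, psd = signal.welch(data, fs=fs, nperseg=1024)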

Calculating the Average Amplitude of an Audio File Using FFT in JavaScript

岁酱吖の submitted on 2019-12-13 07:18:44
Question: I am currently involved in a project in which I want to find the average amplitude of the audio data in any given AAC file. I am currently reading the file as an array buffer and passing it into a Uint8Array:

var dataArray = new Uint8Array(buffer)

Then I set up two arrays, one real (containing the audio data) and one imaginary (containing all zeros), and pass them into an FFT. The audio data is then placed into a new array such that the numbers within the array are no longer treated as…
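As a language-neutral sketch of the arithmetic (the question itself uses JavaScript): the magnitude of each complex FFT bin is sqrt(re² + im²), and an overall average level can also be had from the time-domain RMS without any FFT. The sample array below is a stand-in for decoded PCM samples; the FFT expects decoded audio, not the raw AAC bytes.

import numpy as np

samples = np.random.randn(4096)        # stand-in for decoded audio samples

# Frequency-domain route: magnitude of each complex bin.
spectrum = np.fft.fft(samples)          # real input, implicit zero imaginary part
magnitudes = np.abs(spectrum)           # sqrt(re**2 + im**2) per bin
avg_bin_magnitude = magnitudes.mean()

# Time-domain route: RMS amplitude, no FFT required.
rms = np.sqrt(np.mean(samples ** 2))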

SciPy FFT function and normalization not providing correct results

﹥>﹥吖頭↗ submitted on 2019-12-13 07:02:05
Question: I have an oscillator bank made in SuperCollider which receives phases and amplitudes from Python via OSC. However, the results don't sound correct at all. At first I thought the problem was in my SuperCollider code, but now I'm beginning to doubt my FFT function and normalization. Here's my code:

def readNormalize(length, location, sample):
    samplerate, data = wavfile.read(location)
    a = data.T[0]  # first track of audio
    c = fft(a[sample:], length)
    ownSum = 0;
    length = int(length/2)
    for i in range(0…
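The excerpt cuts off before the normalization itself, so the following is only a generic sketch of one-sided amplitude normalization in Python (the window length and test tone are assumptions): divide the rfft by the number of samples and double every bin except DC and Nyquist, and a sine of known amplitude comes back at that amplitude.

import numpy as np

fs = 44100
n = 4096
k = 50
f0 = k * fs / n                        # tone placed exactly on bin k to avoid leakage
t = np.arange(n) / fs
a = 0.5 * np.sin(2 * np.pi * f0 * t)   # known amplitude 0.5

spectrum = np.fft.rfft(a)
amplitude = np.abs(spectrum) / n        # scale by the transform length
amplitude[1:-1] *= 2                    # one-sided spectrum: double all bins except DC and Nyquist
phase = np.angle(spectrum)
# amplitude[k] is now approximately 0.5, matching the sine's amplitude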

Plotting an audio spectrum

青春壹個敷衍的年華 submitted on 2019-12-13 06:13:08
Question: I'm trying to implement an app that plots the spectrum of audio using BASS audio (http://www.un4seen.com/). My understanding is that I will have to:

Get the FFT data from the stream:
float[] buffer = new float[256];
Bass.BASS_ChannelGetData(handle, buffer, (int)(BASS_DATA_FFT_COMPLEX|BASS_DATA_FFT_NOWINDOW));
For each FFT bin, compute its magnitude.
Apply a window function to the FFT (Hanning or Hamming will do).
Then draw a beautiful spectrum analysis.

The problem, however, is that it seems that…
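The post drives BASS from .NET; the small Python sketch below only illustrates the magnitude and dB computation for one frame (frame length and sample rate assumed). One detail worth noting: a window function is conventionally applied to the time-domain frame before the FFT, not to the FFT output.

import numpy as np

fs = 44100
n = 1024
frame = np.random.randn(n)             # stand-in for one frame of samples

windowed = frame * np.hanning(n)        # window first...
spectrum = np.fft.rfft(windowed)        # ...then transform
magnitude = np.abs(spectrum)            # per-bin magnitude

db = 20 * np.log10(magnitude + 1e-12)   # decibels, with a small offset to avoid log(0)
freqs = np.fft.rfftfreq(n, d=1/fs)      # matching frequency axis in Hz
# plot freqs against db with any plotting library to draw the spectrum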

Is it possible to process two microphone inputs in real time using the DSP System Toolbox (MATLAB)?

跟風遠走 submitted on 2019-12-13 06:03:36
Question: I have been trying to implement an Active Noise Cancellation (ANC) system using the DSP System Toolbox. I have used dsp.AudioRecorder and dsp.AudioPlayer objects as well. This is my initialization code:

mic_reference = dsp.AudioRecorder('NumChannels', 1);
mic_reference.DeviceName = 'ASIO4ALL v2';
mic_error = dsp.AudioRecorder('NumChannels', 1);
mic_error.DeviceName = 'ASIO4ALL v2';
sink1_2 = dsp.AudioPlayer;
sink1_2.DeviceName = 'ASIO4ALL v2';

where I call step(frame) for each of the…

FFT on samples of an audio file in MATLAB

ⅰ亾dé卋堺 submitted on 2019-12-13 02:16:22
Question: I'm trying to extract information from a sound file in order to use it in a video classification algorithm I'm working on. My problem is that I don't know exactly how to work with audio files in MATLAB. Below is what I need to accomplish:

Open the audio file and get the sampling rate/frequency.
I need to work on windows of 2 seconds, so I have to loop over the file, take each 2 seconds as a window, and then do the FFT (Fast Fourier Transform) on each window.

After that it is my turn to use…
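A Python sketch of the loop described above (open the file, step through it in 2-second windows, and FFT each window); in MATLAB the same job falls to audioread and fft. The file name is a placeholder.

import numpy as np
from scipy.io import wavfile

fs, x = wavfile.read("input.wav")      # sampling rate and samples (placeholder file)
if x.ndim > 1:
    x = x[:, 0]                         # keep a single channel

win = 2 * fs                            # 2-second window, as in the question
spectra = []
for start in range(0, len(x) - win + 1, win):
    segment = x[start:start + win]
    spectra.append(np.abs(np.fft.rfft(segment)))   # magnitude spectrum per window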

Converting from one MFCC type to another - HTK

北战南征 submitted on 2019-12-13 02:10:21
Question: I am working with the HTK toolkit on a word-spotting task and have a classic training and testing data mismatch. The training data consisted only of "clean" data (recorded over a mic). The data was converted to MFCC_E_D_A parameters, which were then modelled by HMMs (phone-level). My test data has been recorded over landline and mobile phone channels (inviting distortions and the like). Using the MFCC_E_D_A parameters with HVite results in incorrect output. I want to make use of cepstral mean…
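The excerpt is cut off, but it appears to be heading toward cepstral mean normalisation (CMN), which HTK applies when the parameter kind carries the _Z qualifier (e.g. MFCC_E_D_A_Z as the TARGETKIND in an HCopy/HVite configuration). Conceptually, CMN just subtracts the per-utterance mean from each cepstral coefficient; the Python sketch below shows that idea on an arbitrary feature matrix, not the HTK mechanism itself.

import numpy as np

# One utterance of features, shape (num_frames, num_coefficients); values are a stand-in.
features = np.random.randn(300, 39)

cmn_features = features - features.mean(axis=0, keepdims=True)
# Each coefficient now has zero mean over the utterance, which reduces
# the stationary channel mismatch between clean and telephone recordings.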

Applying the Fourier shift theorem to a complex signal

不想你离开。 submitted on 2019-12-13 00:45:35
Question: I'm trying to apply the Fourier phase-shift theorem to a complex signal in R. However, only the magnitude of my signal shifts as I expect. I think it should be possible to apply this theorem to complex signals, so I am probably making an error somewhere. My guess is that there is an error in the frequency axis I calculate. How do I correctly apply the Fourier shift theorem to a complex signal (using R)?

i = complex(0,0,1)
t.in = (1+i)*matrix(c(1,0,0,0,0,0,0,0,0,0))
n.shift = 5
# the output of fft…
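A sketch of the shift theorem on a complex signal, in Python rather than R since the mechanics are identical: multiply the transform by exp(-2πi·f·shift) and invert. One common pitfall, especially for non-integer shifts, is the frequency axis: the signed frequencies returned by fftfreq, not 0…N-1, are the usual choice.

import numpy as np

x = (1 + 1j) * np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])  # complex test signal, as in the question
shift = 5                                                  # samples of delay

n = len(x)
freqs = np.fft.fftfreq(n)                 # signed frequencies in cycles per sample
X = np.fft.fft(x)
x_shifted = np.fft.ifft(X * np.exp(-2j * np.pi * freqs * shift))
# The complex impulse has moved (circularly) from index 0 to index 5,
# in both its real and imaginary parts, not just in magnitude.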