signal-processing

Error while importing scikits.talkbox

别说谁变了你拦得住时间么 submitted on 2020-01-04 09:07:10
Question: I want to use scikits.talkbox, but I get the following error when importing it. Traceback (most recent call last): File "/home/seref/Desktop/machine learning codes/MFCC/main.py", line 3, in from scikits.talkbox.features.mfcc import mfcc File "/usr/local/lib/python3.5/dist-packages/scikits/talkbox/__init__.py", line 3, in from tools import * ImportError: No module named 'tools' code sample import scipy.io.wavfile from scikits.talkbox.features.mfcc import mfcc sample_rate, X = scipy.io
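
The traceback shows a Python 2 style implicit relative import (from tools import *) inside the package's __init__.py, which Python 3 rejects, so scikits.talkbox as packaged does not import under Python 3.5. A minimal sketch of computing MFCCs with python_speech_features instead, assuming that package is installed; the file name is a placeholder:

import scipy.io.wavfile
from python_speech_features import mfcc  # assumed alternative to scikits.talkbox

sample_rate, X = scipy.io.wavfile.read("example.wav")  # hypothetical input file
ceps = mfcc(X, samplerate=sample_rate, numcep=13)      # (n_frames, 13) MFCC matrix
print(ceps.shape)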

How to concatenate sine waves without phase jumps

为君一笑 submitted on 2020-01-04 01:56:54
Question: I need to write a Python script which generates sine waves of a given frequency and plays them using pyaudio (blocking mode). I also need to be able to change this frequency while it runs, modulate it, and plot it using pyqtgraph. For now I have a thread generating chunks of data, and my approach to 'connect' those sines was to take the FFT, calculate the angle (numpy.angle), store it in a variable and use it as the phase offset for the next chunk, but I'm not getting the results I
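
A simpler way to avoid phase jumps is to keep a running phase accumulator across chunks instead of recovering the phase from an FFT. A minimal sketch, assuming numpy; the chunk size and sample rate are illustrative:

import numpy as np

SAMPLE_RATE = 44100
CHUNK = 1024

def sine_chunks(freq_getter):
    """Yield float32 chunks whose phase stays continuous even if the frequency changes."""
    phase = 0.0
    while True:
        freq = freq_getter()  # may return a different frequency for each chunk
        t = np.arange(CHUNK) / SAMPLE_RATE
        chunk = np.sin(phase + 2 * np.pi * freq * t).astype(np.float32)
        # carry the end phase into the next chunk so the waveform stays continuous
        phase = (phase + 2 * np.pi * freq * CHUNK / SAMPLE_RATE) % (2 * np.pi)
        yield chunk

gen = sine_chunks(lambda: 440.0)
first, second = next(gen), next(gen)  # consecutive chunks join without a phase jump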

Plotting the ROC curve

喜夏-厌秋 submitted on 2020-01-03 05:52:21
Question: I have a matrix A of size m x n. The elements in the matrix represent the results of a specific detector. What I want is to characterize the performance of the detector by an ROC curve (sensitivity, or probability of detection, as a function of the probability of false alarm, or 1-specificity). When A(i,j) >= threshold, the target is declared present, else it is declared absent. But of course there will be some errors, such as a false alarm (false positive) or a miss (false negative). Lets
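
The standard approach is to sweep the threshold over the detector outputs and compute the true-positive and false-positive rates at each value; scikit-learn's roc_curve does exactly this. A minimal sketch, assuming the detector matrix can be flattened and that a same-sized ground-truth matrix (here called truth, an assumption) is available:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

A = np.random.rand(50, 40)               # detector outputs (placeholder data)
truth = np.random.rand(50, 40) > 0.5     # hypothetical ground truth: True where target is present

fpr, tpr, thresholds = roc_curve(truth.ravel(), A.ravel())
print("AUC:", auc(fpr, tpr))

plt.plot(fpr, tpr)
plt.xlabel("Probability of false alarm (1 - specificity)")
plt.ylabel("Probability of detection (sensitivity)")
plt.show()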

Read and write stereo .wav file with Python + metadata

橙三吉。 submitted on 2020-01-03 02:22:13
Question: What's the easiest way to read and write a stereo .wav file in Python? Should I use scipy.io.wavfile.read? Should I use a 2-dimensional array (how?) so that x[n,j] gives sample n of channel j? I also want to read/write metadata stored in the .wav file, such as markers and the MIDI root note (Soundforge, as well as other sound editors, can read/write this specific .wav metadata called "MIDI root note"). Thank you PS: I already know how to do it with a mono file: from scipy.io.wavfile
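
For the audio data itself, scipy.io.wavfile handles stereo directly: read returns a 2-D array of shape (n_samples, n_channels), and write accepts the same layout. A minimal sketch, assuming a 16-bit stereo file with hypothetical names; marker and MIDI-root-note chunks are not handled by scipy and would need separate RIFF-chunk parsing:

import numpy as np
from scipy.io import wavfile

rate, x = wavfile.read("stereo_in.wav")     # hypothetical file; x has shape (n_samples, 2)
left, right = x[:, 0], x[:, 1]

# example edit: swap the channels and write back, keeping the original sample dtype (e.g. int16)
y = np.column_stack((right, left)).astype(x.dtype)
wavfile.write("stereo_out.wav", rate, y)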

Hamming Filter in Frequency and Spatial Domain

亡梦爱人 submitted on 2020-01-02 08:36:35
Question: I want to remove the Gibbs artifact in a 1D signal by applying a Hamming filter to it in MATLAB. What I have is k1, the signal in the frequency domain. I can get the signal in the time domain by applying the inverse DFT to k1: s1 = ifft(ifftshift(k1)); This signal has the Gibbs artifact. Now, I want to remove it by (A) multiplying k1 by the Hamming filter in the frequency domain and (B) convolving the IFFT of the Hamming filter with s1 in the spatial domain. I am expecting the same output from both of these: % (A)
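
By the convolution theorem for the DFT, multiplying the spectrum by the window is equivalent to circular (not linear) convolution in the time domain, which is the usual reason the two results differ. A minimal numpy sketch of that equivalence (Python rather than MATLAB, with a synthetic centered spectrum k1 as a placeholder):

import numpy as np

N = 256
square = np.sign(np.sin(2 * np.pi * np.arange(N) / N))          # Gibbs-prone test signal
k1 = np.fft.fftshift(np.fft.fft(square))                        # placeholder centered spectrum
H = np.hamming(N)                                               # Hamming window over the centered spectrum
s1 = np.fft.ifft(np.fft.ifftshift(k1))

# (A) multiply in the frequency domain
sA = np.fft.ifft(np.fft.ifftshift(H * k1))

# (B) circular convolution of s1 with the window's impulse response
h = np.fft.ifft(np.fft.ifftshift(H))
sB = np.array([np.sum(s1 * np.roll(h[::-1], n + 1)) for n in range(N)])

print(np.allclose(sA, sB))   # True: frequency multiplication == circular convolution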

Pydub - How to change frame rate without changing playback speed

删除回忆录丶 submitted on 2020-01-02 05:43:46
Question: I have a couple of audio files that I open in Pydub with AudioSegment. I want to decrease the audio quality from a frame rate of 22050 Hz to 16000 Hz (one-channel files). If I simply change the frame rate of the AudioSegment, what I get is the exact same wave played at a slower speed. Well, fair enough. But how do I actually resample the waves for lower-quality, same-speed playback? (Manual interpolation is the only thing I can think of, but I don't want to get into that trouble) Answer 1: You can use: sound =
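
Pydub's set_frame_rate() resamples the audio rather than just relabeling the header, so duration and pitch are preserved. A minimal sketch of that approach; the file names are placeholders:

from pydub import AudioSegment

sound = AudioSegment.from_wav("input_22050.wav")   # hypothetical 22050 Hz mono file
sound = sound.set_frame_rate(16000)                # resamples; playback speed is unchanged
sound.export("output_16000.wav", format="wav")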

calculating frequency with apple's auriotouch example

大憨熊 submitted on 2020-01-02 05:40:17
Question: I am working on a program that needs to capture the frequency of sound from a guitar. I have modified the aurioTouch example to output the frequency by taking the frequency bin with the highest magnitude. It works reasonably well for high notes but is very inaccurate on the lower strings; I believe this is due to overtones. I researched ways to solve this problem, such as cepstrum analysis, but I am lost on how to implement it within the example code, which is unclear and hard to follow without comments.
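
Picking the strongest FFT bin fails on low strings because a harmonic often carries more energy than the fundamental; cepstrum analysis instead looks for a peak in the quefrency domain, which corresponds to the pitch period. A minimal numpy sketch of the idea (Python rather than the aurioTouch Objective-C/C++ code, with a synthetic guitar-like tone as a placeholder):

import numpy as np

fs = 44100
t = np.arange(4096) / fs
f0 = 110.0   # low A string; test tone whose overtones are stronger than the fundamental
x = 0.3 * np.sin(2 * np.pi * f0 * t) + 1.0 * np.sin(2 * np.pi * 2 * f0 * t) + 0.8 * np.sin(2 * np.pi * 3 * f0 * t)

# real cepstrum: inverse FFT of the log magnitude spectrum
spectrum = np.fft.fft(x * np.hanning(len(x)))
cepstrum = np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

# search quefrencies corresponding to plausible guitar pitches (~60 Hz to ~1000 Hz)
q_min, q_max = int(fs / 1000), int(fs / 60)
peak = q_min + np.argmax(cepstrum[q_min:q_max])
print("estimated pitch:", fs / peak, "Hz")   # close to 110 Hz despite the dominant overtones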

FFT - Calculating exact frequency between frequency bins

本秂侑毒 submitted on 2020-01-02 03:25:38
Question: I am using a nice FFT library I found online to see if I can write a pitch-detection program. So far, I have been able to successfully let the library run an FFT on a test audio signal containing a few sine waves, including one at 440 Hz (I'm using 16384 samples as the FFT size and a sample rate of 44100 Hz). The FFT output looks like: 433.356Hz - Real: 590.644 - Imag: -27.9856 - MAG: 16529.5 436.047Hz - Real: 683.921 - Imag: 51.2798 - MAG: 35071.4 438.739Hz - Real: 4615.24 - Imag: 1170.8
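
With a bin spacing of 44100 / 16384 ≈ 2.69 Hz, a 440 Hz tone falls between bins, but its frequency can be estimated by fitting a parabola through the log magnitudes of the peak bin and its two neighbours. A minimal numpy sketch of that interpolation, using a synthetic 440 Hz tone as a placeholder for the asker's library output:

import numpy as np

fs, n = 44100, 16384
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 440.0 * t)

mag = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(mag))                          # index of the peak bin

# parabolic (quadratic) interpolation on the log magnitudes of bins k-1, k, k+1
a, b, c = np.log(mag[k - 1]), np.log(mag[k]), np.log(mag[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)          # fractional bin offset, in (-0.5, 0.5)
freq = (k + delta) * fs / n
print("estimated frequency:", freq, "Hz")        # close to 440.0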

Trilateration of a signal using Time Difference of Arrival

99封情书 submitted on 2020-01-02 02:00:33
Question: I am having some trouble finding or implementing an algorithm to locate a signal source. The objective of my work is to find the position of a sound emitter. To accomplish this I am using three microphones. The technique I am using is multilateration, based on the time difference of arrival. The time differences of arrival between the microphones are found using cross-correlation of the received signals. I already implemented the algorithm to find the time difference of arrival, but my
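
For reference, the cross-correlation step can be written compactly: the lag at which the cross-correlation of two microphone signals peaks gives their arrival-time difference. A minimal numpy sketch with synthetic signals (the geometry-solving step the question is really about is not shown):

import numpy as np

fs = 48000
delay_samples = 37                       # true offset between the two microphones (placeholder)
rng = np.random.default_rng(0)
src = rng.standard_normal(fs // 10)      # 100 ms of wideband source signal

mic1 = src
mic2 = np.concatenate([np.zeros(delay_samples), src[:-delay_samples]])  # delayed copy

corr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(corr) - (len(mic1) - 1)  # positive lag: mic2 receives the signal later
tdoa = lag / fs
print("estimated delay:", lag, "samples =", tdoa, "s")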