signal-processing

Pydub - How to change frame rate without changing playback speed

Submitted by …衆ロ難τιáo~ on 2019-12-05 13:41:31
I have a couple of audio files that I open in Pydub with AudioSegment. I want to decrease the audio quality from a frame rate of 22050 Hz to 16000 Hz (one-channel files). If I simply change the frame rate of the AudioSegment, what I get is the exact same wave played at a slower speed. Well, fair enough. But how do I actually change the waves to fit lower-quality, same-speed playback? (Manual interpolation is the only thing I can think of, but I don't want to get into that trouble.) You can use: sound = AudioSegment.from_file(…) sound = sound.set_frame_rate(16000) Source: https://stackoverflow.com/questions
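For completeness, a minimal sketch of that answer as a runnable script; the input and output file names and the output format are placeholders, not from the original question:

from pydub import AudioSegment

# Load the 22050 Hz mono file (hypothetical path).
sound = AudioSegment.from_file("input_22050.wav")

# set_frame_rate() resamples the audio data, so pitch and playback
# speed are preserved at the new 16000 Hz rate.
resampled = sound.set_frame_rate(16000)

# Write the downsampled result out (format/path are assumptions).
resampled.export("output_16000.wav", format="wav")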

Need Android equivalent of AudioInputStream

Submitted by 旧时模样 on 2019-12-05 11:37:29
I'm trying to write an Android app that analyzes content from the user's music library. Let's assume that these are mp3 files on the SD card, for starters. I'm able to find Java algorithms to analyze music files, but I can't find an API to read and decode the files (not play them). There's an API to play the files, and even classes for audio effects, but I don't see any way for an app to get to the decoded data from a music file. I can read from the microphone. J2SE has a class AudioInputStream, but it's not part of Android. Any suggestions? My real purpose was to be able to look at the files in my

For what kind of applications can I use the DSP core of the BeagleBoard? Can I use DSP acceleration for a background subtraction algorithm?

Submitted by 安稳与你 on 2019-12-05 10:26:34
For what kind of applications can I use the DSP core of the BeagleBoard? Can I use DSP acceleration for a background subtraction algorithm in OpenCV? You can use the DSP for all kinds of computations. It is a general-purpose CPU optimized for DSP applications. So yes, even floating-point stuff will work, although the performance will not be great. The DSP really shines if you do integer computations over large arrays of data. There the DSP can easily compute so fast that the time to transfer data from and to memory becomes the bottleneck. To give you a figure for what is possible: I have an algorithm
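As a point of reference for the OpenCV side of the question, here is a small sketch of background subtraction running entirely on the CPU with OpenCV's Python bindings; offloading the per-pixel work to the BeagleBoard's DSP would be a separate porting effort, and the video path and parameters below are assumptions:

import cv2

# Plain OpenCV background subtraction on the CPU; moving the inner loop
# to the BeagleBoard's C64x+ DSP would be a separate effort.
cap = cv2.VideoCapture("input.avi")   # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each call updates the background model and returns a foreground mask.
    fg_mask = subtractor.apply(frame)
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()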

How can I transfer a discrete set of data into the frequency domain and back (preferably losslessly)

Submitted by 拟墨画扇 on 2019-12-05 10:15:15
Question: I would like to take an array of bytes of roughly size 70-80k and transform them from the time domain to the frequency domain (probably using a DFT). I have been following the Wikipedia article and have gotten this code so far. for (int k = 0; k < windows.length; k++) { double imag = 0.0; double real = 0.0; for (int n = 0; n < data.length; n++) { double val = (data[n]) * Math.exp(-2.0 * Math.PI * n * k / data.length) / 128; imag += Math.cos(val); real += Math.sin(val); } windows[k] = Math.sqrt(imag * imag + real *
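As an aside, a library FFT makes the round trip easy to verify; this sketch uses numpy rather than the hand-rolled Java loop from the question, with random stand-in data, and shows that the transform-and-invert cycle is lossless up to floating-point rounding:

import numpy as np

# 70-80k signed 8-bit samples (random data here as a stand-in).
data = np.random.randint(-128, 128, size=73728).astype(np.float64)

# Forward transform: time domain -> complex frequency domain.
spectrum = np.fft.fft(data)

# The inverse transform recovers the original samples up to float
# rounding, so the round trip is effectively lossless.
recovered = np.fft.ifft(spectrum).real

print(np.allclose(data, recovered))  # True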

Filtering signal with Python lfilter

Submitted by ﹥>﹥吖頭↗ on 2019-12-05 08:26:49
I'm new to Python and I'm completely stuck when filtering a signal. This is the code: import numpy as np import matplotlib.pyplot as plt from scipy import signal fs=105e6 fin=70.1e6 N=np.arange(0,21e3,1) # Create a input sin signal of 70.1 MHz sampled at 105 MHz x_in=np.sin(2*np.pi*(fin/fs)*N) # Define the "b" and "a" polynomials to create a CIC filter (R=8,M=2,N=6) b=np.zeros(97) b[[0,16,32,48,64,80,96]]=[1,-6,15,-20,15,-6,1] a=np.zeros(7) a[[0,1,2,3,4,5,6]]=[1,-6,15,-20,15,-6,1] w,h=signal.freqz(b,a) plt.plot(w/max(w),20*np.log10(abs(h)/np.nanmax(h))) plt.title('CIC Filter Response')
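To connect the coefficients above to the title of the question, here is a self-contained sketch that actually filters the test tone with scipy.signal.lfilter (variable names mirror the question; the print at the end is only for illustration):

import numpy as np
from scipy import signal

fs = 105e6
fin = 70.1e6
n = np.arange(0, 21e3)
x_in = np.sin(2 * np.pi * (fin / fs) * n)

# CIC filter coefficients from the question (R=8, M=2, N=6).
b = np.zeros(97)
b[[0, 16, 32, 48, 64, 80, 96]] = [1, -6, 15, -20, 15, -6, 1]
a = np.array([1, -6, 15, -20, 15, -6, 1], dtype=float)

# lfilter evaluates the difference equation sample by sample, which is what
# actually filters the data; freqz only evaluates the frequency response.
# Note: real CIC filters are normally run in integer arithmetic; in floating
# point the growing internal state loses precision for long inputs.
y = signal.lfilter(b, a, x_in)
print(y[:10])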

WebRTC AGC (Automatic Gain Control)

Submitted by 荒凉一梦 on 2019-12-05 08:25:39
Question: I am testing the WebRTC AGC but I must be doing something wrong because the signal just passes through unmodified. Here's how I create and initialize the AGC: agcConfig.compressionGaindB = 9; agcConfig.limiterEnable = 1; agcConfig.targetLevelDbfs = 9; /* 9 dB below full scale */ WebRtcAgc_Create(&agc); WebRtcAgc_Init(agc, minLevel, maxLevel, kAgcModeFixedDigital, 8000); WebRtcAgc_set_config(agc, agcConfig); And then for each 10 ms sample block I do the following: WebRtcAgc_Process(agc, micData,

scipy.signal.spectrogram compared to matplotlib.pyplot.specgram

Submitted by 徘徊边缘 on 2019-12-05 07:59:22
The following code generates a spectrogram using either scipy.signal.spectrogram or matplotlib.pyplot.specgram. The color contrast of the specgram function is, however, rather low. Is there a way to increase it? import numpy as np from scipy import signal import matplotlib.pyplot as plt # Generate data fs = 10e3 N = 5e4 amp = 4 * np.sqrt(2) noise_power = 0.01 * fs / 2 time = np.arange(N) / float(fs) mod = 800*np.cos(2*np.pi*0.2*time) carrier = amp * np.sin(2*np.pi*time + mod) noise = np.random.normal(scale=np.sqrt(noise_power), size=time.shape) noise *= np.exp(-time/5) x = carrier + noise
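One common way to raise the contrast is to plot the scipy spectrogram in decibels and clamp the color range; in this sketch the -80 dB floor is an arbitrary choice and the test signal is simplified from the one in the question:

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 10e3
time = np.arange(5e4) / fs
x = np.sin(2 * np.pi * 1e3 * time) + 0.01 * np.random.randn(time.size)

f, t, Sxx = signal.spectrogram(x, fs)

# Convert to dB relative to the peak and clamp the color range with
# vmin/vmax; this stretches the visible contrast of the plot.
Sxx_db = 10 * np.log10(Sxx / Sxx.max())
plt.pcolormesh(t, f, Sxx_db, vmin=-80, vmax=0, shading='gouraud')
plt.xlabel('Time [s]')
plt.ylabel('Frequency [Hz]')
plt.colorbar(label='Power [dB]')
plt.show()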

Detect major events in signal data?

Submitted by 淺唱寂寞╮ on 2019-12-05 06:55:33
Question: If I have a signal like the one below, how would I go about finding the beginning and end of the two "major events" (illustrated by a green arrow where the event begins, and a red arrow where it ends)? I've tried the method suggested in this answer, but it seems that no matter how much I play around with the lag, threshold, and influence variables, it either reacts to the tiny changes at the beginning, middle, and end of the graph (where there are no major events), or it doesn't react at all. I
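For illustration, one simple alternative to the lag/threshold/influence approach is to track a moving-standard-deviation activity envelope and keep only excursions that last a minimum duration; the window size, threshold factor, and minimum length below are all assumptions to tune against the actual data:

import numpy as np

def find_events(x, window=200, k=3.0, min_len=500):
    """Flag stretches where local activity stays well above the baseline.

    window  - samples per moving-std window (assumption)
    k       - how many times the median activity counts as 'major'
    min_len - shortest run of samples accepted as an event (filters blips)
    """
    x = np.asarray(x, dtype=float)
    pad = window // 2
    # Moving standard deviation as a rough activity envelope.
    activity = np.array([x[max(0, i - pad):i + pad].std() for i in range(len(x))])
    mask = activity > k * np.median(activity)

    # Collect (start, end) indices of runs longer than min_len.
    events, start = [], None
    for i, m in enumerate(mask):
        if m and start is None:
            start = i
        elif not m and start is not None:
            if i - start >= min_len:
                events.append((start, i))
            start = None
    if start is not None and len(mask) - start >= min_len:
        events.append((start, len(mask)))
    return events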

Wrong values when calculating frequency using FFT

Submitted by 99封情书 on 2019-12-05 06:35:25
I'm getting the wrong frequency, and I don't understand why I'm getting wrong values, since I did the calculation as per instructions I followed on Stack Overflow. I've used the FFT from http://introcs.cs.princeton.edu/java/97data/FFT.java.html and the Complex class from http://introcs.cs.princeton.edu/java/97data/Complex.java.html audioRec.startRecording(); audioRec.read(bufferByte, 0, bufferSize); for(int i=0;i<bufferSize;i++){ bufferDouble[i]=(double)bufferByte[i]; } Complex[] fftArray = new Complex[bufferSize]; for(int i=0;i<bufferSize;i++){ fftArray[i]=new Complex(bufferDouble[i],0); } FFT.fft(fftArray); double[]
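Whatever the source of the error, the peak FFT bin only becomes a frequency after scaling by sampleRate / N; here is a small numpy sketch of that mapping (Python rather than the Java of the question, with an assumed 44.1 kHz sample rate and a synthetic 1 kHz tone):

import numpy as np

fs = 44100          # sample rate of the recording (assumption)
n = 4096            # FFT size
t = np.arange(n) / fs
samples = np.sin(2 * np.pi * 1000 * t)   # 1 kHz test tone

spectrum = np.abs(np.fft.rfft(samples))
peak_bin = np.argmax(spectrum[1:]) + 1   # skip the DC bin

# A bin index only becomes a frequency after scaling by fs / N.
frequency = peak_bin * fs / n
print(frequency)   # ~1000 Hz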

Speaker Recognition using MARF

Submitted by ⅰ亾dé卋堺 on 2019-12-05 06:08:53
Question: I am using MARF (Modular Audio Recognition Framework) to recognize a speaker's voice. I have trained MARF with the voice of person 'A' and tested it with the voice of person 'B'. Trained using --train training-samples, tested using --ident testing-samples/G.wav. In my speakers.txt file I have listed the voice samples of both persons, i.e. A and B. But I am not getting the correct response: the trained voice and the testing voice are different, but MARF is giving the Audio