signal-processing

“32 bit float mono audio” in Jack

拟墨画扇 submitted on 2019-12-05 22:58:56
I was playing with Jack, and I noticed that the default audio type JACK_DEFAULT_AUDIO_TYPE is set to "32 bit float mono audio". I'm a bit confused: IEEE 754 gives a 32-bit float a range of roughly ±3.4E+38, and I was wondering what the maximum and minimum "undistorted" amplitudes are that a jack_default_audio_sample_t can hold with that audio type. For example, if some DSP algorithm gives me samples in the range [0, 1], how can I correctly convert between them and Jack's format? It's pretty common to do signal processing operations in floating point, then scale and cast the results
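JACK's "32 bit float mono audio" samples are nominally expected to lie in [-1.0, +1.0], so a [0, 1] signal just needs an affine rescale. A minimal NumPy sketch of that conversion, assuming a mono float32 buffer (array names are hypothetical):

    import numpy as np

    # Hypothetical DSP output with samples in [0, 1]
    dsp_out = np.random.rand(1024).astype(np.float32)

    # Map [0, 1] -> [-1, +1] before handing the buffer to JACK
    jack_buffer = (2.0 * dsp_out - 1.0).astype(np.float32)

    # Map [-1, +1] -> [0, 1] when reading JACK samples back into the DSP code
    recovered = (jack_buffer + 1.0) * 0.5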

How should I implement accurate pitch-detection in Java for Android phones?

☆樱花仙子☆ submitted on 2019-12-05 22:36:45
I want to develop an application that requires accurate pitch detection for musical instruments through the Android phone's microphone. Most suggestions I have read involve using the Fast Fourier Transform (FFT), but they mention issues with accuracy and processing power (considering it should run smoothly on a smartphone). One answer suggested a 5 Hz error margin, which would be quite noticeable in the low-frequency range. If the error is logarithmic rather than fixed in nature, the error margin for each note should be less than 10 cents (1 cent = the 100th root of a semitone, 1
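Since a cent is one hundredth of an equal-tempered semitone (a frequency ratio of 2^(1/1200)), the error in cents grows with the logarithm of the frequency ratio. A small Python sketch of that calculation (the example frequencies are hypothetical):

    import math

    def cents_error(f_detected, f_reference):
        """Signed pitch error in cents (100 cents = one semitone)."""
        return 1200.0 * math.log2(f_detected / f_reference)

    # A fixed 5 Hz error on the low E string (~82.41 Hz) is huge in cents
    print(cents_error(82.41 + 5.0, 82.41))   # roughly +102 cents, about a semitone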

Apply FFT to both channels of a stereo signal separately?

情到浓时终转凉″ submitted on 2019-12-05 21:42:12
I'm reading a wave file and would like to apply the fast Fourier transform to it. However, I've got a stereo signal and I'm wondering what to do with the left and right channels. Does the FFT need to be applied to both channels separately? Yes and no. Certainly the FFT of each channel is independent, so you want separate FFTs for each of them. However, it is possible to compute two FFTs of real data using one call to a routine for FFTs of complex data plus some additional arithmetic. This is described in Numerical Recipes and here. One real signal is used as the real part of a complex
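A minimal NumPy sketch of that trick, assuming two equal-length real channels (array names hypothetical): pack the left channel as the real part and the right as the imaginary part, run one complex FFT, then unpack the two spectra using conjugate symmetry.

    import numpy as np

    left  = np.random.randn(1024)   # hypothetical left-channel samples
    right = np.random.randn(1024)   # hypothetical right-channel samples

    X  = np.fft.fft(left + 1j * right)   # one complex FFT for both channels
    Xr = np.conj(np.roll(X[::-1], 1))    # X*[(N-k) mod N]

    L = 0.5 * (X + Xr)     # spectrum of the left channel
    R = -0.5j * (X - Xr)   # spectrum of the right channel

    # Sanity check against two separate FFTs
    assert np.allclose(L, np.fft.fft(left))
    assert np.allclose(R, np.fft.fft(right))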

Reverse Spectrogram A La Aphex Twin in MATLAB

心已入冬 submitted on 2019-12-05 20:54:10
Question: I'm trying to convert an image into an audio signal in MATLAB by treating it as a spectrogram, as in Aphex Twin's song on Windowlicker. Unfortunately, I'm having trouble getting a result. Here is what I have at the moment: function signal = imagetosignal(path, format) % Read in the image and make it symmetric. image = imread(path, format); image = [image; flipud(image)]; [row, column] = size(image); signal = []; % Take the ifft of each column of pixels and piece together the real-valued
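A rough NumPy sketch of the same idea in Python, under the assumption that each image column is treated as one short spectrum slice (the file name is hypothetical): mirror the image vertically so the columns look symmetric, inverse-FFT each column, and chain the real parts together.

    import numpy as np
    from PIL import Image

    img  = np.asarray(Image.open("windowlicker.png").convert("L"), dtype=float)
    spec = np.vstack([img, np.flipud(img)])   # mirror so each column is roughly symmetric

    # Inverse-FFT each column (one time slice each) and concatenate the real parts
    signal = np.concatenate(
        [np.real(np.fft.ifft(spec[:, c])) for c in range(spec.shape[1])]
    )

    # Normalise to [-1, 1] before writing out as audio
    signal /= np.max(np.abs(signal))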

Generate 16 QAM signal

好久不见. submitted on 2019-12-05 20:00:47
I know how to generate QPSK signals using the following: TxS=round(rand(1,N))*2-1; % QPSK symbols are transmitted symbols TxS=TxS+sqrt(-1)*(round(rand(1,N))*2-1); In the above, each of the real and imaginary parts is drawn from a two-symbol alphabet, +1/-1. But I cannot work out how to generate a 16-QAM (Quadrature Amplitude Modulation) signal in the same way. Is it possible, and what is the usual way to generate one? Also, is it common practice to work with complex signals rather than real ones? Take a look at this: http://www.mathworks.com/help/comm/ref/comm.rectangularqamdemodulator-class.html hMod = comm.RectangularQAMModulator(
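One way to extend the same random-symbol style to 16-QAM is to draw the in-phase and quadrature amplitudes independently from {-3, -1, +1, +3} instead of {-1, +1}, giving a 4x4 grid of 16 complex points. A minimal NumPy sketch (no pulse shaping; variable names are hypothetical):

    import numpy as np

    N = 1000
    levels = np.array([-3, -1, 1, 3])        # 4 amplitudes per axis -> 16 constellation points

    I = levels[np.random.randint(0, 4, N)]   # in-phase amplitudes
    Q = levels[np.random.randint(0, 4, N)]   # quadrature amplitudes
    TxS = I + 1j * Q                         # complex 16-QAM symbols

    # Optional: scale so the average symbol energy is 1 (it is 10 for this grid)
    TxS_normalised = TxS / np.sqrt(10)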

Hamming Filter in Frequency and Spatial Domain

让人想犯罪 __ submitted on 2019-12-05 18:59:21
I want to remove the Gibbs artifact in a 1D signal by applying a Hamming filter to it in MATLAB. What I have is k1, which is the signal in the frequency domain. I can get the signal in the time domain by applying the inverse DFT to k1: s1 = ifft(ifftshift(k1)); This signal has a Gibbs artifact. Now, I want to remove it by (A) multiplying k1 by the Hamming filter in the frequency domain and (B) convolving the IFFT of the Hamming filter with s1 in the spatial domain. I expect the same output from both of these: % (A) Multiplying Hamming filter to `k1` n = size(k1,2); wk = hamming(n,'symmetric')'; k2 = wk.*k1; s2 = ifft
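The two routes should agree only if (B) is a circular convolution of the same length, since multiplying length-n DFTs corresponds to circular (not linear) convolution in time. A small NumPy sketch of that equivalence, with a hypothetical random k-space line standing in for k1:

    import numpy as np
    from scipy.signal import get_window

    n  = 256
    k1 = np.fft.fftshift(np.fft.fft(np.random.randn(n)))   # hypothetical k-space data
    s1 = np.fft.ifft(np.fft.ifftshift(k1))

    wk = get_window("hamming", n, fftbins=False)            # symmetric Hamming window

    # (A) multiply the window onto k1 in the frequency domain
    s2 = np.fft.ifft(np.fft.ifftshift(wk * k1))

    # (B) circularly convolve s1 with the IFFT of the window in the time domain
    h  = np.fft.ifft(np.fft.ifftshift(wk))
    s3 = np.array([np.sum(s1 * np.roll(h[::-1], m + 1)) for m in range(n)])

    assert np.allclose(s2, s3)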

Finding the 'volume' of a .wav at a given time

非 Y 不嫁゛ submitted on 2019-12-05 17:24:51
I am working on a small example application for my fourth-year project (dealing with Functional Reactive Programming). The idea is to create a simple program that can play a .wav file and then show a 'bouncing' animation of the current volume of the playing song (like in audio recording software). I'm building this in Scala, so I have mainly been looking at Java libraries and existing solutions. Currently, I have managed to play a .wav file easily, but I can't seem to achieve the second goal. Basically, is there a way I can decode a .wav file so that I have some way of accessing the 'volume' at any
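A common way to get a 'volume' value is to take the RMS of a short window of samples around the requested time and express it in dB relative to full scale. A minimal Python sketch with the standard-library wave module, assuming a 16-bit mono WAV (the file name and window length are hypothetical):

    import math
    import struct
    import wave

    def rms_db_at(path, t_seconds, window=0.05):
        """RMS level (dBFS) of a short slice of a 16-bit mono WAV around time t."""
        with wave.open(path, "rb") as w:
            rate = w.getframerate()
            w.setpos(int(t_seconds * rate))
            frames = w.readframes(int(window * rate))
        samples = struct.unpack("<{}h".format(len(frames) // 2), frames)
        rms = math.sqrt(sum(s * s for s in samples) / len(samples))
        return 20.0 * math.log10(rms / 32768.0 + 1e-12)

    print(rms_db_at("song.wav", 1.5))   # level of the 50 ms slice starting at 1.5 s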

Converting from sample rate/cutoff frequency to pi radians/sample in a discrete-time sampled IIR filter system

▼魔方 西西 submitted on 2019-12-05 16:27:34
I am working on some digital filter work using Python and NumPy/SciPy. I'm using scipy.signal.iirdesign to generate my filter coefficients, but it requires the band edges in a format I am not familiar with: wp, ws (floats), the passband and stopband edge frequencies, normalized from 0 to 1 (1 corresponds to pi radians/sample). For example: lowpass: wp = 0.2, ws = 0.3; highpass: wp = 0.3, ws = 0.2 (from here). I'm not familiar with digital filters (I'm coming from a hardware design background). In an analog context, I would determine the desired slope and the 3 dB down point,
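The conversion is just a division by the Nyquist frequency: a physical edge frequency in Hz divided by fs/2 gives the 0-to-1 value iirdesign expects. A minimal SciPy sketch for a low-pass design (the sample rate, corner frequencies, and ripple/attenuation numbers are hypothetical):

    from scipy import signal

    fs = 48000.0       # sample rate, Hz
    f_pass = 4000.0    # passband edge, Hz
    f_stop = 6000.0    # stopband edge, Hz

    nyquist = fs / 2.0
    wp = f_pass / nyquist   # ~0.167 of pi radians/sample
    ws = f_stop / nyquist   # 0.25  of pi radians/sample

    # 1 dB passband ripple, 40 dB stopband attenuation
    b, a = signal.iirdesign(wp, ws, gpass=1, gstop=40)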

Calculating frequency with Apple's aurioTouch example

北城以北 submitted on 2019-12-05 16:10:33
I am working on a program that needs to capture the frequency of sound from a guitar. I have modified the aurioTouch example to output the frequency by picking the frequency with the highest magnitude. It works OK for high notes but is very inaccurate on the lower strings; I believe this is due to overtones. I have researched ways to solve this problem, such as cepstrum analysis, but I am lost on how to implement it within the example code, as the code is unclear and hard to follow without comments. Any help would be greatly appreciated, thanks! As you have discovered, musical pitch is not the same as
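A rough NumPy sketch of the cepstrum idea (not based on the aurioTouch code; frame length, sample rate, and the test tone are hypothetical): take the log-magnitude spectrum of a windowed frame, inverse-FFT it, and look for a peak at quefrencies that correspond to plausible guitar pitches.

    import numpy as np

    def cepstral_pitch(frame, fs, fmin=60.0, fmax=500.0):
        """Estimate the fundamental of one frame via the real cepstrum."""
        spectrum = np.fft.fft(frame * np.hanning(len(frame)))
        cepstrum = np.fft.ifft(np.log(np.abs(spectrum) + 1e-12)).real

        q_min = int(fs / fmax)       # quefrency (in samples) = period of the pitch
        q_max = int(fs / fmin)
        peak = q_min + np.argmax(cepstrum[q_min:q_max])
        return fs / peak

    # Hypothetical test: a 110 Hz tone with strong overtones (like a low A string)
    fs = 44100
    t = np.arange(4096) / fs
    frame = sum(np.sin(2 * np.pi * 110 * k * t) / k for k in range(1, 6))
    print(cepstral_pitch(frame, fs))     # expected to print roughly 110 Hz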

Increase/Decrease Play Speed of a WAV File in Python

强颜欢笑 submitted on 2019-12-05 14:24:04
I want to change the play speed (increase or decrease it) of a certain WAV audio file using the Python wave module. I tried the following: read the frame rate of the input file, double the frame rate, and write a new wave file with the increased frame rate using the output_wave.setparams() function. But it's not working out. Please suggest a fix. Thanks in advance. WOW! If you don't mind the pitch changing when you increase or decrease the speed, you can just change the sample rate! It can be very simple using Python: import wave CHANNELS = 1 swidth = 2 Change_RATE = 2 spf = wave.open('VOZ.wav', 'rb') RATE=spf.getframerate() signal =
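A minimal sketch of that approach with the wave module (file names and the speed factor are hypothetical): copy the frames unchanged but write them out with a scaled frame rate. Doubling the rate doubles the speed and raises the pitch by an octave; halving it does the opposite.

    import wave

    factor = 2.0   # >1 plays faster (and higher), <1 plays slower (and lower)

    with wave.open('input.wav', 'rb') as src:
        params = src.getparams()
        frames = src.readframes(src.getnframes())

    with wave.open('output.wav', 'wb') as dst:
        dst.setnchannels(params.nchannels)
        dst.setsampwidth(params.sampwidth)
        dst.setframerate(int(params.framerate * factor))
        dst.writeframes(frames)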