signal-processing

Calculate Coefficients of 2nd Order Butterworth Low Pass Filter

与世无争的帅哥 submitted on 2019-12-22 04:59:06
Question: With a sampling frequency of 10 kHz and a cut-off frequency of 1 kHz, how do I actually calculate the coefficients for the difference equation below? I know the difference equation will be in this form, but I do not know how to actually work out and come up with the numbers for the coefficients b0, b1, b2, a1, a2:

y(n) = b0*x(n) + b1*x(n-1) + b2*x(n-2) + a1*y(n-1) + a2*y(n-2)

I will eventually be implementing this LPF in C++ but I need to know how to actually calculate the coefficients first before I can get…
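A minimal sketch of the calculation in Python with SciPy (the question targets C++, but the numbers are language-independent; scipy.signal.butter is a real SciPy function, the rest is illustrative):

import numpy as np
from scipy import signal

fs = 10_000.0  # sampling frequency, Hz
fc = 1_000.0   # cut-off frequency, Hz

# SciPy expects the cut-off normalized so that 1.0 is the Nyquist frequency (fs/2).
b, a = signal.butter(N=2, Wn=fc / (fs / 2), btype='low')

# b = [b0, b1, b2] and a = [1, a1, a2]. Note the sign convention: SciPy puts
# the feedback terms on the left-hand side, so in the question's form the
# difference equation is
#   y(n) = b[0]*x(n) + b[1]*x(n-1) + b[2]*x(n-2) - a[1]*y(n-1) - a[2]*y(n-2)
print("b:", b)  # approximately [0.0675, 0.1349, 0.0675]
print("a:", a)  # approximately [1.0000, -1.1430, 0.4128]

The same numbers fall out of the bilinear transform by hand: prewarp the analog cut-off, substitute into the second-order Butterworth prototype 1/(s^2 + sqrt(2)*s + 1), and normalize by a0.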

iPhone: CPU power to do DSP/Fourier transform/frequency domain?

泄露秘密 submitted on 2019-12-22 04:25:18
Question: I want to analyze microphone audio on an ongoing basis (not just a snippet or a prerecorded sample), display a frequency graph, and filter out certain aspects of the audio. Is the iPhone powerful enough for that? I suspect the answer is yes, given the Google and iPhone voice recognition, Shazam and other music-recognition apps, and the guitar tuner apps out there. However, I don't know what limitations I'll have to deal with. Has anyone played around in this area?

Answer 1: Apple's sample code aurioTouch has a…

Change phase of a signal in frequency domain (MATLAB)

北城以北 submitted on 2019-12-22 01:10:14
Question: I posted this question on dsp.stackexchange and was informed that it was more relevant for Stack Overflow, as it is primarily a programming question. I am attempting to write code that allows me to change the phase of a signal in the frequency domain. However, my output isn't exactly correct, so something must be wrong. For a simple example, assume that we have the function y = sin(2*pi*t) and want to implement a phase shift of -pi/2. My code looks as follows:

clear all
close all
N = 64;
…
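A minimal NumPy sketch of the corrected idea (the question uses MATLAB; the FFT conventions match, and this is illustrative rather than the poster's actual code). The usual mistake is multiplying every bin by exp(1j*phi): for the result to stay real, positive frequencies must get exp(1j*phi) and negative frequencies exp(-1j*phi):

import numpy as np

N = 64
t = np.arange(N) / N                # one period of sin(2*pi*t)
y = np.sin(2 * np.pi * t)
phi = -np.pi / 2                    # desired phase shift

Y = np.fft.fft(y)
freqs = np.fft.fftfreq(N)           # signed bin frequencies; 0 at DC
shift = np.where(freqs > 0, np.exp(1j * phi),
        np.where(freqs < 0, np.exp(-1j * phi), 1.0))
y_shifted = np.fft.ifft(Y * shift).real

# sin(2*pi*t) shifted by -pi/2 should equal sin(2*pi*t - pi/2) = -cos(2*pi*t):
print(np.allclose(y_shifted, -np.cos(2 * np.pi * t)))  # True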

Delay a signal in time domain with a phase change in the frequency domain after FFT

丶灬走出姿态 submitted on 2019-12-21 19:58:31
Question: I have a problem with a basic time/frequency property implemented in a MATLAB script. The property is the Fourier time-shift theorem: a delay of t0 in the time domain corresponds to multiplying the spectrum by exp(-j*2*pi*f*t0) in the frequency domain. I've tried to implement this in a MATLAB script. I've assumed a sinusoidal signal with a frequency of 5 Hz and a sampling frequency of 800 Hz, and I want to delay this signal by 1.8 seconds. So I've implemented this script:

Fs = 800;
Time_max = 4; % seconds
t = 0:(1/Fs):Time_max;
delay = 1.8; % seconds of delay
f = 5; % Hz
y = sin(2 * pi * f * t);
figure
subplot(2,1,1)
plot(t,y);
xlabel…
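A NumPy sketch of the same experiment (the question's script is MATLAB; this is an illustrative translation, not the poster's code). The delay becomes a linear phase ramp exp(-1j*2*pi*f*t0) across the FFT bins; np.fft.fftfreq supplies each bin's frequency in Hz, negative frequencies included, so the ramp is conjugate-symmetric and the inverse FFT stays numerically real:

import numpy as np

Fs = 800.0
t = np.arange(0, 4, 1 / Fs)   # 4 seconds of signal
f0 = 5.0                      # Hz
t0 = 1.8                      # delay in seconds
y = np.sin(2 * np.pi * f0 * t)

freqs = np.fft.fftfreq(len(t), d=1 / Fs)
Y_delayed = np.fft.fft(y) * np.exp(-1j * 2 * np.pi * freqs * t0)
y_delayed = np.fft.ifft(Y_delayed).real

# The FFT treats the signal as periodic, so the delay is circular:
print(np.allclose(y_delayed, np.roll(y, int(t0 * Fs))))  # True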

How to mix PCM audio sources (Java)?

别来无恙 submitted on 2019-12-21 19:25:36
Question: Here's what I'm working with right now:

for (int i = 0, numSamples = soundBytes.length / 2; i < numSamples; i += 2)
{
    // Get the samples.
    int sample1 = ((soundBytes[i] & 0xFF) << 8) | (soundBytes[i + 1] & 0xFF); // Automatically converts to unsigned int 0...65535
    int sample2 = ((outputBytes[i] & 0xFF) << 8) | (outputBytes[i + 1] & 0xFF); // Automatically converts to unsigned int 0...65535

    // Normalize for simplicity.
    float normalizedSample1 = sample1 / 65535.0f;
    float normalizedSample2 =…
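Two things usually bite here: 16-bit PCM samples are signed, so treating them as unsigned 0...65535 shifts the waveform before mixing, and the loop bound counts samples while i indexes bytes, so only part of the buffer is visited. A hedged NumPy sketch of the standard fix (the question is Java; this only illustrates the arithmetic, with big-endian '>i2' matching the (high << 8) | low packing above):

import numpy as np

def mix_pcm16(a_bytes: bytes, b_bytes: bytes) -> bytes:
    # Interpret as SIGNED big-endian 16-bit, widen to avoid overflow...
    a = np.frombuffer(a_bytes, dtype='>i2').astype(np.int32)
    b = np.frombuffer(b_bytes, dtype='>i2').astype(np.int32)
    # ...sum, then clip back into the signed 16-bit range.
    mixed = np.clip(a + b, -32768, 32767).astype('>i2')
    return mixed.tobytes()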

Different spectrogram between MATLAB and Python

ぃ、小莉子 submitted on 2019-12-21 17:32:22
Question: I have a program in MATLAB which I want to port to Python. The problem is that in it I use the built-in spectrogram function and, although the matplotlib specgram function seems identical, I'm getting different results when I run both. This is the code I've been running. MATLAB:

data = 1:999; % Dummy data. Just for testing.
Fs = 8000; % All the songs we'll be working on will be sampled at an 8 kHz rate
tWindow = 64e-3; % The window must be long enough to get 64 ms of the signal
NWindow = Fs…
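One common source of the mismatch is defaults: MATLAB's spectrogram defaults to a Hamming window with 50% overlap, while matplotlib's specgram defaults to a Hann window with NFFT=256 and noverlap=128. A sketch of making the choices explicit with scipy.signal.spectrogram (illustrative; the exact MATLAB-equivalent settings depend on the call being ported):

import numpy as np
from scipy import signal

data = np.arange(1, 1000)      # dummy data, as in the question
Fs = 8000
NWindow = int(Fs * 64e-3)      # 512 samples = 64 ms at 8 kHz

f, t, Sxx = signal.spectrogram(
    data,
    fs=Fs,
    window=signal.get_window('hamming', NWindow),
    nperseg=NWindow,
    noverlap=NWindow // 2,     # mirror MATLAB's 50%-overlap default
    mode='complex',            # MATLAB's spectrogram returns the complex STFT
)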

Finding the best scale/shift between two vectors

久未见 submitted on 2019-12-21 12:25:33
Question: I have two vectors: one represents samples of a function f(x), and the other f(a*x+b), i.e. a scaled and shifted version of f(x). I would like to find the best scale and shift factors (best by means of least-squares error, maximum likelihood, etc.). Any ideas? For example:

f1 = [0; 0.450541598502498; 0.0838213779969326; 0.228976968716819; 0.91333736150167; 0.152378018969223; 0.825816977489547; 0.538342435260057; 0.996134716626885; 0.0781755287531837; 0.442678269775446; 0];
f2 = [-0.029171964726699; -0…
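A hedged sketch of one least-squares approach in Python: interpolate f1 so it can be evaluated at the warped coordinates a*x + b, then minimize the squared error against f2 over (a, b). The shared x grid, the zero fill outside it, and the Nelder-Mead choice are all assumptions, not part of the question:

import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import minimize

def fit_scale_shift(f1, f2, x):
    # Evaluate f1 anywhere; points outside the grid are filled with 0.
    f1_interp = interp1d(x, f1, bounds_error=False, fill_value=0.0)

    def sse(params):
        a, b = params
        return np.sum((f1_interp(a * x + b) - f2) ** 2)

    # Nelder-Mead copes with the non-smoothness introduced by interpolation.
    result = minimize(sse, x0=[1.0, 0.0], method='Nelder-Mead')
    return result.x  # best-fit (a, b)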

What is the simplest way to continuously sample from the line-in using C#

拜拜、爱过 submitted on 2019-12-21 09:23:04
Question: I want to continuously sample from my PC's audio line-in using C# (and then process that data). What is the best way to do the sampling?

Answer 1: You can do some (basic) audio capture using the open-source NAudio .NET audio library. Have a look at the NAudioDemo project to see a simple example of recording to a WAV file using the WaveIn functions. NAudio also now includes the ability to capture audio using WASAPI (Windows Vista and above) and ASIO (if your soundcard has an ASIO driver).

Answer 2: There is…
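The question asks for C#, but as a language-neutral illustration of the continuous-callback pattern that WaveIn/WASAPI capture follows, here is a sketch using Python's sounddevice package (a substitution for illustration only, not part of the NAudio answer):

import numpy as np
import sounddevice as sd

def callback(indata, frames, time, status):
    # Called with every fresh block of samples; processing goes here.
    if status:
        print(status)
    rms = np.sqrt(np.mean(indata ** 2))  # example processing: block RMS level
    print(f"RMS: {rms:.4f}")

with sd.InputStream(samplerate=44100, channels=1, callback=callback):
    sd.sleep(5000)  # capture and process for 5 seconds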

What are the downsides of convolution by FFT compared to realspace convolution?

老子叫甜甜 submitted on 2019-12-21 06:58:00
Question: So I am aware that a convolution by FFT has a lower computational complexity than a convolution in real space. But what are the downsides of an FFT convolution? Does the kernel size always have to match the image size, or are there functions that take care of this, for example in Python's numpy and scipy packages? And what about anti-aliasing effects?

Answer 1: FFT convolutions are based on the convolution theorem, which states that given two functions f and g, if Fd() and Fi() denote the direct…
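A small sketch of the point the answer is heading toward: with enough zero-padding (FFT length >= len(f) + len(g) - 1, so no, the kernel does not have to match the signal size) the frequency-domain product reproduces the direct linear convolution; without padding the result wraps around (circular convolution), which is the main aliasing pitfall:

import numpy as np
from scipy import signal

f = np.random.default_rng(0).standard_normal(128)
g = np.ones(9) / 9.0                # small smoothing kernel

n = len(f) + len(g) - 1             # padded length avoids wrap-around
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)
direct = np.convolve(f, g)          # real-space reference

print(np.allclose(via_fft, direct))                      # True
print(np.allclose(via_fft, signal.fftconvolve(f, g)))    # True: scipy pads for you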

DCT Compression - Block Size, Choosing Coefficients

北战南征 submitted on 2019-12-21 05:25:09
Question: I'm trying to understand the effect of the block size and the best strategy for choosing the coefficients in DCT compression. Basically I want to ask what I wrote here: Video Compression: What is discrete cosine transform? Let's assume the most primitive compression: making blocks of an image, performing a DCT on each block and zeroing out some coefficients. To my understanding, the smaller the block the better: smaller blocks mean the pixels are more correlated, hence the energy in the DCT spectrum…
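A minimal sketch of the "primitive compression" described above, assuming square blocks and a keep-the-top-left-KxK rule (both are illustrative choices, not from the question):

import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(img, B=8, K=4):
    # Split into BxB blocks, 2-D DCT each block, zero all but the KxK
    # lowest-frequency coefficients, and invert.
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    mask = np.zeros((B, B))
    mask[:K, :K] = 1.0
    for i in range(0, h - h % B, B):
        for j in range(0, w - w % B, B):
            coeffs = dctn(img[i:i+B, j:j+B], norm='ortho')
            out[i:i+B, j:j+B] = idctn(coeffs * mask, norm='ortho')
    return out

# A smooth gradient concentrates its energy in the low frequencies,
# so even aggressive truncation leaves a small reconstruction error:
img = np.add.outer(np.arange(64.0), np.arange(64.0))
rec = block_dct_compress(img, B=8, K=2)
print(np.max(np.abs(rec - img)))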