uwp AudioGraph audio processing


That data is interleaved IEEE float, so it alternates channel data as you step through the array, and the data range for each sample is from -1 to 1. For example, a mono signal only has one channel, so it won't interleave data at all; but a stereo signal has two channels of audio, and so:

dataInFloat[0]

is the first sample of data from the left channel and

dataInFloat[1]

is the first sample of data from the right channel. Then,

dataInFloat[2]

is the second sample of data from the left channel, and they just keep alternating back and forth like that. All the other data you'll end up caring about (SampleRate, ChannelCount, BitsPerSample, and so on) is in Windows.Media.MediaProperties.AudioEncodingProperties.
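To make that layout concrete, here is a minimal sketch of splitting an interleaved stereo buffer into separate channel arrays (samplesPerChannel and dataInFloat are assumed to come from the quantum-processing code shown further down):

    // De-interleave stereo: even indices are left, odd indices are right.
    float[] left = new float[samplesPerChannel];
    float[] right = new float[samplesPerChannel];
    for (int i = 0; i < samplesPerChannel; i++)
    {
        left[i] = dataInFloat[2 * i];
        right[i] = dataInFloat[2 * i + 1];
    }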

So, just knowing this, you can (essentially) get the overall volume of the signal immediately from this data by looking at the absolute value of each sample. You'll definitely want to average it out over some amount of time. You can even just attach EQ effects to different nodes, make separate Low, Mids, and High analyzer nodes, and never even get into FFT stuff. BUT WHAT FUN IS THAT? (it's actually still fun)
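For the averaging, one simple option is an exponential moving average of the per-quantum peak; this is just an illustrative sketch (the alpha value and the names are made up, not from any API):

    // Hypothetical smoothing helper: blends each new peak into a running
    // level so the meter doesn't jitter from quantum to quantum.
    private float smoothedLevel = 0f;

    private float SmoothLevel(float quantumPeak)
    {
        const float alpha = 0.2f; // 0..1; smaller = smoother, slower response
        smoothedLevel += alpha * (quantumPeak - smoothedLevel);
        return smoothedLevel;
    }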

And then, yeah, to get your complex harmonic data and make a truly sweet visualizer, you want to do an FFT on it. People enjoy using AForge for learning scenarios like yours. See Sources/Imaging/ComplexImage.cs for usage and Sources/Math/FourierTransform.cs for the implementation.
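As a rough sketch of what that might look like with AForge (assuming its FourierTransform.FFT(Complex[], Direction) API; the buffer handling here is illustrative, and the length must be a power of two):

    using AForge.Math;

    // Copy one quantum of mono samples into a complex buffer and run a
    // forward FFT, then take per-bin magnitudes.
    int fftLength = 1024; // power of two, <= samples available
    Complex[] fftBuffer = new Complex[fftLength];
    for (int i = 0; i < fftLength; i++)
    {
        fftBuffer[i] = new Complex(dataInFloat[i], 0);
    }

    FourierTransform.FFT(fftBuffer, FourierTransform.Direction.Forward);

    // For real-valued input, only the first half of the bins is meaningful.
    float[] magnitudes = new float[fftLength / 2];
    for (int i = 0; i < fftLength / 2; i++)
    {
        magnitudes[i] = (float)fftBuffer[i].Magnitude;
    }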

Then you can easily get your classic bin data and do the classic music-visualizer stuff, or get more creative, or whatever! Technology is awesome!
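Bin i of an N-point FFT at sample rate Fs corresponds to roughly i * Fs / N Hz, so grouping bins into a handful of bars is straightforward; a hypothetical helper:

    // Group FFT magnitudes into visualizer bars (illustrative helper).
    // Bin i covers roughly i * sampleRate / fftLength Hz.
    static float[] MakeBars(float[] magnitudes, int barCount)
    {
        float[] bars = new float[barCount];
        int binsPerBar = magnitudes.Length / barCount;
        for (int bar = 0; bar < barCount; bar++)
        {
            float sum = 0;
            for (int i = 0; i < binsPerBar; i++)
            {
                sum += magnitudes[bar * binsPerBar + i];
            }
            bars[bar] = sum / binsPerBar; // average magnitude in this band
        }
        return bars;
    }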

    // Scan one quantum for the peak absolute sample value. Note that for a
    // stereo (interleaved) buffer the quantum actually contains
    // graph.SamplesPerQuantum * channelCount floats, so loop over that many
    // if you want to cover the whole quantum.
    dataInFloat = (float*)dataInBytes;
    float max = 0;
    for (int i = 0; i < graph.SamplesPerQuantum; i++)
    {
        max = Math.Max(Math.Abs(dataInFloat[i]), max);
    }

    finalLevel = max;
    Debug.WriteLine(max);
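For context, dataInBytes comes from locking the AudioFrame's buffer; a minimal sketch of that plumbing, following the unsafe-pointer pattern from Microsoft's AudioGraph documentation (the method name is illustrative, and the frame would come from, e.g., AudioFrameOutputNode.GetFrame() in the graph's QuantumStarted handler):

    using System;
    using System.Runtime.InteropServices;
    using Windows.Foundation;
    using Windows.Media;

    // COM interface that exposes the raw byte pointer of an IMemoryBuffer.
    [ComImport]
    [Guid("5B0D3235-4DBA-4D44-865E-8F1D0E4FD04D")]
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown)]
    unsafe interface IMemoryBufferByteAccess
    {
        void GetBuffer(out byte* buffer, out uint capacity);
    }

    unsafe void ProcessFrame(AudioFrame frame)
    {
        using (AudioBuffer buffer = frame.LockBuffer(AudioBufferAccessMode.Read))
        using (IMemoryBufferReference reference = buffer.CreateReference())
        {
            byte* dataInBytes;
            uint capacityInBytes;
            ((IMemoryBufferByteAccess)reference).GetBuffer(out dataInBytes, out capacityInBytes);

            // ...the peak-level loop above goes here...
        }
    }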