pcm

Core Audio AudioFileReadPackets… looking for raw audio

陌路散爱 submitted on 2019-12-04 19:42:57
I'm trying to get raw audio data from a file (I'm used to seeing floating-point values between -1 and 1). I'm trying to pull this data out of the buffers in real time so that I can provide some type of metering for the app. I'm basically reading the whole file into memory using AudioFileReadPackets. I've created a RemoteIO audio unit to do playback, and inside of the playbackCallback I'm supplying the mData to the AudioBuffer so that it can be sent to hardware. The big problem I'm having is that the data being sent to the buffers from my array of data (from AudioFileReadPackets) is UInt32... I
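A likely explanation is that the buffer holds interleaved 16-bit signed samples (one UInt32 often packs a single stereo frame), not floats. The question is Core Audio / C, but the sample conversion itself is language-agnostic; below is a minimal sketch in Java (used for all examples on this page), assuming 16-bit little-endian PCM, that maps each sample into the -1..1 range for metering:

    // Convert interleaved 16-bit little-endian PCM bytes to floats in [-1.0, 1.0].
    // Assumption: the byte[] mirrors the mData buffer described in the question.
    public static float[] toFloatSamples(byte[] pcm) {
        float[] out = new float[pcm.length / 2];
        for (int i = 0; i < out.length; i++) {
            int lo = pcm[2 * i] & 0xFF;          // low byte (unsigned)
            int hi = pcm[2 * i + 1];             // high byte (sign-extended)
            short sample = (short) ((hi << 8) | lo);
            out[i] = sample / 32768.0f;          // scale to -1.0 .. 1.0
        }
        return out;
    }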

Visualizing volume of PCM samples

穿精又带淫゛_ submitted on 2019-12-04 19:35:13
I have several chunks of PCM audio (G.711) in my C++ application. I would like to visualize the different audio volume in each of these chunks. My first attempt was to calculate the average of the sample values for each chunk and use that as a volume indicator, but this doesn't work well. I do get 0 for chunks with silence and differing values for chunks with audio, but the values only differ slightly and don't seem to resemble the actual volume. What would be a better algorithm to calculate the volume? I hear G.711 audio is logarithmic PCM. How should I take that into account? Note, I haven
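A plain average cancels out for roughly symmetric audio, which is why the values barely differ. A more useful indicator is the RMS of the decoded linear samples: first expand each G.711 byte to linear PCM, then compute RMS (optionally in dB). A minimal sketch, in Java to keep one language across these examples and assuming µ-law rather than A-law:

    // Volume of a G.711 µ-law chunk: decode each byte to linear PCM,
    // then compute RMS (a plain average cancels out for roughly symmetric audio).
    public final class G711Volume {

        // Standard ITU-T G.711 µ-law expansion to a signed linear value (max ±32124).
        static int muLawToLinear(byte b) {
            int u = ~b & 0xFF;                    // stored bytes are complemented
            int sign = u & 0x80;
            int exponent = (u >> 4) & 0x07;
            int mantissa = u & 0x0F;
            int magnitude = (((mantissa << 3) + 0x84) << exponent) - 0x84;
            return sign != 0 ? -magnitude : magnitude;
        }

        // Returns chunk loudness in dBFS (0 dB = full scale, more negative = quieter).
        public static double rmsDbfs(byte[] chunk) {
            double sumSquares = 0;
            for (byte b : chunk) {
                double s = muLawToLinear(b) / 32124.0;  // 32124 = µ-law full scale here
                sumSquares += s * s;
            }
            double rms = Math.sqrt(sumSquares / chunk.length);
            return 20.0 * Math.log10(Math.max(rms, 1e-9));
        }
    }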

J2ME/Blackberry - get audio signal amplitude level?

前提是你 submitted on 2019-12-04 19:09:25
Is it possible in J2ME to measure the signal amplitude of an audio recording made by a JSR-135 Player? I know I can access the buffer, but then what? Target model Bold 9000, supported formats PCM and AMR. Which format should I use? See also: Blackberry Audio Recording Sample Code, How To - Record Audio on a BlackBerry smartphone. Thank you! Maksym Gontar: Get raw PCM signal level. Use menu and trackwheel to zoom in/out and move left/right within the graph. Audio format: raw 8000 Hz 16 bit mono PCM. Tested on Bold 9000, RIM OS 4.6. The algorithm should work on any mobile where J2ME and PCM are supported, of course
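For the raw 8000 Hz 16-bit mono PCM case, the level can be read straight off the recorded buffer. A minimal J2ME-compatible sketch, assuming little-endian signed samples (swap the two bytes if the device delivers big-endian data):

    // Signal level of a raw 16-bit mono PCM buffer (e.g. from a JSR-135 RecordControl).
    public static int peakAmplitude(byte[] pcm, int length) {
        int peak = 0;
        for (int i = 0; i + 1 < length; i += 2) {
            int sample = (short) ((pcm[i] & 0xFF) | (pcm[i + 1] << 8));
            int magnitude = Math.abs(sample);
            if (magnitude > peak) {
                peak = magnitude;
            }
        }
        return peak;   // 0 .. 32767; divide by 32767f for a 0..1 level
    }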

Incorrect peak frequency in JTransform

浪尽此生 submitted on 2019-12-04 19:00:52
I've been trying to calculate the peak frequency from the Android mic buffer as per this: How to get frequency from fft result? Unfortunately I'm getting wrong values. Even when I play an 18 kHz tone, I don't get the correct peak frequency. This is my code: int sampleRate=44100,bufferSize=4096; AudioRecord audioRec=new AudioRecord(AudioSource.MIC,sampleRate,AudioFormat.CHANNEL_CONFIGURATION_MONO,AudioFormat.ENCODING_PCM_16BIT,bufferSize); audioRec.startRecording(); audioRec.read(bufferByte, 0,bufferSize); for(int i=0;i<bufferByte.length;i++){ bufferDouble2[i]=(double)bufferByte[i]; } //here window techniq
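Two things this snippet most likely gets wrong: ENCODING_PCM_16BIT delivers byte pairs, so casting each byte to a double destroys the samples, and the peak FFT bin still has to be scaled by sampleRate / fftSize to become a frequency. A sketch of both fixes (the JTransforms package name varies by version, e.g. edu.emory.mathcs.jtransforms.fft vs org.jtransforms.fft):

    import org.jtransforms.fft.DoubleFFT_1D;   // older releases: edu.emory.mathcs.jtransforms.fft

    public static double peakFrequency(byte[] bufferByte, int sampleRate) {
        int n = bufferByte.length / 2;               // number of 16-bit samples
        double[] samples = new double[n];
        for (int i = 0; i < n; i++) {
            // little-endian 16-bit PCM, as produced by AudioRecord
            samples[i] = (short) ((bufferByte[2 * i] & 0xFF) | (bufferByte[2 * i + 1] << 8));
        }

        DoubleFFT_1D fft = new DoubleFFT_1D(n);
        fft.realForward(samples);                    // in-place, packed Re/Im pairs

        int peakBin = 0;
        double peakMag = 0;
        for (int k = 1; k < n / 2; k++) {            // skip DC at k = 0
            double re = samples[2 * k], im = samples[2 * k + 1];
            double mag = re * re + im * im;
            if (mag > peakMag) { peakMag = mag; peakBin = k; }
        }
        return peakBin * (double) sampleRate / n;    // bin index -> Hz
    }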

Connecting AVAudioMixerNode to AVAudioEngine

☆樱花仙子☆ submitted on 2019-12-04 17:38:35
I use AVAudioMixerNode to change the audio format. This entry helped me a lot. The code below gives me the data I want, but I hear my own voice on the phone's speaker. How can I prevent it? func startAudioEngine() { engine = AVAudioEngine() guard let engine = engine, let input = engine.inputNode else { // @TODO: error out return } let downMixer = AVAudioMixerNode() //I think the engine's I/O nodes are already attached to it by default, so we attach only the downMixer here: engine.attach(downMixer) //You can tap the downMixer to intercept the audio and do something with it: downMixer.installTap(onBus:

Convert PCM to MP3/OGG

此生再无相见时 submitted on 2019-12-04 16:30:59
I need to convert a continuous stream of PCM, or encoded audio (ADPCM, uLaw, Opus), into MP3/OGG format so that it can be streamed to a browser (using HTML's audio tag). I have the "stream-mp3/ogg-using-audio-tag" part working; now I need to develop the conversion layer. I have two questions: How can I convert PCM into MP3/OGG using NAudio and/or some other C# library/framework? I assume there is a code snippet or two in the NAudio demo app that may be doing this, but I haven't been able to find it. Do I have to convert the encoded data (ADPCM, uLaw, OPUS) into PCM (which I can) before I

How to mix PCM audio sources (Java)?

半世苍凉 submitted on 2019-12-04 10:24:26
Here's what I'm working with right now: for (int i = 0, numSamples = soundBytes.length / 2; i < numSamples; i += 2) { // Get the samples. int sample1 = ((soundBytes[i] & 0xFF) << 8) | (soundBytes[i + 1] & 0xFF); // Automatically converts to unsigned int 0...65535 int sample2 = ((outputBytes[i] & 0xFF) << 8) | (outputBytes[i + 1] & 0xFF); // Automatically converts to unsigned int 0...65535 // Normalize for simplicity. float normalizedSample1 = sample1 / 65535.0f; float normalizedSample2 = sample2 / 65535.0f; float normalizedMixedSample = 0.0f; // Apply the algorithm. if (normalizedSample1 < 0
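A simpler starting point is to read the byte pairs as signed shorts (the snippet above maps them to unsigned 0..65535, which complicates the mix math), sum the two sources and hard-clip the result. A minimal sketch, assuming the same big-endian byte order as the code above:

    // Mix two 16-bit big-endian PCM buffers by summing signed samples and clamping.
    public static byte[] mix(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length) & ~1;   // whole 16-bit samples only
        byte[] out = new byte[len];
        for (int i = 0; i < len; i += 2) {
            int s1 = (short) ((a[i] << 8) | (a[i + 1] & 0xFF));   // big-endian, signed
            int s2 = (short) ((b[i] << 8) | (b[i + 1] & 0xFF));
            int mixed = s1 + s2;
            if (mixed > Short.MAX_VALUE) mixed = Short.MAX_VALUE; // hard clip
            if (mixed < Short.MIN_VALUE) mixed = Short.MIN_VALUE;
            out[i] = (byte) (mixed >> 8);
            out[i + 1] = (byte) mixed;
        }
        return out;
    }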

AudioTrack: should I use an Asynctask, a Thread or a Handler?

£可爱£侵袭症+ submitted on 2019-12-04 07:05:55
Question: In Android, I am trying to play a wav file of size 230 MB and 20 min whose properties are as below: ffmpeg -i 1.wav Stream #0:0: Audio: pcm_s16le ([1][0][0][0] / 0x0001), 44100 Hz, stereo, s16, 1411 kb/s The following is the code in Android: float volchange=0.5f; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); // Code to set the volume change variable for the audiotrack Button volup = (Button) findViewById
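For a 230 MB file, a plain background Thread with AudioTrack in MODE_STREAM is enough: the blocking write() paces the loop, so no AsyncTask or Handler is required. A minimal sketch under those assumptions (usual android.media and java.io imports, illustrative names, the volchange field from the code above):

    // Stream a large 44.1 kHz 16-bit stereo WAV through AudioTrack on a background Thread.
    void playWav(final File wavFile) {
        new Thread(new Runnable() {
            @Override public void run() {
                int minBuf = AudioTrack.getMinBufferSize(44100,
                        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
                AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
                        AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
                        minBuf, AudioTrack.MODE_STREAM);
                track.setStereoVolume(volchange, volchange);  // same volchange field as above
                track.play();
                byte[] chunk = new byte[minBuf];
                try {
                    FileInputStream in = new FileInputStream(wavFile);
                    in.skip(44);                               // skip the canonical WAV header
                    int read;
                    while ((read = in.read(chunk)) > 0) {
                        track.write(chunk, 0, read);           // blocks, pacing the playback
                    }
                    in.close();
                } catch (IOException e) {
                    // handle/log as appropriate
                } finally {
                    track.stop();
                    track.release();
                }
            }
        }).start();
    }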

Android PCM to Ulaw encoding wav file

丶灬走出姿态 submitted on 2019-12-04 06:00:52
I'm trying to encode raw PCM data as uLaw to save on the bandwidth required to transmit speech data. I have come across a class called UlawEncoderInputStream on This page, but there is no documentation! :( The constructor takes an input stream and a max PCM value (whatever that is). /** * Create an InputStream which takes 16 bit pcm data and produces ulaw data. * @param in InputStream containing 16 bit pcm data. * @param max pcm value corresponding to maximum ulaw value. */ public UlawEncoderInputStream(InputStream in, int max) { After looking through the code, I suspect that I should calculate
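One plausible reading of the max parameter, given the quoted comment, is the largest absolute 16-bit sample in the recording, which the encoder can use to scale the input before companding; that is an assumption, not documented behaviour. A sketch that scans little-endian 16-bit PCM for that value:

    // Assumption: "max" = largest absolute 16-bit sample in the PCM data.
    public static int maxAbsPcm(byte[] pcm, int offset, int length) {
        int max = 0;
        for (int i = offset; i + 1 < offset + length; i += 2) {
            int sample = (short) ((pcm[i] & 0xFF) | (pcm[i + 1] << 8));
            int magnitude = Math.abs(sample);
            if (magnitude > max) max = magnitude;
        }
        return max;   // candidate value for the constructor's max argument
    }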

Deinterleaving PCM (*.wav) stereo audio data

倾然丶 夕夏残阳落幕 submitted on 2019-12-04 05:44:46
I understand that PCM data is stored as [left][right][left][right]... . I am trying to convert stereo PCM to mono Vorbis (*.ogg), which I understand is achievable by averaging the left and the right channels ((left+right)*0.5). I have actually achieved this by amending the encoder example in the libvorbis SDK like this, #define READ 1024 signed char readbuffer[READ*4]; and the PCM data is read thus fread(readbuffer, 1, READ*4, stdin) I then halved the two channels, buffer[0][i] = ((((readbuffer[i*4+1]<<8) | (0x00ff&(int)readbuffer[i*4]))/32768.f) + (((readbuffer[i*4+3]<<8) | (0x00ff&(int
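For reference, the same deinterleave-and-average step in Java (one language for all examples here); the question's code is C against the libvorbis encoder example, so this only illustrates the byte layout and the (left+right)*0.5 downmix, not the Vorbis API:

    // Downmix interleaved little-endian 16-bit stereo PCM to mono floats in [-1, 1].
    public static float[] stereoToMono(byte[] pcm) {
        int frames = pcm.length / 4;                  // 4 bytes per stereo frame
        float[] mono = new float[frames];
        for (int i = 0; i < frames; i++) {
            short left  = (short) ((pcm[4 * i]     & 0xFF) | (pcm[4 * i + 1] << 8));
            short right = (short) ((pcm[4 * i + 2] & 0xFF) | (pcm[4 * i + 3] << 8));
            mono[i] = ((left / 32768.0f) + (right / 32768.0f)) * 0.5f;
        }
        return mono;
    }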