audiounit

Core Audio AudioFileReadPackets… looking for raw audio

白昼怎懂夜的黑 Submitted on 2019-12-06 12:41:53
Question: I'm trying to get raw audio data from a file (I'm used to seeing floating-point values between -1 and 1). I'm trying to pull this data out of the buffers in real time so that I can provide some type of metering for the app. I'm basically reading the whole file into memory using AudioFileReadPackets. I've created a RemoteIO audio unit to do playback, and inside the playbackCallback I'm supplying the mData to the AudioBuffer so that it can be sent to hardware. The big problem I'm having is…
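
A minimal sketch of how that metering might be derived inside the render callback, assuming the file was read into memory as mono 16-bit signed samples; PlayerState, sampleBuffer, and lastPeak are hypothetical names, not from the question:

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

// Hypothetical state struct: the whole file has already been read into
// sampleBuffer (mono SInt16) by AudioFileReadPackets.
typedef struct {
    SInt16 *sampleBuffer;
    UInt32  currentFrame;
    float   lastPeak;
} PlayerState;

// Copies preloaded SInt16 samples to the output buffer and derives a
// -1.0 … 1.0 peak value the UI can read for metering.
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    PlayerState *player = (PlayerState *)inRefCon;
    SInt16 *out = (SInt16 *)ioData->mBuffers[0].mData;
    float peak = 0.0f;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        SInt16 sample = player->sampleBuffer[player->currentFrame + i];
        out[i] = sample;
        float normalized = sample / 32768.0f;   // map to -1.0 … 1.0
        if (fabsf(normalized) > peak) peak = fabsf(normalized);
    }
    player->currentFrame += inNumberFrames;
    player->lastPeak = peak;                    // read by the UI for metering
    return noErr;
}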

Writing bytes to audio file using AUHAL audio unit

不想你离开。 Submitted on 2019-12-06 10:08:59
Question: I am trying to create a WAV file from the sound input I get from the default input device of my MacBook (the built-in mic). However, the resulting file, when imported into Audacity as raw data, is complete garbage. First I initialize the audio file reference so I can later write to it in the audio unit input callback:

// struct contains AudioFileID as member
MyAUGraphPlayer player = {0};
player.startingByte = 0;
// describe a PCM format for audio file
AudioStreamBasicDescription format = { 0 };
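
For comparison, a sketch of an ASBD and file setup that generally produces a readable 16-bit LPCM WAV file; the path is a placeholder and the sample rate has to match what the AUHAL unit actually delivers:

#include <AudioToolbox/AudioToolbox.h>

// Sketch: create a 16-bit LPCM WAV file that the input callback can write into.
static OSStatus createOutputFile(AudioFileID *outFile) {
    AudioStreamBasicDescription format = {0};
    format.mSampleRate       = 44100.0;   // must match the AUHAL output scope
    format.mFormatID         = kAudioFormatLinearPCM;
    format.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    format.mChannelsPerFrame = 2;
    format.mBitsPerChannel   = 16;
    format.mBytesPerFrame    = format.mChannelsPerFrame * (format.mBitsPerChannel / 8);
    format.mFramesPerPacket  = 1;
    format.mBytesPerPacket   = format.mBytesPerFrame;

    CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                                 CFSTR("/tmp/output.wav"),  // placeholder path
                                                 kCFURLPOSIXPathStyle, false);
    OSStatus err = AudioFileCreateWithURL(url, kAudioFileWAVEType, &format,
                                          kAudioFileFlags_EraseFile, outFile);
    CFRelease(url);
    return err;
}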

Changing sample rate of an AUGraph on iOS

橙三吉。 Submitted on 2019-12-06 07:07:09
Question: I've implemented an AUGraph similar to the one in the iOS Developer Library. In my app, however, I need to be able to play back sound at different sample rates (probably two different ones). I've been looking through Apple's documentation and haven't found a way to set the sample rate at runtime. I've been thinking of three possible workarounds: re-initialize the AUGraph every time I need to change the sample rate; initialize a different AUGraph for each sample rate; or convert the…
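
A sketch of the first workaround (tearing the graph down just far enough to change formats), assuming graph and outputUnit already exist; whether this is sufficient, or whether the whole graph must be rebuilt, depends on the units in it:

#include <AudioToolbox/AudioToolbox.h>

// Hypothetical: switch the sample rate of an existing, already-built AUGraph.
static OSStatus setGraphSampleRate(AUGraph graph, AudioUnit outputUnit, Float64 newRate) {
    AUGraphStop(graph);
    AUGraphUninitialize(graph);            // stream formats generally cannot change while initialized

    OSStatus err = AudioUnitSetProperty(outputUnit,
                                        kAudioUnitProperty_SampleRate,
                                        kAudioUnitScope_Output, 0,
                                        &newRate, sizeof(newRate));

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return err;
}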

Setting up an Audio Unit format and render callback for interleaved PCM audio

被刻印的时光 ゝ Submitted on 2019-12-06 06:20:12
Question: I'm currently attempting to play back audio which I receive in a series of UDP packets. These are decoded into PCM frames with the following properties: 2 channels, interleaved; 2 bytes per sample per channel (so 4 bytes per frame); a sample rate of 48000 Hz. Every UDP packet contains 480 frames, so the buffer's size is 480 * 2 (channels) * 2 (bytes per channel). I need to set up an Audio Unit to play back these packets. So, my first question is: how should I set up the…
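
A sketch of an AudioStreamBasicDescription that matches that packet layout; every field value follows from the properties listed above:

#include <AudioToolbox/AudioToolbox.h>

// ASBD for the packets described above: 48 kHz, stereo, interleaved,
// 16-bit signed integer samples.
static AudioStreamBasicDescription makeStreamFormat(void) {
    AudioStreamBasicDescription asbd = {0};
    asbd.mSampleRate       = 48000.0;
    asbd.mFormatID         = kAudioFormatLinearPCM;
    asbd.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    asbd.mChannelsPerFrame = 2;    // interleaved stereo (no NonInterleaved flag)
    asbd.mBitsPerChannel   = 16;   // 2 bytes per sample per channel
    asbd.mBytesPerFrame    = 4;    // 2 channels * 2 bytes
    asbd.mFramesPerPacket  = 1;    // always 1 for uncompressed PCM
    asbd.mBytesPerPacket   = 4;
    return asbd;
}

// Typically applied to the input scope of the RemoteIO output element, e.g.:
// AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
//                      kAudioUnitScope_Input, 0, &asbd, sizeof(asbd));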

Multi-track MP3 playback for an iOS application

蓝咒 Submitted on 2019-12-06 03:44:46
I am building an application that involves playing back a song in a multi-track format (drums, vocals, guitar, piano, etc.). I don't need to do any fancy audio processing on each track; all I need to be able to do is play, pause, and mute/unmute each track. I had been using multiple instances of AVAudioPlayer, but when performing device testing I noticed that the tracks play very slightly out of sync when they are first started. Furthermore, when I pause and resume the tracks, they continue to drift further out of sync. After a bit of research I've realized that AVAudioPlayer just has too much…
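
One common mitigation before dropping to lower-level APIs is to schedule all players against a single deviceCurrentTime reference with playAtTime:; a sketch, assuming players holds the preloaded AVAudioPlayer instances:

#import <AVFoundation/AVFoundation.h>

// Start every preloaded AVAudioPlayer against one shared clock reference so
// the tracks begin on the same device-clock tick.
static void startPlayersInSync(NSArray<AVAudioPlayer *> *players) {
    for (AVAudioPlayer *p in players) {
        [p prepareToPlay];                     // preload so start latency is uniform
    }
    NSTimeInterval startTime =
        [[players firstObject] deviceCurrentTime] + 0.1;   // small scheduling lead time
    for (AVAudioPlayer *p in players) {
        [p playAtTime:startTime];              // all players share the same reference clock
    }
}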

iOS Audio Unit - Creating Stereo Sine Waves

不想你离开。 Submitted on 2019-12-06 02:42:17
Question: Over the weekend I hit a stumbling block learning how to program audio synthesis on iOS. I have been developing on iOS for several years, but I am just getting into the audio synthesis aspect. Right now I am just writing demo apps to help me learn the concepts. So far I have been able to build and stack sine waves in a playback renderer for Audio Units without a problem. But I want to understand what is going on in the renderer so I can render 2 separate sine waves in each Left and…
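
A sketch of a render callback that writes an independent sine wave to each channel, assuming a non-interleaved Float32 stream format so the left and right channels arrive as separate buffers in ioData; the frequencies and sample rate are placeholders:

#include <AudioToolbox/AudioToolbox.h>
#include <math.h>

static double gLeftPhase = 0.0, gRightPhase = 0.0;

static OSStatus renderSine(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData) {
    const double sampleRate = 44100.0;
    const double leftFreq = 440.0, rightFreq = 660.0;   // placeholder frequencies

    // With kAudioFormatFlagIsNonInterleaved, mBuffers[0] is left and mBuffers[1] is right.
    Float32 *left  = (Float32 *)ioData->mBuffers[0].mData;
    Float32 *right = (Float32 *)ioData->mBuffers[1].mData;

    for (UInt32 i = 0; i < inNumberFrames; i++) {
        left[i]  = (Float32)sin(gLeftPhase);
        right[i] = (Float32)sin(gRightPhase);
        gLeftPhase  += 2.0 * M_PI * leftFreq  / sampleRate;
        gRightPhase += 2.0 * M_PI * rightFreq / sampleRate;
    }
    return noErr;
}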

Bit-shifting audio samples from Float32 to SInt16 results in severe clipping

匆匆过客 Submitted on 2019-12-05 10:03:56
I'm new to iOS and its C underpinnings, but not to programming in general. My dilemma is this: I'm implementing an echo effect in a complex AudioUnits-based application. The application needs reverb, echo, and compression, among other things. However, the echo only works right when I use a particular AudioStreamBasicDescription format for the audio samples generated in my app. That format, however, doesn't work with the other AudioUnits. While there are other ways to solve this problem, fixing the bit-twiddling in the echo algorithm might be the most straightforward approach. The…
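
For reference, the clipping typically comes from shifting or casting instead of scaling the -1.0…1.0 float range up to the 16-bit integer range and clamping; a minimal sketch of the scaling approach:

#include <stdint.h>

// Convert a normalized Float32 sample (-1.0 … 1.0) to SInt16 by scaling,
// clamping anything outside the valid range instead of letting it wrap.
static inline int16_t float32_to_sint16(float sample) {
    float scaled = sample * 32767.0f;
    if (scaled >  32767.0f) scaled =  32767.0f;
    if (scaled < -32768.0f) scaled = -32768.0f;
    return (int16_t)scaled;
}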

Mixing down two files together using Extended Audio File Services

╄→尐↘猪︶ㄣ Submitted on 2019-12-05 09:20:19
Question: I am doing some custom audio post-processing using audio units. I have two files that I am merging together (links below), but I am getting some weird noise in the output. What am I doing wrong? I have verified that, before this step, the two files (workTrack1 and workTrack2) are in a proper state and sound good, and no errors occur during the process. Buffer processing code:

- (BOOL)mixBuffersWithBuffer1:(const int16_t *)buffer1 buffer2:(const int16_t *)buffer2 outBuffer:(int16_t *…
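
The usual cause of that kind of noise is integer overflow when two 16-bit samples are summed directly; a sketch of a mix loop that accumulates in 32 bits and clamps, assuming both buffers hold the same number of samples:

#include <stddef.h>
#include <stdint.h>

// Mix two SInt16 buffers sample by sample, clamping the sum so it cannot
// wrap around and produce loud artifacts.
static void mixBuffers(const int16_t *buffer1, const int16_t *buffer2,
                       int16_t *outBuffer, size_t sampleCount) {
    for (size_t i = 0; i < sampleCount; i++) {
        int32_t sum = (int32_t)buffer1[i] + (int32_t)buffer2[i];
        if (sum > INT16_MAX) sum = INT16_MAX;
        if (sum < INT16_MIN) sum = INT16_MIN;
        outBuffer[i] = (int16_t)sum;
    }
}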

iOS: Process audio from AVPlayer video track

房东的猫 Submitted on 2019-12-05 01:41:43
Question: I plan to refactor the recording system in my iOS app. Context: up to now, I have recorded video and audio separately, starting both recordings at approximately the same time. Once recording is finished, using the same approach, I play the video and audio separately, applying AudioUnits on the fly to the audio. Finally, I merge the video and the modified audio. The trouble is that the two recordings don't start at exactly the same time (for whatever reason), producing an unsynchronized result. Would it be possible to refactor my system like this: 1)…
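
One possible refactoring is to skip the separate audio recording entirely and process the video file's own audio track with an MTAudioProcessingTap, running the AudioUnits inside the tap's process callback; a skeleton, with the actual processing omitted:

#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

// Tap process callback: pull the source audio, then run an AudioUnit effect
// chain over bufferListInOut in place (processing omitted here).
static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags,
                       AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut,
                       MTAudioProcessingTapFlags *flagsOut) {
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
    // ... apply the AudioUnit chain to bufferListInOut here ...
}

static void tapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {
    *tapStorageOut = clientInfo;
}
static void tapFinalize(MTAudioProcessingTapRef tap) {}

// Attach the tap to the player item's audio track so the effects run while
// the AVPlayer plays the original video file.
static void attachTap(AVPlayerItem *playerItem, AVAssetTrack *audioTrack) {
    MTAudioProcessingTapCallbacks callbacks = {
        .version = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = NULL,
        .init = tapInit, .finalize = tapFinalize,
        .prepare = NULL, .unprepare = NULL,
        .process = tapProcess,
    };
    MTAudioProcessingTapRef tap = NULL;
    MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                               kMTAudioProcessingTapCreationFlag_PostEffects, &tap);

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = @[ params ];
    playerItem.audioMix = audioMix;
}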