audiounit

Core Audio AudioFileReadPackets… looking for raw audio

陌路散爱 submitted on 2019-12-04 19:42:57
I'm trying to get raw audio data from a file (I'm used to seeing floating-point values between -1 and 1). I'm trying to pull this data out of the buffers in real time so that I can provide some type of metering for the app. I'm basically reading the whole file into memory using AudioFileReadPackets. I've created a RemoteIO audio unit to do playback, and inside the playbackCallback I'm supplying the mData to the AudioBuffer so that it can be sent to the hardware. The big problem I'm having is that the data being sent to the buffers from my array of data (from AudioFileReadPackets) is UInt32…
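
A minimal sketch of the conversion being asked about, assuming the packets were read as 16-bit signed interleaved LPCM so that each UInt32 element really holds two SInt16 samples; the function name and the metering use are illustrative, not the poster's code:

    import Foundation

    // Sketch: reinterpret the UInt32-packed buffer as SInt16 samples and scale
    // them into the familiar -1.0 ... 1.0 float range for metering.
    func meterValues(from mData: UnsafeMutableRawPointer, byteCount: Int) -> [Float] {
        let sampleCount = byteCount / MemoryLayout<Int16>.size
        let samples = mData.bindMemory(to: Int16.self, capacity: sampleCount)
        return (0..<sampleCount).map { Float(samples[$0]) / Float(Int16.max) }
    }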

Unable to convert mp3 into PCM using AudioConverterFillComplexBuffer in AudioFileStreamOpen's AudioFileStream_PacketsProc callback

雨燕双飞 submitted on 2019-12-04 17:39:57
Question: I have an AudioFileStream_PacketsProc callback, set during an AudioFileStreamOpen, which handles converting audio packets into PCM using AudioConverterFillComplexBuffer. The issue I am having is that I get a -50 OSStatus (paramErr) after AudioConverterFillComplexBuffer is called. Below is a snippet of the parameters used in AudioConverterFillComplexBuffer and how they were made: audioConverterRef = AudioConverterRef() // AudioConvertInfo is a struct that contains information // for the converter regarding the number of packets and // which audiobuffer is being allocated
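
For context, a hedged sketch of how the converter itself would normally be created: a -50 from AudioConverterFillComplexBuffer frequently points at the converter handle rather than the fill call, and an AudioConverterRef has to come from AudioConverterNew with real source and destination formats rather than from an empty initializer. The formats below are purely illustrative; in practice the MP3 source format is read from the stream via kAudioFileStreamProperty_DataFormat.

    import AudioToolbox

    // Illustrative source format; normally filled from the audio file stream.
    var srcFormat = AudioStreamBasicDescription()
    srcFormat.mSampleRate = 44_100
    srcFormat.mFormatID = kAudioFormatMPEGLayer3
    srcFormat.mChannelsPerFrame = 2

    // Destination: 16-bit signed interleaved stereo LPCM.
    var dstFormat = AudioStreamBasicDescription(
        mSampleRate: 44_100,
        mFormatID: kAudioFormatLinearPCM,
        mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
        mBytesPerPacket: 4,
        mFramesPerPacket: 1,
        mBytesPerFrame: 4,
        mChannelsPerFrame: 2,
        mBitsPerChannel: 16,
        mReserved: 0)

    var converter: AudioConverterRef?
    let status = AudioConverterNew(&srcFormat, &dstFormat, &converter)
    // status must be noErr and converter non-nil before passing it, an input
    // data proc, a packet count, and a prepared AudioBufferList to
    // AudioConverterFillComplexBuffer.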

iOS Audio unit cut sound above some frequency

爷,独闯天下 submitted on 2019-12-04 16:43:06
I have some problems with received sound (UDP over WiFi) and I want to clean it up as much as I can. So to start, I want to cut off sounds above some frequency. To be clear, I get raw data from the socket, then copy it to the output buffer; I'm sure the cut-off should be done right there. Could you advise me? My current callback code: static OSStatus outputCallback(void *udata, AudioUnitRenderActionFlags *flags, const AudioTimeStamp *ts, UInt32 busnum, UInt32 nframes, AudioBufferList *buflist) { NXAudioDevice *dev = (__bridge NXAudioDevice *) udata; AudioBuffer *buf = buflist->mBuffers; // Here I get new
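
As a sketch of one way to do that cut right in the callback: a one-pole low-pass filter run over the samples in buf->mData. This assumes mono 16-bit audio, and the class name, cutoff, and sample rate are illustrative rather than part of the question; the attenuation is a gentle 6 dB/octave slope, not a brick-wall cut.

    import Foundation

    // One-pole (RC) low-pass filter applied in place to 16-bit samples.
    final class OnePoleLowPass {
        private var previous: Float = 0
        private let alpha: Float

        init(cutoffHz: Float, sampleRate: Float) {
            // Standard RC smoothing coefficient: alpha = dt / (RC + dt).
            let rc = 1 / (2 * Float.pi * cutoffHz)
            let dt = 1 / sampleRate
            alpha = dt / (rc + dt)
        }

        func process(_ data: UnsafeMutableRawPointer, byteCount: Int) {
            let count = byteCount / MemoryLayout<Int16>.size
            let samples = data.bindMemory(to: Int16.self, capacity: count)
            for i in 0..<count {
                previous += alpha * (Float(samples[i]) - previous)
                samples[i] = Int16(max(-32768, min(32767, previous)))
            }
        }
    }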

How would you connect an iPod library asset to an Audio Queue Service and process with an Audio Unit?

99封情书 submitted on 2019-12-04 12:07:40
I need to process audio that comes from the iPod library. The only way to read an asset from the iPod library is AVAssetReader. To process audio with an Audio Unit, it needs to be in stereo format so that I have values for the left and right channels. But when I use AVAssetReader to read an asset from the iPod library, it does not allow me to get it out in stereo format; it comes out in interleaved format, which I do not know how to break into left and right audio channels. To get where I need to go, I would need to do one of the following: get AVAssetReader to give me an AudioBufferList in stereo
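
If the reader is configured for 16-bit signed interleaved stereo LPCM output, splitting the channels is just a matter of striding through the buffer, where samples alternate L, R, L, R. A sketch (the function name and the Float scaling are illustrative):

    import Foundation

    // Deinterleave 16-bit stereo into separate left/right float arrays.
    func deinterleave(_ data: UnsafeRawPointer, byteCount: Int) -> (left: [Float], right: [Float]) {
        let sampleCount = byteCount / MemoryLayout<Int16>.size
        let samples = data.bindMemory(to: Int16.self, capacity: sampleCount)
        var left = [Float](), right = [Float]()
        left.reserveCapacity(sampleCount / 2)
        right.reserveCapacity(sampleCount / 2)
        for frame in stride(from: 0, to: sampleCount - 1, by: 2) {
            left.append(Float(samples[frame]) / 32768)       // even index: left
            right.append(Float(samples[frame + 1]) / 32768)  // odd index: right
        }
        return (left, right)
    }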

Playback and Recording simultaneously using Core Audio in iOS

寵の児 submitted on 2019-12-04 11:31:39
Question: I need to play and record simultaneously using Core Audio. I really do not want to use the AVFoundation API (AVAudioPlayer + AVAudioRecorder) to do this, as I am making a music app and cannot have any latency issues. I've looked at the following source code from Apple: aurioTouch and MixerHost. I've already looked into the following posts: "iOS: Sample code for simultaneous record and playback" and "Record and play audio simultaneously". I am still not clear on how I can do playback and record the same thing simultaneously using Core Audio. Any pointers towards how I can achieve this will be greatly appreciated.
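
For what it's worth, the core of the aurioTouch approach can be sketched as a single RemoteIO unit with input and output both enabled. The helper below is only an outline (no error handling, no stream format, no render callback body) and is not taken from either sample project:

    import AudioToolbox

    // Outline: one RemoteIO unit, recording on bus 1 and playing on bus 0.
    func makeRemoteIOUnit() -> AudioUnit? {
        var desc = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_RemoteIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        guard let component = AudioComponentFindNext(nil, &desc) else { return nil }

        var unit: AudioUnit?
        guard AudioComponentInstanceNew(component, &unit) == noErr,
              let remoteIO = unit else { return nil }

        var enable: UInt32 = 1
        let size = UInt32(MemoryLayout<UInt32>.size)
        // Turn on recording: input scope of the input element (bus 1).
        AudioUnitSetProperty(remoteIO, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &enable, size)
        // Output on bus 0 is enabled by default; being explicit keeps the intent clear.
        AudioUnitSetProperty(remoteIO, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Output, 0, &enable, size)
        return remoteIO
    }

Inside the output render callback, calling AudioUnitRender on bus 1 pulls the freshly captured microphone frames so they can be processed and copied into the output buffers, which is essentially how the aurioTouch sample keeps playback and recording on the same low-latency path.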

iOS: Audio Units vs OpenAL vs Core Audio

三世轮回 submitted on 2019-12-04 07:45:55
Question: Could someone explain to me how OpenAL fits in with the schema of sound on the iPhone? There seem to be APIs at different levels for handling sound. The higher-level ones are easy enough to understand, but my understanding gets murky towards the bottom. There are Core Audio, Audio Units, and OpenAL. What is the connection between these? Is OpenAL the substratum upon which Core Audio rests (which contains as one of its lower-level objects Audio Units)? OpenAL doesn't seem to be documented by

iOS Audio Unit - Creating Stereo Sine Waves

此生再无相见时 submitted on 2019-12-04 07:01:09
Over the weekend I hit a stumbling block learning how to program audio synthesis on iOS. I have been developing on iOS for several years, but I am just getting into the audio synthesis aspect. Right now I am just programming demo apps to help me learn the concepts. I have been able to build and stack sine waves in a playback renderer for Audio Units without a problem, but I want to understand what is going on in the renderer so I can render two separate sine waves, one in each of the left and right channels. Currently, I assume that in my init audio section I would need to make the following
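
A hedged sketch of the renderer side, assuming the stream format is set up as non-interleaved Float32 stereo so that ioData carries one AudioBuffer per channel; the frequencies and the phase bookkeeping are illustrative:

    import AudioToolbox
    import Foundation

    // Phase is kept outside the function so it carries across render calls.
    var leftPhase = 0.0
    var rightPhase = 0.0

    func renderStereoSines(into ioData: UnsafeMutablePointer<AudioBufferList>,
                           frameCount: UInt32,
                           sampleRate: Double,
                           leftHz: Double = 440,
                           rightHz: Double = 660) {
        let buffers = UnsafeMutableAudioBufferListPointer(ioData)
        guard buffers.count >= 2,
              let left = buffers[0].mData?.assumingMemoryBound(to: Float32.self),
              let right = buffers[1].mData?.assumingMemoryBound(to: Float32.self) else { return }

        let leftStep = 2 * Double.pi * leftHz / sampleRate
        let rightStep = 2 * Double.pi * rightHz / sampleRate
        for frame in 0..<Int(frameCount) {
            // Each channel gets its own oscillator, so each ear hears a different tone.
            left[frame] = Float32(sin(leftPhase))
            right[frame] = Float32(sin(rightPhase))
            leftPhase += leftStep
            rightPhase += rightStep
        }
    }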

How to record and play audio simultaneously in iOS using Swift?

十年热恋 submitted on 2019-12-03 11:58:24
In Objective-C, recording and playing audio simultaneously is fairly simple, and there is plenty of sample code on the internet. But I want to record and play audio simultaneously using Audio Unit / Core Audio in Swift. There is very little help or sample code on this in Swift, and I couldn't find anything that shows how to achieve it. I am struggling with the code below. let preferredIOBufferDuration = 0.005 let kInputBus = AudioUnitElement(1) let kOutputBus = AudioUnitElement(0) init() { // This is my Audio Unit settings code. var status: OSStatus do { try
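
As a starting point, the AVAudioSession half of that setup can be sketched like this; the 0.005 s buffer duration and the bus constants come from the question, while the category, options, and function name are assumptions:

    import AVFoundation
    import AudioToolbox

    let preferredIOBufferDuration = 0.005
    let kInputBus = AudioUnitElement(1)    // RemoteIO input element (microphone)
    let kOutputBus = AudioUnitElement(0)   // RemoteIO output element (speaker)

    func configureSession() throws {
        let session = AVAudioSession.sharedInstance()
        // playAndRecord is what lets a single RemoteIO unit capture and render at once.
        try session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
        try session.setPreferredIOBufferDuration(preferredIOBufferDuration)
        try session.setActive(true)
    }

Once the session is active, kAudioOutputUnitProperty_EnableIO would be set on kInputBus and kOutputBus of the RemoteIO unit, along the same lines as the Core Audio question above.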
