core-audio

implicit conversion of an Objective-C pointer to 'void *' is disallowed with ARC

Submitted by 吃可爱长大的小学妹 on 2019-12-03 02:18:14
Question: What does this mean and what alternative do I have? "implicit conversion of an Objective-C pointer to 'void *' is disallowed with ARC". I am porting an Xcode 3 project to iOS 5 which uses AudioSessionInitialize like this: AudioSessionInitialize(NULL, NULL, NULL, self); where self here is a ViewController. Answer 1: You can't do implicit casts to void* anymore; AudioSessionInitialize(NULL, NULL, NULL, objc_unretainedPointer(self)); should do the trick. EDIT: Historical point, the answer above was from
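On current toolchains the same fix is usually written as a `__bridge` cast rather than objc_unretainedPointer. A minimal sketch, assuming self is the view controller being passed as the client-data pointer:

```objc
#import <AudioToolbox/AudioToolbox.h>

// ARC will not implicitly convert an Objective-C pointer to void *;
// bridge it explicitly. No ownership is transferred by __bridge.
AudioSessionInitialize(NULL, NULL, NULL, (__bridge void *)self);
```

Whatever is passed this way must be kept alive elsewhere (here the view controller is), since the bridged pointer does not retain it.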

iOS AudioUnits pass through

Submitted by ≯℡__Kan透↙ on 2019-12-03 01:38:08
I am trying to write an iOS application that passes the sound received from the microphone to the speaker without any changes. I've read the Apple docs and guides and chose the first pattern from this guide. But nothing happens, just silence. As you can see, I've also tried to use an AUGraph (commented out), with the same result (do I need one in this simple example at all?). I have seen a few examples on the internet where callbacks are used, but I do not want to use any. Is that possible? Any suggestions? Thanks for your attention. The actual code: #import "AudioController.h" #import <AudioToolbox/AudioToolbox.h> #import
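One commonly suggested way to get a callback-free pass-through is to let the RemoteIO unit feed itself: enable input on bus 1 and connect that bus to output bus 0 with kAudioUnitProperty_MakeConnection. A rough, untested sketch (error checking and audio-session setup omitted):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Minimal mic-to-speaker pass-through sketch.
static AudioUnit StartPassThrough(void)
{
    AudioComponentDescription desc = {0};
    desc.componentType         = kAudioUnitType_Output;
    desc.componentSubType      = kAudioUnitSubType_RemoteIO;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    AudioComponent comp = AudioComponentFindNext(NULL, &desc);
    AudioUnit rioUnit;
    AudioComponentInstanceNew(comp, &rioUnit);

    // Bus 1 = microphone input, bus 0 = speaker output.
    UInt32 enable = 1;
    AudioUnitSetProperty(rioUnit, kAudioOutputUnitProperty_EnableIO,
                         kAudioUnitScope_Input, 1, &enable, sizeof(enable));

    // Route the input element straight into the output element.
    AudioUnitConnection conn = { rioUnit, 1, 0 };
    AudioUnitSetProperty(rioUnit, kAudioUnitProperty_MakeConnection,
                         kAudioUnitScope_Input, 0, &conn, sizeof(conn));

    AudioUnitInitialize(rioUnit);
    AudioOutputUnitStart(rioUnit);
    return rioUnit;
}
```

The alternative, which most examples use, is a render callback on bus 0 that calls AudioUnitRender on bus 1; the connection trick above simply hides that plumbing inside the unit.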

How to use afconvert to convert from .caf to .mp3 format? [closed]

Submitted by 只谈情不闲聊 on 2019-12-03 01:23:07
Closed. This question is off-topic and is not currently accepting answers. I am using the afconvert command-line utility to convert an audio file from .caf to .mp3 format. I have used afconvert: afconvert -f 'MPG3 ' -d '.mp3' -v input.caf output.mp3 But this gives me the following error: Input file: input.caf, 19008 frames Error: ExtAudioFileSetProperty ('cfmt') failed ('fmt?') I have also tried the following: afconvert -f 'MPG3 ' -d LEI32@44100 -v input.caf output.mp3 This also gives me the
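As far as I know, the 'fmt?' error simply reflects the fact that Core Audio ships no MP3 encoder, so afconvert cannot produce MP3 output no matter which flags are used. Converting to AAC instead generally works, for example afconvert -f m4af -d aac input.caf output.m4a (file names as in the question). If the result really must be MP3, an external encoder such as LAME is the usual route.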

How do I synthesize sounds with CoreAudio on iPhone/Mac

Submitted by 你离开我真会死。 on 2019-12-03 01:13:46
Question: I'd like to play a synthesised sound on an iPhone. Instead of using a pre-recorded sound and playing an existing binary with SystemSoundID, I'd like to synthesise it. Partially, that's because I want to be able to play the sound continuously (e.g. while the user's finger is on the screen) instead of as a one-off sound sample. If I wanted to synthesise middle A+1 (A4, 440 Hz), I can calculate a sine wave using sin(); what I don't know is how to arrange those bits into a packet which CoreAudio
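The usual answer is a render callback on an output audio unit: Core Audio asks for a buffer of samples, and the callback fills it with sin() values while carrying the phase over between calls. A hedged sketch (assuming the unit's stream format has been set to mono 32-bit float PCM; names like theta and RenderTone are illustrative):

```objc
#import <AudioToolbox/AudioToolbox.h>
#import <math.h>

static double theta = 0;   // running phase, kept between callbacks

static OSStatus RenderTone(void *inRefCon,
                           AudioUnitRenderActionFlags *ioActionFlags,
                           const AudioTimeStamp *inTimeStamp,
                           UInt32 inBusNumber,
                           UInt32 inNumberFrames,
                           AudioBufferList *ioData)
{
    const double frequency  = 440.0;      // A4
    const double sampleRate = 44100.0;
    const double increment  = 2.0 * M_PI * frequency / sampleRate;

    Float32 *out = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        out[frame] = (Float32)sin(theta);
        theta += increment;
        if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
    }
    return noErr;
}
```

The callback would be attached to a RemoteIO unit with kAudioUnitProperty_SetRenderCallback, with a matching mono float stream format set on the unit's input scope; the tone then keeps sounding for as long as the unit is started.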

What's the difference between all these audio frameworks?

Submitted by 心已入冬 on 2019-12-03 01:02:47
Question: In the documentation I see several frameworks for audio. All of them seem to be targeted at playing and recording audio, so I wonder what the big differences are between them: Audio Toolbox, Audio Unit, AV Foundation, and Core Audio. Or did I miss a guide that gives a good overview of them all? Answer 1: Core Audio is the lowest level of all the frameworks and also the oldest. Audio Toolbox sits just above Core Audio and provides many different APIs that make it easier to deal with sound but still
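To give a sense of the layering, the highest-level option (AV Foundation's AVAudioPlayer) plays a file in a handful of lines, whereas the same task written against Audio Queue or Audio Unit APIs runs to dozens. A minimal sketch (the file name is a placeholder):

```objc
#import <AVFoundation/AVFoundation.h>

// High-level playback: AV Foundation wraps the Core Audio plumbing.
NSURL *url = [[NSBundle mainBundle] URLForResource:@"sound" withExtension:@"caf"];
NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
[player prepareToPlay];
[player play];
```

Dropping down to Audio Toolbox or Audio Units buys finer control (streaming, effects, low latency) at the cost of writing that plumbing yourself.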

How to use an Audio Unit on the iPhone

Submitted by 泪湿孤枕 on 2019-12-03 00:41:00
I'm looking for a way to change the pitch of recorded audio as it is saved to disk, or played back (in real time). I understand Audio Units can be used for this. The iPhone offers limited support for Audio Units (for example, it's not possible to create or use custom audio units, as far as I can tell), but several out-of-the-box audio units are available, one of which is AUPitch. How exactly would I use an audio unit (specifically AUPitch)? Do you hook it into an audio queue somehow? Is it possible to chain audio units together (for example, to simultaneously add an echo effect and a change in
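AUPitch itself has historically been a Mac-only unit; on iOS the comparable built-in option, as far as I know, is the NewTimePitch format converter, and units are chained with an AUGraph rather than hooked into an audio queue. A hedged sketch of building such a chain (error checking omitted):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Sketch: chain a pitch-shifting unit into RemoteIO with an AUGraph.
static AUGraph MakePitchGraph(void)
{
    AUGraph graph;
    NewAUGraph(&graph);

    AudioComponentDescription pitchDesc = {
        kAudioUnitType_FormatConverter, kAudioUnitSubType_NewTimePitch,
        kAudioUnitManufacturer_Apple, 0, 0 };
    AudioComponentDescription ioDesc = {
        kAudioUnitType_Output, kAudioUnitSubType_RemoteIO,
        kAudioUnitManufacturer_Apple, 0, 0 };

    AUNode pitchNode, ioNode;
    AUGraphAddNode(graph, &pitchDesc, &pitchNode);
    AUGraphAddNode(graph, &ioDesc, &ioNode);

    AUGraphOpen(graph);
    AUGraphConnectNodeInput(graph, pitchNode, 0, ioNode, 0);

    // Shift pitch by +700 cents via the unit's parameter.
    AudioUnit pitchUnit;
    AUGraphNodeInfo(graph, pitchNode, NULL, &pitchUnit);
    AudioUnitSetParameter(pitchUnit, kNewTimePitchParam_Pitch,
                          kAudioUnitScope_Global, 0, 700, 0);

    AUGraphInitialize(graph);
    AUGraphStart(graph);
    return graph;
}
```

Something still has to feed the pitch node's input (a render callback or a file-player node), and chaining further effects is just additional nodes wired up with more AUGraphConnectNodeInput calls.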

Playing audio with controls in iOS

Submitted by 梦想与她 on 2019-12-03 00:25:29
I've made an app with a tab bar, a nav bar and a table view. In the table view you can choose to listen to some audio. A new view opens, and there I have some controls: play, pause, a volume slider, a progress slider, and a label with the current time. It works, but not perfectly. I can play the audio, I can pause the audio, and I can also use the slider to skip forward or back. But now: when I hit the Back button on the nav bar, the song keeps playing. That's OK, but when I go back to the view again, the timer and the slider reset themselves. I can't pause the song; I just need to wait until it stops playing. Also,
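The usual cause of this symptom is that the player is owned by the detail view controller: pushing the screen again creates a fresh controller with fresh controls, while the old AVAudioPlayer keeps playing unreachably. A common fix is to keep one shared player outside any view controller; a minimal sketch (the AudioManager class name is illustrative):

```objc
#import <AVFoundation/AVFoundation.h>

// A single shared player object that outlives any one view controller.
@interface AudioManager : NSObject
@property (nonatomic, strong) AVAudioPlayer *player;
+ (instancetype)shared;
@end

@implementation AudioManager
+ (instancetype)shared {
    static AudioManager *shared;
    static dispatch_once_t once;
    dispatch_once(&once, ^{ shared = [[AudioManager alloc] init]; });
    return shared;
}
@end
```

The detail view controller can then check [AudioManager shared].player in viewWillAppear and restore its slider and time label from currentTime instead of creating a new player each time.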

Playing audio from a continuous stream of data (iOS)

Submitted by 。_饼干妹妹 on 2019-12-02 21:10:33
I've been banging my head against this problem all morning. I have set up a connection to a data source which returns audio data (it is a recording device, so there is no set length on the data; the data just streams in, as if you had opened a stream to a radio station), and I have managed to receive all the packets of data in my code. Now I just need to play it. I want to play the data as it comes in, so I do not want to queue a few minutes or anything; I want to use the data I am receiving at that exact moment and play it. I have been searching all morning and found different examples, but none were
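The standard tool for this is Audio Queue Services: allocate a few output buffers, and in the queue's callback copy whatever bytes have arrived so far into the buffer and re-enqueue it. A hedged sketch assuming the incoming data is already linear PCM (FillWithReceivedBytes is a hypothetical stand-in for draining the app's own network buffer):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical helper: copies whatever bytes have arrived from the network
// into dest and returns how many were written. Not part of any framework.
extern UInt32 FillWithReceivedBytes(void *dest, UInt32 maxBytes);

// Called whenever the queue finishes with a buffer: refill and re-enqueue it.
static void OutputCallback(void *inUserData,
                           AudioQueueRef inQueue,
                           AudioQueueBufferRef inBuffer)
{
    inBuffer->mAudioDataByteSize =
        FillWithReceivedBytes(inBuffer->mAudioData, inBuffer->mAudioDataBytesCapacity);
    AudioQueueEnqueueBuffer(inQueue, inBuffer, 0, NULL);
}

static void StartPlayback(void)
{
    AudioStreamBasicDescription fmt = {0};   // assumed: 16-bit mono PCM at 44.1 kHz
    fmt.mSampleRate       = 44100;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    fmt.mBitsPerChannel   = 16;
    fmt.mChannelsPerFrame = 1;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue;
    AudioQueueNewOutput(&fmt, OutputCallback, NULL, NULL, NULL, 0, &queue);

    // Prime a few buffers so the queue always has data in flight.
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, 8192, &buffer);
        OutputCallback(NULL, queue, buffer);
    }
    AudioQueueStart(queue, NULL);
}
```

If the incoming stream is compressed (MP3 or AAC), the usual pattern is to push the raw bytes through AudioFileStream to get parsed packets and enqueue those instead of PCM.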

how to read VBR audio in novacaine (as opposed to PCM)

Submitted by 最后都变了- on 2019-12-02 20:52:36
Question: The creator of Novacaine offered example code where audio data is read from a file and fed to a ring buffer. When the file reader is created, though, the output is forced to be PCM: - (id)initWithAudioFileURL:(NSURL *)urlToAudioFile samplingRate:(float)thisSamplingRate numChannels:(UInt32)thisNumChannels { ... // We're going to impose a format upon the input file // Single-channel float does the trick. _outputFormat.mSampleRate = self.samplingRate; _outputFormat.mFormatID =
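If the reader is built on ExtAudioFile, as the imposed client format suggests, it will always hand back converted PCM; to get at the VBR packets themselves, one alternative (sketched here under that assumption, with urlToAudioFile taken from the init method above) is Audio File Services and AudioFileReadPacketData with packet descriptions:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Read compressed (VBR) packets directly, without converting to PCM.
static void ReadPackets(NSURL *urlToAudioFile)
{
    AudioFileID file;
    AudioFileOpenURL((__bridge CFURLRef)urlToAudioFile, kAudioFileReadPermission, 0, &file);

    UInt32 maxPacketSize = 0;
    UInt32 propSize = sizeof(maxPacketSize);
    AudioFileGetProperty(file, kAudioFilePropertyPacketSizeUpperBound,
                         &propSize, &maxPacketSize);

    const UInt32 packetsPerRead = 64;
    void *packetData = malloc(maxPacketSize * packetsPerRead);
    AudioStreamPacketDescription *descs =
        malloc(sizeof(AudioStreamPacketDescription) * packetsPerRead);

    UInt32 bytesRead   = maxPacketSize * packetsPerRead;
    UInt32 packetsRead = packetsPerRead;
    AudioFileReadPacketData(file, false, &bytesRead, descs,
                            0 /* start packet */, &packetsRead, packetData);
    // descs[i].mStartOffset / mDataByteSize now describe each VBR packet.

    free(descs);
    free(packetData);
    AudioFileClose(file);
}
```

Novacaine's ring buffer itself expects float PCM samples, so raw packets read this way would still need decoding before being fed to it.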

Extracting Amplitude Data from Linear PCM on the iPhone

Submitted by 北慕城南 on 2019-12-02 18:32:10
I'm having difficulty extracting amplitude data from linear PCM stored in an audio.caf file on the iPhone. My questions are: Linear PCM stores amplitude samples as 16-bit values. Is this correct? How is amplitude stored in packets returned by AudioFileReadPacketData()? When recording mono linear PCM, isn't each sample (in one frame, in one packet) just an array of SInt16? What is the byte order (big-endian vs. little-endian)? What does each step in linear PCM amplitude mean physically? When linear PCM is recorded on the iPhone, is the center point 0 (SInt16) or 32768 (UInt16)? What do the max min
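With the usual iPhone recording settings, linear PCM samples are signed 16-bit integers in native (little-endian) byte order, centered at 0, so silence is 0 and the magnitude of each value is the amplitude. A minimal sketch of pulling a peak level out of a mono buffer (assuming the samples were read into memory with something like AudioFileReadPacketData):

```objc
#import <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Peak amplitude of a buffer of mono 16-bit linear PCM.
// Samples are signed and centered at 0; iOS records them in native (little-endian) order.
static float PeakAmplitude(const SInt16 *samples, UInt32 sampleCount)
{
    int peak = 0;
    for (UInt32 i = 0; i < sampleCount; i++) {
        int amplitude = abs(samples[i]);   // 0 is silence; magnitude is the amplitude
        if (amplitude > peak) peak = amplitude;
    }
    return peak / 32768.0f;                // normalized to the 0.0 to 1.0 range
}
```

Each step of the 16-bit value is one quantization level of the recorded voltage; converting a level to a relative decibel figure is then 20 * log10(normalizedPeak).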