core-audio

Audio Unit Graph pause and repeating

Submitted by 浪尽此生 on 2019-12-04 02:15:57
Question: I've been trying to implement an Audio Unit graph using Apple's sample code, iPhoneMixerEQGraphTest. So far I have encountered two problems: 1) I can't find any way to pause playback; 2) the provided example repeats endlessly. Is there a way I can disable the repeating or stop it, e.g. using a timer, or by intercepting some kind of buffer value when playback is almost at the end? Thank you. Answer 1: NO REPEATING FUNCTIONALITY. So far I have found a way to deal with stopping the audio, although it's not pretty: I load the same…
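Pausing, at least, can be handled with the AUGraph API itself. A minimal sketch, assuming `graph` is the already-initialized AUGraph from the sample (the helper name here is made up):

```swift
import AudioToolbox

// Sketch: pause/resume an AUGraph. AUGraphStop halts rendering but does not tear the graph
// down, so the render callback's read position is preserved and AUGraphStart resumes
// playback from where it left off.
func setGraphPaused(_ graph: AUGraph, paused: Bool) {
    var isRunning: DarwinBoolean = false
    AUGraphIsRunning(graph, &isRunning)

    if paused && isRunning.boolValue {
        AUGraphStop(graph)
    } else if !paused && !isRunning.boolValue {
        AUGraphStart(graph)
    }
}
```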

Reading audio buffer data with AudioQueue

Submitted by 荒凉一梦 on 2019-12-04 01:27:33
Question: I am attempting to read audio data via an AudioQueue. When I do so, I can verify that the bit depth of the file is 16-bit. But when I get the actual sample data, I'm only seeing values from -128 to 128. I'm also seeing suspicious-looking interleaved data, which makes me pretty sure that I'm just not reading the data correctly. To begin with, I can verify that the source file is a 44100 Hz, 16-bit, mono WAV file. My buffer is allocated like this: char *buffer = NULL; buffer = malloc(BUFFER_SIZE);
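Values clamped to roughly ±128 usually mean the bytes are being read one at a time (as signed chars) instead of two at a time. A sketch of reinterpreting an AudioQueue buffer as 16-bit samples (shown in Swift; the question's own code is C, but the idea is the same):

```swift
import AudioToolbox

// Sketch: inside an AudioQueue callback, treat mAudioData as Int16 samples rather than
// individual bytes; reading it byte-by-byte is what produces values in the -128...128 range.
func int16Samples(from buffer: AudioQueueBufferRef) -> [Int16] {
    let byteCount = Int(buffer.pointee.mAudioDataByteSize)
    let sampleCount = byteCount / MemoryLayout<Int16>.size
    let samples = buffer.pointee.mAudioData.bindMemory(to: Int16.self, capacity: sampleCount)
    return Array(UnsafeBufferPointer(start: samples, count: sampleCount))
}
```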

Split CMSampleBufferRef containing Audio

Submitted by 大兔子大兔子 on 2019-12-03 22:25:03
Question: I'm splitting the recording into different files while recording... The problem is that the video and audio sample buffers arriving in captureOutput don't correspond 1:1 (which is logical): - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection AUDIO START: 36796.833236847 | DURATION: 0.02321995464852608 | END: 36796.856456802 VIDEO START: 36796.842089239 | DURATION: nan | END: nan AUDIO START: 36796…
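One way to line the two streams up is to work out, from the timestamps, how many frames of an audio buffer fall before the cut point; those frames belong to the closing file and the remainder to the new one. A rough sketch, where `cutTime` is an assumed name for the presentation time at which the new file starts:

```swift
import CoreMedia

// Sketch: for an audio sample buffer that straddles the file boundary, compute how many of
// its frames play before the cut time.
func framesBeforeCut(in sampleBuffer: CMSampleBuffer, cutTime: CMTime) -> CMItemCount {
    let start = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    let duration = CMSampleBufferGetDuration(sampleBuffer)
    let totalFrames = CMSampleBufferGetNumSamples(sampleBuffer)

    let seconds = CMTimeGetSeconds(duration)
    guard duration.isNumeric, seconds > 0, cutTime > start else { return 0 }

    // Fraction of the buffer that plays before the cut point.
    let fraction = CMTimeGetSeconds(CMTimeSubtract(cutTime, start)) / seconds
    return min(totalFrames, CMItemCount(Double(totalFrames) * fraction))
}
```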

kAudioDevicePropertyBufferFrameSize replacement for iOS

Submitted by Deadly on 2019-12-03 22:22:34
I was trying to set up an audio unit to render the music (instead of an Audio Queue, which was too opaque for my purposes). iOS doesn't have the kAudioDevicePropertyBufferFrameSize property; any idea how I can derive this value to set up the buffer size of my I/O unit? I found this post interesting: it asks about the possibility of using a combination of the kAudioSessionProperty_CurrentHardwareIOBufferDuration and kAudioSessionProperty_CurrentHardwareOutputLatency audio session properties to determine that value, but there is no answer. Any ideas? You can use the kAudioSessionProperty…
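The same arithmetic can also be expressed with the modern AVAudioSession API (the kAudioSessionProperty_* getters are deprecated): frames per render callback is roughly the I/O buffer duration times the sample rate. A minimal sketch:

```swift
import AVFoundation

// Sketch: on iOS the render quantum can be derived from the audio session rather than
// from kAudioDevicePropertyBufferFrameSize.
func currentIOBufferFrameSize() -> Int {
    let session = AVAudioSession.sharedInstance()
    // frames per I/O cycle ≈ hardware buffer duration × hardware sample rate
    return Int((session.ioBufferDuration * session.sampleRate).rounded())
}
```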

iOS Swift read PCM Buffer

Submitted by 女生的网名这么多〃 on 2019-12-03 22:15:37
I have an Android project that reads a short[] array of PCM data from the microphone buffer for live analysis. I need to port this functionality to iOS in Swift. On Android it is very simple and looks like this: import android.media.AudioFormat; import android.media.AudioRecord; ... AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.DEFAULT, someSampleRate, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, AudioRecord.getMinBufferSize(...)); recorder.startRecording(); Later I read the buffer with recorder.read(data, offset, length); // data is short[] (That's what I'm…
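A rough equivalent on iOS is to tap AVAudioEngine's input node. This sketch converts the tap's Float32 samples to Int16 to mirror the Android short[] array; microphone permission and AVAudioSession configuration are assumed to be handled elsewhere:

```swift
import AVFoundation

// Sketch: the AVAudioEngine counterpart to AudioRecord — install a tap on the input node
// and convert the Float32 samples to Int16 for downstream analysis.
let engine = AVAudioEngine()

func startMicCapture(onSamples: @escaping ([Int16]) -> Void) throws {
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)   // hardware format, usually Float32

    input.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        guard let channel = buffer.floatChannelData?[0] else { return }
        let frames = Int(buffer.frameLength)
        // Map Float32 in [-1, 1] to the Int16 range, like Android's ENCODING_PCM_16BIT.
        let shorts = (0..<frames).map { Int16(max(-1.0, min(1.0, channel[$0])) * Float(Int16.max)) }
        onSamples(shorts)
    }
    try engine.start()
}
```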

Generating a tone in iOS with 16 bit PCM, AudioEngine.connect() throws AUSetFormat: error -10868

Submitted by 这一生的挚爱 on 2019-12-03 21:34:30
I have the following code for generating an audio tone of a given frequency and duration. It's loosely based on this answer for doing the same thing on Android (thanks, @Steve Pomeroy): https://stackoverflow.com/a/3731075/973364 import Foundation import CoreAudio import AVFoundation import Darwin class AudioUtil { class func play(frequency: Int, durationMs: Int) -> Void { let sampleRateHz: Double = 8000.0 let numberOfSamples = Int((Double(durationMs) / 1000 * sampleRateHz)) let factor: Double = 2 * M_PI / (sampleRateHz/Double(frequency)) // Generate an array of Doubles. var samples = [Double]…
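Error -10868 is kAudioUnitErr_FormatNotSupported, and asking engine.connect() for a 16-bit integer PCM format on the player-to-mixer connection is the usual trigger. A sketch that fills an AVAudioPCMBuffer in the engine's standard Float32 format instead (the playTone name and parameters are assumptions, not part of the question's code):

```swift
import AVFoundation

// Sketch: schedule a sine tone on an AVAudioPlayerNode using the engine's standard
// (Float32, non-interleaved) format, which the mixer connection accepts without -10868.
func playTone(frequency: Double, durationMs: Int, engine: AVAudioEngine, player: AVAudioPlayerNode) throws {
    let sampleRate = 44_100.0
    let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)

    let frameCount = AVAudioFrameCount(Double(durationMs) / 1000.0 * sampleRate)
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
    buffer.frameLength = frameCount
    for i in 0..<Int(frameCount) {
        buffer.floatChannelData![0][i] = Float(sin(2.0 * .pi * frequency * Double(i) / sampleRate))
    }

    try engine.start()
    player.scheduleBuffer(buffer, completionHandler: nil)
    player.play()
}
```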

How to convert AudioBufferList to CMSampleBuffer?

Submitted by 醉酒当歌 on 2019-12-03 18:05:25
Question: I have an AudioTapProcessor attached to an AVPlayerItem, which calls static void tap_ProcessCallback(MTAudioProcessingTapRef tap, CMItemCount numberFrames, MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut, CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) during processing. I need to convert the AudioBufferList to a CMSampleBuffer so that I can use AVAssetWriterAudioInput.appendSampleBuffer to write it into a movie file. So how do I convert an AudioBufferList to…
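A sketch of one way to do the wrapping with Core Media: create an audio format description from the tap's stream description, create an empty CMSampleBuffer with timing, then attach the AudioBufferList. The parameter names (`asbd`, `presentationTime`) are assumptions about what the surrounding tap callback has available:

```swift
import AVFoundation
import CoreMedia

// Sketch: wrap the tap's AudioBufferList in a CMSampleBuffer so it can be appended with
// AVAssetWriterAudioInput. Error handling is reduced to early returns.
func makeSampleBuffer(from bufferList: UnsafeMutablePointer<AudioBufferList>,
                      frameCount: CMItemCount,
                      asbd: inout AudioStreamBasicDescription,
                      presentationTime: CMTime) -> CMSampleBuffer? {
    // 1. Describe the audio format.
    var formatDescription: CMAudioFormatDescription?
    guard CMAudioFormatDescriptionCreate(allocator: kCFAllocatorDefault,
                                         asbd: &asbd,
                                         layoutSize: 0, layout: nil,
                                         magicCookieSize: 0, magicCookie: nil,
                                         extensions: nil,
                                         formatDescriptionOut: &formatDescription) == noErr,
          let format = formatDescription else { return nil }

    // 2. Create an empty sample buffer with per-frame timing.
    var timing = CMSampleTimingInfo(duration: CMTime(value: 1, timescale: CMTimeScale(asbd.mSampleRate)),
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var sampleBuffer: CMSampleBuffer?
    guard CMSampleBufferCreate(allocator: kCFAllocatorDefault,
                               dataBuffer: nil, dataReady: false,
                               makeDataReadyCallback: nil, refcon: nil,
                               formatDescription: format,
                               sampleCount: frameCount,
                               sampleTimingEntryCount: 1, sampleTimingArray: &timing,
                               sampleSizeEntryCount: 0, sampleSizeArray: nil,
                               sampleBufferOut: &sampleBuffer) == noErr,
          let buffer = sampleBuffer else { return nil }

    // 3. Attach the actual audio data from the AudioBufferList.
    let status = CMSampleBufferSetDataBufferFromAudioBufferList(buffer,
                                                                blockBufferAllocator: kCFAllocatorDefault,
                                                                blockBufferMemoryAllocator: kCFAllocatorDefault,
                                                                flags: 0,
                                                                bufferList: bufferList)
    return status == noErr ? buffer : nil
}
```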

What is a correct way to manage AudioKit's lifecycle?

Submitted by 只谈情不闲聊 on 2019-12-03 16:19:26
I'm building an app that has to track the input amplitude of the user's mic. AudioKit has a bunch of handy objects for my needs: AKAmplitudeTracker and so on. I haven't found any viable info on how AudioKit is supposed to be started, how tracking should begin, etc. For now, all the code related to AudioKit initialization is in the viewDidLoad method of the root VC of my audio recorder module. That isn't correct, because random errors occur and I can't track what's wrong. The code below shows how I use AudioKit now. var silence: AKBooster! var tracker: AKAmplitudeTracker! var mic: AKMicrophone! ... override func viewDidLoad() { super…
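One common pattern is to move the whole chain out of the view controller and into a single object that owns it for the app's lifetime. A sketch against the AudioKit 4.x API used in the question (the AmplitudeMonitor class and its method names are made up):

```swift
import AudioKit

// Sketch: a single owner for the mic → tracker → silent booster chain, so AudioKit is
// started and stopped in one place instead of inside viewDidLoad.
final class AmplitudeMonitor {
    private var mic: AKMicrophone!
    private var tracker: AKAmplitudeTracker!
    private var silence: AKBooster!

    func start() throws {
        // Build the chain once; the booster with gain 0 keeps it alive without audible output.
        mic = AKMicrophone()
        tracker = AKAmplitudeTracker(mic)
        silence = AKBooster(tracker, gain: 0)
        AudioKit.output = silence
        try AudioKit.start()
    }

    var amplitude: Double { return tracker.amplitude }

    func stop() throws {
        try AudioKit.stop()
    }
}
```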

How do I check an MPMediaItem for MPMediaType of just audio?

Submitted by 允我心安 on 2019-12-03 14:59:47
I expect I need to do a bitwise comparison, but I am unclear on how that is done in Objective-C syntax. The enum definition of MPMediaType is below. What I need to do is ensure the MPMediaItem is not video at all, because AVAssetReader is choking on video files despite my media query filtering to MPMediaTypeAnyAudio. How can I ensure the MPMediaItem contains only audio types? enum { // audio MPMediaTypeMusic = 1 << 0, MPMediaTypePodcast = 1 << 1, MPMediaTypeAudioBook = 1 << 2, MPMediaTypeAudioITunesU = 1 << 3, // available in iOS 5.0 MPMediaTypeAnyAudio = 0x00ff, // video (available in…
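In Objective-C the test is a bitwise AND against the video mask: `(item.mediaType & MPMediaTypeAnyVideo) == 0`. The same check written in Swift, for reference:

```swift
import MediaPlayer

// Sketch: an item is audio-only when none of the video bits (MPMediaTypeAnyVideo,
// i.e. MPMediaType.anyVideo) are set in its media type mask.
func isAudioOnly(_ item: MPMediaItem) -> Bool {
    return item.mediaType.rawValue & MPMediaType.anyVideo.rawValue == 0
}
```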

Decode MP3 File from NSData

Submitted by 爷，独闯天下 on 2019-12-03 14:45:53
For my application, I need to decode an MP3 file which is stored in an NSData object. For security reasons, it is undesirable to write the NSData object to disk and re-open it using a system URL reference, even if it's only stored locally for a few moments. I would like to take advantage of Extended Audio File Services (or Audio File Services) to do this, but I'm having trouble getting a representation of the NSData, which exists only in memory, that can be read by these Audio File Services. Edit: I want to decode the MP3 data so I can get access to linear PCM audio samples for manipulation.
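Audio File Services can read from memory if the "file" is opened with callbacks instead of a URL: AudioFileOpenWithCallbacks takes read and get-size procs that can be backed by the in-memory Data. A sketch (the InMemoryAudioFile name is made up; error handling and the subsequent ExtAudioFile/packet reading are omitted):

```swift
import AudioToolbox
import Foundation

// Sketch: expose an in-memory MP3 to Audio File Services via read/get-size callbacks.
// The caller must keep the InMemoryAudioFile instance alive while the AudioFileID is in use.
final class InMemoryAudioFile {
    fileprivate let data: Data
    private(set) var fileID: AudioFileID?

    init(data: Data) { self.data = data }

    func open() -> OSStatus {
        let clientData = Unmanaged.passUnretained(self).toOpaque()
        return AudioFileOpenWithCallbacks(clientData, readProc, nil, getSizeProc, nil,
                                          kAudioFileMP3Type, &fileID)
    }
}

// C-compatible read callback: copy `requestCount` bytes starting at `position` into `buffer`.
private let readProc: AudioFile_ReadProc = { clientData, position, requestCount, buffer, actualCount in
    let file = Unmanaged<InMemoryAudioFile>.fromOpaque(clientData).takeUnretainedValue()
    let available = max(Int64(0), Int64(file.data.count) - position)
    let count = min(Int(available), Int(requestCount))
    if count > 0 {
        file.data.withUnsafeBytes { raw in
            buffer.copyMemory(from: raw.baseAddress!.advanced(by: Int(position)), byteCount: count)
        }
    }
    actualCount.pointee = UInt32(count)
    return noErr
}

// C-compatible size callback: report the total length of the in-memory "file".
private let getSizeProc: AudioFile_GetSizeProc = { clientData in
    let file = Unmanaged<InMemoryAudioFile>.fromOpaque(clientData).takeUnretainedValue()
    return Int64(file.data.count)
}
```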