core-audio

iOS: Bug in the simulator using AudioUnitRender

ⅰ亾dé卋堺 submitted on 2019-12-11 08:06:37
Question: I have hit yet another iOS simulator bug. My question is: is there some workaround? The bug is this: load Apple's aurioTouch sample project and simply print out the number of frames received by the render callback (in aurioTouchAppDelegate.mm):

```c
static OSStatus PerformThru(void                        *inRefCon,
                            AudioUnitRenderActionFlags  *ioActionFlags,
                            const AudioTimeStamp        *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList             *ioData)
{
    printf("%u, ", (unsigned int)inNumberFrames);
```

I get
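One commonly suggested workaround (a sketch of my own, not from the post): the simulator does not honor a fixed IO buffer size, so inNumberFrames can vary from callback to callback. Code that assumes a fixed frame count can instead accumulate samples and process them in fixed-size chunks:

```swift
// Minimal sketch: buffer incoming samples and hand them on in fixed-size chunks,
// so a render callback with a varying frame count still feeds fixed-size processing.
struct FrameAccumulator {
    private var pending: [Float] = []
    let chunkSize: Int

    init(chunkSize: Int) { self.chunkSize = chunkSize }

    mutating func push(_ samples: [Float], process: ([Float]) -> Void) {
        pending.append(contentsOf: samples)
        while pending.count >= chunkSize {
            process(Array(pending.prefix(chunkSize)))
            pending.removeFirst(chunkSize)
        }
    }
}
```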

iPhone audio: Volume faint (but okay with headphones)

◇◆丶佛笑我妖孽 submitted on 2019-12-11 07:45:38
Question: I'm getting unexpected audio behaviour. The problem: iPhone device volume is very faint, but if I plug in headphones the volume is perfect.

1. Start the app on the iPhone with nothing plugged in.
2. Audio works, but it is very faint, really as if it were at minimum volume; yet it is at maximum volume.
3. Now I plug in headphones: full volume. Great!
4. Unplug the headphones. Go to (2).

It doesn't matter whether I start with the headphones plugged in or not. It seems to be an unrelated problem. EDIT: this
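A frequent cause of this symptom (my assumption; the post does not confirm it) is that the audio session is in a play-and-record category, which routes output to the receiver (the earpiece) instead of the loudspeaker. A minimal Swift sketch of the usual fix:

```swift
import AVFoundation

// Route play-and-record output to the loudspeaker instead of the receiver (earpiece).
func routeToSpeaker() throws {
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .default, options: [.defaultToSpeaker])
    try session.setActive(true)
}
```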

Accelerate framework vDSP, FFT framing

懵懂的女人 submitted on 2019-12-11 07:15:52
Question: I'm trying to implement an FFT calculation, using Apple's vDSP, on a recorded audio file (let's assume it's mono PCM). I've done some research here and found the following topics quite useful:

- Using the apple FFT and accelerate Framework
- Extracting precise frequencies from FFT Bins using phase change between frames
- Reading audio with Extended Audio File Services (ExtAudioFileRead)

For example, we configured the FFT with frame size N = 1024 samples, log2n = 10:

```c
m_setupReal = vDSP_create_fftsetup(LOG_2N,
```
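For reference, here is a minimal Swift sketch of the same setup plus a forward real FFT over one 1024-sample frame (my own illustration; the file reading via ExtAudioFileRead is omitted):

```swift
import Accelerate

let log2n = vDSP_Length(10)                     // N = 1024
let n = 1 << 10
let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!

var samples = [Float](repeating: 0, count: n)   // one frame of mono PCM
var real = [Float](repeating: 0, count: n / 2)
var imag = [Float](repeating: 0, count: n / 2)

real.withUnsafeMutableBufferPointer { rp in
    imag.withUnsafeMutableBufferPointer { ip in
        var split = DSPSplitComplex(realp: rp.baseAddress!, imagp: ip.baseAddress!)
        // Pack the real signal into even/odd split-complex form, then FFT in place.
        samples.withUnsafeBufferPointer { sp in
            sp.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
            }
        }
        vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
    }
}
vDSP_destroy_fftsetup(setup)
```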

Implementing Queue Services in AVAudioRecorder using Swift

蹲街弑〆低调 submitted on 2019-12-11 06:24:59
Question: Is it possible to create a buffer concept similar to Audio Queue Services in the AVAudioRecorder framework? In my application, I need to capture the audio buffer and send it over the Internet. The server connection part is done, but I wanted to know if there is a way to record the voice continuously in the foreground and pass the audio, buffer by buffer, to the server in the background using Swift. Comments are appreciated.

Answer 1: AVAudioRecorder records to a file, so you can't easily use it to stream
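A common alternative (a sketch of mine, not part of the original answer) is to capture buffers with an AVAudioEngine input tap; sendToServer below is a hypothetical stand-in for the poster's networking code:

```swift
import AVFoundation

// Hypothetical uploader; replace with real networking code.
func sendToServer(_ buffer: AVAudioPCMBuffer) { /* ... */ }

let engine = AVAudioEngine()
let input = engine.inputNode
let format = input.outputFormat(forBus: 0)

// Deliver microphone audio buffer by buffer as it is captured.
input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    sendToServer(buffer)
}
try engine.start()
```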

Stop AudioUnit speech

别等时光非礼了梦想. submitted on 2019-12-11 05:02:12
Question: I'm implementing a speech synthesizer using Audio Unit, based on the Core Audio examples. Everything works as expected, except that StopSpeech and StopSpeechAt appear to do nothing. Here are the speak and stop methods:

```objc
void Synthesizer::speak( const string &text )
{
    mIsSpeaking = true;
    mLastText = text;
    NSString *ns_text = [NSString stringWithCString:text.c_str()
                                           encoding:[NSString defaultCStringEncoding]];
    CFStringRef cf_text = (__bridge CFStringRef)ns_text;
    CheckError( SpeakCFString(
```
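For comparison only (my own sketch; a different API from the poster's Speech Synthesis Manager plus Audio Unit pipeline), AppKit's NSSpeechSynthesizer exposes an immediate stop:

```swift
import AppKit

// Sketch using NSSpeechSynthesizer rather than SpeakCFString/StopSpeech.
let synth = NSSpeechSynthesizer()
_ = synth.startSpeaking("Hello, world")
// ... later, to cut the speech off immediately:
synth.stopSpeaking()
```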

How to use AVAudioRecorder data and convert it to raw audio for C++?

独自空忆成欢 submitted on 2019-12-11 04:45:39
Question: I've been looking into how to get microphone input on the Mac, and Objective-C's AVAudioRecorder class turned up. I've managed to record audio into a file, but how can I use AVAudioRecorder and then convert its Core Audio Format data into raw audio to use in C++ code, for example with PocketSphinx? Thanks

Answer 1: The Audio Queue API and the RemoteIO Audio Unit will allow getting raw audio sample buffers from the microphone on iOS devices.

Source: https://stackoverflow.com/questions/13328357/how-to
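Another route (my sketch, not part of the answer): read the recorded file back with AVAudioFile, whose default processing format is deinterleaved Float32, and hand the samples to C++; the filename is assumed:

```swift
import AVFoundation

// Read a recorded file back as raw Float32 samples ("recording.caf" is assumed).
let url = URL(fileURLWithPath: "recording.caf")
let file = try AVAudioFile(forReading: url)            // processingFormat is Float32
let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                              frameCapacity: AVAudioFrameCount(file.length))!
try file.read(into: buffer)

// First channel as a plain array; pass its memory to C++ (PocketSphinx ultimately
// expects 16-bit samples, so a Float32 -> Int16 conversion is still needed).
let samples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0],
                                        count: Int(buffer.frameLength)))
```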

OSStatus error -50 (invalid parameters) AudioQueueNewInput recording audio on iOS

∥☆過路亽.° submitted on 2019-12-11 03:08:43
Question: I've been trawling the internet for ages trying to find the cause of this error, but I'm stuck. I've been following the Apple Developer documentation for using Audio Queue Services to record audio, and I keep getting this error whatever I do. I can record audio fine using AVAudioRecorder into any format, but my end goal is to obtain a normalised array of floats from the input data in order to apply an FFT to it (sorry for the noob phrasing, I'm very new to audio programming). Here's my code:

```objc
- (void
```
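Error -50 from AudioQueueNewInput very often means an internally inconsistent AudioStreamBasicDescription. For illustration (my own sketch in Swift, not the poster's Objective-C code), a self-consistent 16-bit mono LPCM description looks like this:

```swift
import AudioToolbox

// Each field must agree with the others, or AudioQueueNewInput returns -50.
var asbd = AudioStreamBasicDescription(
    mSampleRate: 44100,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked,
    mBytesPerPacket: 2,        // mBytesPerFrame * mFramesPerPacket
    mFramesPerPacket: 1,       // always 1 for uncompressed formats
    mBytesPerFrame: 2,         // mChannelsPerFrame * (mBitsPerChannel / 8)
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)
```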

Swift 3 LPCM Audio Recorder | Error: kAudioFileInvalidPacketOffsetError

房东的猫 submitted on 2019-12-11 02:18:30
Question: The recorder below works only the first time; if you try recording a second time, it gives the error kAudioFileInvalidPacketOffsetError when calling AudioFileWritePackets. Any idea why this is happening? Thank you in advance. Repository located here.

Recorder:

```swift
import UIKit
import CoreAudio
import AudioToolbox

class SpeechRecorder: NSObject {
    static let sharedInstance = SpeechRecorder()

    // MARK:- properties
    @objc enum Status: Int {
        case ready
        case busy
        case error
    }

    internal struct
```
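A hedged guess at the cause (mine, not confirmed by the post): the running packet offset passed to AudioFileWritePackets as inStartingPacket is never reset when a new file is opened, so the second recording tries to write past the start of the new file. A minimal sketch of the bookkeeping:

```swift
import AudioToolbox

// Sketch of the packet-offset bookkeeping around AudioFileWritePackets;
// `audioFile` and the buffers are placeholders for the recorder's real state.
final class PacketWriter {
    var currentPacket: Int64 = 0   // running offset into the output file

    // Forgetting this reset when starting a new recording is a classic
    // source of kAudioFileInvalidPacketOffsetError.
    func beginNewFile() { currentPacket = 0 }

    func write(_ audioFile: AudioFileID, _ data: UnsafeRawPointer,
               byteCount: UInt32, packetCount: UInt32,
               descs: UnsafePointer<AudioStreamPacketDescription>?) {
        var numPackets = packetCount
        let status = AudioFileWritePackets(audioFile, false, byteCount, descs,
                                           currentPacket, &numPackets, data)
        if status == noErr { currentPacket += Int64(numPackets) }
    }
}
```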

AudioOutputUnitStart very slow

早过忘川 submitted on 2019-12-11 01:39:48
Question: I have code that plays mono audio events (short beeps at various frequencies). I create an AudioOutputUnit and stop it; whenever I need to play audio, I start it, and when it has played for the required time, I stop it again. Sounds simple enough. However, AudioOutputUnitStart usually takes around 180 ms to return on my iPhone 4S (with iOS 5.1), which is way too much. Here is the creation/initialisation of the AudioOutputUnit:

```c
void createAOU()
{
    m_init = false;
    // find the default playback output unit
```
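A common workaround (my sketch, not from the post): pay the start-up cost once by leaving the unit running and rendering silence while idle, toggling a flag when a beep should sound:

```swift
import AudioToolbox
import Foundation

// Flag flipped by the app; the callback renders silence while it is false.
var isBeeping = false

let renderSilenceOrBeep: AURenderCallback = { _, _, _, _, _, ioData in
    guard let abl = ioData else { return noErr }
    for buffer in UnsafeMutableAudioBufferListPointer(abl) {
        memset(buffer.mData, 0, Int(buffer.mDataByteSize))   // silence
    }
    if isBeeping {
        // ... synthesize the beep into the buffers here instead ...
    }
    return noErr
}
```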

AKAudioPlayer: No sound in speakers, only with headphones

谁都会走 submitted on 2019-12-11 00:26:01
Question: Using AudioKit for sound management, I noticed an issue (bug?) with this very simple piece of code.

```swift
import AudioKit

class MainViewController: UIViewController {
    var audioFile: AKAudioFile?
    var audioPlayer: AKAudioPlayer?

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func onPlayButtonClick(_ sender: Any) {
        do {
            audioFile = try AKAudioFile(forReading: Bundle.main.url(forResource: "3e",
                                                                    withExtension: "mp3")!)
            audioPlayer = try AKAudioPlayer(file: audioFile!)
            AudioKit.output =
```
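Like the faint-volume question above, this symptom usually means output is being routed to the receiver (earpiece) rather than the loudspeaker. Assuming AudioKit 4.x, the usual remedy is to request the speaker before starting the engine:

```swift
import AudioKit

// Assumed AudioKit 4.x API: prefer the loudspeaker over the receiver.
AKSettings.defaultToSpeaker = true
// ... assign AudioKit.output as in the question, then:
try AudioKit.start()   // start() is non-throwing in older AudioKit versions
```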