accelerate-framework

Drawing histogram of CGImage in Swift 3

Submitted by 血红的双手 on 2019-12-22 08:12:36
Question: I have a problem with the vImageHistogramCalculation_ARGB8888 function while converting a library from Swift 2 to Swift 3. The function accepts its "histogram" argument only as UnsafeMutablePointer<UnsafeMutablePointer<T>?>, but the Swift 3 construction let histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>>(mutating: rgba) returns a non-optional pointer type, so I can't cast it to the required type. The compiler error is: Cannot invoke initializer for type
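One pattern that satisfies the UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>?> parameter in Swift 3 is to allocate the four per-channel bins manually and hand vImage an array of optional pointers. The sketch below is illustrative only, not the asker's code: the helper name histogram(of:) and the premultipliedLast bitmap format are assumptions.

    import Accelerate
    import CoreGraphics

    // Minimal Swift 3 sketch: build four 256-bin histograms for an ARGB8888 CGImage.
    func histogram(of image: CGImage) -> [[vImagePixelCount]]? {
        // Describe the pixel format so vImage can wrap the CGImage in a buffer.
        var format = vImage_CGImageFormat(
            bitsPerComponent: 8,
            bitsPerPixel: 32,
            colorSpace: nil,
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue),
            version: 0,
            decode: nil,
            renderingIntent: .defaultIntent)

        var buffer = vImage_Buffer()
        guard vImageBuffer_InitWithCGImage(&buffer, &format, nil, image,
                                           vImage_Flags(kvImageNoFlags)) == kvImageNoError
            else { return nil }
        defer { free(buffer.data) }

        // One zeroed 256-bin histogram per channel, kept alive for the whole call.
        let bins = (0..<4).map { _ -> UnsafeMutablePointer<vImagePixelCount> in
            let p = UnsafeMutablePointer<vImagePixelCount>.allocate(capacity: 256)
            p.initialize(to: 0, count: 256)
            return p
        }
        defer { bins.forEach { $0.deallocate(capacity: 256) } }  // Swift 3 spelling

        // vImage wants UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>?>,
        // i.e. a C array of four optional per-channel pointers.
        var channels: [UnsafeMutablePointer<vImagePixelCount>?] = bins.map { $0 }
        let error = channels.withUnsafeMutableBufferPointer {
            vImageHistogramCalculation_ARGB8888(&buffer, $0.baseAddress!, vImage_Flags(kvImageNoFlags))
        }
        guard error == kvImageNoError else { return nil }

        return bins.map { Array(UnsafeBufferPointer(start: $0, count: 256)) }
    }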

DFT result in Swift is different than that of MATLAB

Submitted by 巧了我就是萌 on 2019-12-20 02:34:40
Question:

    import Cocoa
    import Accelerate

    let filePath = Bundle.main.path(forResource: "sinusoid", ofType: "txt")
    let contentData = FileManager.default.contents(atPath: filePath!)
    var content = NSString(data: contentData!, encoding: String.Encoding.utf8.rawValue) as? String
    var idx = content?.characters.index(of: "\n")
    idx = content?.index(after: idx!)
    repeat {
        //let fromIndex = index(from: )
        content = content?.substring(from: idx!)
        idx = content?.characters.index(of: "\n")
        idx = content?.index(after:
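A common source of mismatches with MATLAB is vDSP's packed real FFT (vDSP_fft_zrip), whose output is scaled by a factor of 2 and packs DC and Nyquist together. A complex DFT via vDSP_DFT_zop, with the imaginary input set to zero, should match MATLAB's fft() up to floating-point error. The sketch below is a minimal illustration under that assumption; the helper name dft(of:) is hypothetical.

    import Accelerate

    // Assumed sketch (not from the question): forward complex DFT of a real
    // signal, comparable bin-for-bin with MATLAB's fft().
    func dft(of signal: [Float]) -> (real: [Float], imag: [Float])? {
        // vDSP_DFT supports lengths of the form f * 2^n with f in {1, 3, 5, 15}.
        guard let setup = vDSP_DFT_zop_CreateSetup(nil, vDSP_Length(signal.count), .FORWARD)
            else { return nil }
        defer { vDSP_DFT_DestroySetup(setup) }

        let inputImag = [Float](repeating: 0, count: signal.count)   // purely real input
        var outputReal = [Float](repeating: 0, count: signal.count)
        var outputImag = [Float](repeating: 0, count: signal.count)

        vDSP_DFT_Execute(setup, signal, inputImag, &outputReal, &outputImag)
        return (outputReal, outputImag)
    }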

Using Apple's Accelerate framework, FFT, Hann windowing and Overlapping

Submitted by 巧了我就是萌 on 2019-12-20 02:29:05
Question: I'm trying to set up an FFT for a project and don't yet have a clear picture of things. Basically, I am using Audio Units to get data from the device's microphone, and I then want to run an FFT on that data. This is what I understand so far: I need to set up a circular buffer for my data. On each filled buffer, I apply a Hann window and then do an FFT. However, I still need some help with overlapping. To get more precise results, I understand I need to use overlapping, especially since I am using windowing.
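A sketch of the windowing-plus-overlap bookkeeping, under the common assumption of a 50% hop (hop = fftSize / 2); the function name processOverlappingFrames and the analyze callback are made up for illustration. vDSP_hann_window generates the window once, and vDSP_vmul applies it to each frame before the FFT stage.

    import Accelerate

    // Illustrative sketch: slice a stream of samples into Hann-windowed,
    // 50%-overlapping frames and hand each frame to an analysis closure.
    func processOverlappingFrames(samples: [Float],
                                  fftSize: Int,
                                  analyze: ([Float]) -> Void) {
        let hop = fftSize / 2                       // 50% overlap
        var window = [Float](repeating: 0, count: fftSize)
        vDSP_hann_window(&window, vDSP_Length(fftSize), Int32(vDSP_HANN_NORM))

        var frame = [Float](repeating: 0, count: fftSize)
        var start = 0
        while start + fftSize <= samples.count {
            // Multiply the current slice by the Hann window.
            samples.withUnsafeBufferPointer { buf in
                vDSP_vmul(buf.baseAddress! + start, 1,
                          window, 1,
                          &frame, 1,
                          vDSP_Length(fftSize))
            }
            analyze(frame)                          // e.g. feed the FFT here
            start += hop
        }
    }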

Spectrogram from AVAudioPCMBuffer using Accelerate framework in Swift

Submitted by 霸气de小男生 on 2019-12-18 11:33:00
Question: I'm trying to generate a spectrogram from an AVAudioPCMBuffer in Swift. I install a tap on an AVAudioMixerNode and receive a callback with the audio buffer. I'd like to convert the signal in the buffer to a [Float: Float] dictionary, where the key represents the frequency and the value represents the magnitude of the audio at that frequency. I tried using Apple's Accelerate framework, but the results I get seem dubious. I'm sure it's just in the way I'm converting the signal. I
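One piece that often goes wrong is the mapping from FFT bins to frequencies: bin i of an fftSize-point transform corresponds to i * sampleRate / fftSize Hz. The sketch below assumes a magnitude array of length fftSize/2 has already been computed (for example with vDSP_zvmags after a forward FFT) and also shows pulling raw samples out of an AVAudioPCMBuffer; the helper names are illustrative, not from the question.

    import AVFoundation
    import Accelerate

    // Illustrative only: copy the first channel of an AVAudioPCMBuffer into a
    // Swift array, ready for windowing and FFT.
    func samples(from buffer: AVAudioPCMBuffer) -> [Float] {
        guard let channelData = buffer.floatChannelData else { return [] }
        return Array(UnsafeBufferPointer(start: channelData[0],
                                         count: Int(buffer.frameLength)))
    }

    // Map per-bin magnitudes (fftSize/2 of them) to a frequency -> magnitude
    // dictionary. Bin i corresponds to i * sampleRate / fftSize Hz.
    func spectrum(magnitudes: [Float], sampleRate: Float, fftSize: Int) -> [Float: Float] {
        var result = [Float: Float](minimumCapacity: magnitudes.count)
        for (bin, magnitude) in magnitudes.enumerated() {
            let frequency = Float(bin) * sampleRate / Float(fftSize)
            result[frequency] = magnitude
        }
        return result
    }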

performance of NumPy with different BLAS implementations

Submitted by 落花浮王杯 on 2019-12-17 19:38:09
Question: I'm running an algorithm that is implemented in Python and uses NumPy. The most computationally expensive part of the algorithm involves solving a set of linear systems (i.e. a call to numpy.linalg.solve()). I came up with this small benchmark:

    import numpy as np
    import time

    # Create two large random matrices
    a = np.random.randn(5000, 5000)
    b = np.random.randn(5000, 5000)

    t1 = time.time()
    # That's the expensive call:
    np.linalg.solve(a, b)
    print time.time() - t1

I've been running this on: My

Using the Apple FFT and Accelerate Framework

Submitted by 社会主义新天地 on 2019-12-16 20:14:30
Question: Has anybody used the Apple FFT for an iPhone app yet, or does anyone know where I might find a sample application showing how to use it? I know that Apple has some sample code posted, but I'm not really sure how to implement it in an actual project.
Answer 1: I just got the FFT code working in an iPhone project:
- create a new project
- delete all the files except for main.m and xxx_info.plist
- go to project settings, search for "pch", and stop it from trying to load a .pch (seeing as we have just deleted it)
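For reference, the core of Apple's sample usually reduces to the sequence sketched below: create an FFT setup once, pack the real input into split-complex form with vDSP_ctoz, run vDSP_fft_zrip, and read back the packed (2x-scaled) result. This is a hedged sketch, not the sample project's code; the function name forwardFFT(_:) is invented.

    import Foundation
    import Accelerate

    // Minimal sketch of the classic vDSP real FFT sequence (not Apple's sample verbatim).
    // Returns squared magnitudes for bins 0..<n/2. Input count must be a power of two.
    func forwardFFT(_ input: [Float]) -> [Float] {
        let n = input.count
        let log2n = vDSP_Length(log2(Float(n)))
        guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return [] }
        defer { vDSP_destroy_fftsetup(setup) }

        var realp = [Float](repeating: 0, count: n / 2)
        var imagp = [Float](repeating: 0, count: n / 2)
        var magnitudes = [Float](repeating: 0, count: n / 2)

        realp.withUnsafeMutableBufferPointer { realBuf in
            imagp.withUnsafeMutableBufferPointer { imagBuf in
                var split = DSPSplitComplex(realp: realBuf.baseAddress!, imagp: imagBuf.baseAddress!)

                // View the real signal as interleaved complex pairs, then split it.
                input.withUnsafeBufferPointer { inBuf in
                    inBuf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                    }
                }

                // In-place forward FFT; output stays in packed split-complex form,
                // scaled by 2 relative to the mathematical DFT.
                vDSP_fft_zrip(setup, &split, 1, log2n, FFTDirection(FFT_FORWARD))

                // Squared magnitude per bin (DC and Nyquist share bin 0 in packed form).
                vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
            }
        }
        return magnitudes
    }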

how to check if vDSP function runs scalar or SIMD on neon

Submitted by 夙愿已清 on 2019-12-14 03:55:53
Question: I'm currently using some functions from vDSP, especially vDSP_conv, and I'm wondering if there is any way to check whether the function falls back to scalar mode or is processed SIMD on the NEON unit. The documentation of the function mentions some criteria for the PowerPC architecture which have to be fulfilled, or scalar mode is invoked. I neither know whether these criteria apply to the iPhone as well, nor how to check whether my function invokes scalar mode or runs properly on NEON. Is
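One pragmatic check, in the absence of a documented API for querying the code path, is to time vDSP_conv against a plain scalar loop over the same data; a vectorized path usually shows a clear speedup. The sketch below is only a rough benchmarking idea, not an authoritative way to prove which path vDSP takes internally; the helper name is made up.

    import Accelerate
    import QuartzCore   // CACurrentMediaTime()

    // Rough comparison: correlate `signal` with `filter` using vDSP_conv and
    // with a naive scalar loop, then print both wall-clock times.
    func compareConvolutionTimings(signal: [Float], filter: [Float]) {
        let p = filter.count
        let n = signal.count - p + 1          // vDSP_conv needs n + p - 1 input samples
        guard n > 0 else { return }

        var vdspResult = [Float](repeating: 0, count: n)
        let t0 = CACurrentMediaTime()
        vDSP_conv(signal, 1, filter, 1, &vdspResult, 1, vDSP_Length(n), vDSP_Length(p))
        let t1 = CACurrentMediaTime()

        var scalarResult = [Float](repeating: 0, count: n)
        for i in 0..<n {
            var sum: Float = 0
            for j in 0..<p { sum += signal[i + j] * filter[j] }   // correlation, like vDSP_conv with IF > 0
            scalarResult[i] = sum
        }
        let t2 = CACurrentMediaTime()

        print("vDSP_conv: \(t1 - t0) s, scalar loop: \(t2 - t1) s")
    }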

Accelerate Framework FFT vDSP_ztoc split real form to split real vector

Submitted by 别来无恙 on 2019-12-12 18:29:36
Question: I am implementing an accelerometer-based FFT in iOS using the Accelerate framework, but the one thing that I'm still a bit confused about is this part:

    /* The output signal is now in a split real form. Use the function
     * vDSP_ztoc to get a split real vector. */
    vDSP_ztoc(&A, 1, (COMPLEX *) obtainedReal, 2, nOver2);

What does the final array look like? I'm confused about the distinction between "split real form" and "split real vector". I might have some understanding of what it means, but I
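What vDSP_ztoc does, in short, is convert the split-complex layout (separate realp/imagp arrays inside a DSPSplitComplex) into an interleaved array of (real, imaginary) pairs. A small Swift sketch of the same conversion, using illustrative data rather than the asker's FFT output:

    import Accelerate

    // Split-complex form: real and imaginary parts live in two separate arrays.
    var realParts: [Float] = [1, 3, 5, 7]
    var imagParts: [Float] = [2, 4, 6, 8]

    // Interleaved destination: n complex values stored as 2 * n consecutive Floats.
    let n = realParts.count
    var interleaved = [Float](repeating: 0, count: n * 2)

    realParts.withUnsafeMutableBufferPointer { realBuf in
        imagParts.withUnsafeMutableBufferPointer { imagBuf in
            var split = DSPSplitComplex(realp: realBuf.baseAddress!, imagp: imagBuf.baseAddress!)
            interleaved.withUnsafeMutableBufferPointer { outBuf in
                outBuf.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n) { outComplex in
                    // A destination stride of 2 addresses consecutive DSPComplex values,
                    // matching the C call vDSP_ztoc(&A, 1, (COMPLEX *)obtainedReal, 2, nOver2).
                    vDSP_ztoc(&split, 1, outComplex, 2, vDSP_Length(n))
                }
            }
        }
    }
    // interleaved is now [1, 2, 3, 4, 5, 6, 7, 8]: r0, i0, r1, i1, ...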

Reimplement vDSP_deq22 for Biquad IIR Filter by hand

Submitted by 家住魔仙堡 on 2019-12-12 08:09:48
Question: I'm porting a filterbank that currently uses the Apple-specific (Accelerate) vDSP function vDSP_deq22 to Android (where Accelerate is not available). The filterbank is a set of bandpass filters that each return the RMS magnitude for their respective band. Currently the code (Objective-C++, adapted from NVDSP) looks like this:

    - (float) filterContiguousData: (float *)data numFrames:(UInt32)numFrames channel:(UInt32)channel {
        // Init float to store RMS volume
        float rmsVolume = 0.0f;
        // Provide
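vDSP_deq22 evaluates the standard two-pole, two-zero difference equation with its five coefficients ordered [b0, b1, b2, a1, a2]: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]. A hand-rolled equivalent, shown here in Swift for brevity (it translates line-for-line to a plain C++ loop for Android), might look like the sketch below. This is an assumed reimplementation based on the documented formula, not the asker's ported code.

    // Hand-rolled stand-in for vDSP_deq22 (direct form I biquad).
    // Coefficients follow vDSP's ordering: [b0, b1, b2, a1, a2].
    // Like vDSP_deq22, the first two output samples are left untouched and the
    // recurrence starts at index 2, using x[0], x[1] (and y[0], y[1]) as history.
    func deq22(_ x: [Float], coefficients b: [Float], into y: inout [Float]) {
        precondition(b.count == 5 && y.count >= x.count && x.count >= 2)
        for n in 2..<x.count {
            y[n] = b[0] * x[n]
                 + b[1] * x[n - 1]
                 + b[2] * x[n - 2]
                 - b[3] * y[n - 1]
                 - b[4] * y[n - 2]
        }
    }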

Auto-correlating the cepstrum

Submitted by *爱你&永不变心* on 2019-12-11 12:06:20
Question: I'm trying to detect some echoes in sound coming from the microphone. The echoes will be periodic and at one of two possible offsets. I've heard I need to auto-correlate the cepstrum of the signal in order to detect the presence of these echoes. Can you provide code using the Accelerate framework that shows how to detect echoes in the audio data?
Answer 1: I'm not entirely sure why you'd auto-correlate the cepstrum. Autocorrelation, though, gives you a representation that is related to the
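For the autocorrelation step on its own, vDSP_conv can be used directly: with a positive filter stride it computes correlation, so correlating a zero-padded copy of the signal with itself yields the autocorrelation at each lag. A minimal sketch under those assumptions; the helper name autocorrelation(of:maxLag:) is made up.

    import Accelerate

    // Sketch: autocorrelation of `signal` for lags 0..<maxLag using vDSP_conv.
    // With a positive filter stride, vDSP_conv computes C[n] = sum_p A[n+p] * F[p],
    // so padding the signal with (maxLag - 1) trailing zeros and using the signal
    // itself as the filter gives the autocorrelation at lag n.
    func autocorrelation(of signal: [Float], maxLag: Int) -> [Float] {
        let padded = signal + [Float](repeating: 0, count: maxLag - 1)
        var result = [Float](repeating: 0, count: maxLag)
        vDSP_conv(padded, 1,
                  signal, 1,
                  &result, 1,
                  vDSP_Length(maxLag),
                  vDSP_Length(signal.count))
        return result
    }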