avfoundation

Can't attach new metadata to captured image

对着背影说爱祢 submitted on 2019-12-24 03:19:28
Question: I am trying to attach some of my own fields to an image I capture. I seem to be able to change existing EXIF entries, but I can't add new ones, either within the EXIF dictionary or as a separate dictionary added to the image. When I make my additions, I can see them as part of the image data, but they never get saved to the image file. However, when I change existing EXIF entries, those do get saved to the image file. I have already studied: https://stackoverflow.com/a/5294574/86020 http:/
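A minimal sketch of persisting metadata by re-encoding the capture through ImageIO's CGImageDestination, which is one common way to make added dictionaries survive into the file; the helper name and its parameters are illustrative, not taken from the question.

import ImageIO
import Foundation

// Illustrative helper (not from the question): merge extra metadata into the existing
// properties and re-encode through a CGImageDestination so the additions reach the file.
func writeImage(data: Data, addingMetadata extra: [CFString: Any], to url: URL) -> Bool {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil),
          let type = CGImageSourceGetType(source),
          let destination = CGImageDestinationCreateWithURL(url as CFURL, type, 1, nil) else {
        return false
    }
    // Start from the properties already in the capture so existing EXIF entries survive.
    var properties = (CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any]) ?? [:]
    for (key, value) in extra {
        properties[key] = value
    }
    // Mutating the dictionary in memory changes nothing on disk; it has to travel
    // with the image through the destination when the file is written.
    CGImageDestinationAddImageFromSource(destination, source, 0, properties as CFDictionary)
    return CGImageDestinationFinalize(destination)
}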

AVAssetExportSession no audio (iPhone), works on iPad

試著忘記壹切 submitted on 2019-12-24 02:23:29
Question: We're trying to take an existing video with audio (.mov) and make a more email-friendly version. It seems pretty straightforward, and the code below does just what we need ... almost. On an iPad 2 (4.3.3) it works in debug and release builds all of the time. On the iPhone 4 (4.3.3) or 4th-gen iPod touch there's no audio. From time to time, with no obvious correlation as to what triggers it, it will start working on the iPhone. Delete the app, rebuild/install, and it no longer works. AVURLAsset* asset =
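One commonly suggested check for this kind of device-dependent silence, sketched below with placeholder URLs (this is not a confirmed fix for the question): make sure the asset's tracks have finished loading before the export session is created, so the audio track is actually available on slower devices.

import AVFoundation

let sourceURL = URL(fileURLWithPath: "/path/to/source.mov")   // placeholder
let outputURL = URL(fileURLWithPath: "/path/to/email.mov")    // placeholder

let asset = AVURLAsset(url: sourceURL)
// Wait until the tracks (including audio) are loaded before building the export session.
asset.loadValuesAsynchronously(forKeys: ["tracks"]) {
    var error: NSError?
    guard asset.statusOfValue(forKey: "tracks", error: &error) == .loaded,
          let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetMediumQuality) else {
        return
    }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        // Inspect export.status and export.error here; .failed often carries the real reason.
    }
}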

iOS Ignoring enqueueSampleBuffer because status is failed

允我心安 submitted on 2019-12-24 01:53:41
Question: When I restart the app from here: https://github.com/zweigraf/face-landmarking-ios the picture from the camera doesn't appear, and it prints the error: "Ignoring enqueueSampleBuffer because status is failed". The problem is probably in captureOutput from SessionHandler.swift Answer 1: I found a solution! Thanks to Why does AVSampleBufferDisplayLayer fail with Operation Interrupted (-11847)? If you have a similar problem, you need to set up the AVSampleBufferDisplayLayer each time the app enters the foreground. Like this: /
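A sketch along the lines of that answer, using an illustrative container view (the linked project keeps its layer elsewhere): rebuild the AVSampleBufferDisplayLayer on every return to the foreground, and flush it if its status has already become failed.

import AVFoundation
import UIKit

final class SampleBufferView: UIView {
    private var displayLayer = AVSampleBufferDisplayLayer()

    override init(frame: CGRect) {
        super.init(frame: frame)
        displayLayer.frame = bounds
        layer.addSublayer(displayLayer)
        // Recreate the layer whenever the app comes back to the foreground,
        // because a backgrounded layer can come back with status == .failed.
        NotificationCenter.default.addObserver(self, selector: #selector(rebuildLayer),
                                               name: UIApplication.willEnterForegroundNotification,
                                               object: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    @objc private func rebuildLayer() {
        displayLayer.removeFromSuperlayer()
        displayLayer = AVSampleBufferDisplayLayer()
        displayLayer.frame = bounds
        layer.addSublayer(displayLayer)
    }

    func enqueue(_ sampleBuffer: CMSampleBuffer) {
        if displayLayer.status == .failed {
            displayLayer.flush()   // clear the failed state before enqueueing again
        }
        displayLayer.enqueue(sampleBuffer)
    }
}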

Caching/playing AVPlayer video at same time

孤者浪人 submitted on 2019-12-24 00:45:50
Question: My code borrows heavily from this question: AVPlayer stalling on large video files using resource loader delegate and the code that question mentions, here: https://gist.github.com/anonymous/83a93746d1ea52e9d23f My problem, though, is that even though my video DOES download, and I can progressively track it, it never plays. My code is nearly identical to that in the questions above, except it's in a table view cell. It's a lot of code (about 100 lines), so I just created a gist for simplicity: https:/
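For context, a minimal sketch of the resource-loader wiring those questions rely on; the delegate class name, the queue label and the custom streaming:// scheme are placeholders, and the delegate body is where the caching logic from the gist would live.

import AVFoundation

// Placeholder delegate; the real work (answering contentInformationRequest and the
// byte-range dataRequests from cached or downloading data) goes in this method.
final class CachingResourceLoaderDelegate: NSObject, AVAssetResourceLoaderDelegate {
    func resourceLoader(_ resourceLoader: AVAssetResourceLoader,
                        shouldWaitForLoadingOfRequestedResource loadingRequest: AVAssetResourceLoadingRequest) -> Bool {
        // If neither the content information nor the requested byte ranges are ever
        // filled in and finished, AVPlayer keeps downloading but never starts playing.
        return true
    }
}

let loaderDelegate = CachingResourceLoaderDelegate()
let customURL = URL(string: "streaming://example.com/video.mp4")!   // non-HTTP scheme forces the delegate to be consulted
let asset = AVURLAsset(url: customURL)
asset.resourceLoader.setDelegate(loaderDelegate, queue: DispatchQueue(label: "resource-loader"))
let player = AVPlayer(playerItem: AVPlayerItem(asset: asset))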

Split CMTimeRange into multiple CMTimeRange chunks

淺唱寂寞╮ submitted on 2019-12-24 00:24:55
Question: Let's assume I have a CMTimeRange constructed from a start time of zero and a duration of 40 seconds. I want to split this CMTimeRange into multiple chunks by a divider of X seconds, so that the total duration of the chunks is the same as the original duration, and each startTime reflects the endTime of the previous chunk. The last chunk holds the leftover (modulus) seconds. For example, for a video of 40 seconds and a divider of 15 seconds per chunk: First CMTimeRange - start time
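A small sketch of one way to do this split (the 600 timescale is an arbitrary choice); the loop advances a cursor by the chunk duration and clamps the final chunk to whatever time remains.

import CoreMedia

func split(_ range: CMTimeRange, chunkSeconds: Double) -> [CMTimeRange] {
    let chunkDuration = CMTime(seconds: chunkSeconds, preferredTimescale: 600)
    let end = CMTimeAdd(range.start, range.duration)
    var chunks: [CMTimeRange] = []
    var cursor = range.start

    while cursor < end {
        // Each chunk starts where the previous one ended; the final chunk is clamped
        // to whatever duration is left over.
        let remaining = CMTimeSubtract(end, cursor)
        let duration = CMTimeMinimum(chunkDuration, remaining)
        chunks.append(CMTimeRange(start: cursor, duration: duration))
        cursor = CMTimeAdd(cursor, duration)
    }
    return chunks
}

// A 40-second range split by 15 seconds -> chunks of 15 s, 15 s and 10 s.
let chunks = split(CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 600),
                               duration: CMTime(seconds: 40, preferredTimescale: 600)),
                   chunkSeconds: 15)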

How to play MPEG-DASH with AVPlayer?

纵饮孤独 submitted on 2019-12-23 23:50:39
Question: I was wondering what the approach could be for streaming DASH (Dynamic Adaptive Streaming over HTTP) with AVPlayer. I saw this: AVFoundation (AVPlayer) supported formats? No .vob or .mpg containers? but it looks like AVFoundation has no support for DASH. This is the sample link: https://d28ny1s9kzd6a.cloudfront.net/shark+video/shark.mpd Answer 1: You could look into the following GitHub project, or simply use it; the licence is MIT: https://github.com/Viblast/ios-player-sdk Source: https://stackoverflow.com/questions

How to apply a chroma key filter with any color to a live camera feed on iOS?

混江龙づ霸主 submitted on 2019-12-23 19:58:46
Question: Basically I want to apply a chroma key filter to the iOS live camera feed, but I want the user to pick the color that will be replaced by another color. I found some examples using a green screen, but I don't know how to replace a dynamically chosen color instead of just green. Any idea how I can achieve that with the best performance? Answer 1: You've previously asked about my GPUImage framework, so I assume that you're familiar with it. Within that framework are two filters, a GPUImageChromaKeyFilter and a
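A hedged sketch of the GPUImage 1.x route the answer points at, written as the Objective-C API bridges into Swift; the exact bridged method names, the sensitivity value and the picked colour components are assumptions. The point is that the chroma-key filter keys on an arbitrary colour, so the user's choice can be passed straight in.

import GPUImage
import AVFoundation

// Colour picked by the user, as 0.0-1.0 components (placeholder values).
let pickedRed: Float = 0.0, pickedGreen: Float = 1.0, pickedBlue: Float = 0.0

let previewView = GPUImageView()   // add this to the view hierarchy
let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.hd1280x720.rawValue,
                                 cameraPosition: .back)
camera.outputImageOrientation = .portrait

let chromaKey = GPUImageChromaKeyFilter()
// Key on whatever colour the user picked, not just green.
chromaKey.setColorToReplaceRed(pickedRed, green: pickedGreen, blue: pickedBlue)
chromaKey.thresholdSensitivity = 0.4   // how close to the key colour a pixel must be

camera.addTarget(chromaKey)
chromaKey.addTarget(previewView)
camera.startCapture()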

Instruments reporting memory leak whenever AVSpeechSynthesizer is used to read text

送分小仙女□ submitted on 2019-12-23 19:29:48
Question: Every time I use AVSpeechSynthesizer to speak text, Instruments reports a memory leak in the AXSpeechImplementation library. Here's the code I'm using to make the call: AVSpeechUtterance *speak = [AVSpeechUtterance speechUtteranceWithString:text]; speak.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"]; speak.rate = AVSpeechUtteranceMaximumSpeechRate * .2; [m_speechSynth speakUtterance:speak]; Here's the link to the Instruments screenshot: http://imageshack.com/a/img690/7993/b9w5.png

Why does Swift's AVPlayer load the playerItem twice on one play?

六眼飞鱼酱① submitted on 2019-12-23 18:24:21
Question: I'm using AVFoundation's AVPlayer for streaming external mp3 files. I have a counter on the back end that counts how many times a file is loaded. The only client for this service is me, and whenever I trigger the AVPlayer to play, the counter increases by two, which means AVPlayer makes the request twice. Is there a reason for this, or how can I prevent that from happening? Here is my code: @IBAction func listen(sender: UIButton) { let urlstring = "http://api.server.com/endpoint-to-mp3" let url
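For reference, a sketch of how the truncated action presumably continues; the completion of the snippet and keeping the player in a property are assumptions rather than the poster's code, and the endpoint URL is the placeholder from the question.

import AVFoundation
import UIKit

class ListenViewController: UIViewController {
    // Keep the player in a property so it isn't deallocated when the action returns.
    private var player: AVPlayer?

    @IBAction func listen(sender: UIButton) {
        let urlstring = "http://api.server.com/endpoint-to-mp3"   // placeholder endpoint from the question
        guard let url = URL(string: urlstring) else { return }
        player = AVPlayer(playerItem: AVPlayerItem(url: url))
        player?.play()
    }
}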

How Do I Get Reliable Timing for my Audio App?

荒凉一梦 submitted on 2019-12-23 16:06:39
Question: I have an audio app in which all of the sound-generating work is done by Pure Data (using libpd). I've coded a special sequencer in Swift which controls the start/stop playback of multiple sequences, played by the synth engines in Pure Data. Until now, I've completely avoided using Core Audio or AVFoundation for any aspect of my app, because I know nothing about them, and they both seem to require C or Objective-C coding, which I know nearly nothing about. However, I've been told from
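The excerpt cuts off mid-sentence, but as a small hedged illustration of the AVFoundation piece that often matters for timing: ask the shared AVAudioSession for a short IO buffer so audio callbacks, and therefore libpd's processing ticks, arrive at predictable intervals. The 5 ms figure is an arbitrary example, not a recommendation from the question.

import AVFoundation

let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.playback, mode: .default, options: [])
    // Ask for a small IO buffer; the hardware may round this, so read back the real value.
    try session.setPreferredIOBufferDuration(0.005)
    try session.setActive(true)
    print("IO buffer duration actually granted: \(session.ioBufferDuration) s")
} catch {
    print("Audio session configuration failed: \(error)")
}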