avfoundation

AVURLAsset not loading video on documents folder, even using fileURLWithPath

Submitted by 安稳与你 on 2019-12-07 06:05:38
Question: I've been struggling with this for the past couple of hours; hopefully someone has run into it before. I download a file from a server to my Documents folder. The file is there and valid (checked with iExplorer on the device and in the simulator's local directory). I moved each file to my desktop and it plays without problems. The strange thing is that the exact same code works without issues when the file (the same video) is added to the bundle. Code: print("video url string : \(video.urlString)") //
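
A minimal Swift sketch of the setup described above, assuming a hypothetical fileName saved for the download: the Documents URL is rebuilt at playback time (the app container path can change between runs, so a stored absolute path may no longer point at the file) and the asset is created from that file URL.

    import AVFoundation

    func makeAsset(named fileName: String) -> AVURLAsset? {
        // Resolve the Documents directory fresh each time; the container path is not stable.
        guard let documents = FileManager.default.urls(for: .documentDirectory,
                                                       in: .userDomainMask).first else { return nil }
        let fileURL = documents.appendingPathComponent(fileName)
        guard FileManager.default.fileExists(atPath: fileURL.path) else { return nil }
        return AVURLAsset(url: fileURL)
    }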

Save audio stream to mp3 file (iOS)

Submitted by 人走茶凉 on 2019-12-07 04:12:57
Question: I have an AVSpeechSynthesizer which converts text to speech, but I've encountered a problem. I don't know how to save the audio it generates to a music file, which I would quite like to be able to do! So here's my question: how do you save the AVSpeechSynthesizer output, and if this isn't possible, can I use AVFoundation, CoreMedia or another public API to capture the output of the speakers before it comes out? Thanks! Answer 1: Unfortunately no, there is no public API available to
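
For reference, a minimal sketch of the synthesizer setup the question describes (this speaks the text aloud; capturing that output to a file is the part the answer says had no public API at the time):

    import AVFoundation

    let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        synthesizer.speak(utterance)   // plays through the speakers; no file is produced
    }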

iOS - CoreImage - Add an effect to partial of image

Submitted by 核能气质少年 on 2019-12-07 04:11:58
Question: I just had a look at the Core Image framework on iOS 5 and found that it's easy to add an effect to a whole image. I wonder whether it's possible to apply an effect to only part of an image (a rectangle), for example a grayscale effect on a portion of the image. I look forward to your help. Thanks, Huy Answer 1: Watch session 510 from the WWDC 2012 videos. They present a technique for applying a mask to a CIImage. You need to learn how to chain the filters together. In particular, take a look at: CICrop ,
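
One way to chain those filters (a sketch, not the exact WWDC technique): crop the rectangle of interest, desaturate it with CIColorControls, then composite the result back over the original with CISourceOverCompositing.

    import CoreImage

    func grayscale(region: CGRect, of input: CIImage) -> CIImage? {
        guard let crop = CIFilter(name: "CICrop"),
              let gray = CIFilter(name: "CIColorControls"),
              let over = CIFilter(name: "CISourceOverCompositing") else { return nil }

        // Crop out the rectangle that should receive the effect.
        crop.setValue(input, forKey: kCIInputImageKey)
        crop.setValue(CIVector(cgRect: region), forKey: "inputRectangle")
        guard let cropped = crop.outputImage else { return nil }

        // Desaturate only the cropped region.
        gray.setValue(cropped, forKey: kCIInputImageKey)
        gray.setValue(0.0, forKey: kCIInputSaturationKey)
        guard let grayed = gray.outputImage else { return nil }

        // Composite the grayscale patch back over the untouched original.
        over.setValue(grayed, forKey: kCIInputImageKey)
        over.setValue(input, forKey: kCIInputBackgroundImageKey)
        return over.outputImage
    }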

AVAudioSession alternative on OSX to get audio driver sample rate

Submitted by 倖福魔咒の on 2019-12-07 04:06:39
Question: On iOS you can use [[AVAudioSession sharedInstance] sampleRate]; to retrieve the current sample rate used by the audio driver. AVAudioSession does not exist on OS X, so I am wondering how to achieve the same thing there, as I could not find much on the topic. Thanks. Answer 1: Okay, after some more in-depth research, Audio Hardware Services seems to do the trick on OS X. Here is some example code: //get the default output device AudioObjectPropertyAddress addr; UInt32 size; AudioDeviceID deviceID = 0
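
The same idea in Swift (a sketch using the CoreAudio Audio Object property API: look up the default output device, then read its nominal sample rate):

    import CoreAudio

    func defaultOutputSampleRate() -> Double? {
        // Get the default output device.
        var deviceID = AudioDeviceID(0)
        var size = UInt32(MemoryLayout<AudioDeviceID>.size)
        var addr = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDefaultOutputDevice,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMaster)
        guard AudioObjectGetPropertyData(AudioObjectID(kAudioObjectSystemObject),
                                         &addr, 0, nil, &size, &deviceID) == noErr else { return nil }

        // Ask that device for its nominal sample rate.
        var sampleRate = Float64(0)
        size = UInt32(MemoryLayout<Float64>.size)
        addr.mSelector = kAudioDevicePropertyNominalSampleRate
        guard AudioObjectGetPropertyData(deviceID, &addr, 0, nil, &size, &sampleRate) == noErr else { return nil }
        return sampleRate
    }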

WARNING: under normal conditions, _fillInQueueWithExtraSpace:ignoreExistingItems: should not be re-entered

Submitted by 谁说我不能喝 on 2019-12-07 03:47:46
Question: This is my class that manages my video: #import "video.h" #import <MediaPlayer/MediaPlayer.h> @interface video() { MPMoviePlayerController* videoView; } @end @implementation video static video *sharedSingleton = nil; + (video *)sharedSingleton { @synchronized([video class]) { if (!sharedSingleton) sharedSingleton = [[super allocWithZone:NULL] init]; return sharedSingleton; } return nil; } - (id)init { self = [super init]; CGRect dimVideo = CGRectMake(0, 0, 472, 400); NSURL* videoPath = [

precise timing with AVMutableComposition

Submitted by 你说的曾经没有我的故事 on 2019-12-07 03:01:18
Question: I'm trying to use AVMutableComposition to play a sequence of sound files at precise times. When the view loads, I create the composition with the intent of playing 4 sounds evenly spaced over 1 second. It shouldn't matter how long or short the sounds are; I just want to fire them at exactly 0, 0.25, 0.5 and 0.75 seconds: AVMutableComposition *composition = [[AVMutableComposition alloc] init]; NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES}; for (NSInteger i = 0;
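
A Swift sketch of that intent, assuming a hypothetical soundURLs array of four local files: each asset's audio is inserted into the composition at 0, 0.25, 0.5 and 0.75 seconds, with precise duration/timing requested on the assets.

    import AVFoundation

    func makeComposition(soundURLs: [URL]) -> AVMutableComposition {
        let composition = AVMutableComposition()
        let options = [AVURLAssetPreferPreciseDurationAndTimingKey: true]

        for (i, url) in soundURLs.enumerated() {
            let asset = AVURLAsset(url: url, options: options)
            guard let sourceTrack = asset.tracks(withMediaType: .audio).first,
                  let track = composition.addMutableTrack(withMediaType: .audio,
                                                          preferredTrackID: kCMPersistentTrackID_Invalid)
            else { continue }

            // Fire each sound at exactly i * 0.25 s (timescale 4 keeps the times exact).
            let start = CMTime(value: CMTimeValue(i), timescale: 4)
            let range = CMTimeRange(start: .zero, duration: asset.duration)
            try? track.insertTimeRange(range, of: sourceTrack, at: start)
        }
        return composition
    }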

AVCaptureSession barcode scan

Submitted by 元气小坏坏 on 2019-12-07 02:17:19
Question: I'm currently working with AVCaptureSession and AVCaptureMetadataOutput. It works perfectly, but I just want to know how to make it scan and analyze metadata objects only in a specific region of the AVCaptureVideoPreviewLayer? Answer 1: Here is a sample of code from a project I have that may help put you on the right track // where 'self.session' is previously setup AVCaptureSession // setup metadata capture AVCaptureMetadataOutput *metadataOutput = [[AVCaptureMetadataOutput alloc] init]; [self
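
One common way to restrict the scanned region (a sketch, assuming previewLayer is the AVCaptureVideoPreviewLayer attached to the session and scanRect is the on-screen area of interest): convert the layer rectangle into the metadata output's coordinate space and assign it to rectOfInterest.

    import AVFoundation

    // Call this after the session has started running, so the coordinate
    // conversion has valid video dimensions to work with.
    func limitScanning(of metadataOutput: AVCaptureMetadataOutput,
                       to scanRect: CGRect,
                       in previewLayer: AVCaptureVideoPreviewLayer) {
        metadataOutput.rectOfInterest =
            previewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
    }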

AVPlayer's video and audio become out of sync after pausing and then resuming video

Submitted by 南笙酒味 on 2019-12-07 02:16:52
Question: I'm using AVPlayer to play videos in my app. Video playback always works perfectly, except for when you pause/resume the video. If the user presses the pause button and then resumes the video, sometimes the audio will be ahead of the video. The video resumes at the correct location, but the audio is ahead. It's as if when you press pause, the audio keeps running. When they press the pause button, all I am doing is calling the pause method of the AVPlayer, and I have also tried setting its

Capturing iSight image using AVFoundation on Mac

Submitted by 心不动则不痛 on 2019-12-07 01:15:38
Question: I previously had this code to capture a single image from a Mac's iSight camera using QTKit: - (NSError*)takePicture { BOOL success; NSError* error; captureSession = [QTCaptureSession new]; QTCaptureDevice* device = [QTCaptureDevice defaultInputDeviceWithMediaType: QTMediaTypeVideo]; success = [device open: &error]; if (!success) { return error; } QTCaptureDeviceInput* captureDeviceInput = [[QTCaptureDeviceInput alloc] initWithDevice: device]; success = [captureSession addInput:
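
A rough Swift sketch of the AVFoundation equivalent (assumes macOS 10.15+ for AVCapturePhotoOutput and that camera permission has already been granted; on older systems AVCaptureStillImageOutput played this role):

    import AVFoundation

    final class SnapshotTaker: NSObject, AVCapturePhotoCaptureDelegate {
        private let session = AVCaptureSession()
        private let photoOutput = AVCapturePhotoOutput()

        func start() throws {
            guard let device = AVCaptureDevice.default(for: .video) else { return }
            let input = try AVCaptureDeviceInput(device: device)
            if session.canAddInput(input) { session.addInput(input) }
            if session.canAddOutput(photoOutput) { session.addOutput(photoOutput) }
            session.startRunning()
        }

        func takePicture() {
            photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
        }

        func photoOutput(_ output: AVCapturePhotoOutput,
                         didFinishProcessingPhoto photo: AVCapturePhoto,
                         error: Error?) {
            guard error == nil, let data = photo.fileDataRepresentation() else { return }
            // Use `data` here, e.g. write it to disk or build an NSImage from it.
            _ = data
        }
    }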

Getting Slow Motion meta data from captured video in iOS

Submitted by 血红的双手。 on 2019-12-07 00:10:37
Question: We have a video app that is importing video from the user's camera roll. Our problem is importing slo-mo video taken with the native Camera app. We can recognise that there is a higher than normal frame rate (e.g. 120 or 240 fps). What we can't find is the meta information that specifies when the video drops into slow motion and when it speeds up again. Does anyone know where this information is kept, and/or how to get at it? Is it in the file itself, or stored in a separate meta file
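
A small sketch of the frame-rate check mentioned above, plus one place worth inspecting (an assumption, not a confirmed answer): when Photos hands a slo-mo clip over as an AVComposition, the retimed sections appear as track segments whose timeMapping source and target durations differ.

    import AVFoundation

    func inspectSlowMotion(asset: AVAsset) {
        for track in asset.tracks(withMediaType: .video) {
            // Slo-mo captures report the high capture rate here (e.g. 120 or 240 fps).
            print("nominal frame rate: \(track.nominalFrameRate)")

            // If the asset arrived as a composition, slowed sections show up as
            // segments stretched in the target timeline relative to their source.
            for segment in track.segments {
                let mapping = segment.timeMapping
                let sourceSeconds = CMTimeGetSeconds(mapping.source.duration)
                let targetSeconds = CMTimeGetSeconds(mapping.target.duration)
                if targetSeconds > sourceSeconds {
                    print("slowed section at \(CMTimeGetSeconds(mapping.target.start)) s for \(targetSeconds) s")
                }
            }
        }
    }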