avfoundation

How to export a video asset via AVAssetExportSession in portrait mode

て烟熏妆下的殇ゞ submitted on 2020-01-27 06:38:04
Question: When I export a video asset via AVAssetExportSession, the resulting file is in landscape mode (file grabbed via iTunes → Apps → File Sharing → my app). How can I export the video asset in portrait mode (rotate it)? Answer 1: The video coming from the iPhone capture device is always landscape oriented, whatever the device orientation is when capturing. If you want to rotate your video, the 'simple' solution is to assign a transform to the video track of the exported session. Create 2 mutable tracks in
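
The answer is cut off above; a minimal Swift sketch of the approach it describes (two mutable tracks plus a transform on the video track, then export) might look like the following. The function name, output URL, and error handling are placeholders, not the answer's actual code.

```swift
import AVFoundation

// Sketch: copy the tracks into a mutable composition and set a 90°
// preferredTransform on the video track so the landscape capture is
// rendered in portrait, then export the composition.
func exportPortrait(asset: AVAsset, to outputURL: URL) {
    let composition = AVMutableComposition()
    let range = CMTimeRange(start: .zero, duration: asset.duration)

    if let sourceVideo = asset.tracks(withMediaType: .video).first,
       let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid) {
        try? videoTrack.insertTimeRange(range, of: sourceVideo, at: .zero)
        // Rotate 90° so players show the frame upright in portrait.
        videoTrack.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2)
    }

    if let sourceAudio = asset.tracks(withMediaType: .audio).first,
       let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                    preferredTrackID: kCMPersistentTrackID_Invalid) {
        try? audioTrack.insertTimeRange(range, of: sourceAudio, at: .zero)
    }

    guard let export = AVAssetExportSession(asset: composition,
                                            presetName: AVAssetExportPresetHighestQuality) else { return }
    export.outputURL = outputURL
    export.outputFileType = .mov
    export.exportAsynchronously {
        print(export.status == .completed ? "Exported" : "Failed: \(String(describing: export.error))")
    }
}
```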

How to encode self-delimited opus in iOS

Deadly submitted on 2020-01-25 07:20:07
Question: I can record Opus using AVAudioRecorder as follows: let opusRecordingSettings = [AVFormatIDKey: kAudioFormatOpus, AVSampleRateKey: 16000.0, AVNumberOfChannelsKey: 1] as [String: Any] do { try audioRecordingSession.setCategory(.playAndRecord, mode: .default) try audioRecordingSession.setActive(true) audioRecorder = try AVAudioRecorder(url: fileUrl(), settings: opusRecordingSettings) audioRecorder.delegate = self audioRecorder.prepareToRecord() audioRecorder.record() } catch _ { } // ... ...
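
For reference, a self-contained version of the recording setup from the question is sketched below. The output URL stands in for the question's fileUrl() helper, and the delegate is omitted; this only reproduces the Opus recording step and does not address the self-delimited framing the question asks about.

```swift
import AVFoundation

// Sketch: configure the shared audio session and start an Opus recording
// with AVAudioRecorder, mirroring the settings from the question.
func startOpusRecording(session: AVAudioSession, outputURL: URL) throws -> AVAudioRecorder {
    let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatOpus,
        AVSampleRateKey: 16_000.0,
        AVNumberOfChannelsKey: 1
    ]
    try session.setCategory(.playAndRecord, mode: .default)
    try session.setActive(true)

    let recorder = try AVAudioRecorder(url: outputURL, settings: settings)
    recorder.prepareToRecord()
    recorder.record()
    return recorder
}
```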

Join 2 different labels for text to speech conversion (swift3)

倖福魔咒の submitted on 2020-01-24 19:35:09
Question: Using the text-to-speech feature I can easily get one label to be spoken, but I want utterance2 to be joined to utterance: utterance should be spoken first, and when it finishes, utterance2 should be spoken right after. let utterance = AVSpeechUtterance(string: dptext.text!) let utterance2 = AVSpeechUtterance(string: dptext2.text!) let synthesizer = AVSpeechSynthesizer() synthesizer.speak(utterance) Answer 1: I think the simplest way to handle this situation is to combine the two strings with
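
The answer is cut off above; a minimal sketch of the approach it hints at is to concatenate the two strings into a single utterance. The class and parameter names below are illustrative, with text1/text2 standing in for dptext.text! and dptext2.text! from the question. Calling speak(_:) twice in a row also works, since AVSpeechSynthesizer queues utterances and speaks them in order.

```swift
import AVFoundation

final class Speaker {
    // Keep the synthesizer alive as a property so speech isn't cut off
    // when it goes out of scope.
    private let synthesizer = AVSpeechSynthesizer()

    func speakBoth(_ text1: String, _ text2: String) {
        // One utterance built from both strings, spoken back to back.
        synthesizer.speak(AVSpeechUtterance(string: text1 + ". " + text2))
    }
}
```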

AVAudioPlayerNode doesn't play sound

烂漫一生 submitted on 2020-01-24 09:20:10
Question: I'm trying to generate sound with the code below. Everything looks fine and there is no error, but when I execute this code there is no sound. How can I fix this problem? By the way, I'm using this example: http://www.tmroyal.com/playing-sounds-in-swift-audioengine.html var ae:AVAudioEngine var player:AVAudioPlayerNode? var mixer:AVAudioMixerNode var buffer:AVAudioPCMBuffer ae = AVAudioEngine() player = AVAudioPlayerNode() mixer = ae.mainMixerNode; buffer = AVAudioPCMBuffer(pcmFormat: player!
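
No answer is quoted above, but a common cause of silence with AVAudioPlayerNode is scheduling and playing before the node is attached, connected to the mixer, and the engine started. A minimal sketch under that assumption (the 440 Hz tone and one-second buffer are illustrative):

```swift
import AVFoundation

// Sketch: attach the node, connect it to the main mixer, start the engine,
// and only then schedule a buffer and call play(). Keep the engine and
// player alive (stored properties) for as long as playback should run.
final class TonePlayer {
    private let engine = AVAudioEngine()
    private let player = AVAudioPlayerNode()

    func start() throws {
        engine.attach(player)
        let format = engine.mainMixerNode.outputFormat(forBus: 0)
        engine.connect(player, to: engine.mainMixerNode, format: format)

        let frameCount = AVAudioFrameCount(format.sampleRate)   // one second of audio
        guard let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return }
        buffer.frameLength = frameCount

        // Fill every channel with a 440 Hz sine wave.
        for channel in 0..<Int(format.channelCount) {
            guard let samples = buffer.floatChannelData?[channel] else { continue }
            for frame in 0..<Int(frameCount) {
                samples[frame] = sin(2.0 * .pi * 440.0 * Float(frame) / Float(format.sampleRate))
            }
        }

        try engine.start()                 // start the engine before playing
        player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
        player.play()
    }
}
```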

Replace Part of Pixel Buffer with White Pixels in iOS

天大地大妈咪最大 submitted on 2020-01-22 12:53:26
Question: I am using the iPhone camera to capture live video and feeding the pixel buffer to a network that does some object recognition. Here is the relevant code (I won't post the code for setting up the AVCaptureSession etc., as this is pretty standard): - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection { CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); OSType
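
The question's Objective-C snippet is cut off above. As a sketch of the overall idea, the Swift function below overwrites a rectangular region of the pixel buffer with white before it is handed to the network, assuming the usual kCVPixelFormatType_32BGRA capture format; the function name and rect are placeholders.

```swift
import CoreVideo

// Sketch: lock the buffer, then write opaque white into every BGRA pixel
// inside the given rect (clamped to the buffer bounds).
func whiteOut(_ pixelBuffer: CVPixelBuffer, rect: CGRect) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)

    let minX = max(0, Int(rect.minX)), maxX = min(width, Int(rect.maxX))
    let minY = max(0, Int(rect.minY)), maxY = min(height, Int(rect.maxY))

    let pixels = base.assumingMemoryBound(to: UInt8.self)
    for y in minY..<maxY {
        let row = pixels + y * bytesPerRow
        for x in minX..<maxX {
            // 4 bytes per BGRA pixel; set B, G, R and A to 255.
            let p = row + x * 4
            p[0] = 255; p[1] = 255; p[2] = 255; p[3] = 255
        }
    }
}
```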

AVMutableCompositionTrack - insertTimeRange - insertEmptyTimeRange issue

℡╲_俬逩灬. submitted on 2020-01-21 07:24:27
Question: I have a strange problem: I want to generate a new sound file out of two sound files and silence: sound1 (2 seconds long) + silence (2 seconds) + sound2 (2 seconds long). When I try the code below, I get a 6-second sound file containing all the parts, but in a different order: sound1, sound2, silence. I am not able to put the silence in the middle of this composition (nor at the beginning). Is this typical behavior, or am I doing something wrong? Here is the code for putting
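
The question's code is cut off above. As a point of comparison, here is a minimal sketch of one way to lay out sound1, two seconds of silence, and sound2 in that order, assuming both source assets are 2 seconds long; asset1/asset2 and the function name are placeholders, not the question's variables.

```swift
import AVFoundation

// Sketch: insert sound1 at 0-2 s, an empty (silent) range at 2-4 s, and
// sound2 explicitly at 4 s, so the silence stays in the middle.
func buildComposition(asset1: AVAsset, asset2: AVAsset) throws -> AVMutableComposition {
    let composition = AVMutableComposition()
    guard let track = composition.addMutableTrack(withMediaType: .audio,
                                                  preferredTrackID: kCMPersistentTrackID_Invalid),
          let source1 = asset1.tracks(withMediaType: .audio).first,
          let source2 = asset2.tracks(withMediaType: .audio).first else {
        return composition
    }

    let twoSeconds = CMTime(seconds: 2, preferredTimescale: 600)

    // sound1 at 0-2 s
    try track.insertTimeRange(CMTimeRange(start: .zero, duration: twoSeconds),
                              of: source1, at: .zero)
    // silence at 2-4 s
    track.insertEmptyTimeRange(CMTimeRange(start: twoSeconds, duration: twoSeconds))
    // sound2 at 4-6 s
    try track.insertTimeRange(CMTimeRange(start: .zero, duration: twoSeconds),
                              of: source2, at: CMTime(seconds: 4, preferredTimescale: 600))

    return composition
}
```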

Swift Video Resizer AVAsset

匆匆过客 submitted on 2020-01-21 05:21:07
Question: I have this code that resizes a video from 1280 x 720 to 640 x 360, but I want a resize with no crop. Is there a way to do a full resize that doesn't crop? Here's the code: class func resizer(inputURL : NSURL , completion: (outPutURL : NSURL?) -> Void ){ let videoAsset = AVAsset(URL: inputURL) as AVAsset let clipVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo).first! as AVAssetTrack let composition = AVMutableComposition() composition.addMutableTrackWithMediaType(AVMediaTypeVideo,
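
The question's Swift 2 code is cut off above. A minimal sketch of a crop-free resize using an AVMutableVideoComposition scale transform, written with current API names; the function name, target size, and output URL are illustrative.

```swift
import AVFoundation

// Sketch: scale the whole frame down to 640 x 360 with an affine transform
// and render at that size, so nothing is cropped, then export.
func resizeWithoutCrop(asset: AVAsset, to outputURL: URL) {
    guard let videoTrack = asset.tracks(withMediaType: .video).first,
          let export = AVAssetExportSession(asset: asset,
                                            presetName: AVAssetExportPresetHighestQuality) else { return }

    let targetSize = CGSize(width: 640, height: 360)
    let scaleX = targetSize.width / videoTrack.naturalSize.width
    let scaleY = targetSize.height / videoTrack.naturalSize.height

    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    // Scale the entire frame instead of cropping it.
    layerInstruction.setTransform(CGAffineTransform(scaleX: scaleX, y: scaleY), at: .zero)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    instruction.layerInstructions = [layerInstruction]

    let videoComposition = AVMutableVideoComposition()
    videoComposition.renderSize = targetSize
    videoComposition.frameDuration = CMTime(value: 1, timescale: 30)
    videoComposition.instructions = [instruction]

    export.videoComposition = videoComposition
    export.outputURL = outputURL
    export.outputFileType = .mp4
    export.exportAsynchronously {
        print(export.status == .completed ? "Resized" : "Failed: \(String(describing: export.error))")
    }
}
```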