Take a Video with ARKit


Question


Hello Community,

I'm trying to build an app with Swift 4 and the great upcoming ARKit framework, but I'm stuck. I need to record a video with the framework, or at least capture a UIImage sequence, but I don't know how.

This is what I've tried:

In ARKit you have a session which tracks your world. The session's current frame has a capturedImage property from which you can get the current camera image. So I created a Timer which appends the capturedImage to a list every 0.1 s. This would work for me, but if I start the Timer by tapping a "start" button, the camera starts to lag. I don't think it's the Timer itself, because if I invalidate the Timer by tapping a "stop" button the camera is smooth again.
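For reference, a minimal sketch of the approach described above (the frames array, captureTimer property, and method names are just illustrative):

var frames: [CVPixelBuffer] = []
var captureTimer: Timer?

func startCapture() {
    captureTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] _ in
        // capturedImage lives on the session's current ARFrame
        if let frame = self?.sceneView.session.currentFrame {
            // Note: holding on to many of these buffers gets memory-heavy quickly
            self?.frames.append(frame.capturedImage)
        }
    }
}

func stopCapture() {
    captureTimer?.invalidate()
    captureTimer = nil
}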

Is there a way to solve the lag, or even a better approach?

Thanks


Answer 1:


Use a custom renderer.

Render the scene using the custom renderer into an offscreen texture, then read that texture back and convert it to a CVPixelBufferRef.

- (void)viewDidLoad {
    [super viewDidLoad];

    self.rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    self.bytesPerPixel = 4;
    self.bitsPerComponent = 8;
    self.bitsPerPixel = 32;
    self.textureSizeX = 640;
    self.textureSizeY = 960;

    // Set the view's delegate
    self.sceneView.delegate = self;

    // Show statistics such as fps and timing information
    self.sceneView.showsStatistics = YES;

    // Create a new scene
    SCNScene *scene = [SCNScene scene];//[SCNScene sceneNamed:@"art.scnassets/ship.scn"];

    // Set the scene to the view
    self.sceneView.scene = scene;

    self.sceneView.preferredFramesPerSecond = 30;

    [self setupMetal];
    [self setupTexture];
    self.renderer.scene = self.sceneView.scene;

}

- (void)setupMetal
{
    if (self.sceneView.renderingAPI == SCNRenderingAPIMetal) {
        self.device = self.sceneView.device;
        self.commandQueue = [self.device newCommandQueue];
        self.renderer = [SCNRenderer rendererWithDevice:self.device options:nil];
    }
    else {
        NSAssert(NO, @"Only Metal is supported");
    }
}

- (void)setupTexture
{
    MTLTextureDescriptor *descriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatBGRA8Unorm_sRGB width:self.textureSizeX height:self.textureSizeY mipmapped:NO];
    descriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageRenderTarget;

    id<MTLTexture> textureA = [self.device newTextureWithDescriptor:descriptor];
    self.offscreenTexture = textureA;
}

- (void)renderer:(id <SCNSceneRenderer>)renderer willRenderScene:(SCNScene *)scene atTime:(NSTimeInterval)time
{
    [self doRender];
}

- (void)doRender
{
    if (self.rendering) {
        return;
    }
    self.rendering = YES;
    CGRect viewport = CGRectMake(0, 0, self.textureSizeX, self.textureSizeY);

    id<MTLTexture> texture = self.offscreenTexture;

    MTLRenderPassDescriptor *renderPassDescriptor = [MTLRenderPassDescriptor new];
    renderPassDescriptor.colorAttachments[0].texture = texture;
    renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
    renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(0, 1, 0, 1.0);
    renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;

    id<MTLCommandBuffer> commandBuffer = [self.commandQueue commandBuffer];

    self.renderer.pointOfView = self.sceneView.pointOfView;

    [self.renderer renderAtTime:0 viewport:viewport commandBuffer:commandBuffer passDescriptor:renderPassDescriptor];

    [commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> _Nonnull bf) {
        [self.recorder writeFrameForTexture:texture];
        self.rendering = NO;
    }];

    [commandBuffer commit];
}

Then, in the recorder, set up an AVAssetWriterInputPixelBufferAdaptor with an AVAssetWriter, and convert the texture to a CVPixelBufferRef:

- (void)writeFrameForTexture:(id<MTLTexture>)texture {
    CVPixelBufferPoolRef pixelBufferPool = self.assetWriterPixelBufferInput.pixelBufferPool;
    CVPixelBufferRef pixelBuffer;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(nil, pixelBufferPool, &pixelBuffer);
    if (status != kCVReturnSuccess) {
        return;
    }
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *pixelBufferBytes = CVPixelBufferGetBaseAddress(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    // Copy the rendered BGRA texture straight into the pixel buffer's backing memory.
    [texture getBytes:pixelBufferBytes bytesPerRow:bytesPerRow fromRegion:region mipmapLevel:0];

    // presentationTime is assumed to be tracked elsewhere, e.g. derived from the frame's timestamp.
    [self.assetWriterPixelBufferInput appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
    CVPixelBufferRelease(pixelBuffer);
}

Make sure the custom renderer and the adaptor use the same pixel format.
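The recorder setup itself isn't shown above; here is a minimal Swift sketch of what it could look like (the output URL, video size, and function name are assumptions, and kCVPixelFormatType_32BGRA is chosen to match the BGRA offscreen texture):

import AVFoundation

func setUpWriter() throws -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let videoSize = CGSize(width: 640, height: 960)   // match textureSizeX/Y above
    let outputURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("arkit-capture.mp4")

    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mp4)

    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: Int(videoSize.width),
        AVVideoHeightKey: Int(videoSize.height)
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: videoSettings)
    input.expectsMediaDataInRealTime = true

    // 32BGRA matches the MTLPixelFormatBGRA8Unorm(_sRGB) texture rendered above.
    let attributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: Int(videoSize.width),
        kCVPixelBufferHeightKey as String: Int(videoSize.height)
    ]
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input, sourcePixelBufferAttributes: attributes)

    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: kCMTimeZero)
    return (writer, adaptor)
}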

I tested this with the default ship.scn: it only consumes around 30% CPU, compared to almost 90% when using the snapshot method for every frame. And it will not pop up a permission dialog.




Answer 2:


I was able to use ReplayKit to do exactly that.

To see what ReplayKit is like

On your iOS device, go to Settings -> Control Center -> Customize Controls. Move "Screen Recording" to the "Include" section, and swipe up to bring up Control Center. You should now see the round Screen Recording icon, and you'll notice that when you press it, iOS starts to record your screen. Tapping the blue bar will end recording and save the video to Photos.

Using ReplayKit, you can make your app invoke the screen recorder and capture your ARKit content.

How-to

To start recording:

RPScreenRecorder.shared().startRecording { error in
    // Handle error, if any
}
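(Optionally, check that recording is available before starting; RPScreenRecorder exposes an isAvailable property:)

guard RPScreenRecorder.shared().isAvailable else {
    // Screen recording is disabled or unavailable on this device
    return
}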

To stop recording:

RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    // Do things
})

After you're done recording, .stopRecording gives you an optional RPPreviewViewController, which is

An object that displays a user interface where users preview and edit a screen recording created with ReplayKit.

So, in our example, you can present previewVc if it isn't nil:

RPScreenRecorder.shared().stopRecording(handler: { (previewVc, error) in
    if let previewVc = previewVc {
        previewVc.delegate = self
        self.present(previewVc, animated: true, completion: nil)
    }
})

You'll be able to edit and save the video right from the previewVc, but you might want to make self (or some object) the RPPreviewViewControllerDelegate so you can easily dismiss the previewVc when you're finished.

extension MyViewController: RPPreviewViewControllerDelegate {
    func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
        // Called when the user is done with the preview; dismiss it here
        previewController.dismiss(animated: true, completion: nil)
    }
}

Caveats

You'll notice that startRecording will record "the app display", so any views you have (buttons, labels, etc.) will be recorded as well. I found it useful to hide the controls while recording and let my users know that tapping the screen stops recording, but I've also read about others having success putting their essential controls in a separate UIWindow.

Excluding views from recording

The separate UIWindow trick works. I was able to make an overlay window that held a record button and a timer, and these weren't recorded.

let overlayWindow = UIWindow(frame: view.frame)
let recordButton = UIButton( ... )
overlayWindow.backgroundColor = UIColor.clear

A UIWindow is hidden by default, so when you want to show your controls you must set isHidden to false, as in the sketch below.
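Putting that together, a rough sketch of the overlay setup (recordButton and timerLabel are placeholders for your own controls):

let overlayWindow = UIWindow(frame: view.frame)
overlayWindow.backgroundColor = .clear

// Host the controls in a lightweight, transparent root view controller.
let overlayViewController = UIViewController()
overlayViewController.view.backgroundColor = .clear
overlayViewController.view.addSubview(recordButton)
overlayViewController.view.addSubview(timerLabel)
overlayWindow.rootViewController = overlayViewController

// Keep the overlay above the main window; a UIWindow is hidden by default,
// so show it explicitly. Keep a strong reference to it (e.g. in a property).
overlayWindow.windowLevel = UIWindowLevelAlert + 1
overlayWindow.isHidden = false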

Best of luck to you!




Answer 3:


I have released an open-source framework that takes care of this: https://github.com/svtek/SceneKitVideoRecorder

It works by getting the drawables from the scene view's Metal layer.

You can attach a display link to get your renderer called as the screen refreshes:

displayLink = CADisplayLink(target: self, selector: #selector(updateDisplayLink))
displayLink?.add(to: .main, forMode: .commonModes)

And then grab the drawable from the Metal layer:

let metalLayer = sceneView.layer as! CAMetalLayer
let nextDrawable = metalLayer.nextDrawable()

Be aware that the nextDrawable() call consumes one of the layer's drawables. You should call it as rarely as possible, and do so inside an autoreleasepool {} so the drawable gets released properly and replaced with a new one.
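For example, something along these lines, where writeFrame(texture:) stands in for your own conversion and append code (similar to the Objective-C writeFrameForTexture: in the first answer):

@objc func updateDisplayLink() {
    autoreleasepool {
        guard let metalLayer = sceneView.layer as? CAMetalLayer,
              let drawable = metalLayer.nextDrawable() else { return }
        // Hand the drawable's texture off for conversion to a CVPixelBuffer
        writeFrame(texture: drawable.texture)
    }
}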

Then you should read the MTLTexture from the drawable into a pixel buffer, which you can append to an AVAssetWriter to create a video.

let destinationTexture = currentDrawable.texture
destinationTexture.getBytes(...)

With these in mind the rest is pretty straightforward video recording on iOS/Cocoa.

You can find all these implemented in the repo I've shared above.




Answer 4:


I had a similar need: I wanted to record the ARSCNView internally in the app, without ReplayKit, so that I could manipulate the video generated from the recording. I ended up using this project: https://github.com/lacyrhoades/SceneKit2Video. The project is made to render an SCNView to a video, but you can configure it to accept ARSCNViews. It works pretty well, and you can choose to get an image feed instead of the video by using the delegate function, if you like.



Source: https://stackoverflow.com/questions/45326277/take-a-video-with-arkit
