How to record video from ARKit?


Question


I'm currently testing an ARKit/SceneKit implementation. The basic rendering to the screen is working, so next I want to try recording what I see on the screen into a video.

For recording plain SceneKit, I found this Gist:

//
//  ViewController.swift
//  SceneKitToVideo
//
//  Created by Lacy Rhoades on 11/29/16.
//  Copyright © 2016 Lacy Rhoades. All rights reserved.
//

import SceneKit
import GPUImage
import Photos

class ViewController: UIViewController {

    // Renders a scene (and shows it on the screen)
    var scnView: SCNView!

    // Another renderer
    var secondaryRenderer: SCNRenderer?

    // Abducts image data via an OpenGL texture
    var textureInput: GPUImageTextureInput?

    // Receives image data from textureInput, shows it on screen
    var gpuImageView: GPUImageView!

    // Receives image data from the textureInput, writes it to a file
    var movieWriter: GPUImageMovieWriter?

    // Where to write the output file
    let path = NSTemporaryDirectory().appending("tmp.mp4")

    // Output file dimensions
    let videoSize = CGSize(width: 800.0, height: 600.0)

    // EAGLContext in the sharegroup with GPUImage
    var eaglContext: EAGLContext!

    override func viewDidLoad() {
        super.viewDidLoad()

        let group = GPUImageContext.sharedImageProcessing().context.sharegroup
        self.eaglContext = EAGLContext(api: .openGLES2, sharegroup: group )
        let options = ["preferredRenderingAPI": SCNRenderingAPI.openGLES2]

        // Main view with 3d in it
        self.scnView = SCNView(frame: CGRect.zero, options: options)
        self.scnView.preferredFramesPerSecond = 60
        self.scnView.eaglContext = eaglContext
        self.scnView.translatesAutoresizingMaskIntoConstraints = false
        self.view.addSubview(self.scnView)

        // Secondary renderer for rendering to an OpenGL framebuffer
        self.secondaryRenderer = SCNRenderer(context: eaglContext, options: options)

        // Output of the GPUImage pipeline
        self.gpuImageView = GPUImageView()
        self.gpuImageView.translatesAutoresizingMaskIntoConstraints = false
        self.view.addSubview(self.gpuImageView)

        self.setupConstraints()

        self.setupScene()

        self.setupMovieWriter()

        DispatchQueue.main.async {
            self.setupOpenGL()
        }
    }

    func setupConstraints() {
        let relativeWidth: CGFloat = 0.8

        self.view.addConstraint(NSLayoutConstraint(item: self.scnView, attribute: .width, relatedBy: .equal, toItem: self.view, attribute: .width, multiplier: relativeWidth, constant: 0))
        self.view.addConstraint(NSLayoutConstraint(item: self.scnView, attribute: .centerX, relatedBy: .equal, toItem: self.view, attribute: .centerX, multiplier: 1, constant: 0))

        self.view.addConstraint(NSLayoutConstraint(item: self.gpuImageView, attribute: .width, relatedBy: .equal, toItem: self.view, attribute: .width, multiplier: relativeWidth, constant: 0))
        self.view.addConstraint(NSLayoutConstraint(item: self.gpuImageView, attribute: .centerX, relatedBy: .equal, toItem: self.view, attribute: .centerX, multiplier: 1, constant: 0))

        self.view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|-(==30.0)-[scnView]-(==30.0)-[gpuImageView]", options: [], metrics: [:], views: ["gpuImageView": gpuImageView, "scnView": scnView]))

        let videoRatio = self.videoSize.width / self.videoSize.height
        self.view.addConstraint(NSLayoutConstraint(item: self.scnView, attribute: .width, relatedBy: .equal, toItem: self.scnView, attribute: .height, multiplier: videoRatio, constant: 1))
        self.view.addConstraint(NSLayoutConstraint(item: self.gpuImageView, attribute: .width, relatedBy: .equal, toItem: self.gpuImageView, attribute: .height, multiplier: videoRatio, constant: 1))
    }

    override func viewDidAppear(_ animated: Bool) {
        self.cameraBoxNode.runAction(
            SCNAction.repeatForever(
                SCNAction.rotateBy(x: 0.0, y: -2 * CGFloat.pi, z: 0.0, duration: 8.0)
            )
        )

        self.scnView.isPlaying = true

        Timer.scheduledTimer(withTimeInterval: 5.0, repeats: false, block: {
            timer in
            self.startRecording()
        })
    }

    var scene: SCNScene!
    var geometryNode: SCNNode!
    var cameraNode: SCNNode!
    var cameraBoxNode: SCNNode!
    var imageMaterial: SCNMaterial!
    func setupScene() {
        self.imageMaterial = SCNMaterial()
        self.imageMaterial.isDoubleSided = true
        self.imageMaterial.diffuse.contentsTransform = SCNMatrix4MakeScale(-1, 1, 1)
        self.imageMaterial.diffuse.wrapS = .repeat
        self.imageMaterial.diffuse.contents = UIImage(named: "pano_01")

        self.scene = SCNScene()

        let sphere = SCNSphere(radius: 100.0)
        sphere.materials = [imageMaterial!]
        self.geometryNode = SCNNode(geometry: sphere)
        self.geometryNode.position = SCNVector3Make(0.0, 0.0, 0.0)
        scene.rootNode.addChildNode(self.geometryNode)

        self.cameraNode = SCNNode()
        self.cameraNode.camera = SCNCamera()
        self.cameraNode.camera?.yFov = 72.0
        self.cameraNode.position = SCNVector3Make(0, 0, 0)
        self.cameraNode.eulerAngles = SCNVector3Make(0.0, 0.0, 0.0)

        self.cameraBoxNode = SCNNode()
        self.cameraBoxNode.addChildNode(self.cameraNode)
        scene.rootNode.addChildNode(self.cameraBoxNode)

        self.scnView.scene = scene
        self.scnView.backgroundColor = UIColor.darkGray
        self.scnView.autoenablesDefaultLighting = true
    }

    func setupMovieWriter() {
        let _ = FileUtil.mkdirUsingFile(path)
        let _ = FileUtil.unlink(path)
        let url = URL(fileURLWithPath: path)
        self.movieWriter = GPUImageMovieWriter(movieURL: url, size: self.videoSize)
    }

    let glRenderQueue = GPUImageContext.sharedContextQueue()!
    var outputTexture: GLuint = 0
    var outputFramebuffer: GLuint = 0
    func setupOpenGL() {
        self.glRenderQueue.sync {
            let context = EAGLContext.current()
            if context != self.eaglContext {
                EAGLContext.setCurrent(self.eaglContext)
            }

            glGenFramebuffers(1, &self.outputFramebuffer)
            glBindFramebuffer(GLenum(GL_FRAMEBUFFER), self.outputFramebuffer)

            glGenTextures(1, &self.outputTexture)
            glBindTexture(GLenum(GL_TEXTURE_2D), self.outputTexture)
        }

        // Pipe the texture into GPUImage-land
        self.textureInput = GPUImageTextureInput(texture: self.outputTexture, size: self.videoSize)

        let rotate = GPUImageFilter()
        rotate.setInputRotation(kGPUImageFlipVertical, at: 0)
        self.textureInput?.addTarget(rotate)
        rotate.addTarget(self.gpuImageView)

        if let writer = self.movieWriter {
            rotate.addTarget(writer)
        }

        // Call me back on every render
        self.scnView.delegate = self
    }

    func renderToFramebuffer(atTime time: TimeInterval) {
        self.glRenderQueue.sync {
            let context = EAGLContext.current()
            if context != self.eaglContext {
                EAGLContext.setCurrent(self.eaglContext)
            }

            objc_sync_enter(self.eaglContext)

            let width = GLsizei(self.videoSize.width)
            let height = GLsizei(self.videoSize.height)

            glBindFramebuffer(GLenum(GL_FRAMEBUFFER), self.outputFramebuffer)
            glBindTexture(GLenum(GL_TEXTURE_2D), self.outputTexture)

            glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGBA, width, height, 0, GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), nil)

            glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)
            glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
            glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_S), GL_CLAMP_TO_EDGE)
            glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_WRAP_T), GL_CLAMP_TO_EDGE)

            glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), self.outputTexture, 0)

            glViewport(0, 0, width, height)

            glClear(GLbitfield(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT))

            self.secondaryRenderer?.render(atTime: time)

            self.videoBuildingQueue.sync {
                self.textureInput?.processTexture(withFrameTime: CMTime(seconds: time, preferredTimescale: 100000))
            }

            objc_sync_exit(self.eaglContext)
        }

    }

    func startRecording() {
        self.startRecord()
        Timer.scheduledTimer(withTimeInterval: 24.0, repeats: false, block: {
            timer in
            self.stopRecord()
        })
    }

    let videoBuildingQueue = DispatchQueue.global(qos: .default)

    func startRecord() {
        self.videoBuildingQueue.sync {
            //inOrientation: CGAffineTransform(scaleX: 1.0, y: -1.0)
            self.movieWriter?.startRecording()
        }
    }

    var renderStartTime: TimeInterval = 0

    func stopRecord() {
        self.videoBuildingQueue.sync {
            self.movieWriter?.finishRecording(completionHandler: {
                self.saveFileToCameraRoll()
            })
        }
    }

    func saveFileToCameraRoll() {
        assert(FileUtil.fileExists(self.path), "Check for file output")

        DispatchQueue.global(qos: .utility).async {
            PHPhotoLibrary.shared().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: URL(fileURLWithPath: self.path))
            }) { (done, err) in
                if err != nil {
                    print("Error creating video file in library")
                    print(err.debugDescription)
                } else {
                    print("Done writing asset to the user's photo library")
                }
            }
        }
    }

}

extension ViewController: SCNSceneRendererDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
        self.secondaryRenderer?.scene = scene
        self.secondaryRenderer?.pointOfView = (renderer as! SCNView).pointOfView
        self.renderToFramebuffer(atTime: time)
    }
}

but this doesn't render the image from the device camera.

So I started searching for a way to do that as well. So far I have found that I can grab the captured camera image as a CVImageBufferRef by accessing the ARFrame, and Apple's GLCameraRipple example seems like it could help me turn that into an OpenGL texture.

But my question is how to draw it in the rendering loop. This might be obvious to anyone experienced with OpenGL, but I have very little knowledge of it, so I cannot figure out how to add that to the code above.
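
For reference, this is roughly as far as I've got, following GLCameraRipple (just a sketch; the texture-cache setup and handling only the luma plane are my own simplifications, and wiring this into the render loop above is the part I can't figure out):

import ARKit
import CoreVideo
import OpenGLES

var capturedTextureCache: CVOpenGLESTextureCache?

// Create the texture cache once, with the same EAGLContext used for rendering above.
func setupCapturedTextureCache(context: EAGLContext) {
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &capturedTextureCache)
}

// Wraps the current camera image (a YCbCr CVPixelBuffer) in an OpenGL texture.
func lumaTexture(for session: ARSession) -> CVOpenGLESTexture? {
    guard let frame = session.currentFrame,
          let cache = capturedTextureCache else { return nil }

    let pixelBuffer = frame.capturedImage
    let width = CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)
    let height = CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)

    var texture: CVOpenGLESTexture?
    CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        GLenum(GL_TEXTURE_2D), GL_LUMINANCE,
        GLsizei(width), GLsizei(height),
        GLenum(GL_LUMINANCE), GLenum(GL_UNSIGNED_BYTE),
        0,          // plane 0 = luma; plane 1 holds the CbCr data
        &texture)

    // CVOpenGLESTextureGetName(texture) gives the GLuint to bind before drawing.
    return texture
}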


Answer 1:


You can record everything seen on the screen (or live-stream it to services like Twitch, for that matter) using ReplayKit; ARKit and SceneKit content is included.

(As Apple pointed out at WWDC, ReplayKit is actually the basis for the Control Center screen recording feature in iOS 11.)
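
A minimal sketch of that route using the standard RPScreenRecorder API (the start/stop triggers and error handling here are placeholders):

import ReplayKit
import UIKit

// The shared recorder captures whatever is on screen, including ARKit/SceneKit content.
let recorder = RPScreenRecorder.shared()

func startScreenRecording() {
    recorder.startRecording { error in
        if let error = error {
            print("Could not start recording: \(error)")
        }
    }
}

func stopScreenRecording(presentingFrom viewController: UIViewController) {
    recorder.stopRecording { previewController, error in
        // ReplayKit hands back a preview/share sheet instead of a file URL;
        // the user saves or shares the video from there.
        if let previewController = previewController {
            viewController.present(previewController, animated: true)
        }
    }
}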




Answer 2:


Don't know if you have managed to answer this by now or not, but lacyrhoades, the person who wrote the class you referenced, has released another project on GitHub that seems to do what you're asking for. I've used it, and it manages to record the SceneView with AR objects as well as the camera input. You can find it through this link:

https://github.com/lacyrhoades/SCNKit2Video

If you want to use it with AR, though, you have to wire an ARSCNView into the project he made, as his version only runs a plain SceneView, not one with AR.
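
A rough sketch of what that configuration looks like, assuming a standard ARKit setup rather than anything taken from the SCNKit2Video project itself (class and property names below are illustrative):

import ARKit
import SceneKit
import UIKit

class ARRecordingViewController: UIViewController {

    // Replaces the plain SCNView used by the SCNKit2Video project with an AR-backed view.
    let sceneView = ARSCNView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        sceneView.scene = SCNScene()
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // World tracking draws the camera feed as the scene background automatically.
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }
}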

Hope it helps.




Answer 3:


I just found this framework, called ARVideoKit, and it seems easy to implement; plus it has more features, such as capturing GIFs and Live Photos.

The framework official repo is: https://github.com/AFathi/ARVideoKit/

To install it, you'd have to clone the repo and drag the .framework file into your project's embedded binaries.

Then the implementation is pretty simple (a consolidated sketch follows the steps):

  1. import ARVideoKit in your UIViewController class

  2. Create a RecordAR? variable

    var videoRec:RecordAR?

  3. Initialize your variable in viewDidLoad

    videoRec = RecordAR(ARSceneKit: sceneView)

  4. Prepare RecordAR in viewWillAppear

    videoRec?.prepare(configuration)

  5. Begin recording a video

    videoRec?.record()

  6. Stop and export to camera roll!

    videoRec?.stopAndExport()
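
Putting those steps together in one place, a minimal sketch could look like this (only the calls listed above are used; the sceneView outlet and the exact recording triggers are assumptions):

import UIKit
import ARKit
import ARVideoKit

class RecordingViewController: UIViewController {

    @IBOutlet var sceneView: ARSCNView!   // assumed outlet to your ARSCNView
    var videoRec: RecordAR?

    override func viewDidLoad() {
        super.viewDidLoad()
        videoRec = RecordAR(ARSceneKit: sceneView)   // step 3
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
        videoRec?.prepare(configuration)             // step 4
    }

    // Hook these up to buttons, gestures, etc.
    func startRecording() {
        videoRec?.record()                           // step 5
    }

    func finishRecording() {
        videoRec?.stopAndExport()                    // step 6: stop and save to the camera roll
    }
}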

Take a look at the framework's documentation; it supports more features you can use!

You can find their documentation here: https://github.com/AFathi/ARVideoKit/wiki

Hope that helped!




Answer 4:


If you can keep your device connected to your Mac, it is really easy to just use QuickTime Player to record the screen (and sound) of your iOS device.

In QuickTime, choose New Movie Recording from the File menu. In the recording window, next to the big red record button, there is a little dropdown arrow where you can pick the audio and video input. Choose your iOS device there and you're set to go.



Source: https://stackoverflow.com/questions/44792778/how-to-record-video-from-arkit
