How to Draw an Image in an NSOpenGLView with Swift?

Submitted by 久未见 on 2019-12-12 21:12:22

Question


Basically, I want to create an image view that uses OpenGL for rendering. My eventual plan is to use this as the base for a video player with CIFilters.

I followed a tutorial that emphasized using OpenGL to take advantage of the GPU. The tutorial was for iOS; I mapped it to Cocoa.

I have no idea where I am failing, but all I get is a blank screen.

Here is the View.

import Cocoa
import OpenGL.GL3

class CoreImageView: NSOpenGLView {
    var coreImageContext: CIContext?
    var image: CIImage? {
        didSet {
            display()
        }
    }

    override init?(frame frameRect: NSRect, pixelFormat format: NSOpenGLPixelFormat?) {
        //Bad programming - Code duplication
        let attrs: [NSOpenGLPixelFormatAttribute] = [
            UInt32(NSOpenGLPFAAccelerated),
            UInt32(NSOpenGLPFAColorSize), UInt32(32),
            UInt32(NSOpenGLPFAOpenGLProfile),
            UInt32( NSOpenGLProfileVersion3_2Core),
            UInt32(0)
        ]
        let pf = NSOpenGLPixelFormat(attributes: attrs)
        super.init(frame: frameRect, pixelFormat: pf)
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        initialize()
    }

    //Bad programming - Code duplication
    func defaultPixelFormat()->NSOpenGLPixelFormat?{
        let attrs: [NSOpenGLPixelFormatAttribute] = [
            UInt32(NSOpenGLPFAAccelerated),
            UInt32(NSOpenGLPFAColorSize), UInt32(32),
            UInt32(NSOpenGLPFAOpenGLProfile),
            UInt32( NSOpenGLProfileVersion3_2Core),
            UInt32(0)
        ]
        return NSOpenGLPixelFormat(attributes: attrs)
    }

    func initialize(){

        guard let pf = defaultPixelFormat() else {
            Swift.print("pixelFormat could not be constructed")
            return
        }
        self.pixelFormat = pf

        guard let context = NSOpenGLContext(format: pf, share: nil) else {
            Swift.print("context could not be constructed")
            return
        }
        self.openGLContext = context

        if let cglContext = context.cglContextObj {
            coreImageContext = CIContext(cglContext: cglContext, pixelFormat: pixelFormat?.cglPixelFormatObj, colorSpace: nil, options: nil)
        }else{
            Swift.print("cglContext could not be constructed")
            coreImageContext = CIContext(options: nil)
        }
    }

    //--------------------------

    override func draw(_ dirtyRect: NSRect) {
        if let img = image {
            let scale = self.window?.screen?.backingScaleFactor ?? 1.0
            let destRect = bounds.applying(CGAffineTransform(scaleX: scale, y: scale))
            coreImageContext?.draw(img, in: destRect, from: img.extent)
        }
    }

}

Any help is appreciated. The complete project is here (Xcode 8) and here (Xcode 7).


Answer 1:


I might suggest checking out Simon's Core Image helpers for this -- he has a class on his GitHub that basically tells Core Image to render via the GPU using an OpenGL ES 2.0 context. It was really helpful for me when I was trying to figure out how to render via the GPU. It's a really good idea not to transfer to the CPU to render, because that transfer takes a (relatively) long time.

https://github.com/FlexMonkey/CoreImageHelpers
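Those helpers target iOS/OpenGL ES, though. On macOS, a likely reason the view in the question stays blank is that `draw(_:)` never makes the GL context current and never flushes it, so the Core Image draw has nowhere to go. Here is a minimal sketch of what the `draw(_:)` override could look like instead; it assumes the `image`, `coreImageContext`, and `openGLContext` properties from the question's `CoreImageView`:

```swift
import Cocoa
import OpenGL.GL3

// Sketch only: drop-in replacement for the draw(_:) in the question.
// The key additions are makeCurrentContext() and glFlush(); without
// them the CIContext renders into no current context and nothing
// appears on screen.
override func draw(_ dirtyRect: NSRect) {
    guard let context = openGLContext else { return }
    context.makeCurrentContext()

    // Clear to opaque black so stale buffer contents never show through.
    glClearColor(0, 0, 0, 1)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))

    if let img = image {
        // CIContext.draw works in pixels, while bounds is in points,
        // so scale by the screen's backing scale factor (Retina = 2.0).
        let scale = window?.screen?.backingScaleFactor ?? 1.0
        let destRect = bounds.applying(CGAffineTransform(scaleX: scale, y: scale))
        coreImageContext?.draw(img, in: destRect, from: img.extent)
    }

    // Single-buffered context: glFlush pushes the commands to the GPU.
    // With a double-buffered pixel format, use context.flushBuffer() instead.
    glFlush()
}
```

If the view is created from a storyboard or nib, also note that `init(coder:)` in the question hits `fatalError` and `initialize()` is only called from `init(frame:)`, so `coreImageContext` may never be set up on that path.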



Source: https://stackoverflow.com/questions/39080469/how-to-draw-an-image-in-an-nsopenglview-with-swift
