metal

Blend Mode in Metal

Submitted by 别等时光非礼了梦想 on 2021-02-20 06:30:24
Question: These are the two blend modes I used in OpenGL. What is the equivalent configuration in Metal on iOS?

    glEnable(GL_BLEND);
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE, GL_SRC_ALPHA, GL_ONE);

Answer 1: You configure blending on your render pipeline descriptor. I believe the equivalent configurations for your GL code are:

    // glEnable(GL_BLEND)
    renderPipelineDescriptor.colorAttachments[0].isBlendingEnabled = true
    // …
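The answer is truncated above; a minimal sketch of the full mapping it describes, assuming the standard MTLBlendFactor equivalents of the GL enums (.sourceAlpha for GL_SRC_ALPHA, .oneMinusSourceAlpha for GL_ONE_MINUS_SRC_ALPHA, .one for GL_ONE). The two glBlendFuncSeparate calls are alternative modes, so each would be set on its own pipeline descriptor:

    let attachment = renderPipelineDescriptor.colorAttachments[0]!

    // glEnable(GL_BLEND)
    attachment.isBlendingEnabled = true

    // glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
    attachment.sourceRGBBlendFactor = .sourceAlpha
    attachment.destinationRGBBlendFactor = .oneMinusSourceAlpha
    attachment.sourceAlphaBlendFactor = .one
    attachment.destinationAlphaBlendFactor = .oneMinusSourceAlpha

    // Or, for glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE, GL_SRC_ALPHA, GL_ONE):
    attachment.sourceRGBBlendFactor = .sourceAlpha
    attachment.destinationRGBBlendFactor = .one
    attachment.sourceAlphaBlendFactor = .sourceAlpha
    attachment.destinationAlphaBlendFactor = .one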

Very slow framerate with AVFoundation and Metal in macOS

Submitted by 送分小仙女 on 2021-02-17 05:56:47
Question: I'm trying to adapt Apple's AVCamFilter sample to macOS. The filtering appears to work, but rendering the processed image through Metal gives me a framerate of several seconds per frame. I've tried different approaches, but I've been stuck for a long time. This is the project: AVCamFilterMacOS. Can anyone with better knowledge of AVFoundation and Metal tell me what's wrong? I've been reading the documentation and practicing getting the unprocessed image to display, as well as rendering other …
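No answer is captured in this digest. One configuration worth checking in a capture-to-Metal pipeline (an assumption on my part, not the confirmed fix for AVCamFilterMacOS): deliver BGRA frames on a dedicated serial queue and let the session drop frames the renderer can't keep up with, so a slow draw path stalls at the display rate rather than backing up into seconds per frame:

    import AVFoundation

    final class FrameHandler: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // Hand the frame's pixel buffer to the Metal renderer here.
        }
    }

    let handler = FrameHandler()
    let output = AVCaptureVideoDataOutput()
    // Metal-friendly pixel format, delivered off the main thread.
    output.videoSettings = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    output.alwaysDiscardsLateVideoFrames = true
    output.setSampleBufferDelegate(handler, queue: DispatchQueue(label: "camera.frames"))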

stdatomic.h not found, for use in Swift & Metal compute shader

Submitted by 青春壹個敷衍的年華 on 2021-02-08 10:10:55
Question: I'm trying to use a struct with an atomic_int in a Metal compute shader. However, the compiler says I need to #include "stdatomic.h", but every time I try, it can't find the file:

    #include "stdatomic.h" // 'stdatomic.h' file not found

I'm trying to build my application for macOS Catalina.

    struct Fitness {
        atomic_int weight; // Declaration of 'atomic_int' must be imported from module 'Darwin.C.stdatomic' before it is required
        ...others...
    };

I have tried placing a copy of stdatomic.h into …
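No answer is captured here, but the error is consistent with a known limitation: the Metal shader compiler provides no C standard library, so stdatomic.h can never resolve on the shader side; MSL's atomic types come from <metal_stdlib> instead. A sketch of one common workaround, a shared header switched on the Metal compiler's predefined __METAL_VERSION__ macro (the typedef name is hypothetical):

    // Shared.h — included from both host code and the .metal file
    #ifdef __METAL_VERSION__
        #include <metal_stdlib>              // MSL atomics live here
        typedef metal::atomic_int counter_t;
    #else
        #include <stdatomic.h>               // host-side C11 atomics
        typedef atomic_int counter_t;
    #endif

    struct Fitness {
        counter_t weight;                    // 4 bytes on both sides
    };

Note that MSL only permits atomic types in device or threadgroup memory, so a Fitness value is usable in a kernel only through a device buffer.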

How to create an MTLTexture backed by a CVPixelBuffer

Submitted by 落花浮王杯 on 2021-02-07 13:22:51
Question: What's the correct way to generate an MTLTexture backed by a CVPixelBuffer? I have the following code, but it seems to leak:

    func PixelBufferToMTLTexture(pixelBuffer: CVPixelBuffer) -> MTLTexture {
        var texture: MTLTexture!
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let format: MTLPixelFormat = .BGRA8Unorm
        var textureRef: Unmanaged<CVMetalTextureRef>?
        let status = CVMetalTextureCacheCreateTextureFromImage(nil, videoTextureCache!, …
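The snippet is cut off, but the Unmanaged<CVMetalTextureRef> is a strong hint about the leak: the pre-Swift-3 Create API returned a +1 reference the caller had to release. A sketch using the current, automatically managed API, assuming a CVMetalTextureCache created earlier with CVMetalTextureCacheCreate:

    import CoreVideo
    import Metal

    func makeTexture(from pixelBuffer: CVPixelBuffer,
                     cache: CVMetalTextureCache) -> MTLTexture? {
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        var cvTexture: CVMetalTexture?
        let status = CVMetalTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            .bgra8Unorm, width, height, 0, &cvTexture)
        guard status == kCVReturnSuccess, let cvTexture = cvTexture else { return nil }
        // Keep the CVMetalTexture alive for as long as the MTLTexture is in use,
        // or the backing IOSurface can be recycled out from under the GPU.
        return CVMetalTextureGetTexture(cvTexture)
    }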

Is drawing to an MTKView or CAMetalLayer required to take place on the main thread?

Submitted by 笑着哭i on 2021-02-07 04:17:01
Question: It's well known that updating the user interface in AppKit or UIKit must take place on the main thread. Does Metal have the same requirement when it comes to presenting a drawable? In a layer-hosted NSView that I've been playing around with, I've noticed that I can call [CAMetalLayer nextDrawable] from a dispatch_queue that is not the main queue. I can then update that drawable's texture as usual and present it. This appears to work properly, but I find it rather suspicious.
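No answer is captured in this digest. For what it's worth, render loops driven by a CVDisplayLink already run off the main thread, and the usual precaution is to confine the layer and its drawables to a single serial queue rather than the main queue. A sketch of the pattern the asker describes (hypothetical names):

    import Metal
    import QuartzCore
    import Dispatch

    let renderQueue = DispatchQueue(label: "render")   // sole owner of the layer

    func renderFrame(on layer: CAMetalLayer, using commandQueue: MTLCommandQueue) {
        renderQueue.async {
            guard let drawable = layer.nextDrawable(),
                  let commandBuffer = commandQueue.makeCommandBuffer() else { return }
            // ... encode render passes that target drawable.texture ...
            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }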

Rendering Terrain Dynamically with Argument Buffers: Understanding why the particle buffer is not overwritten by the GPU in flight

Submitted by 六月ゝ 毕业季﹏ on 2021-01-29 22:18:31
Question: I am looking through an Apple demo project associated with the 2017 WWDC session "Introducing Metal 2", in which the developers demonstrate the use of argument buffers. The project is linked on the page titled "Rendering Terrain Dynamically with Argument Buffers" on the Apple developer website. Here, they synchronize the CPU's resource writes to prevent race conditions with a dispatch_semaphore_t, signaling it when the command buffer finishes executing on the GPU and waiting …
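The question is truncated, but the pattern it refers to is the standard frames-in-flight scheme: the CPU waits on a counting semaphore before writing into one of N buffer slots, and each command buffer signals the semaphore from its completion handler, so a slot is never overwritten while a command buffer that reads it is still executing. A minimal Swift sketch of the scheme (hypothetical names, not the demo's actual code):

    import Metal
    import Dispatch

    let maxFramesInFlight = 3
    let inFlightSemaphore = DispatchSemaphore(value: maxFramesInFlight)
    var frameIndex = 0

    func drawFrame(commandQueue: MTLCommandQueue, particleBuffers: [MTLBuffer]) {
        inFlightSemaphore.wait()                     // block until a slot is free
        frameIndex = (frameIndex + 1) % maxFramesInFlight

        guard let commandBuffer = commandQueue.makeCommandBuffer() else {
            inFlightSemaphore.signal()
            return
        }
        // Safe to write: no in-flight command buffer still reads this slot.
        // ... update particleBuffers[frameIndex].contents() ...

        commandBuffer.addCompletedHandler { _ in
            inFlightSemaphore.signal()               // GPU is done with the slot
        }
        commandBuffer.commit()
    }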

Swift: Casting int tuples to custom type containing float vectors

Submitted by 巧了我就是萌 on 2021-01-29 12:53:49
Question: Both original answers to this question are satisfactory but reach the solution in slightly different ways; I opted for the one I found simpler to implement. I'm attempting to translate some Objective-C (from this Apple Metal doc/example) and Metal code into Swift, but I'm having some trouble with this bit. Here is the typedef I'm using, which is necessary so that the Metal shaders can compute my vertex data (the float vectors from simd.h are significant):

    #include <simd/simd.h>
    typedef …
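The typedef is cut off above. On the Swift side, simd's vector_float2/vector_float4 fields surface as SIMD2<Float>/SIMD4<Float>, so converting integer tuples usually reduces to an element-wise Float conversion. A minimal sketch with a hypothetical vertex shape, since the original struct is truncated:

    import simd

    // Hypothetical Swift mirror of a simd-based C vertex struct.
    struct Vertex {
        var position: SIMD2<Float>
        var color: SIMD4<Float>
    }

    let points: [(Int, Int)] = [(0, 0), (640, 0), (320, 480)]
    let vertices = points.map { p in
        Vertex(position: SIMD2<Float>(Float(p.0), Float(p.1)),
               color: SIMD4<Float>(1, 1, 1, 1))
    }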