metal

MTKView Drawing Performance

无人久伴 submitted on 2019-12-03 17:39:08
Question: What I am trying to do: show filters on a camera feed by using a Metal view (MTKView). I am closely following the method of Apple's sample code, Enhancing Live Video by Leveraging TrueDepth Camera Data (link). What I have so far: the following code works great (mainly adapted from the above-mentioned sample code): class MetalObject: NSObject, MTKViewDelegate { private var metalBufferView: MTKView? private var metalDevice = MTLCreateSystemDefaultDevice() private var …
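A minimal sketch of the delegate-driven setup the excerpt's code begins: names from the question (MetalObject, metalBufferView, metalDevice) are kept, and everything past the truncation point (command queue, draw loop) is an assumption, not the asker's actual code.

```swift
import MetalKit

// Sketch, assuming the standard MTKViewDelegate pattern the question follows.
class MetalObject: NSObject, MTKViewDelegate {
    private var metalBufferView: MTKView?
    private var metalDevice = MTLCreateSystemDefaultDevice()
    private var commandQueue: MTLCommandQueue?   // assumed; not shown in the excerpt

    init(view: MTKView) {
        super.init()
        metalBufferView = view
        view.device = metalDevice
        view.framebufferOnly = false   // needed if the drawable texture is sampled or written
        view.delegate = self
        commandQueue = metalDevice?.makeCommandQueue()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) { }

    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable,
              let buffer = commandQueue?.makeCommandBuffer() else { return }
        // ... encode the filter pass into `buffer`, targeting drawable.texture ...
        buffer.present(drawable)
        buffer.commit()
    }
}
```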

Metal file as part of an iOS framework

大憨熊 submitted on 2019-12-03 17:26:31
Question: I am trying to create a framework that works with the Metal API (iOS). I am pretty new to this platform and I would like to know how to build the framework to work with .metal files (I am building a static lib, not a dynamic one). Should they be part of the .a file, or shipped as resource files in the framework bundle? Or is there another way to do that? Thanks. Update: For those who tackle this: I ended up following warrenm's first suggested option, converting the .metal file into a string and calling newLibraryWithSource:options:error:. Although it is not the best in performance, it allowed me to ship …
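The workaround the asker settled on can be sketched as follows; the kernel name and source string here are placeholders, not anything from the question.

```swift
import Metal

// Ship the shader source as a string and compile it at runtime.
// Slower on first use than a precompiled .metallib, but works from a static lib.
let kernelSource = """
#include <metal_stdlib>
using namespace metal;
kernel void passthrough(device float *data [[buffer(0)]],
                        uint id [[thread_position_in_grid]]) {
    data[id] = data[id];
}
"""

let device = MTLCreateSystemDefaultDevice()!
do {
    // Swift spelling of newLibraryWithSource:options:error:
    let library = try device.makeLibrary(source: kernelSource, options: nil)
    let function = library.makeFunction(name: "passthrough")
    _ = function
} catch {
    print("Shader compile failed: \(error)")
}
```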

iOS12 is causing an error on Metal Command Buffer execution, render is glitchy or doesn't occur

戏子无情 submitted on 2019-12-03 17:06:46
Question: We have an app that uses Metal to render. This app works correctly on devices running iOS 11. When using the same app on devices running iOS 12, we started getting glitches and sometimes hangs in the rendering. We also tried recompiling for iOS 12 and got the same bad behavior. On the console we are getting the following messages: 2018-09-22 09:22:29.508576-0500 OurApp [1286:84481] Execution of the command buffer was aborted due to an error during execution. Discarded (victim …
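One way to surface the error behind that console message programmatically is a completion handler on each command buffer; this is a generic diagnostic sketch, not the app's actual code.

```swift
import Metal

// Attach a completed handler and inspect the buffer's status and error.
// "Aborted due to an error during execution" shows up here as .error.
func commitWithDiagnostics(_ commandBuffer: MTLCommandBuffer) {
    commandBuffer.addCompletedHandler { buffer in
        if buffer.status == .error, let error = buffer.error {
            // Log the underlying MTLCommandBufferError (e.g. timeout, page fault).
            print("Command buffer failed: \(error)")
        }
    }
    commandBuffer.commit()
}
```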

SceneKit - Get the rendered scene from a SCNView as a MTLTexture without using a separate SCNRenderer

你。 submitted on 2019-12-03 16:09:56
My SCNView is using Metal as the rendering API, and I would like to know if there's a way to grab the rendered scene as a MTLTexture without having to use a separate SCNRenderer. Performance drops when I try to both display the scene via the SCNView and re-render the scene offscreen to a MTLTexture via a SCNRenderer (I'm trying to grab the output every frame). SCNView gives me access to the MTLDevice, MTLRenderCommandEncoder, and MTLCommandQueue that it uses, but not to the underlying MTLRenderPassDescriptor that I would need in order to get the MTLTexture (via renderPassDescriptor …
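For reference, the separate-SCNRenderer path the asker is trying to avoid can be sketched like this; texture size and format choices here are assumptions.

```swift
import SceneKit
import Metal
import QuartzCore

// Baseline offscreen path: render the scene into our own MTLTexture
// via an SCNRenderer and an explicit render pass descriptor.
func renderOffscreen(scene: SCNScene, device: MTLDevice,
                     queue: MTLCommandQueue, target: MTLTexture) {
    let renderer = SCNRenderer(device: device, options: nil)
    renderer.scene = scene

    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = target
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store

    guard let commandBuffer = queue.makeCommandBuffer() else { return }
    renderer.render(atTime: CACurrentMediaTime(),
                    viewport: CGRect(x: 0, y: 0,
                                     width: CGFloat(target.width),
                                     height: CGFloat(target.height)),
                    commandBuffer: commandBuffer,
                    passDescriptor: pass)
    commandBuffer.commit()   // `target` holds the rendered scene once this completes
}
```

Doing this every frame on top of the SCNView's own draw is exactly the double-render cost the question describes.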

Compute sum of array values in parallel with metal swift

非 Y 不嫁゛ submitted on 2019-12-03 13:05:50
Question: I am trying to compute the sum of a large array in parallel with Metal in Swift. Is there a good way to do it? My plan was to divide my array into sub-arrays, compute the sum of each sub-array in parallel, and then, when the parallel computation is finished, compute the sum of the sub-sums. For example, if I have array = [a_0, ..., a_n], I divide it into sub-arrays: array_1 = [a_0, ..., a_i], array_2 = [a_i+1, ..., a_2i], ..., array_n/i = [a_n-i, ..., a_n]. The sums of these sub-arrays are computed in parallel, giving sum_1, sum_2, sum_3, ..., sum_n/i; at the end I just compute the sum of the sub-sums. I created an application which runs my Metal shader, …
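The two-stage plan above maps naturally onto threadgroups: each threadgroup reduces its slice in threadgroup memory and writes one partial sum, and the CPU adds the partials. A sketch (kernel and buffer names are made up for illustration; the array length is assumed to be a multiple of the threadgroup size):

```swift
import Metal

// MSL kernel: one partial sum per threadgroup via a tree reduction
// in threadgroup memory. CPU finishes by summing the partials.
let reductionSource = """
#include <metal_stdlib>
using namespace metal;
kernel void partialSums(device const float *input    [[buffer(0)]],
                        device float       *partials [[buffer(1)]],
                        threadgroup float  *scratch  [[threadgroup(0)]],
                        uint gid [[thread_position_in_grid]],
                        uint lid [[thread_position_in_threadgroup]],
                        uint gsz [[threads_per_threadgroup]],
                        uint grp [[threadgroup_position_in_grid]]) {
    scratch[lid] = input[gid];
    threadgroup_barrier(mem_flags::mem_threadgroup);
    // Halve the active threads each step: stride = gsz/2, gsz/4, ..., 1.
    for (uint stride = gsz / 2; stride > 0; stride /= 2) {
        if (lid < stride) scratch[lid] += scratch[lid + stride];
        threadgroup_barrier(mem_flags::mem_threadgroup);
    }
    if (lid == 0) partials[grp] = scratch[0];
}
"""
// After the dispatch completes, read the partials buffer back and finish on the CPU:
// let total = partialsArray.reduce(0, +)
```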

UIImage created from MTKView results in color/opacity differences

╄→гoц情女王★ submitted on 2019-12-03 09:15:18
When I capture the contents of an MTKView into a UIImage, the resulting image looks qualitatively different, as shown below. The code I use to generate the UIImage is as follows: let kciOptions = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!, kCIContextOutputPremultiplied: true, kCIContextUseSoftwareRenderer: false] as [String : Any] let lastDrawableDisplayed = self.currentDrawable! // needed to hold the last drawable presented to screen drawingUIView.image = UIImage(ciImage: CIImage(mtlTexture: lastDrawableDisplayed.texture, options: kciOptions)!) Since I don't modify …
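A hedged sketch of the same capture path with the two settings that commonly cause such mismatches made explicit: the view must not be framebuffer-only (or its texture cannot be read back), and the color space belongs on the CIImage itself. The vertical flip compensates for Metal's top-left versus Core Image's bottom-left origin. Function and variable names here are illustrative.

```swift
import MetalKit
import UIKit

// Snapshot the last drawable of an MTKView as a UIImage.
// Assumes view.framebufferOnly was set to false before the frame was drawn.
func snapshot(of view: MTKView) -> UIImage? {
    guard let texture = view.currentDrawable?.texture else { return nil }
    let options: [CIImageOption: Any] = [
        .colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!  // tag the pixels explicitly
    ]
    guard let ciImage = CIImage(mtlTexture: texture, options: options) else { return nil }
    // Flip vertically: Metal textures are top-left origin, Core Image is bottom-left.
    let flipped = ciImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1)
        .translatedBy(x: 0, y: -ciImage.extent.height))
    return UIImage(ciImage: flipped)
}
```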

Creating a custom SCNGeometry polygon plane with SCNGeometryPrimitiveType polygon crash/error

僤鯓⒐⒋嵵緔 submitted on 2019-12-03 08:31:42
I'm trying to create a custom SCNGeometry in the form of a plane with a custom shape, which could be placed in an ARKit session. I'm using the option SCNGeometryPrimitiveTypePolygon in the following method, which seems to work fine: extension SCNGeometry { static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry { var indices: [Int32] = [Int32(vertices.count)] var index: Int32 = 0 for _ in vertices { indices.append(index) index += 1 } let vertexSource = SCNGeometrySource(vertices: vertices) let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size) let element = …
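The truncated method above can be completed as a sketch; the parts past the question's cut-off (the element construction and the final SCNGeometry) are my assumptions about the standard pattern. For .polygon the index data must begin with the polygon's vertex count, which is exactly what the leading `Int32(vertices.count)` provides, and primitiveCount is the number of polygons (1 here).

```swift
import SceneKit

extension SCNGeometry {
    static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry {
        var indices: [Int32] = [Int32(vertices.count)]   // leading polygon size
        indices.append(contentsOf: 0..<Int32(vertices.count))
        let vertexSource = SCNGeometrySource(vertices: vertices)
        let indexData = Data(bytes: indices,
                             count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .polygon,
                                         primitiveCount: 1,   // one polygon
                                         bytesPerIndex: MemoryLayout<Int32>.size)
        return SCNGeometry(sources: [vertexSource], elements: [element])
    }
}
```

Note that .polygon assumes a convex, planar vertex loop; concave outlines are a common source of the crashes this question's title mentions.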

Screen tearing and camera capture with Metal

↘锁芯ラ submitted on 2019-12-02 19:34:33
To avoid writing to a constant buffer from both the GPU and CPU at the same time, Apple recommends using a triple-buffered system with the help of a semaphore to prevent the CPU getting too far ahead of the GPU (this is fine and covered in at least three Metal videos at this stage). However, when the constant resource is an MTLTexture and the AVCaptureVideoDataOutput delegate runs separately from the rendering loop (CADisplayLink), how can a similar triple-buffered system (as used in Apple's sample code MetalVideoCapture) guarantee synchronization? Screen tearing (texture tearing) can be …
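The semaphore pattern the question refers to can be sketched as below (class and parameter names are made up). Note this alone only bounds frames in flight; it does not by itself synchronize a camera callback that writes on a different queue than the display-link draw, which is the gap the question is about.

```swift
import Metal
import Dispatch

// Triple buffering: at most maxBuffers frames in flight, gated by a semaphore.
final class FrameSync {
    static let maxBuffers = 3
    private let semaphore = DispatchSemaphore(value: FrameSync.maxBuffers)
    private var bufferIndex = 0

    func draw(queue: MTLCommandQueue,
              encode: (MTLCommandBuffer, Int) -> Void) {
        semaphore.wait()                       // block if the GPU is 3 frames behind
        guard let commandBuffer = queue.makeCommandBuffer() else {
            semaphore.signal(); return
        }
        encode(commandBuffer, bufferIndex)     // CPU writes only into slot bufferIndex
        commandBuffer.addCompletedHandler { [semaphore] _ in
            semaphore.signal()                 // GPU is done reading this slot
        }
        bufferIndex = (bufferIndex + 1) % FrameSync.maxBuffers
        commandBuffer.commit()
    }
}
```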