Metal

Metal emulate geometry shaders using compute shaders

北战南征 submitted on 2020-01-02 03:40:09
Question: I'm trying to implement voxel cone tracing in Metal. One of the steps in the algorithm is to voxelize the geometry using a geometry shader. Metal does not have geometry shaders, so I was looking into emulating them with a compute shader: I pass my vertex buffer into the compute shader, do what a geometry shader would normally do, and write the result to an output buffer. I also add a draw command to an indirect buffer, and I use the output buffer as the vertex buffer for my vertex shader. This…
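A minimal sketch of that pattern, with hypothetical vertex layouts and buffer indices (none of this is the poster's actual code): one compute thread per input triangle does the per-primitive work, then reserves output slots with an atomic counter.

```metal
#include <metal_stdlib>
using namespace metal;

// Hypothetical layouts; match these to the real vertex format.
struct VertexIn  { float3 position; float3 normal; };
struct VertexOut { float3 position; float3 normal; };

// One thread per input triangle.
kernel void emulatedGeometryShader(device const VertexIn *inVertices     [[buffer(0)]],
                                   device VertexOut      *outVertices    [[buffer(1)]],
                                   device atomic_uint    *outVertexCount [[buffer(2)]],
                                   constant uint         &triangleCount  [[buffer(3)]],
                                   uint tid [[thread_position_in_grid]])
{
    if (tid >= triangleCount) return;

    // Reserve three output slots for this primitive.
    uint outBase = atomic_fetch_add_explicit(outVertexCount, 3u, memory_order_relaxed);

    for (uint i = 0; i < 3; ++i) {
        VertexIn v = inVertices[tid * 3 + i];
        VertexOut o;
        // The per-primitive work a geometry shader would do (for
        // voxelization, e.g. projecting along the dominant axis)
        // goes here; this sketch just passes the vertex through.
        o.position = v.position;
        o.normal   = v.normal;
        outVertices[outBase + i] = o;
    }
}
```

The final value of outVertexCount is what belongs in the vertexCount field of MTLDrawPrimitivesIndirectArguments; a second one-thread kernel can copy it there so the whole pipeline stays on the GPU.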

Metal SCNProgram - can't render a SpriteKit scene that has video content

不问归期 submitted on 2020-01-01 12:08:10
Question: I'm (desperately) trying to use a video as a texture in an SCNScene with some fancy shader modifiers. I'd like to use an SCNProgram for that part. I've just taken the one from here:

```metal
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct MyNodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float4x4 modelViewProjectionTransform;
};

typedef struct {
    float3 position [[attribute(SCNVertexSemanticPosition)]];
    float2 texCoords [[…
```
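The excerpt cuts off mid-declaration; for context, a complete vertex/fragment pair in the usual SCNProgram style looks roughly like this (the scn_frame/scn_node buffer indices follow SceneKit's documented convention; the function names and the texture binding at index 0 are my assumptions):

```metal
#include <metal_stdlib>
using namespace metal;
#include <SceneKit/scn_metal>

struct MyNodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float4x4 modelViewProjectionTransform;
};

typedef struct {
    float3 position  [[attribute(SCNVertexSemanticPosition)]];
    float2 texCoords [[attribute(SCNVertexSemanticTexcoord0)]];
} VertexInput;

struct VertexOut {
    float4 position [[position]];
    float2 texCoords;
};

vertex VertexOut videoVertex(VertexInput in                    [[stage_in]],
                             constant SCNSceneBuffer &scn_frame [[buffer(0)]],
                             constant MyNodeBuffer   &scn_node  [[buffer(1)]])
{
    VertexOut out;
    out.position  = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    out.texCoords = in.texCoords;
    return out;
}

fragment half4 videoFragment(VertexOut in          [[stage_in]],
                             texture2d<half> video [[texture(0)]])
{
    constexpr sampler s(filter::linear, address::clamp_to_edge);
    return video.sample(s, in.texCoords);
}
```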

SceneKit - Get the rendered scene from a SCNView as a MTLTexture without using a separate SCNRenderer

左心房为你撑大大i submitted on 2020-01-01 05:35:09
Question: My SCNView is using Metal as the rendering API, and I would like to know whether there's a way to grab the rendered scene as an MTLTexture without having to use a separate SCNRenderer. Performance drops when I both display the scene via the SCNView and re-render it offscreen to an MTLTexture via an SCNRenderer (I'm trying to grab the output every frame). SCNView gives me access to the MTLDevice, MTLRenderCommandEncoder, and MTLCommandQueue that it uses, but not to the…
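One possible approach, sketched here as an assumption rather than a confirmed answer (it requires macOS 10.15 / iOS 13, where SCNSceneRenderer gained currentRenderPassDescriptor and MTLBlitCommandEncoder gained whole-texture copy): let a SCNSceneRendererDelegate blit the color attachment SceneKit just rendered into a private texture. The backing CAMetalLayer must have framebufferOnly set to false, or the blit read is disallowed.

```swift
import SceneKit
import Metal

final class FrameCapture: NSObject, SCNSceneRendererDelegate {
    private(set) var capturedTexture: MTLTexture?

    func renderer(_ renderer: SCNSceneRenderer,
                  didRenderScene scene: SCNScene,
                  atTime time: TimeInterval) {
        guard let source = renderer.currentRenderPassDescriptor.colorAttachments[0].texture,
              let device = renderer.device,
              let queue  = renderer.commandQueue,
              let commandBuffer = queue.makeCommandBuffer() else { return }

        // Lazily (re)create a destination texture matching the drawable.
        if capturedTexture?.width != source.width || capturedTexture?.height != source.height {
            let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: source.pixelFormat,
                                                                width: source.width,
                                                                height: source.height,
                                                                mipmapped: false)
            desc.usage = .shaderRead
            capturedTexture = device.makeTexture(descriptor: desc)
        }

        guard let destination = capturedTexture,
              let blit = commandBuffer.makeBlitCommandEncoder() else { return }
        blit.copy(from: source, to: destination)  // whole-texture copy
        blit.endEncoding()
        commandBuffer.commit()
    }
}
```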

How to render each pixel of a bitmap texture to each native physical pixel of the screen on macOS?

我怕爱的太早我们不能终老 submitted on 2019-12-31 04:00:50
Question: As modern macOS devices default to a scaled HiDPI resolution, bitmap images get blurred on screen. Is there a way to render a bitmap pixel by pixel to the true native physical pixels of the display? Is there any CoreGraphics, OpenGL, or Metal API that would allow this without changing the display mode of the screen? If you are thinking of convertXXXXToBacking and friends, stop. Here is the explanation for you: a typical 13-inch MacBook Pro now has a native 2560x1600-pixel…
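A useful first check is whether the framebuffer already matches the panel's native pixel grid; a sketch using public CoreGraphics calls (treating the largest pixel dimensions any mode reports as "native" is my assumption, not a guarantee):

```swift
import CoreGraphics

// True when the current mode's backing store has as many pixels as
// the largest mode the display offers, i.e. no window-server scaling.
func isRunningAtNativePixels(display: CGDirectDisplayID) -> Bool {
    guard let current = CGDisplayCopyDisplayMode(display),
          let modes = CGDisplayCopyAllDisplayModes(display, nil) as? [CGDisplayMode] else {
        return false
    }
    let nativePixelWidth = modes.map { $0.pixelWidth }.max() ?? 0
    return current.pixelWidth == nativePixelWidth
}
```

In a scaled mode (say, "looks like 1440x900" backed by a 2880x1800 store on a 2560x1600 panel) the window server resamples the whole framebuffer, so as far as I know no drawing API reaches physical pixels 1:1; matching the backing store, e.g. via CAMetalLayer.drawableSize, is the closest you get without switching display modes.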

Texture Brush (Drawing Application) Using Metal

杀马特。学长 韩版系。学妹 submitted on 2019-12-30 07:49:43
Question: I am trying to implement a Metal-backed drawing application where brush strokes are drawn on an MTKView by stamping a textured square repeatedly along the finger position. I draw each stamp with alpha 0.2, but where the squares overlap the colors add up. How can I draw the whole stroke with alpha 0.2? Answer 1: I think you need to draw the brush squares to a separate texture, initially cleared to transparent, without blending. Then draw that whole texture to your view with blending. If you draw the brush squares directly to…
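A sketch of the two pipeline states that answer implies (pixel format and shader function names are placeholders): blending disabled while stamping into the offscreen stroke texture, ordinary source-over blending when compositing the stroke into the view.

```swift
import Metal

// Pass 1: stamp brush quads into an offscreen texture with blending
// off, so overlapping stamps replace rather than accumulate.
func makeStampPipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let desc = MTLRenderPipelineDescriptor()
    desc.vertexFunction   = library.makeFunction(name: "brushVertex")   // placeholder
    desc.fragmentFunction = library.makeFunction(name: "brushFragment") // placeholder
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm
    desc.colorAttachments[0].isBlendingEnabled = false
    return try device.makeRenderPipelineState(descriptor: desc)
}

// Pass 2: composite the whole stroke texture once, source-over,
// with the 0.2 alpha applied at this stage.
func makeCompositePipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let desc = MTLRenderPipelineDescriptor()
    desc.vertexFunction   = library.makeFunction(name: "quadVertex")    // placeholder
    desc.fragmentFunction = library.makeFunction(name: "quadFragment")  // placeholder
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm
    desc.colorAttachments[0].isBlendingEnabled = true
    desc.colorAttachments[0].rgbBlendOperation = .add
    desc.colorAttachments[0].alphaBlendOperation = .add
    desc.colorAttachments[0].sourceRGBBlendFactor = .sourceAlpha
    desc.colorAttachments[0].sourceAlphaBlendFactor = .sourceAlpha
    desc.colorAttachments[0].destinationRGBBlendFactor = .oneMinusSourceAlpha
    desc.colorAttachments[0].destinationAlphaBlendFactor = .oneMinusSourceAlpha
    return try device.makeRenderPipelineState(descriptor: desc)
}
```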

Memory write performance - GPU CPU Shared Memory

╄→尐↘猪︶ㄣ submitted on 2019-12-29 04:29:36
Question: I'm allocating both input and output MTLBuffers using posix_memalign, following the shared GPU/CPU documentation provided by memkite. Aside: it is easier to just use the latest API than muck around with posix_memalign: let metalBuffer = self.metalDevice.newBufferWithLength(byteCount, options: .StorageModeShared). My kernel function operates on roughly 16 million complex-value structs and writes an equal number of complex-value structs back to memory. I've performed some experiments and my Metal…
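For reference, the same aside in current Swift, plus a typed CPU-side view of the shared contents (the struct layout is hypothetical):

```swift
import Metal

struct ComplexValue {        // hypothetical 8-byte complex struct
    var real: Float
    var imaginary: Float
}

let device = MTLCreateSystemDefaultDevice()!
let count = 16_000_000
let byteCount = count * MemoryLayout<ComplexValue>.stride

// One shared-storage allocation visible to both CPU and GPU;
// no posix_memalign required.
let buffer = device.makeBuffer(length: byteCount, options: .storageModeShared)!

// Typed CPU access to the same memory the kernel writes.
let values = buffer.contents().bindMemory(to: ComplexValue.self, capacity: count)
values[0] = ComplexValue(real: 1, imaginary: 0)
```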

Metal Shading Language - multidimensional array

空扰寡人 submitted on 2019-12-25 09:09:01
Question: How can I convert this to Metal Shading Language?

```glsl
const vec3 GDFVectors[19] = vec3[](
    normalize(vec3(1, 0, 0)),
    normalize(vec3(0, 1, 0)),
    normalize(vec3(0, 0, 1)),
    normalize(vec3(1, 1, 1)),
    normalize(vec3(-1, 1, 1)),
    normalize(vec3(1, -1, 1)),
    normalize(vec3(1, 1, -1)),
    normalize(vec3(0, 1, PHI+1)),
    normalize(vec3(0, -1, PHI+1)),
    normalize(vec3(PHI+1, 0, 1)),
    normalize(vec3(-PHI-1, 0, 1)),
    normalize(vec3(1, PHI+1, 0)),
    normalize(vec3(-1, PHI+1, 0)),
    normalize(vec3(0, PHI, 1)),
    normalize(vec3(0, -PHI, 1)),
    normalize…
```
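A sketch of one way to port it (the function name and the distance loop are illustrative, not from the question): vec3 becomes float3, and since program-scope initializers in MSL must be constant expressions, the normalize() calls are easiest to keep by building the table inside a function:

```metal
#include <metal_stdlib>
using namespace metal;

constant float PHI = 1.61803398875;

float gdfDistance(float3 p)
{
    // Function-scope array, so the runtime normalize() calls are legal.
    const float3 GDFVectors[19] = {
        normalize(float3( 1,  0,  0)),
        normalize(float3( 0,  1,  0)),
        normalize(float3( 0,  0,  1)),
        normalize(float3( 1,  1,  1)),
        normalize(float3(-1,  1,  1)),
        normalize(float3( 1, -1,  1)),
        normalize(float3( 1,  1, -1)),
        normalize(float3( 0,  1,  PHI + 1)),
        normalize(float3( 0, -1,  PHI + 1)),
        normalize(float3( PHI + 1,  0, 1)),
        normalize(float3(-PHI - 1,  0, 1)),
        normalize(float3( 1,  PHI + 1, 0)),
        normalize(float3(-1,  PHI + 1, 0)),
        normalize(float3( 0,  PHI, 1)),
        normalize(float3( 0, -PHI, 1)),
        // ...the remaining entries follow the GLSL original.
    };

    float d = 0.0;
    for (int i = 0; i < 19; ++i) {
        d = max(d, abs(dot(p, GDFVectors[i])));
    }
    return d;
}
```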

Inefficient parsing of Vertex and Index data, looking for more efficient methods

狂风中的少年 submitted on 2019-12-25 08:49:21
Question: I wrote a method to parse an array containing vertex data. The goal was to produce a new array of unique vertices and a new index buffer from that data. This is the struct I used to store the vertices:

```swift
struct Vertex: Hashable {
    var x, y, z, nx, ny, nz, s, t: Float
    var hashValue: Int {
        return "\(self.x),\(self.y),\(self.z),\(self.nx),\(self.ny),\(self.nz),\(self.s),\(self.t),".hashValue
    }
    static func ==(lhs: Vertex, rhs: Vertex) -> Bool {
        return lhs.hashValue == rhs…
```
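Two issues stand out in that excerpt: building a String just to hash eight floats is expensive, and defining == as hash equality is incorrect whenever hashes collide. Since every stored property is Hashable, Swift can synthesize both requirements, and a Dictionary then gives O(n) deduplication; a sketch:

```swift
// Synthesized Hashable/Equatable compare the actual field values,
// with no string building and no collision bug.
struct Vertex: Hashable {
    var x, y, z, nx, ny, nz, s, t: Float
}

/// Builds a unique vertex array plus an index buffer in one pass.
func deduplicate(_ vertices: [Vertex]) -> (unique: [Vertex], indices: [UInt32]) {
    var unique: [Vertex] = []
    var indices: [UInt32] = []
    var lookup: [Vertex: UInt32] = [:]   // vertex -> index in `unique`
    indices.reserveCapacity(vertices.count)

    for v in vertices {
        if let existing = lookup[v] {
            indices.append(existing)
        } else {
            let index = UInt32(unique.count)
            lookup[v] = index
            unique.append(v)
            indices.append(index)
        }
    }
    return (unique, indices)
}
```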

detectMultiScale(…) internal principle? [closed]

我们两清 submitted on 2019-12-25 00:05:24
Question: This is an algorithm question about the detectMultiScale(...) function from the OpenCV library. I need help understanding what OpenCV's detectMultiScale() actually does. From reading the C++ code I have understood that the source image is scaled several times based on scaleFactor and size…
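To make the principle concrete, here is a conceptual sketch (in Swift, not OpenCV's actual C++): the classifier window keeps its trained size while the effective image size shrinks by scaleFactor per step, until the window no longer fits.

```swift
// Returns the image-pyramid scale factors detectMultiScale would
// sweep, conceptually: detection at scale s finds objects roughly
// s times the trained window size.
func pyramidScales(imageWidth: Int, imageHeight: Int,
                   windowWidth: Int, windowHeight: Int,
                   scaleFactor: Double) -> [Double] {
    var scales: [Double] = []
    var scale = 1.0
    while Double(windowWidth)  * scale <= Double(imageWidth),
          Double(windowHeight) * scale <= Double(imageHeight) {
        scales.append(scale)
        scale *= scaleFactor
    }
    return scales
}

// Example: pyramidScales(imageWidth: 640, imageHeight: 480,
//                        windowWidth: 24, windowHeight: 24,
//                        scaleFactor: 1.1)
```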