metal

Why is a Metal shader gradient lighter as an SCNProgram applied to a SceneKit node than in an MTKView?

人走茶凉 submitted on 2019-11-29 07:32:50
I have a gradient, generated by a Metal fragment shader, that I've applied to an SCNNode defined by a plane geometry. It looks like this: When I use the same shader applied to an MTKView rendered in an Xcode playground, the colors are darker. What is causing the colors to be lighter in the SceneKit version? Here is the Metal shader and the GameViewController.

Shader:

    #include <metal_stdlib>
    using namespace metal;
    #include <SceneKit/scn_metal>

    struct myPlaneNodeBuffer {
        float4x4 modelTransform;
        float4x4 modelViewTransform;
        float4x4 normalTransform;
        float4x4 modelViewProjectionTransform;
        float2x3
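A likely cause (not confirmed by the excerpt) is a color-space mismatch: SceneKit renders into a gamma-correct sRGB framebuffer by default, while an MTKView's default `colorPixelFormat` is `.bgra8Unorm`, which applies no sRGB encoding. The sketch below shows the standard sRGB transfer function; a mid-gray of 0.5 in linear light encodes to roughly 0.74 in sRGB, which is exactly the "same shader, lighter output" symptom. The function names are mine, not from the question.

```swift
import Foundation

// Standard sRGB transfer functions (IEC 61966-2-1).
// If SceneKit writes shader output through an sRGB framebuffer and the
// MTKView uses a plain .bgra8Unorm format, the same fragment value is
// displayed through two different transfer curves.
func linearToSRGB(_ l: Double) -> Double {
    return l <= 0.0031308 ? l * 12.92 : 1.055 * pow(l, 1.0 / 2.4) - 0.055
}

func srgbToLinear(_ s: Double) -> Double {
    return s <= 0.04045 ? s / 12.92 : pow((s + 0.055) / 1.055, 2.4)
}

let mid = linearToSRGB(0.5)   // ~0.735: a linear mid-gray displays much lighter
```

If this is indeed the cause, one common fix is to set `view.colorPixelFormat = .bgra8Unorm_srgb` on the MTKView so both render paths use the same encoding.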

Capture Metal MTKView as Movie in realtime?

岁酱吖の submitted on 2019-11-28 23:51:57
What is the most efficient way to capture frames from an MTKView? If possible, I would like to save a .mov file from the frames in real time. Is it possible to render into an AVPlayer frame or something? It is currently drawing with this code (based on @warrenm's PerformanceShaders project):

    func draw(in view: MTKView) {
        _ = inflightSemaphore.wait(timeout: DispatchTime.distantFuture)
        updateBuffers()
        let commandBuffer = commandQueue.makeCommandBuffer()
        commandBuffer.addCompletedHandler { [weak self] commandBuffer in
            if let strongSelf = self {
                strongSelf.inflightSemaphore.signal()
            }
        }
        // Dispatch
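One common approach (a sketch under my own assumptions, not the poster's code) is to read the drawable's texture back with `getBytes(_:bytesPerRow:from:mipmapLevel:)` inside the completed handler, then append the pixels through an `AVAssetWriterInputPixelBufferAdaptor`. A recurring gotcha on that path is that a `CVPixelBuffer`'s `bytesPerRow` is usually padded beyond `width * 4`, so each row has to be copied individually. The row-copy logic is pure Swift and shown below; the texture and AVFoundation calls are indicated only in comments.

```swift
import Foundation

// Copy a tightly packed BGRA image (bytesPerRow == width * 4) into a
// destination whose rows are padded, as CVPixelBuffer rows often are.
// In a real capture path, `src` would come from MTLTexture.getBytes and
// `dst` would be the CVPixelBufferGetBaseAddress memory of a pixel buffer.
func copyRows(src: [UInt8], width: Int, height: Int,
              into dst: inout [UInt8], dstBytesPerRow: Int) {
    let srcBytesPerRow = width * 4
    precondition(dst.count >= dstBytesPerRow * height)
    for row in 0..<height {
        for col in 0..<srcBytesPerRow {
            dst[row * dstBytesPerRow + col] = src[row * srcBytesPerRow + col]
        }
    }
}

// Tiny demo: a 2x2 BGRA image (16 bytes) copied into rows padded to 16 bytes.
var padded = [UInt8](repeating: 0, count: 16 * 2)
let tight = [UInt8](1...16)
copyRows(src: tight, width: 2, height: 2, into: &padded, dstBytesPerRow: 16)
```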

Memory write performance - GPU CPU Shared Memory

北战南征 submitted on 2019-11-28 22:44:28
I'm allocating both input and output MTLBuffers using posix_memalign, according to the shared GPU/CPU documentation provided by memkite. Aside: it is easier to just use the latest API than muck around with posix_memalign:

    let metalBuffer = self.metalDevice.newBufferWithLength(byteCount, options: .StorageModeShared)

My kernel function operates on roughly 16 million complex-value structs and writes an equal number of complex-value structs back to memory. I've performed some experiments: my Metal kernel's 'complex math section' executes in 0.003 seconds (yes!), but writing the result to the buffer takes
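For context (my own sketch, not from the question): buffers allocated with `posix_memalign` and wrapped via `makeBuffer(bytesNoCopy:length:options:deallocator:)` must be page-aligned and a whole multiple of the page size, whereas `makeBuffer(length:options:)` with `.storageModeShared` handles that for you. The rounding arithmetic looks like this; the 4096-byte page size is an assumption (query `getpagesize()` on a real device):

```swift
import Foundation

// Round a byte count up to a multiple of the page size, as required when
// wrapping externally allocated memory in a no-copy MTLBuffer.
func pageAligned(_ byteCount: Int, pageSize: Int = 4096) -> Int {
    return (byteCount + pageSize - 1) / pageSize * pageSize
}

// ~16 million complex-value structs of two 32-bit floats each:
let byteCount = 16_000_000 * MemoryLayout<(Float, Float)>.stride
let allocation = pageAligned(byteCount)   // 128 MB, already page-aligned
```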

Inconsistent SceneKit framerate

情到浓时终转凉″ submitted on 2019-11-28 09:14:44
I'm seeing very inconsistent frame rates in the SceneKit starter project. Sometimes it runs constantly at 60 fps (12 ms rendering, 6 ms Metal flush), and sometimes it runs constantly at 40 fps (20 ms rendering, 6 ms Metal flush), no more, no less. The frame rate changes randomly when I reopen the app, and stays at that rate until the next reopen. I tried switching to OpenGL ES, and while that seems to fix it in the starter project, I still see those drops in my real app. The starter project is unmodified (rotating ship), and I'm testing it on Xcode 7.0 and an iPad Mini 4 running iOS 9.0.1. I
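The numbers in the question are consistent with missing the vsync budget: at 60 Hz each frame has about 16.7 ms, and 12 ms rendering plus 6 ms flush is 18 ms, slightly over it, so whether the app settles at 60 or 40 fps can hinge on small scheduling differences at launch. A quick check of that arithmetic (my own illustration, not from the question):

```swift
// Frame-time budget check: does render + flush fit in one vsync interval?
func fitsVsync(renderMs: Double, flushMs: Double, displayHz: Double) -> Bool {
    let budgetMs = 1000.0 / displayHz   // ~16.67 ms at 60 Hz
    return renderMs + flushMs <= budgetMs
}

let fits60 = fitsVsync(renderMs: 12, flushMs: 6, displayHz: 60)  // false: 18 > 16.67
```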

SKEffectNode combined with CIFilter runs out of memory

爷,独闯天下 submitted on 2019-11-28 06:09:42
Question: I tried to combine an SKEffectNode with a CIFilter and a child SKSpriteNode, and while it seems to work for a few moments, the result is that all device memory is consumed and my iPad Retina (A7 GPU) just reboots. I also sometimes see "Message from debugger: Terminated due to memory issue" printed to the debugger log. The full source is on GitHub at SKEffectNodeFiltered. I am creating the filter like so:

    // Pixelate CoreImage filter
    CIFilter *pixellateFilter = [CIFilter filterWithName:@
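To see why memory disappears so fast (my own back-of-the-envelope, not from the question): an SKEffectNode re-renders its subtree into an offscreen RGBA texture every frame unless `shouldRasterize` is set, and if those intermediates are retained rather than reused, a full-screen Retina iPad texture costs about 12 MB per frame:

```swift
// Rough cost of one full-screen RGBA8 offscreen texture on a Retina iPad
// (2048 x 1536 pixels, 4 bytes per pixel), and the leak rate if one such
// intermediate were retained every frame at 60 fps.
let bytesPerTexture = 2048 * 1536 * 4      // 12_582_912 bytes (~12 MB)
let leakPerSecond = bytesPerTexture * 60   // ~755 MB/s: reboot territory
```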

Metal MTLTexture replaces semi-transparent areas with black for alpha values that aren't 1 or 0

不想你离开。 submitted on 2019-11-28 03:01:53
Question: While using Apple's texture importer, or my own, a white soft-edged circle drawn in software (with a transparent background) or in Photoshop (saved as a PNG) has its semi-transparent colors replaced with black when rendered in Metal. Below is a screen grab from Xcode's Metal debugger; you can see the texture before it is sent to the shaders. Image located here (I'm not high-ranked enough to embed it). In Xcode, Finder, and when put into a UIImageView, the source texture does not have
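This is the classic premultiplied-alpha symptom. PNG loaders on Apple platforms typically hand back premultiplied pixels (RGB already multiplied by A), so a half-transparent white (1, 1, 1, a: 0.5) is stored as (0.5, 0.5, 0.5, 0.5); if the shader or blend state then treats it as straight alpha, the soft edges darken toward black. A sketch of the two representations (the helper names are mine):

```swift
// Premultiplied vs straight alpha, shown for one color channel.
func premultiply(_ channel: Double, alpha: Double) -> Double {
    return channel * alpha
}

func unpremultiply(_ channel: Double, alpha: Double) -> Double {
    return alpha == 0 ? 0 : channel / alpha
}

// Half-transparent white: straight value 1.0 is stored as 0.5.
let stored = premultiply(1.0, alpha: 0.5)         // 0.5: looks gray if misread
let recovered = unpremultiply(stored, alpha: 0.5) // 1.0: white again
```

The usual fix is to match the blend factors to the data: `.one` / `.oneMinusSourceAlpha` for premultiplied sources rather than `.sourceAlpha` / `.oneMinusSourceAlpha`.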

Metal - Resize video buffer before passing to custom Kernel filter

余生颓废 submitted on 2019-11-28 02:17:39
Within our iOS app, we use custom filters built with Metal (CIKernel/CIColorKernel wrappers). Let's assume we have a 4K video and a custom video composition with a 1080p output size that applies an advanced filter on the video buffers. Obviously, we don't need to filter the video at its original size; doing so would probably terminate the app with a memory warning (true story). This is the video-filtering pipeline: get the buffer in 4K (as a CIImage) --> apply the filter on the CIImage --> the filter applies the CIKernel Metal filter function on the CIImage --> return the filtered CIImage to
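One way to do this (a sketch; the CoreImage calls are left in comments since the excerpt doesn't show the poster's pipeline) is to compute an aspect-preserving scale from the source extent to the 1080p render size and apply it with `CIImage.transformed(by:)` before the kernel runs:

```swift
import Foundation

// Aspect-preserving scale factor from a source extent to a target size.
// With a CIImage you would then apply:
//   image.transformed(by: CGAffineTransform(scaleX: s, y: s))
// before invoking the CIKernel, so the kernel runs at 1080p, not 4K.
func downscaleFactor(srcW: Double, srcH: Double,
                     dstW: Double, dstH: Double) -> Double {
    return min(dstW / srcW, dstH / srcH, 1.0)  // never upscale
}

let s = downscaleFactor(srcW: 3840, srcH: 2160, dstW: 1920, dstH: 1080)  // 0.5
```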

Is this code drawing at the point or pixel level? How to draw retina pixels?

℡╲_俬逩灬. submitted on 2019-11-27 15:18:14
Consider this admirable script, which draws a (circular) gradient: https://github.com/paiv/AngleGradientLayer/blob/master/AngleGradient/AngleGradientLayer.m

    int w = CGRectGetWidth(rect);
    int h = CGRectGetHeight(rect);

and then angleGradient(data, w, h .. and then it loops over all of those:

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++) {

basically setting the color: *p++ = color; But wait - wouldn't this be working by points, not pixels? How, really, would you draw to the physical pixels on dense screens? Is it a matter of: let's say the density is 4 on the device. Draw just as in the
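The loop in the question does work in whatever units `rect` is in, which for a CALayer's draw callback is points unless the layer's `contentsScale` is raised. A common approach (my sketch, not from the linked source) is to size the backing buffer in pixels, that is points times scale, and set `layer.contentsScale = UIScreen.main.scale` so the layer actually renders at that density:

```swift
// Convert a drawing rect expressed in points to the pixel dimensions the
// backing buffer needs on a dense display (scale is 2 or 3 on retina screens).
func pixelSize(widthPts: Int, heightPts: Int, scale: Int) -> (w: Int, h: Int) {
    return (widthPts * scale, heightPts * scale)
}

let px = pixelSize(widthPts: 100, heightPts: 50, scale: 2)  // (200, 100)
```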
