metal-performance-shaders

MPSImageHistogramEqualization throws assertion that offset must be < [buffer length]

怎甘沉沦 posted on 2020-06-29 03:50:42
Question: I'm trying to do histogram equalization using MPSImageHistogramEqualization on iOS, but it ends up throwing an assertion I do not understand. Here is my code:

    // Calculate Histogram
    var histogramInfo = MPSImageHistogramInfo(
        numberOfHistogramEntries: 256,
        histogramForAlpha: false,
        minPixelValue: vector_float4(0,0,0,0),
        maxPixelValue: vector_float4(1,1,1,1))
    let calculation = MPSImageHistogram(device: self.mtlDevice, histogramInfo: &histogramInfo)
    let bufferLength = calculation.histogramSize
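The excerpt cuts off before the histogram buffer is allocated. As a rough sketch (not the poster's exact code), the buffer passed to the encode calls has to be at least histogramOffset + histogramSize(forSourceFormat:) bytes long, otherwise MPS raises precisely this kind of "offset must be < [buffer length]" assertion. The equalize(device:commandQueue:sourceTexture:destinationTexture:) helper below is illustrative:

    import Metal
    import MetalPerformanceShaders

    // Sketch: compute a histogram, then equalize, making sure the histogram buffer
    // is large enough for the offset passed to the encode calls.
    func equalize(device: MTLDevice,
                  commandQueue: MTLCommandQueue,
                  sourceTexture: MTLTexture,
                  destinationTexture: MTLTexture) {
        var info = MPSImageHistogramInfo(
            numberOfHistogramEntries: 256,
            histogramForAlpha: false,
            minPixelValue: vector_float4(0, 0, 0, 0),
            maxPixelValue: vector_float4(1, 1, 1, 1))

        let histogram = MPSImageHistogram(device: device, histogramInfo: &info)
        let equalization = MPSImageHistogramEqualization(device: device, histogramInfo: &info)

        // The buffer must hold at least histogramOffset + histogramSize bytes.
        let histogramSize = histogram.histogramSize(forSourceFormat: sourceTexture.pixelFormat)
        guard let histogramBuffer = device.makeBuffer(length: histogramSize,
                                                      options: .storageModePrivate),
              let commandBuffer = commandQueue.makeCommandBuffer() else { return }

        histogram.encode(to: commandBuffer,
                         sourceTexture: sourceTexture,
                         histogram: histogramBuffer,
                         histogramOffset: 0)
        equalization.encodeTransform(to: commandBuffer,
                                     sourceTexture: sourceTexture,
                                     histogram: histogramBuffer,
                                     histogramOffset: 0)
        equalization.encode(commandBuffer: commandBuffer,
                            sourceTexture: sourceTexture,
                            destinationTexture: destinationTexture)
        commandBuffer.commit()
    }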

MTLBuffer allocation + CPU/GPU synchronisation

二次信任 posted on 2019-12-23 04:47:45
Question: I am using a Metal Performance Shader (MPSImageHistogram) to compute something in an MTLBuffer that I grab, perform computations on, and then display via MTKView. The MTLBuffer output from the shader is small (~4K bytes). So I am allocating a new MTLBuffer object for every render pass, and there are at least 30 renders per second for every video frame.

    calculation = MPSImageHistogram(device: device, histogramInfo: &histogramInfo)
    let bufferLength = calculation.histogramSize(forSourceFormat:
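A common way to avoid allocating a fresh ~4K MTLBuffer thirty times a second is to cycle through a small fixed pool and gate reuse with a semaphore signalled from the command buffer's completion handler. The HistogramBufferPool class below is an illustrative sketch, not code from the question:

    import Metal

    // Sketch: a small ring of reusable MTLBuffers. The semaphore keeps the CPU from
    // handing out a buffer the GPU is still writing to.
    final class HistogramBufferPool {
        private let semaphore: DispatchSemaphore
        private let capacity: Int
        private var buffers: [MTLBuffer] = []
        private var index = 0

        // A capacity of 3 gives classic triple buffering.
        init?(device: MTLDevice, length: Int, capacity: Int = 3) {
            self.capacity = capacity
            self.semaphore = DispatchSemaphore(value: capacity)
            for _ in 0..<capacity {
                guard let buffer = device.makeBuffer(length: length,
                                                     options: .storageModeShared) else { return nil }
                buffers.append(buffer)
            }
        }

        // Blocks until one of the pooled buffers is free again.
        func nextBuffer() -> MTLBuffer {
            semaphore.wait()
            index = (index + 1) % capacity
            return buffers[index]
        }

        // Call from the command buffer's completion handler once the CPU has read the results.
        func release() {
            semaphore.signal()
        }
    }

In each render pass you would take nextBuffer(), encode the MPSImageHistogram into it, and call release() from commandBuffer.addCompletedHandler after reading the data back.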

How to convert an MTLTexture to a CVPixelBuffer to write into an AVAssetWriter?

∥☆過路亽.° posted on 2019-12-19 09:25:07
Question: I need to apply filters to live video and I'm trying to do it in Metal, but I've run into a problem converting the MTLTexture into a CVPixelBuffer after encoding the filter into the destination texture. Reference: https://github.com/oklyc/MetalCameraSample-master-2. Here is my code:

    if let pixelBuffer = pixelBuffer {
        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags.init(rawValue: 0))
        let region = MTLRegionMake2D(0, 0, Int(currentDrawable.layer.drawableSize
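One way to bridge the gap, sketched below on the assumption that the texture is .bgra8Unorm and CPU-readable (shared or managed storage), is to create a CVPixelBuffer and copy the texture into it with getBytes. makePixelBuffer(from:) is a hypothetical helper name:

    import CoreVideo
    import Metal

    // Sketch: copy a BGRA MTLTexture into a freshly created CVPixelBuffer so it can be
    // appended through an AVAssetWriterInputPixelBufferAdaptor.
    func makePixelBuffer(from texture: MTLTexture) -> CVPixelBuffer? {
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         texture.width,
                                         texture.height,
                                         kCVPixelFormatType_32BGRA,
                                         nil,
                                         &pixelBuffer)
        guard status == kCVReturnSuccess, let buffer = pixelBuffer else { return nil }

        CVPixelBufferLockBaseAddress(buffer, [])
        defer { CVPixelBufferUnlockBaseAddress(buffer, []) }

        guard let baseAddress = CVPixelBufferGetBaseAddress(buffer) else { return nil }
        let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
        let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
        // Copy the texture contents into the pixel buffer's backing store.
        texture.getBytes(baseAddress,
                         bytesPerRow: bytesPerRow,
                         from: region,
                         mipmapLevel: 0)
        return buffer
    }

For sustained recording it is usually cheaper to go the other way round: render into textures created from the adaptor's pixelBufferPool through a CVMetalTextureCache, so no CPU copy is needed at all.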

MPSImageIntegral returning all zeros

孤人 posted on 2019-12-11 15:22:02
Question: I am trying to use MPSImageIntegral to calculate the sum of some elements in an MTLTexture. This is what I'm doing:

    std::vector<float> integralSumData;
    for(int i = 0; i < 10; i++)
        integralSumData.push_back((float)i);

    MTLTextureDescriptor *textureDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                                                                                 width:(integralSumData.size())
                                                                                                height:1
                                                                                             mipmapped:NO];
    textureDescriptor.usage = MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
    id<MTLTexture>
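The snippet above is Objective-C++; the same flow in Swift looks roughly like the sketch below (the integralImage helper is illustrative). The two things that most often produce all-zero output are forgetting to upload the values into the source texture with replace(region:...) and reading the destination before the command buffer has completed:

    import Metal
    import MetalPerformanceShaders

    // Sketch: fill an R32Float texture, run MPSImageIntegral, read back after completion.
    func integralImage(device: MTLDevice, queue: MTLCommandQueue, values: [Float]) -> [Float]? {
        let width = values.count
        let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r32Float,
                                                                  width: width,
                                                                  height: 1,
                                                                  mipmapped: false)
        descriptor.usage = [.shaderRead, .shaderWrite]
        guard let source = device.makeTexture(descriptor: descriptor),
              let destination = device.makeTexture(descriptor: descriptor),
              let commandBuffer = queue.makeCommandBuffer() else { return nil }

        // Upload the input values into the source texture.
        values.withUnsafeBytes { bytes in
            source.replace(region: MTLRegionMake2D(0, 0, width, 1),
                           mipmapLevel: 0,
                           withBytes: bytes.baseAddress!,
                           bytesPerRow: width * MemoryLayout<Float>.stride)
        }

        let integral = MPSImageIntegral(device: device)
        integral.encode(commandBuffer: commandBuffer,
                        sourceTexture: source,
                        destinationTexture: destination)
        commandBuffer.commit()
        commandBuffer.waitUntilCompleted()

        // Read back the running sums. (On macOS a managed texture would also need a
        // blit synchronize(resource:) before this getBytes call.)
        var result = [Float](repeating: 0, count: width)
        result.withUnsafeMutableBytes { bytes in
            destination.getBytes(bytes.baseAddress!,
                                 bytesPerRow: width * MemoryLayout<Float>.stride,
                                 from: MTLRegionMake2D(0, 0, width, 1),
                                 mipmapLevel: 0)
        }
        return result
    }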

How do you synchronize a Metal Performance Shader with an MTLBlitCommandEncoder?

两盒软妹~` posted on 2019-12-06 02:47:24
Question: I'm trying to better understand the synchronization requirements when working with Metal Performance Shaders and an MTLBlitCommandEncoder. I have an MTLCommandBuffer that is set up as follows:

1. Use MTLBlitCommandEncoder to copy a region of Texture A into Texture B. Texture A is larger than Texture B; I'm extracting a "tile" from Texture A and copying it into Texture B.
2. Use an MPSImageBilinearScale Metal Performance Shader with Texture B as the source texture and a third texture, Texture C, as
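Within one command buffer, encoders run in the order they are encoded, and for resources with default hazard tracking Metal inserts the necessary synchronization between the blit and the MPS kernel on its own; explicit MTLFence or MTLEvent work only becomes necessary for untracked resources or when the work is split across command buffers or queues. A rough sketch of the arrangement described above (function and parameter names are illustrative):

    import Metal
    import MetalPerformanceShaders

    // Sketch: blit a tile from textureA into textureB, then scale textureB into textureC
    // with MPSImageBilinearScale, all inside a single command buffer.
    func encodeTileScale(commandBuffer: MTLCommandBuffer,
                         device: MTLDevice,
                         textureA: MTLTexture,
                         textureB: MTLTexture,
                         textureC: MTLTexture,
                         tileOrigin: MTLOrigin,
                         tileSize: MTLSize) {
        if let blit = commandBuffer.makeBlitCommandEncoder() {
            blit.copy(from: textureA,
                      sourceSlice: 0,
                      sourceLevel: 0,
                      sourceOrigin: tileOrigin,
                      sourceSize: tileSize,
                      to: textureB,
                      destinationSlice: 0,
                      destinationLevel: 0,
                      destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
            blit.endEncoding()
        }

        // The MPS kernel is encoded after the blit, so it sees the copied tile.
        let scale = MPSImageBilinearScale(device: device)
        scale.encode(commandBuffer: commandBuffer,
                     sourceTexture: textureB,
                     destinationTexture: textureC)
    }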

MTKView Drawing Performance

无人久伴 posted on 2019-12-03 17:39:08
Question:
What I am trying to do: show filters on a camera feed by using a Metal view, MTKView. I am closely following the method of Apple's sample code, Enhancing Live Video by Leveraging TrueDepth Camera Data (link).
What I have so far: the following code works well (mainly interpreted from the above-mentioned sample code):

    class MetalObject: NSObject, MTKViewDelegate {
        private var metalBufferView : MTKView?
        private var metalDevice = MTLCreateSystemDefaultDevice()
        private var
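For reference, a stripped-down delegate that pushes an already-filtered texture into the view's drawable can be as small as the sketch below; the class and property names (PreviewRenderer, filteredTexture) are made up here, and framebufferOnly = false is what lets an MPS kernel write into the drawable's texture:

    import MetalKit
    import MetalPerformanceShaders

    // Sketch: scale whatever texture the filter chain produced into the MTKView drawable.
    final class PreviewRenderer: NSObject, MTKViewDelegate {
        private let device: MTLDevice
        private let commandQueue: MTLCommandQueue
        var filteredTexture: MTLTexture?

        init?(view: MTKView) {
            guard let device = view.device ?? MTLCreateSystemDefaultDevice(),
                  let queue = device.makeCommandQueue() else { return nil }
            self.device = device
            self.commandQueue = queue
            super.init()
            view.framebufferOnly = false
            view.delegate = self
        }

        func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

        func draw(in view: MTKView) {
            guard let source = filteredTexture,
                  let drawable = view.currentDrawable,
                  let commandBuffer = commandQueue.makeCommandBuffer() else { return }

            // Scale the filtered texture to fill the drawable, then present it.
            let scale = MPSImageBilinearScale(device: device)
            scale.encode(commandBuffer: commandBuffer,
                         sourceTexture: source,
                         destinationTexture: drawable.texture)
            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }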

Metal RGB to YUV conversion compute shader

混江龙づ霸主 posted on 2019-12-02 03:19:33
I am trying to write a Metal compute shader for converting from RGB to YUV, but am getting build errors.

    typedef struct {
        float3x3 matrix;
        float3 offset;
    } ColorConversion;

    // Compute kernel
    kernel void kernelRGBtoYUV(texture2d<half, access::sample> inputTexture [[ texture(0) ]],
                               texture2d<half, access::write> textureY [[ texture(1) ]],
                               texture2d<half, access::write> textureCbCr [[ texture(2) ]],
                               constant ColorConversion &colorConv [[ buffer(0) ]],
                               uint2 gid [[thread_position_in_grid]])
    {
        // Make sure we don't read or write outside of the texture
        if ((gid.x >= inputTexture.get_width()) || (gid
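The host-side dispatch for a kernel like this, written in Swift and assuming the shader above ends up in the app's default library, would look roughly like the sketch below; the Swift ColorConversion struct has to match the layout of the Metal one (a float3x3 followed by a float3, both 16-byte aligned):

    import Metal
    import simd

    // Host-side layout matching the Metal ColorConversion struct.
    struct ColorConversion {
        var matrix: simd_float3x3
        var offset: SIMD3<Float>
    }

    // Sketch: build a compute pipeline for kernelRGBtoYUV and dispatch one thread per pixel.
    func encodeRGBToYUV(device: MTLDevice,
                        commandBuffer: MTLCommandBuffer,
                        input: MTLTexture,
                        yTexture: MTLTexture,
                        cbcrTexture: MTLTexture,
                        conversion: ColorConversion) throws {
        guard let library = device.makeDefaultLibrary(),
              let function = library.makeFunction(name: "kernelRGBtoYUV") else { return }
        let pipeline = try device.makeComputePipelineState(function: function)
        guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }

        var conv = conversion
        encoder.setComputePipelineState(pipeline)
        encoder.setTexture(input, index: 0)
        encoder.setTexture(yTexture, index: 1)
        encoder.setTexture(cbcrTexture, index: 2)
        encoder.setBytes(&conv, length: MemoryLayout<ColorConversion>.stride, index: 0)

        let threadsPerGroup = MTLSize(width: 16, height: 16, depth: 1)
        let groups = MTLSize(width: (input.width + 15) / 16,
                             height: (input.height + 15) / 16,
                             depth: 1)
        encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
        encoder.endEncoding()
    }

In practice the pipeline state would be created once and cached rather than rebuilt on every frame.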

Continuously train CoreML model after shipping

匆匆过客 posted on 2019-11-30 03:43:14
In looking over the new CoreML API, I don't see any way to continue training the model after generating the .mlmodel and bundling it in your app. This makes me think that I won't be able to perform machine learning on my user's content or actions because the model must be entirely trained beforehand. Is there any way to add training data to my trained model after shipping? EDIT: I just noticed you could initialize a generated model class from a URL, so perhaps I can post new training data to my server, re-generate the trained model and download it into the app? Seems like it would work, but
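The approach sketched in that edit is workable: a raw .mlmodel fetched at runtime has to be compiled on-device before it can be loaded. A minimal sketch, assuming the file has already been downloaded to downloadedModelURL (the helper name and storage location are illustrative):

    import CoreML

    // Sketch: compile a downloaded .mlmodel, keep the compiled copy, and load it.
    func loadUpdatedModel(from downloadedModelURL: URL) throws -> MLModel {
        // Core ML loads compiled .mlmodelc directories, so compile the raw model first.
        let compiledURL = try MLModel.compileModel(at: downloadedModelURL)

        // Move the compiled model somewhere permanent so it survives relaunches.
        let permanentURL = try FileManager.default
            .url(for: .applicationSupportDirectory, in: .userDomainMask,
                 appropriateFor: nil, create: true)
            .appendingPathComponent("UpdatedModel.mlmodelc")
        if FileManager.default.fileExists(atPath: permanentURL.path) {
            try FileManager.default.removeItem(at: permanentURL)
        }
        try FileManager.default.moveItem(at: compiledURL, to: permanentURL)

        return try MLModel(contentsOf: permanentURL)
    }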