coreml

Custom layer with two parameters function on Core ML

Submitted by 倖福魔咒の on 2021-02-19 07:33:09
Question: Thanks to this great article (http://machinethink.net/blog/coreml-custom-layers/), I understood how to write a converter using coremltools and a Lambda-based Keras custom layer. But I cannot work out the case where the function takes two parameters:

def scaling(x, scale):
    return x * scale

The Keras layer is here:

up = conv2d_bn(mixed, K.int_shape(x)[channel_axis], 1,
               activation=None, use_bias=True,
               name=name_fmt('Conv2d_1x1'))
x = Lambda(scaling,  # HERE !!
           output_shape=K.int_shape(up) …
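For context, the usual workaround can be sketched concretely: fix the second parameter at graph-construction time through the Lambda layer's `arguments` dict, so only `x` flows through the graph and a converter sees a one-input function. This is a minimal sketch — the layer name, shapes, and scale value are illustrative, not taken from the question:

```python
import numpy as np
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

def scaling(x, scale):
    return x * scale

inp = Input(shape=(4,))
# `scale` is baked in via `arguments`; only `x` is a graph tensor.
out = Lambda(scaling, arguments={'scale': 0.17}, name='scaling')(inp)
model = Model(inp, out)

print(model.predict(np.ones((1, 4)), verbose=0))  # every element scaled to 0.17
```

Because `scale` is a fixed constant rather than a second graph input, a custom-layer converter only has to handle the single-input `scaling` function.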

Combining CoreML and ARKit

Submitted by 不问归期 on 2021-02-06 09:24:34
Question: I am trying to combine CoreML and ARKit in my project, using the inceptionV3 model provided on Apple's website. I am starting from the standard ARKit template (Xcode 9 beta 3). Instead of instantiating a new camera session, I reuse the session that has been started by the ARSCNView. At the end of my viewDelegate, I write:

sceneView.session.delegate = self

I then extend my viewController to conform to the ARSessionDelegate protocol (an optional protocol):

// MARK: ARSessionDelegate
extension …

coremltools: how to properly use NeuralNetworkMultiArrayShapeRange?

Submitted by 浪尽此生 on 2021-01-29 16:28:12
Question: I have a PyTorch network and I want to deploy it to iOS devices. In short, I fail to add flexibility to the input tensor shape in Core ML. The network is a convnet that takes an RGB image (stored as a tensor) as input and returns an RGB image of the same size. Using PyTorch, I can feed in images of any size I want, for instance a tensor of size (1, 3, 300, 300) for a 300x300 image. To convert the PyTorch model to a Core ML model, I first convert it to an ONNX model using torch.onnx.export. …

Does CoreML work with the swift package manager?

Submitted by 为君一笑 on 2021-01-28 20:10:59
Question: Can I use the CoreML framework in a Swift Package Manager executable? Or is it limited to iOS and macOS apps?

Answer 1: As far as I know, Core ML currently only runs on iOS or macOS. Apple isn't in the habit of making their frameworks available for other platforms. ;-)

Source: https://stackoverflow.com/questions/53055956/does-coreml-work-with-the-swift-package-manager

Can't convert Core ML model to Onnx (then to Tensorflow Lite)

Submitted by 帅比萌擦擦* on 2021-01-27 19:51:26
Question: I'm trying to convert a trained Core ML model to TensorFlow Lite. I found that I need to convert it to ONNX first. The problem is that I get errors. I've tried different versions of Python, onnxmltools, and winmltools, and it doesn't seem to work. I also tried the onnx-ecosystem Docker image, with the same result. Can anyone help me with it? Thanks in advance. The script I used:

import coremltools
import onnxmltools

input_coreml_model = '../model.mlmodel'
output_onnx_model = '../model.onnx'
coreml_model = …

ML Build error for Catalyst (Xcode 12 GM)

Submitted by 烂漫一生 on 2021-01-02 06:08:15
Question: Is anyone else having issues with the GM release and ML models, and does anyone have a solution? I get the following error:

Type 'MLModel' has no member '__loadContents'

I have cleaned the project and deleted derived data (this is a generated file that is put into the derived-data folder). I notice that the method should not be there for macOS 10.15, which I use, but it is there for some reason. I also noticed that this API is still in beta while the GM is a production build: https://developer.apple.com …

Convert [MLMultiArray] to Float?

Submitted by こ雲淡風輕ζ on 2020-12-15 05:26:27
Question: I have an MLMultiArray which is the result of an ML model. I need to convert it to Float so that I can store it in Realm. Below is an example of one of the MLMultiArrays. The result from the ML model contains 120 of the same vectors, so it's an array of MLMultiArrays, i.e. an array of Float32 1 x 128 matrices.

Float32 1 x 128 matrix
[4.476562, 1.179688, 0.07141113, 6.976562, -0.2858887, -7.378906, 0.6445312, 3.695312, 1.399414, 2.486328, -3.988281, -0.2636719, 1.000977, -4.480469, -7.832031, 1.59082, 0.8515625 …

MLModel works with MultiArray output but cannot successfully change the output to an image

Submitted by 旧城冷巷雨未停 on 2020-12-13 11:16:52
Question: I have converted a Keras model to an MLModel using coremltools 4.0, with limited success. It works, but only if I use an MLMultiArray for the output and then convert it to an image. Converting to an image takes orders of magnitude longer than the inference itself, making it unusable. If I try to change the MLModel spec to use images for the output, I get this error when running prediction:

Failed to convert output Identity to image: NSUnderlyingError=0x2809bad00 {Error Domain=com.apple.CoreML Code=0 "Invalid array shape ( 2048, …
