coreml

How to apply iOS VNImageHomographicAlignmentObservation warpTransform?

懵懂的女人, submitted on 2019-12-01 09:24:00
I'm testing Apple's Vision Alignment API and have questions regarding VNHomographicImageRegistrationRequest. Has anyone gotten it to work? I can get the warpTransform out of it, but I've yet to see a matrix that makes sense: I'm unable to get a result that warps the image back onto the source image. I'm using OpenCV's warpPerspective to handle the warping. I'm calling this to get the transform:

class func homography(_ cgImage0: CGImage!, _ cgImage1: CGImage!, _ orientation: CGImagePropertyOrientation, completion: (matrix_float3x3?) -> ()) {
    let registrationSequenceReqHandler =
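For context, a minimal sketch of how the truncated helper might be completed, assuming the question pairs a VNSequenceRequestHandler with a VNHomographicImageRegistrationRequest; the wrapper type, error handling, and everything after the first line of the body are illustrative, not the asker's actual code:

```swift
import Vision
import simd

// Illustrative wrapper; only the signature and the first line are from the question.
final class AlignmentHelper {
    class func homography(_ cgImage0: CGImage!, _ cgImage1: CGImage!,
                          _ orientation: CGImagePropertyOrientation,
                          completion: (matrix_float3x3?) -> ()) {
        let registrationSequenceReqHandler = VNSequenceRequestHandler()
        // Register cgImage1 against the reference image cgImage0.
        let request = VNHomographicImageRegistrationRequest(targetedCGImage: cgImage1,
                                                            orientation: orientation,
                                                            options: [:])
        try? registrationSequenceReqHandler.perform([request], on: cgImage0)
        // warpTransform is a 3x3 homography delivered as a simd matrix_float3x3.
        let observation = request.results?.first as? VNImageHomographicAlignmentObservation
        completion(observation?.warpTransform)
    }
}
```

One common pitfall when handing this matrix to OpenCV: simd matrices are column-major while cv::Mat is row-major, so the values may need to be transposed when building the matrix passed to warpPerspective.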

Converting UIImage to MLMultiArray for Keras Model

我只是一个虾纸丫, submitted on 2019-12-01 08:40:24
In Python, I trained an image classification model with Keras that takes a [224, 224, 3] array as input and outputs a prediction (1 or 0). When I save the model and load it into Xcode, it states that the input has to be in MLMultiArray format. Is there a way for me to convert a UIImage into MLMultiArray format? Or is there a way to change my Keras model to accept CVPixelBuffer objects as input?

Matthijs Hollemans: In your Core ML conversion script you can supply the parameter image_input_names='data', where data is the name of your input. Now Core ML will treat this input as
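For the UIImage side of the question, a minimal sketch of rendering a UIImage into a CVPixelBuffer so it can be fed to a model whose input is declared as an image; the 224x224 default comes from the question, while the function name and pixel format are assumptions for illustration:

```swift
import UIKit
import CoreVideo

// Sketch only: draw a UIImage into a CVPixelBuffer sized for the model's image input.
func pixelBuffer(from image: UIImage, width: Int = 224, height: Int = 224) -> CVPixelBuffer? {
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
    var buffer: CVPixelBuffer?
    guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                              kCVPixelFormatType_32ARGB, attrs, &buffer) == kCVReturnSuccess,
          let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue),
          let cgImage = image.cgImage else { return nil }

    // Scale the image into the buffer; the model then sees a width x height ARGB bitmap.
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return pixelBuffer
}
```

The simpler route, as the answer suggests, is to pass image_input_names to the coremltools converter so the generated model accepts an image (CVPixelBuffer) directly instead of an MLMultiArray.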

Error installing coremltools

雨燕双飞, submitted on 2019-11-30 15:32:50
I am looking at Apple's Core ML iOS framework. I have read that you need to install coremltools to create your own models. I installed pip with sudo python /Users/administrator/Downloads/get-pip.py. Following the coremltools installation documentation, I downloaded the coremltools wheel from https://pypi.python.org/pypi/coremltools and tried to install it. When I installed coremltools on my Mac, I got the following error. Please suggest how to solve it so that I can work with coremltools:

MyMacbook:~ administrator$ pip install -U /Users/administrator/Downloads/coremltools-0.3.0-py2.7-none-any.whl
Processing .

How To Get CoreML In Pure Playground Files

我怕爱的太早我们不能终老, submitted on 2019-11-30 09:54:19
Question: I am using a .playground file and I can't seem to add my CoreML model to it. I drag it into the Resources folder and this is my code:

func predict(image: CGImage) {
    let model = try! VNCoreMLModel(for: Inceptionv3().model)
    let request = VNCoreMLRequest(model: model, completionHandler: results)
    let handler = VNSequenceRequestHandler()
    try! handler.perform([request], on: image)
}

However, I get the error saying: Use of Undeclared Type Inceptionv3. Can someone please help me out?

Answer 1: The compiler raises this error because it cannot find a declaration of the class Inceptionv3 that you try to
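A minimal sketch of one way around the missing generated class in a playground: compile the model outside the playground and load it through MLModel instead of the generated Inceptionv3 type. The file names mirror the question; the compile step (for example, xcrun coremlc compile Inceptionv3.mlmodel .) and the approach as a whole are assumptions, not the accepted answer:

```swift
import CoreML
import Vision

// Assumes the compiled Inceptionv3.mlmodelc folder was dropped into the
// playground's Resources folder.
func loadVisionModel() throws -> VNCoreMLModel {
    guard let url = Bundle.main.url(forResource: "Inceptionv3", withExtension: "mlmodelc") else {
        fatalError("Inceptionv3.mlmodelc not found in Resources")
    }
    // Loading via MLModel avoids the Xcode-generated Inceptionv3 class entirely.
    let mlModel = try MLModel(contentsOf: url)
    return try VNCoreMLModel(for: mlModel)
}
```

Another common workaround is to add the Xcode-generated Inceptionv3.swift file to the playground's Sources folder so that the Inceptionv3 type is actually declared.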

Continuously train CoreML model after shipping

匆匆过客, submitted on 2019-11-30 03:43:14
Looking over the new Core ML API, I don't see any way to continue training the model after generating the .mlmodel and bundling it in your app. This makes me think that I won't be able to perform machine learning on my users' content or actions, because the model must be entirely trained beforehand. Is there any way to add training data to my trained model after shipping?

EDIT: I just noticed you can initialize a generated model class from a URL, so perhaps I can post new training data to my server, re-generate the trained model, and download it into the app? Seems like it would work, but
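A rough sketch of the idea floated in the EDIT, assuming the app downloads a freshly trained .mlmodel from its own backend; the function name and parameter are placeholders, and downloading itself is left out:

```swift
import CoreML

// Compile a downloaded .mlmodel on-device and load the result.
func loadUpdatedModel(from downloadedModelURL: URL) throws -> MLModel {
    // Turns the raw .mlmodel file into a compiled .mlmodelc directory.
    let compiledURL = try MLModel.compileModel(at: downloadedModelURL)
    return try MLModel(contentsOf: compiledURL)
}
```

In practice you would also move the compiled directory to a permanent location, since compileModel(at:) writes its output to a temporary one.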

How to create & train a neural model to use for Core ML [closed]

邮差的信, submitted on 2019-11-28 23:15:26
Apple introduced Core ML. There are many third parties providing trained models. But what if I want to create a model myself? How can I do that, and what tools and technologies can I use?

Core ML doesn't provide a way to train your own models; you can only convert existing ones to Apple's .mlmodel format. To create your own neural networks, use the Caffe or Keras frameworks and then convert those models to Core ML format. For traditional machine learning algorithms, Core ML is also compatible with scikit-learn and XGBoost. You can also train and run neural networks on iOS without Core ML, just use

Vision Framework with ARkit and CoreML

偶尔善良, submitted on 2019-11-28 16:15:44
While I have been researching best practices and experimenting with multiple options for an ongoing project (i.e. a Unity3D iOS project in Vuforia with native integration, extracting frames with AVFoundation and then passing the image through cloud-based image recognition), I have come to the conclusion that I would like to use ARKit, the Vision framework, and Core ML; let me explain. I am wondering how I would be able to capture ARFrames and use the Vision framework to detect and track a given object using a Core ML model. Additionally, it would be nice to have a bounding box once the object is recognized with
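As a starting point, a hedged sketch of wiring these pieces together: receive ARFrames from an ARSession delegate and run a Vision request backed by a Core ML model on each frame. "YourModel" is a placeholder for whatever generated model class is used, and the bounding-box handling assumes an object-detection model that produces VNRecognizedObjectObservation results:

```swift
import ARKit
import Vision

// Sketch of the ARKit + Vision + Core ML loop; projecting the bounding box into
// the AR scene is not shown.
final class FrameClassifier: NSObject, ARSessionDelegate {
    private lazy var request: VNCoreMLRequest = {
        let model = try! VNCoreMLModel(for: YourModel().model)   // placeholder model class
        return VNCoreMLRequest(model: model) { request, _ in
            // With an object-detection model, results are VNRecognizedObjectObservation;
            // boundingBox is a normalized rect you can map into the AR view.
            if let best = request.results?.first as? VNRecognizedObjectObservation {
                print(best.labels.first?.identifier ?? "unknown", best.boundingBox)
            }
        }
    }()

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // ARFrame.capturedImage is a CVPixelBuffer; .right is the typical orientation
        // for a portrait device, but this depends on your setup.
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .right, options: [:])
        try? handler.perform([request])
    }
}
```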
