Continuously train CoreML model after shipping

Submitted by 匆匆过客 on 2019-11-30 03:43:14

The .mlmodel file is compiled by Xcode into a .mlmodelc structure (which is actually a folder inside your app bundle).

Your app might be able to download a new .mlmodel from a server, but I don't think you can run the Core ML compiler from inside your app.

Maybe it is possible for your app to download the compiled .mlmodelc data from a server, copy it into the app's Documents directory, and instantiate the model from that. Try it out. ;-)

(This assumes the App Store does not do any additional processing on the .mlmodelc data before it packages up your app and ships it to the user.)
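If you go that route, loading the model comes down to pointing MLModel(contentsOf:) at the compiled folder. A minimal sketch, assuming the .mlmodelc has already been downloaded and unpacked into Documents under a hypothetical name:

```swift
import CoreML

// Minimal sketch: load a compiled model (.mlmodelc) that was previously
// downloaded into the app's Documents directory. "MyModel.mlmodelc" is a
// hypothetical name used only for illustration.
func loadDownloadedModel() throws -> MLModel {
    let documents = FileManager.default.urls(for: .documentDirectory,
                                             in: .userDomainMask)[0]
    let modelURL = documents.appendingPathComponent("MyModel.mlmodelc")
    // MLModel(contentsOf:) expects a compiled .mlmodelc, not a raw .mlmodel.
    return try MLModel(contentsOf: modelURL)
}
```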

Apple has recently added a new API for on-device model compilation. Now you can download your model and compile it on the device.
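In Swift that API is MLModel.compileModel(at:), which turns a downloaded .mlmodel into an .mlmodelc in a temporary location. A minimal sketch, assuming the raw .mlmodel has already been downloaded to a local URL (the download step and file names are hypothetical):

```swift
import CoreML

// Minimal sketch: compile a downloaded .mlmodel on device and keep the
// compiled result in a permanent location for later use.
func installModel(from downloadedURL: URL) throws -> MLModel {
    // The compiler writes the .mlmodelc to a temporary directory.
    let compiledURL = try MLModel.compileModel(at: downloadedURL)

    // Move it somewhere permanent (Application Support) before relying on it.
    let permanentURL = try FileManager.default
        .url(for: .applicationSupportDirectory, in: .userDomainMask,
             appropriateFor: nil, create: true)
        .appendingPathComponent(compiledURL.lastPathComponent)

    if FileManager.default.fileExists(atPath: permanentURL.path) {
        // Atomically swap out a previously installed copy.
        _ = try FileManager.default.replaceItemAt(permanentURL, withItemAt: compiledURL)
    } else {
        try FileManager.default.copyItem(at: compiledURL, to: permanentURL)
    }

    return try MLModel(contentsOf: permanentURL)
}
```

Note that compilation is synchronous, so it is best done off the main thread.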

In order to update the model dynamically (without updating the whole app), you need to use MPS (Metal Performance Shaders) directly instead of relying on a .mlmodel, which must be bundled with the app.

This means you need to build the neural network manually in Swift code (instead of using coremltools to convert an existing model), and feed in the weights for each layer, which is a bit of work but not rocket science.
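As a rough idea of what building it manually looks like, here is a sketch of a single convolution layer created from weights your app could download at runtime. The layer sizes are made up, and the initializer shown is the original iOS 10-era one; newer iOS versions prefer supplying an MPSCNNConvolutionDataSource, but the principle is the same:

```swift
import MetalPerformanceShaders

// Sketch: one 3x3 convolution layer (3 -> 16 channels) built from raw
// weight/bias arrays that your own code downloaded and decoded.
func makeConvLayer(device: MTLDevice,
                   weights: [Float],
                   biases: [Float]) -> MPSCNNConvolution {
    let desc = MPSCNNConvolutionDescriptor(kernelWidth: 3,
                                           kernelHeight: 3,
                                           inputFeatureChannels: 3,
                                           outputFeatureChannels: 16,
                                           neuronFilter: nil)
    // Older-style initializer that takes the weight/bias buffers directly.
    return MPSCNNConvolution(device: device,
                             convolutionDescriptor: desc,
                             kernelWeights: weights,
                             biasTerms: biases,
                             flags: .none)
}
```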

This is a good video to watch if you want to know more about MPS.

https://developer.apple.com/videos/play/wwdc2017/608/

Core ML supports inference but not training on device.


You can update the model by replacing it with a new one from a server, but that deserves its own question.

michaelxing

Now, as of iOS 11 beta 4, you can download the model from a server and compile it on device.

(Details)

Core ML 3 now supports on-device model personalization. You can improve your model for each user while keeping their data private.

https://developer.apple.com/machine-learning/core-ml/
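The API for this is MLUpdateTask, which takes an updatable compiled model plus a batch of training examples and hands back an updated model. A minimal sketch, assuming an updatable model whose training input and target are named "image" and "label" (those feature names and the save location are hypothetical):

```swift
import CoreML

// Sketch: personalize an updatable Core ML model on device with a few
// user-provided examples, then persist the updated model.
func personalize(modelURL: URL,
                 samples: [(image: MLFeatureValue, label: String)]) throws {
    // Wrap each example in a feature provider; "image"/"label" are assumed
    // to match the model's declared training inputs.
    let providers: [MLFeatureProvider] = try samples.map {
        try MLDictionaryFeatureProvider(dictionary: ["image": $0.image,
                                                     "label": $0.label])
    }
    let trainingData = MLArrayBatchProvider(array: providers)

    // Run the update against the compiled model on disk.
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: nil,
                                completionHandler: { context in
        // Save the personalized model so later predictions can load it.
        let updatedURL = FileManager.default
            .urls(for: .applicationSupportDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("Personalized.mlmodelc")
        try? context.model.write(to: updatedURL)
    })
    task.resume()
}
```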
