In Python, I trained an image classification model with Keras that takes a [224, 224, 3] array as input and outputs a prediction (1 or 0). When I save the model and load it into Xcode, it states that the input has to be in MLMultiArray format.
Is there a way for me to convert a UIImage into MLMultiArray format? Or is there a way to change my Keras model to accept CVPixelBuffer objects as input?
In your Core ML conversion script you can supply the parameter image_input_names='data', where 'data' is the name of your input.
Now Core ML will treat this input as an image (CVPixelBuffer) instead of a multi-array.
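Since the question is about a Keras model, here is a minimal sketch using the legacy Keras converter in coremltools (the pre-4.0 API); the file name 'classifier.h5' and the input name 'data' are placeholders for your own model:

import coremltools

# Sketch only: assumes an older coremltools with the Keras converter,
# and a saved Keras model 'classifier.h5' whose input layer is named 'data'.
coreml_model = coremltools.converters.keras.convert(
    'classifier.h5',
    input_names='data',
    image_input_names='data',   # treat 'data' as an image (CVPixelBuffer) instead of MLMultiArray
    image_scale=1.0 / 255.0,    # optional: apply the same scaling the Keras model expects
    class_labels=['0', '1'])
coreml_model.save('Classifier.mlmodel')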
When you convert the Caffe model to an MLModel, you need to add this line:
image_input_names = 'data'
Taking my own conversion script as an example, it looks like this:
import coremltools

coreml_model = coremltools.converters.caffe.convert(
    ('gender_net.caffemodel', 'deploy_gender.prototxt'),
    image_input_names='data',
    class_labels='genderLabel.txt')
coreml_model.save('GenderMLModel.mlmodel')
Your MLModel's input will then be a CVPixelBufferRef instead of an MLMultiArray. Converting a UIImage to a CVPixelBufferRef is then straightforward.
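For example, a minimal sketch of that conversion could look like this (the 224x224 target size is an assumption based on the model described in the question):

import UIKit
import CoreVideo

// Sketch: render a UIImage into a CVPixelBuffer that a Core ML image input can consume.
func pixelBuffer(from image: UIImage, width: Int = 224, height: Int = 224) -> CVPixelBuffer? {
    let attrs: [String: Any] = [
        kCVPixelBufferCGImageCompatibilityKey as String: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
    ]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                     kCVPixelFormatType_32ARGB,
                                     attrs as CFDictionary, &buffer)
    guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return nil }

    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
        return nil
    }

    // Flip the coordinate system so UIKit's top-left origin draws correctly,
    // then draw the image scaled to the target size.
    context.translateBy(x: 0, y: CGFloat(height))
    context.scaleBy(x: 1, y: -1)
    UIGraphicsPushContext(context)
    image.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    UIGraphicsPopContext()

    return pixelBuffer
}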
I have not tried this, but here is how it's done for the FOOD101 sample (note that resize(to:) and pixelData() are helper extensions from that sample, not UIKit APIs):
func preprocess(image: UIImage) -> MLMultiArray? {
    let size = CGSize(width: 299, height: 299)
    // Resize the image, then scale each byte from [0, 255] to [-1, 1].
    guard let pixels = image.resize(to: size).pixelData()?.map({ (Double($0) / 255.0 - 0.5) * 2 }) else {
        return nil
    }
    guard let array = try? MLMultiArray(shape: [3, 299, 299], dataType: .double) else {
        return nil
    }
    // Deinterleave the RGBA pixel data into planar R, G, B channels (alpha is dropped).
    let r = pixels.enumerated().filter { $0.offset % 4 == 0 }.map { $0.element }
    let g = pixels.enumerated().filter { $0.offset % 4 == 1 }.map { $0.element }
    let b = pixels.enumerated().filter { $0.offset % 4 == 2 }.map { $0.element }
    let combination = r + g + b
    for (index, element) in combination.enumerated() {
        array[index] = NSNumber(value: element)
    }
    return array
}
Source: https://stackoverflow.com/questions/44529869/converting-uiimage-to-mlmultiarray-for-keras-model