Image processing on the iPhone


Question


I would like to apply image processing on pictures taken on the iPhone. This processing would involve 2D matrix convolutions etc.

I'm afraid that the performance with nested NSArrays would be pretty bad. What is the right way to manipulate pixel-based images? Should I simply use C arrays allocated with malloc?
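To make the kind of processing concrete, here is a minimal sketch (not from the original question) of a 3x3 convolution over an 8-bit grayscale buffer allocated with malloc; the dimensions, kernel values, and clamping behaviour are illustrative assumptions:

```cpp
#include <cstdlib>
#include <cstring>
#include <algorithm>

// Apply a 3x3 kernel to a grayscale buffer; the one-pixel border is skipped.
void convolve3x3(const unsigned char *src, unsigned char *dst,
                 int width, int height, const float kernel[9])
{
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            float acc = 0.0f;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    acc += kernel[(ky + 1) * 3 + (kx + 1)] *
                           src[(y + ky) * width + (x + kx)];
            // Clamp the accumulated value back into the 0..255 range.
            dst[y * width + x] =
                (unsigned char)std::min(255.0f, std::max(0.0f, acc));
        }
    }
}

int main()
{
    const int width = 640, height = 480;                  // assumed size
    unsigned char *src = (unsigned char *)malloc(width * height);
    unsigned char *dst = (unsigned char *)malloc(width * height);
    memset(src, 128, width * height);                     // placeholder image data

    const float blur[9] = { 1/9.f, 1/9.f, 1/9.f,
                            1/9.f, 1/9.f, 1/9.f,
                            1/9.f, 1/9.f, 1/9.f };        // simple box blur
    convolve3x3(src, dst, width, height, blur);

    free(src);
    free(dst);
    return 0;
}
```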


Answer 1:


Have you looked at the Quartz 2D engine available in the iPhone SDK? Or perhaps Core Graphics? Apple has a nice overview document describing all the different imaging technologies available on the iPhone. Unfortunately there isn't anything as nice as ImageKit on the iPhone yet.
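As a rough illustration of the Core Graphics route (a hedged sketch, not code from the answer): drawing a CGImage into a bitmap context yields a raw RGBA buffer you can process directly. The helper name, RGBA layout, and ownership convention are assumptions; the CGImageRef would typically come from a UIImage's CGImage property.

```cpp
#include <CoreGraphics/CoreGraphics.h>
#include <cstdlib>

// Render a CGImage into a bitmap context and return its RGBA pixel bytes.
// The caller is responsible for free()ing the returned buffer.
unsigned char *copyPixelsFromImage(CGImageRef image,
                                   size_t *outWidth, size_t *outHeight)
{
    size_t width  = CGImageGetWidth(image);
    size_t height = CGImageGetHeight(image);
    size_t bytesPerRow = width * 4;                       // RGBA, 8 bits per channel

    unsigned char *pixels = (unsigned char *)calloc(height, bytesPerRow);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(pixels, width, height,
                                             8, bytesPerRow, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Drawing the image into the context fills `pixels` with RGBA bytes.
    CGContextDrawImage(ctx, CGRectMake(0, 0, width, height), image);
    CGContextRelease(ctx);

    *outWidth = width;
    *outHeight = height;
    return pixels;
}
```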




Answer 2:


I suggest using the OpenCV image processing library, since it contains well-optimized algorithms for almost anything you want. OpenCV will definitely be faster than manual processing with NSArray. There is one major drawback: OpenCV is written in C/C++, so you will have to convert your UIImage to OpenCV's native image format before processing. It's easy to google how to do this. I use OpenCV in my own iPhone project; here is a small how-to post on building OpenCV for iPhone: http://computer-vision-talks.com/2010/12/building-opencv-for-ios/
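For illustration, a minimal sketch (not taken from the linked post) of running an optimized OpenCV convolution over a raw RGBA buffer, such as one extracted with Core Graphics as in the previous answer. The function name, kernel, and buffer layout are assumptions.

```cpp
#include <opencv2/imgproc/imgproc.hpp>

// Sharpen an RGBA buffer in place using OpenCV's optimized filter2D.
void sharpenRGBA(unsigned char *pixels, int width, int height)
{
    // Wrap the existing buffer; no pixel data is copied here.
    cv::Mat image(height, width, CV_8UC4, pixels);

    // Simple 3x3 sharpening kernel.
    cv::Mat kernel = (cv::Mat_<float>(3, 3) <<
                       0, -1,  0,
                      -1,  5, -1,
                       0, -1,  0);

    cv::Mat result;
    cv::filter2D(image, result, /*ddepth=*/-1, kernel);

    // Copy the filtered result back into the caller's buffer.
    result.copyTo(image);
}
```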




Answer 3:


Yes, you would use a C array since that's how you get back the pixel data anyway.

As mentioned, you should first check whether Quartz 2D can do the manipulations you are interested in, as it would probably perform better, being hardware based. If not, just do your own processing over the array of pixels.
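For example, a small hedged sketch of "doing your own" processing over the pixel array: an in-place grayscale conversion of an RGBA buffer, such as the one returned by the Core Graphics sketch above. The buffer layout is an assumption; the weights are the usual luma coefficients.

```cpp
// Convert an RGBA buffer to grayscale in place; alpha is left untouched.
void grayscaleInPlaceRGBA(unsigned char *pixels, int width, int height)
{
    for (int i = 0; i < width * height; ++i) {
        unsigned char *p = pixels + i * 4;               // R, G, B, A
        unsigned char luma = (unsigned char)(0.299f * p[0] +
                                             0.587f * p[1] +
                                             0.114f * p[2]);
        p[0] = p[1] = p[2] = luma;
    }
}
```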




Answer 4:


The iPhone also supports OpenCL, and its GPU has far more processing power than the CPU.



Source: https://stackoverflow.com/questions/2120447/image-processing-on-the-iphone
