Just a quick question.
Can this:
be done with image processing on an iOS device?
Yes, although Core Graphics may not be the best way to do such filtering on an image. My recommendation would be to use an OpenGL ES 2.0 fragment shader. In fact, I just wrote one to do this:
This is the GPUImagePolkaDotFilter that I just added to my open source GPUImage framework. The easiest way to use it is to grab the framework and apply the filter to whatever you want (it's fast enough to run in real time on video).
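For a still image, usage looks roughly like the following (a minimal sketch in Swift against the Objective-C classes; the property names fractionalWidthOfAPixel and dotScaling and the capture sequence are my reading of the framework headers, so check them against the current version):

import UIKit
import GPUImage

// Hypothetical helper: run a still image through GPUImagePolkaDotFilter.
func polkaDotVersion(of inputImage: UIImage) -> UIImage? {
    guard let source = GPUImagePicture(image: inputImage) else { return nil }
    let filter = GPUImagePolkaDotFilter()

    // Dot grid spacing as a fraction of the image width
    // (assumed property name, inherited from the pixellate filter).
    filter.fractionalWidthOfAPixel = 0.05
    // Dot diameter relative to each grid cell (assumed property name).
    filter.dotScaling = 0.90

    source.addTarget(filter)
    filter.useNextFrameForImageCapture()
    source.processImage()
    return filter.imageFromCurrentFramebuffer()
}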
If you'd just like to use the fragment shader, the following is my code for this:
varying highp vec2 textureCoordinate;

uniform sampler2D inputImageTexture;

uniform highp float fractionalWidthOfPixel;
uniform highp float aspectRatio;
uniform highp float dotScaling;

void main()
{
    // Snap this fragment to the center of its grid cell
    highp vec2 sampleDivisor = vec2(fractionalWidthOfPixel, fractionalWidthOfPixel / aspectRatio);
    highp vec2 samplePos = textureCoordinate - mod(textureCoordinate, sampleDivisor) + 0.5 * sampleDivisor;

    // Correct the y coordinate for the image aspect ratio so the dots stay circular
    highp vec2 textureCoordinateToUse = vec2(textureCoordinate.x, (textureCoordinate.y * aspectRatio + 0.5 - 0.5 * aspectRatio));
    highp vec2 adjustedSamplePos = vec2(samplePos.x, (samplePos.y * aspectRatio + 0.5 - 0.5 * aspectRatio));

    // 1.0 inside the dot radius for this cell, 0.0 outside
    highp float distanceFromSamplePoint = distance(adjustedSamplePos, textureCoordinateToUse);
    lowp float checkForPresenceWithinDot = step(distanceFromSamplePoint, (fractionalWidthOfPixel * 0.5) * dotScaling);

    // Sample once at the cell center and black out everything outside the dot
    gl_FragColor = vec4(texture2D(inputImageTexture, samplePos).rgb * checkForPresenceWithinDot, 1.0);
}
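A quick note on the uniforms, based on how the shader uses them: fractionalWidthOfPixel sets the spacing of the dot grid as a fraction of the image width, aspectRatio corrects for non-square images so the dots stay circular, and dotScaling sets the dot diameter relative to each grid cell (at 1.0, neighboring dots just touch). Each fragment is snapped to the center of its cell, the texture is sampled once at that center, and anything farther from the center than the dot radius is rendered black.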