I have been programming for two years on iOS and never on the Mac. I am working on a little utility for handling some simple image needs that I have in my iOS development. Anyway...
Here's a simple example that draws a blue circle into an NSImage (I'm using ARC in this example; add retains/releases to taste):
NSSize size = NSMakeSize(50, 50);
NSImage *im = [[NSImage alloc] initWithSize:size];

// Create a 32-bit RGBA bitmap rep and attach it to the image.
// Passing 0 for bytesPerRow/bitsPerPixel lets AppKit pick sensible values.
NSBitmapImageRep *rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:size.width
                  pixelsHigh:size.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
[im addRepresentation:rep];

// While focus is locked, the current graphics context draws into the image.
[im lockFocus];
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextClearRect(ctx, CGRectMake(0, 0, size.width, size.height));
CGContextSetFillColorWithColor(ctx, [[NSColor blueColor] CGColor]);
CGContextFillEllipseInRect(ctx, CGRectMake(0, 0, size.width, size.height));
[im unlockFocus];

// Write the result to disk as a TIFF.
[[im TIFFRepresentation] writeToFile:@"/Users/USERNAME/Desktop/foo.tiff" atomically:NO];
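If you'd rather save a PNG than a TIFF, one common approach (a sketch I haven't tested against your setup, using the classic NSPNGFileType constant) is to round-trip through the TIFF data:

// Rebuild a bitmap rep from the image's TIFF data, then re-encode it as PNG.
NSBitmapImageRep *bitmap = [NSBitmapImageRep imageRepWithData:[im TIFFRepresentation]];
NSData *png = [bitmap representationUsingType:NSPNGFileType properties:@{}];
[png writeToFile:@"/Users/USERNAME/Desktop/foo.png" atomically:NO];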
The main difference is that on OS X you first have to create the image, then you can begin drawing into it; on iOS you create the context, then extract the image from it.
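For comparison, here is roughly what the equivalent looks like on iOS with UIKit's context-based API (a minimal sketch; the specific drawing calls are my own illustration, not part of the example above):

// On iOS you create the context first...
UIGraphicsBeginImageContextWithOptions(CGSizeMake(50, 50), NO, 0.0);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
CGContextFillEllipseInRect(ctx, CGRectMake(0, 0, 50, 50));
// ...and extract the image from it when you're done drawing.
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();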
Basically, lockFocus makes the image the current drawing context, so you draw directly into it and then use the image.
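That also means ordinary AppKit drawing works between lockFocus and unlockFocus; for instance, the same circle could be drawn without dropping down to Core Graphics (a sketch reusing im and size from the example above):

[im lockFocus];
// AppKit drawing targets the focused image just like the CG calls did.
[[NSColor blueColor] setFill];
[[NSBezierPath bezierPathWithOvalInRect:NSMakeRect(0, 0, size.width, size.height)] fill];
[im unlockFocus];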
I'm not completely sure whether this answers all of your question, but I think it covers at least part of it.