core-graphics

Drawing Text with Core Graphics

无人久伴 submitted on 2019-12-02 19:44:03
I need to draw centered text to a CGContext. I started with a Cocoa approach: I created an NSCell with the text and tried to draw it like this:

    NSGraphicsContext* newCtx = [NSGraphicsContext graphicsContextWithGraphicsPort:bitmapContext flipped:true];
    [NSGraphicsContext saveGraphicsState];
    [NSGraphicsContext setCurrentContext:newCtx];
    [pCell setFont:font];
    [pCell drawWithFrame:rect inView:nil];
    [NSGraphicsContext restoreGraphicsState];

But the CGBitmapContext doesn't seem to have the text rendered on it, possibly because I have to pass nil for the inView: parameter. So I tried switching text
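For what it's worth, centered text can also be drawn straight into the CGContext with Core Text, bypassing NSCell and the inView: problem. A minimal sketch (an assumption, not the asker's code; it presumes ARC, a non-flipped context, and a hard-coded font):

    #import <CoreText/CoreText.h>

    static void DrawCenteredText(CGContextRef context, CGRect rect, NSString *text)
    {
        CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica"), 18.0, NULL);
        NSAttributedString *attrString =
            [[NSAttributedString alloc] initWithString:text
                                            attributes:@{ (__bridge id)kCTFontAttributeName : (__bridge id)font }];
        CTLineRef line = CTLineCreateWithAttributedString(
            (__bridge CFAttributedStringRef)attrString);

        CGFloat ascent, descent, leading;
        double width = CTLineGetTypographicBounds(line, &ascent, &descent, &leading);

        // Position the baseline so the line ends up centered in rect,
        // then draw it directly into the (bitmap) context.
        CGContextSetTextPosition(context,
                                 CGRectGetMidX(rect) - width / 2.0,
                                 CGRectGetMidY(rect) - (ascent - descent) / 2.0);
        CTLineDraw(line, context);

        CFRelease(line);
        CFRelease(font);
    }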

How Can I Record the Screen with Acceptable Performance While Keeping the UI Responsive?

痞子三分冷 submitted on 2019-12-02 19:41:27
I'm looking for help with a performance issue in an Objective-C based iOS app. I have an iOS application that captures the screen's contents using CALayer's renderInContext method. It attempts to capture enough screen frames to create a video using AVFoundation. The screen recording is then combined with other elements for research purposes on usability. While the screen is being captured, the app may also be displaying the contents of a UIWebView, going out over the network to fetch data, etc... The content of the Web view is not under my control - it is arbitrary content from the Web. This
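For context, grabbing a single frame this way looks roughly like the sketch below (an assumption, not the asker's code). renderInContext: generally has to run on the main thread, which is why capturing while a busy UIWebView is rendering makes the UI stutter:

    // Must be called on the main thread: renderInContext: walks the layer tree.
    - (UIImage *)captureFrameOfView:(UIView *)view
    {
        UIGraphicsBeginImageContextWithOptions(view.bounds.size, YES, 0);
        [view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *frame = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return frame;
    }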

Drawing triangle/arrow on a line with CGContext

▼魔方 西西 submitted on 2019-12-02 19:38:41
I am using the route-me framework for working with locations. In this code the path between two markers (points) is drawn as a line. My question: what code should I add to draw an arrow in the middle (or at the top) of the line, so that it points in the direction of travel? Thanks.

    - (void)drawInContext:(CGContextRef)theContext {
        renderedScale = [contents metersPerPixel];
        float scale = 1.0f / [contents metersPerPixel];
        float scaledLineWidth = lineWidth;
        if (!scaleLineWidth) {
            scaledLineWidth *= renderedScale;
        }
        //NSLog(@"line width = %f, content scale = %f", scaledLineWidth, renderedScale);
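One way to get the arrow is to translate and rotate the context to the midpoint of a segment and fill a small triangle pointing along it. A sketch (p1, p2 and size are hypothetical parameters, not route-me symbols):

    #include <math.h>

    static void DrawArrowOnSegment(CGContextRef ctx, CGPoint p1, CGPoint p2, CGFloat size)
    {
        CGPoint mid = CGPointMake((p1.x + p2.x) / 2.0, (p1.y + p2.y) / 2.0);
        CGFloat angle = atan2(p2.y - p1.y, p2.x - p1.x);

        CGContextSaveGState(ctx);
        CGContextTranslateCTM(ctx, mid.x, mid.y);
        CGContextRotateCTM(ctx, angle);

        // Triangle pointing along +x, i.e. toward p2 after the rotation above.
        CGContextMoveToPoint(ctx, size, 0);
        CGContextAddLineToPoint(ctx, -size, size * 0.6);
        CGContextAddLineToPoint(ctx, -size, -size * 0.6);
        CGContextClosePath(ctx);
        CGContextFillPath(ctx);

        CGContextRestoreGState(ctx);
    }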

CGContext: how do I erase pixels (e.g. kCGBlendModeClear) outside of a bitmap context?

社会主义新天地 submitted on 2019-12-02 19:36:26
I'm trying to build an eraser tool using Core Graphics, and I'm finding it incredibly difficult to make a performant eraser; it all comes down to:

    CGContextSetBlendMode(context, kCGBlendModeClear)

If you google around for how to "erase" with Core Graphics, almost every answer comes back with that snippet. The problem is that it only (apparently) works in a bitmap context. If you're trying to implement interactive erasing, I don't see how kCGBlendModeClear helps you: as far as I can tell, you're more or less locked into erasing on an off-screen UIImage / CGImage and drawing that image in the
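For reference, the bitmap-context pattern the question is describing looks roughly like this (a sketch; imageContext, lastPoint and currentPoint are hypothetical names):

    // Punch a transparent hole along the stroke; this only works because
    // imageContext is a CGBitmapContext backed by pixels we own.
    CGContextSaveGState(imageContext);
    CGContextSetBlendMode(imageContext, kCGBlendModeClear);
    CGContextSetLineCap(imageContext, kCGLineCapRound);
    CGContextSetLineWidth(imageContext, 20.0);
    CGContextMoveToPoint(imageContext, lastPoint.x, lastPoint.y);
    CGContextAddLineToPoint(imageContext, currentPoint.x, currentPoint.y);
    CGContextStrokePath(imageContext);
    CGContextRestoreGState(imageContext);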

CGFont and CTFont functionality in portable Swift (e.g. Ubuntu, etc)?

三世轮回 submitted on 2019-12-02 16:48:33
Question: With Swift on macOS, the import Foundation statement is sufficient to link with CGFont, CTFont and related functions:

    import Foundation
    public struct FontMetric {
        let cgFont: CGFont
        private let ctFont: CTFont
        // ...

However, with Swift on Ubuntu, CGFont, CTFont and related functions cause "undeclared type" errors:

    FontMetric.swift:21:17: error: use of undeclared type 'CGFont'
        let cgFont: CGFont
                    ^~~~~~
    FontMetric.swift:24:25: error: use of undeclared type 'CTFont'
        private let ctFont: CTFont
                            ^~~~

Changing color space on an image

隐身守侯 submitted on 2019-12-02 16:43:16
Question: I'm creating a mask based on a DeviceGray color space image. Basically what I want to do is change all shades of gray (besides black) into white and leave black pixels as they are, so the image ends up consisting only of black and white pixels. Any idea how to achieve that using Core Graphics? Please don't suggest looping over all the pixels.

Answer 1: Use CGImageCreateWithMaskingColors and CGContextSetRGBFillColor together like this:

    CGImageRef myMaskedImage;
    const
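A rough sketch of that masking approach (grayImage, ctx and rect are hypothetical, and CGImageCreateWithMaskingColors requires an image without an alpha channel):

    // For an 8-bit DeviceGray image, mask out every value except pure black.
    const CGFloat maskRange[2] = { 1, 255 };
    CGImageRef masked = CGImageCreateWithMaskingColors(grayImage, maskRange);

    // Fill with white, then draw: masked (gray) pixels stay white, black survives.
    CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);
    CGContextFillRect(ctx, rect);
    CGContextDrawImage(ctx, rect, masked);
    CGImageRelease(masked);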

UIImage decompression causing scrolling lag

…衆ロ難τιáo~ submitted on 2019-12-02 16:38:18
I have this app with a full screen tableView that displays a bunch of tiny images. Those images are pulled from the web, processed on a background thread, and then saved to disk using something like:

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIGraphicsBeginImageContextWithOptions(rect.size, YES, 0);
        // code that adds some glosses, shadows, etc
        UIImage *output = UIGraphicsGetImageFromCurrentImageContext();
        NSData* cacheData = UIImagePNGRepresentation(output);
        [cacheData writeToFile:thumbPath atomically:YES];
        dispatch_async(dispatch_get_main_queue(), ^{
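One common fix for this kind of lag is to force the PNG to decompress on the background queue before it ever reaches the table view, so the main thread only draws an already-decoded bitmap. A sketch (thumbPath and cell are placeholders):

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *compressed = [UIImage imageWithContentsOfFile:thumbPath];

        // Drawing the image once is what triggers the actual PNG decode.
        UIGraphicsBeginImageContextWithOptions(compressed.size, YES, compressed.scale);
        [compressed drawAtPoint:CGPointZero];
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            cell.imageView.image = decoded;   // hypothetical cell, shown for illustration
        });
    });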

Problem with CGImageDestination and file naming

此生再无相见时 submitted on 2019-12-02 15:39:27
Question: I am capturing images from the camera using AVCapture, as I need speed and the standard kit stuff is way too slow. The problem is that the file being output (an animated GIF) is having its file name mangled by the CGImageDestination functions... When I output the NSURL (cast to a CFURLRef) to the log I get the path/filename I intended:

    2011-09-04 20:40:25.914 Mover[3558:707] Path as string:.../Documents/91B2C5E8-F925-47F3-B539-15185F640828-3558-000003327A227485.gif

However
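For reference, the CGImageDestination calls involved look like this (a sketch; fileURL is assumed to be the NSURL from the log and frameCount the number of frames to append):

    #import <ImageIO/ImageIO.h>
    #import <MobileCoreServices/MobileCoreServices.h>

    CGImageDestinationRef dest =
        CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL,
                                        kUTTypeGIF, frameCount, NULL);
    // ... one CGImageDestinationAddImage(dest, frame, frameProperties) per frame ...
    CGImageDestinationFinalize(dest);
    CFRelease(dest);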

How can I change the saturation of a UIImage?

冷暖自知 submitted on 2019-12-02 14:32:36
I have a UIImage and want to shift its saturation by about +10%. Are there standard methods or functions that can be used for this?

There's a Core Image filter for this: CIColorControls. Just set the inputSaturation to < 1.0 to desaturate or > 1.0 to increase saturation, e.g. here's a method I've added in a category on UIImage to desaturate an image:

    -(UIImage*) imageDesaturated {
        CIContext *context = [CIContext contextWithOptions:nil];
        CIImage *ciimage = [CIImage imageWithCGImage:self.CGImage];
        CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
        [filter setValue:ciimage forKey
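Building on that, a sketch of a category method that takes the saturation as a parameter (pass roughly 1.1 for the +10% the question asks about); the method name is an assumption:

    - (UIImage *)imageWithSaturation:(CGFloat)saturation
    {
        CIContext *context = [CIContext contextWithOptions:nil];
        CIImage *input = [CIImage imageWithCGImage:self.CGImage];

        CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
        [filter setValue:input forKey:kCIInputImageKey];
        [filter setValue:@(saturation) forKey:@"inputSaturation"];

        CIImage *output = [filter outputImage];
        CGImageRef cgImage = [context createCGImage:output fromRect:[output extent]];
        UIImage *result = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);
        return result;
    }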

3d text effect in iOS

两盒软妹~` submitted on 2019-12-02 14:19:27
I want to render some text on one of my screens with a 3D-ish look to it. I am using UIKit and standard view controllers, etc. The effect will look something like this: Can this be done somehow with UIKit and iOS? Ordinarily I would just use a static PNG; however, the text is dynamic and updates based on user data.

The following code might not be perfect, but it should be a good starting point. Basically you draw the text twice, slightly changing the size and the offset. Depending on the font and size you're dealing with, you'll probably have to play a bit with fontSize, fontSizeDelta and
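A sketch of that double-draw idea inside a custom view's drawRect: (the offset, colors and font size are assumptions to tune):

    - (void)drawRect:(CGRect)rect
    {
        NSString *text = @"42";                 // would be the dynamic value in practice
        UIFont *font = [UIFont boldSystemFontOfSize:64];
        CGPoint origin = CGPointMake(20, 20);
        CGSize depth = CGSizeMake(3, 3);        // bigger offset = deeper 3D look

        // Back (extrusion) pass, drawn darker and offset behind the front pass.
        [text drawAtPoint:CGPointMake(origin.x + depth.width, origin.y + depth.height)
           withAttributes:@{ NSFontAttributeName : font,
                             NSForegroundColorAttributeName : [UIColor darkGrayColor] }];

        // Front pass.
        [text drawAtPoint:origin
           withAttributes:@{ NSFontAttributeName : font,
                             NSForegroundColorAttributeName : [UIColor whiteColor] }];
    }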