core-graphics

NSRect vs CGRect: y-axis inversion

半世苍凉 submitted on 2021-01-27 02:47:23
Question: So I'm trying to convert an NSRect to a CGRect. I used NSRectToCGRect(), which copies the values over fine but does not account for the flipped y-axis. The problem: a CGRect origin of (0,0) is the top left, while an NSRect origin of (0,0) is the bottom left. Thus an NSRect (0,0,100,100) box sits at the bottom left of your screen, while a CGRect (0,0,100,100) box sits at the top left of your screen. I have a hack that fixes the y-origin through basic math: fixedOriginY = screenHeight - NSRect
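A minimal Swift sketch of the flip the question describes, assuming the height of the enclosing coordinate space is known; the function name, containerHeight parameter, and the 800-point example are placeholders, not from the question:

import CoreGraphics

// Convert a rect from a bottom-left-origin space (AppKit's default) into a
// top-left-origin space of the same size. `containerHeight` is the height of
// the enclosing space, e.g. the screen or window height.
func flipped(_ rect: CGRect, containerHeight: CGFloat) -> CGRect {
    CGRect(x: rect.origin.x,
           y: containerHeight - rect.origin.y - rect.size.height,
           width: rect.size.width,
           height: rect.size.height)
}

// A (0, 0, 100, 100) rect at the bottom left of an 800-point-tall space
// lands at y = 700 in the top-left-origin space.
let bottomLeftRect = CGRect(x: 0, y: 0, width: 100, height: 100)
let topLeftRect = flipped(bottomLeftRect, containerHeight: 800)

Unlike the one-line hack quoted above, this also subtracts the rect's height, so the box covers the same on-screen area after the flip rather than only moving its origin.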

How to reconstruct grayscale image from intensity values?

空扰寡人 submitted on 2021-01-04 12:37:12
Question: It is commonly required to get the pixel data from an image or to reconstruct that image from pixel data. How can I take an image, convert it to an array of pixel values, and then reconstruct it from that pixel array in Swift using CoreGraphics? The quality of the answers to this question has been all over the place, so I'd like a canonical answer. Answer 1: Get pixel values as an array. This function can easily be extended to a color image. For simplicity I'm using grayscale, but I have commented the
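A minimal sketch of the usual CoreGraphics approach, drawing into an 8-bit grayscale CGContext and reading its backing buffer; the function names are placeholders and error handling is kept to a minimum:

import CoreGraphics

// One intensity value (0-255) per pixel, row by row.
func grayscalePixels(of image: CGImage) -> [UInt8]? {
    let width = image.width, height = image.height
    var pixels = [UInt8](repeating: 0, count: width * height)
    let drew: Bool = pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width, height: height,
                                      bitsPerComponent: 8, bytesPerRow: width,
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else { return false }
        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drew ? pixels : nil
}

// The inverse: wrap the intensity values in a grayscale context and snapshot it.
func grayscaleImage(from pixels: [UInt8], width: Int, height: Int) -> CGImage? {
    var data = pixels
    return data.withUnsafeMutableBytes { buffer in
        CGContext(data: buffer.baseAddress,
                  width: width, height: height,
                  bitsPerComponent: 8, bytesPerRow: width,
                  space: CGColorSpaceCreateDeviceGray(),
                  bitmapInfo: CGImageAlphaInfo.none.rawValue)?.makeImage()
    }
}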

CGPath copy lineJoin and miterLimit have no apparent effect

会有一股神秘感。 submitted on 2020-12-29 11:57:13
Question: I am offsetting a CGPath using copy(strokingWithWidth:lineCap:lineJoin:miterLimit:transform:). The problem is that the offset path introduces all kinds of jagged lines that seem to be the result of a miter join. Changing the miterLimit to 0 has no effect, and using a bevel line join also makes no difference. In this image there is the original path (before applying strokingWithWidth), an offset path using a miter join, and an offset path using a bevel join. Why doesn't using a bevel join have any
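A minimal sketch of the call under discussion, with placeholder geometry; whether the bevel join visibly changes the result depends on the input path, since nearly coincident points or very short segments can still produce spikes in the stroked copy:

import CoreGraphics

// Offset a path by stroking it, as described above. The stroked copy outlines
// both sides of the original path; lineJoin and miterLimit only control how
// corners of that outline are joined.
let path = CGMutablePath()
path.move(to: CGPoint(x: 0, y: 0))
path.addLine(to: CGPoint(x: 100, y: 0))
path.addLine(to: CGPoint(x: 100, y: 100))

let offsetPath = path.copy(strokingWithWidth: 20,
                           lineCap: .butt,
                           lineJoin: .bevel,   // .miter and .round are the alternatives
                           miterLimit: 0,
                           transform: .identity)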

How do I read an image from file for use with the `PyObjC Vision` framework?

喜夏-厌秋 submitted on 2020-12-14 23:50:14
Question: I am trying to detect and decode barcodes from a library of images. In most cases pyzbar simply works (see code here). However, in some cases my iPhone can decode the QR code but zbar fails. As I am on a Mac, I can make use of the same Vision framework that the iPhone uses, and there are even Python wrappers for the macOS ObjC frameworks. I tried using Quartz.CGImageSourceCreateWithURL but that returns None no matter what I pass it. def read_image(path): imageSrc = Quartz
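For reference, a minimal Swift sketch of the ImageIO calls that the Quartz PyObjC wrapper binds; the file path is a placeholder, and one common cause of a None/nil result is passing a plain path string where CGImageSourceCreateWithURL expects a file URL:

import Foundation
import CoreGraphics
import ImageIO

// Load a CGImage from disk via an image source. The first argument is a
// CFURL, not a path string.
let url = URL(fileURLWithPath: "/tmp/example.png") as CFURL
if let source = CGImageSourceCreateWithURL(url, nil),
   let image = CGImageSourceCreateImageAtIndex(source, 0, nil) {
    print("Loaded image: \(image.width) x \(image.height)")
}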

Comparing two CGPoints for equality: returning not equal for two values that print the same point?

扶醉桌前 submitted on 2020-12-01 11:48:10
Question: According to this question, using == and != should let you check for equality between two CGPoint values. However, the code below fails to consider two CGPoints equal even though they print the same value. What is the right way to check equality between CGPoint values? Code: let boardTilePos = boardLayer.convert(boardTile.position, from: boardTile.parent!) let shapeTilePos = boardLayer.convert(tile.position, from: tile.parent!) print("board tile pos: \(boardTilePos). active tile
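A minimal sketch of the usual explanation and workaround: CGPoint's == is an exact floating-point comparison, so two points that print identically can still differ beyond the displayed precision; the helper name and tolerance below are arbitrary choices for illustration:

import CoreGraphics

// Exact equality can fail after coordinate conversions because of tiny
// floating-point differences; compare within a tolerance instead.
func nearlyEqual(_ a: CGPoint, _ b: CGPoint, tolerance: CGFloat = 0.0001) -> Bool {
    abs(a.x - b.x) <= tolerance && abs(a.y - b.y) <= tolerance
}

let p1 = CGPoint(x: 10.0, y: 5.0)
let p2 = CGPoint(x: 10.0 + 1e-12, y: 5.0)
print(p1 == p2)            // false: the x values differ by 1e-12
print(nearlyEqual(p1, p2)) // true: within tolerance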