cvpixelbuffer

Bounding Box from VNDetectRectangleRequest is not correct size when used as child VC

Submitted by 帅比萌擦擦* on 2021-02-15 07:08:28
Question: I am trying to use VNDetectRectangleRequest from Apple's Vision framework to automatically grab a picture of a card. However, when I convert the points to draw the rectangle, it is misshapen and does not follow the rectangle as it should. I have been following this article pretty closely. One major difference is that I am embedding my CameraCaptureVC in another view controller, so that the card will be scanned only when it is inside this smaller window. Below is how I set up the camera VC in the parent VC.
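When the capture VC is embedded as a child, a frequent cause of misshapen boxes is converting Vision's normalized coordinates against the wrong view's bounds instead of the preview layer itself. The sketch below shows one hedged approach; `CameraCaptureVC` comes from the question, and the `previewLayer` property is an assumption about how the capture session is displayed:

```swift
import UIKit
import Vision
import AVFoundation

extension CameraCaptureVC {
    // Convert a Vision normalized rect (origin bottom-left, values 0...1)
    // into the coordinate space of THIS child VC's preview layer.
    // Converting against the parent view's bounds is a common cause of
    // misshapen boxes when the capture VC is embedded.
    func viewRect(for observation: VNRectangleObservation,
                  in previewLayer: AVCaptureVideoPreviewLayer) -> CGRect {
        // Flip the Y axis: Vision's origin is bottom-left, UIKit's is top-left.
        let box = observation.boundingBox
        let flipped = CGRect(x: box.origin.x,
                             y: 1 - box.origin.y - box.height,
                             width: box.width,
                             height: box.height)
        // Let the preview layer account for its own videoGravity and bounds,
        // rather than scaling by the parent view's frame by hand.
        return previewLayer.layerRectConverted(fromMetadataOutputRect: flipped)
    }
}
```

`layerRectConverted(fromMetadataOutputRect:)` also handles aspect-fill cropping, which manual width/height multiplication does not.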

How do I convert a CVPixelBuffer / CVImageBuffer to Data?

Submitted by 五迷三道 on 2021-02-10 15:57:14
Question: My camera app captures a photo, enhances it in a certain way, and saves it. To do so, I get the input image from the camera in the form of a CVPixelBuffer (wrapped in a CMSampleBuffer). I perform some modifications on the pixel buffer, and I then want to convert it to a Data object. How do I do this? Note that I don't want to convert the pixel buffer / image buffer to a UIImage or CGImage, since those don't have metadata (like EXIF). I need a Data object. How do I get one from a CVPixelBuffer?
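One way to get the raw bytes out is to lock the buffer and copy its base address; note this yields unencoded pixel data, not a JPEG/HEIC container, so EXIF would still have to be attached separately (e.g. via CGImageDestination). A minimal sketch, assuming a single-plane buffer:

```swift
import CoreVideo
import Foundation

// Copy a (single-plane) pixel buffer's backing bytes into a Data object.
// The result is raw pixel data laid out with the buffer's bytesPerRow,
// which may include row padding beyond width * bytesPerPixel.
func data(from pixelBuffer: CVPixelBuffer) -> Data? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return nil }
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    return Data(bytes: base, count: bytesPerRow * height)
}
```

For planar formats (e.g. biplanar YCbCr), each plane would need to be copied separately with `CVPixelBufferGetBaseAddressOfPlane`.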

Converting TrueDepth data to grayscale image produces distorted image

Submitted by 梦想的初衷 on 2021-01-29 07:37:35
Question: I'm getting the depth data from the TrueDepth camera and converting it to a grayscale image. (I realize I could pass the AVDepthData to a CIImage constructor; however, for testing purposes, I want to make sure my array is populated correctly, so manually constructing an image would ensure that is the case.) I notice that when I try to convert the grayscale image, I get weird results. Namely, the image appears in the top half, and the bottom half is distorted (sometimes showing the
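A "top half fine, bottom half distorted" result is a classic symptom of reading the depth map as a tightly packed `width * height` array while the buffer's rows are actually padded. A hedged sketch that reads the values row by row, honoring `bytesPerRow`:

```swift
import AVFoundation
import CoreVideo

// Extract DisparityFloat32 values from an AVDepthData map, honoring
// bytesPerRow so padded rows don't shift the bottom of the image.
func depthValues(from depthData: AVDepthData) -> [Float] {
    let converted = depthData.converting(
        toDepthDataType: kCVPixelFormatType_DisparityFloat32)
    let buffer = converted.depthDataMap
    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let bytesPerRow = CVPixelBufferGetBytesPerRow(buffer)
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return [] }

    var values = [Float]()
    values.reserveCapacity(width * height)
    for row in 0..<height {
        // Advance by bytesPerRow, NOT by width * MemoryLayout<Float32>.size:
        // rows may be padded, which progressively misaligns a naive copy.
        let rowPtr = (base + row * bytesPerRow)
            .assumingMemoryBound(to: Float32.self)
        for col in 0..<width {
            values.append(rowPtr[col])
        }
    }
    return values
}
```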

Replace Part of Pixel Buffer with White Pixels in iOS

Submitted by 天大地大妈咪最大 on 2020-01-22 12:53:26
Question: I am using the iPhone camera to capture live video and feeding the pixel buffer to a network that does some object recognition. Here is the relevant code (I won't post the code for setting up the AVCaptureSession etc., as this is pretty standard):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    OSType
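Overwriting a region of a BGRA buffer with white can be done directly on the locked base address, since every channel of opaque white is 0xFF. A minimal sketch, assuming the capture output is configured for kCVPixelFormatType_32BGRA and `rect` lies within the buffer:

```swift
import CoreVideo
import CoreGraphics
import Foundation

// Fill a rectangular region of a kCVPixelFormatType_32BGRA pixel buffer
// with opaque white by memset-ing the covered bytes of each row.
func fillWhite(_ pixelBuffer: CVPixelBuffer, rect: CGRect) {
    CVPixelBufferLockBaseAddress(pixelBuffer, [])
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, []) }

    guard let base = CVPixelBufferGetBaseAddress(pixelBuffer) else { return }
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    let x = Int(rect.minX)
    let width = Int(rect.width)
    for row in Int(rect.minY)..<Int(rect.maxY) {
        // 4 bytes per BGRA pixel; 0xFF in every byte is opaque white,
        // so one memset per row covers the whole horizontal span.
        memset(base + row * bytesPerRow + x * 4, 0xFF, width * 4)
    }
}
```

This only works because white is uniform across channels; any other color would need a per-pixel write loop.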

Convert ARFrame's captured image to UIImage orientation issue

Submitted by 安稳与你 on 2020-01-13 11:12:31
Question: I want to detect a ball and have an AR model interact with it. I used OpenCV for ball detection and send the center of the ball, which I can use in hitTest to get coordinates in sceneView. I have been converting the CVPixelBuffer to a UIImage using the following function:

static func convertToUIImage(buffer: CVPixelBuffer) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: buffer)
    let temporaryContext = CIContext(options: nil)
    if let temporaryImage = temporaryContext.createCGImage(ciImage, from: CGRect(x: 0, y: 0,
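ARFrame's `capturedImage` is delivered in the sensor's landscape orientation regardless of how the device is held, so a UIImage built from it straight will appear rotated in a portrait UI. One hedged way to account for this is to tag the UIImage with an orientation when constructing it (the portrait-to-`.right` mapping below is an assumption about the app's supported orientations):

```swift
import ARKit
import UIKit

// Build a UIImage from ARFrame.capturedImage, tagging it with an
// orientation so coordinates from OpenCV line up with the screen.
func image(from frame: ARFrame,
           deviceOrientation: UIDeviceOrientation) -> UIImage? {
    let ciImage = CIImage(cvPixelBuffer: frame.capturedImage)
    let context = CIContext(options: nil)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else {
        return nil
    }
    // The capture is always landscape; .right rotates it for a portrait
    // UI, while landscape UIs can use the buffer as-is.
    let orientation: UIImage.Orientation =
        (deviceOrientation == .portrait) ? .right : .up
    return UIImage(cgImage: cgImage, scale: 1, orientation: orientation)
}
```

Note the orientation tag only affects display; any pixel coordinates computed by OpenCV on the raw buffer still need the same rotation applied before being passed to `hitTest`.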

How to draw text into CIImage?

Submitted by 牧云@^-^@ on 2020-01-06 06:52:13
Question: How do I draw text into a CIImage (or maybe into a CVPixelBuffer, but I guess it is easier to add text to a CIImage), not into a UIImage? I record video (an .mp4 file) using AVAssetWriter and CMSampleBuffer (from the video and audio inputs). While recording, I want to add text to the video frames. I'm already converting the CMSampleBuffer to a CIImage; now I need to somehow add text to the CIImage and convert it back to a CVPixelBuffer. I didn't really find any simple examples in Swift of how to add (draw) text to a CIImage or add anything
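Core Image can generate the text itself via the built-in CITextImageGenerator filter (iOS 11+) and composite it over the frame, which avoids a round trip through UIKit. A minimal sketch; the font name, size, and offset are placeholder choices:

```swift
import CoreImage
import UIKit

// Render a text overlay as a CIImage and composite it over a video
// frame; the result can then be rendered back into a CVPixelBuffer
// with CIContext.render(_:to:).
func overlayText(_ text: String, on frame: CIImage) -> CIImage {
    guard let filter = CIFilter(name: "CITextImageGenerator") else {
        return frame
    }
    filter.setValue(text, forKey: "inputText")
    filter.setValue("HelveticaNeue", forKey: "inputFontName")
    filter.setValue(36, forKey: "inputFontSize")
    filter.setValue(UIScreen.main.scale, forKey: "inputScaleFactor")

    guard let textImage = filter.outputImage else { return frame }
    // Move the text away from the bottom-left corner, then draw it on top.
    let positioned = textImage.transformed(
        by: CGAffineTransform(translationX: 20, y: 20))
    return positioned.composited(over: frame)
}
```

After compositing, `ciContext.render(result, to: pixelBuffer)` writes the frame back into a CVPixelBuffer for the AVAssetWriter input.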