image-processing

YCbCr Video Input STM32F746

爷,独闯天下 submitted on 2020-05-23 11:43:22
Question: I am working on an STM32F746-based custom board that integrates an LCD and an ADV7180 video decoder IC. I configured the ADV7180 to run in free-run mode and I am getting the camera data into a specified buffer using DCMI. I am trying to convert the YCbCr 4:2:2 data to RGB data. I am receiving the line events, and from the line events I execute the piece of code below to convert the data to RGB and then load it to the LCD using ARGB8888. LCD_FRAME_BUFFER 0xC0000000 LCD_FRAME_BUFFER_LAYER1
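For reference, below is a minimal sketch (in Python/NumPy, not the MCU C code) of the per-line YCbCr 4:2:2 to ARGB8888 conversion math. It assumes BT.601 studio-range levels and a Y0 Cb Y1 Cr byte order; the actual byte order and levels depend on how the ADV7180 is configured, so treat the function name and constants as assumptions to be verified before porting.

    import numpy as np

    def yuyv_line_to_argb8888(line_bytes, width):
        """Convert one YCbCr 4:2:2 line (assumed Y0 Cb Y1 Cr order) to ARGB8888 words.

        BT.601 studio-range levels are assumed (Y: 16-235, Cb/Cr: 16-240); adjust the
        constants if the decoder is configured differently.
        """
        data = np.frombuffer(line_bytes, dtype=np.uint8).astype(np.int32)
        y  = data[0::2][:width]                  # one luma sample per pixel
        cb = np.repeat(data[1::4], 2)[:width]    # each Cb is shared by two pixels
        cr = np.repeat(data[3::4], 2)[:width]    # each Cr is shared by two pixels

        c, d, e = y - 16, cb - 128, cr - 128
        r = np.clip((298 * c + 409 * e + 128) >> 8, 0, 255).astype(np.uint32)
        g = np.clip((298 * c - 100 * d - 208 * e + 128) >> 8, 0, 255).astype(np.uint32)
        b = np.clip((298 * c + 516 * d + 128) >> 8, 0, 255).astype(np.uint32)

        # Pack as 0xAARRGGBB words, the layout an ARGB8888 layer expects.
        return np.uint32(0xFF000000) | (r << 16) | (g << 8) | b

The fixed-point form (integer multiplies plus a right shift by 8) translates directly to C inside the line-event handler without any floating point.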

How to extract only characters from an image?

这一生的挚爱 submitted on 2020-05-23 08:51:11
Question: I have this type of image from which I only want to extract the characters. After binarization I am getting this image:

    img = cv2.imread('the_image.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 11, 9)

Then I find contours on this image:

    (im2, cnts, _) = cv2.findContours(thresh.copy(), cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
    for contour in cnts[:2000
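A hedged follow-on sketch: once contours are found, filtering them by size and aspect ratio is a common way to keep character-like blobs and drop noise and borders. It uses an inverted threshold (unlike the THRESH_BINARY above) so dark characters become the white foreground that findContours expects; the size bounds are assumptions to tune for the actual image.

    import cv2

    img = cv2.imread('the_image.jpg')                     # same input as above
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    thresh = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 9)
    # OpenCV 4.x returns (contours, hierarchy); 3.x returns three values.
    cnts, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    boxes = img.copy()
    for c in cnts:
        x, y, w, h = cv2.boundingRect(c)
        # Keep blobs roughly the size and shape of a character (assumed bounds).
        if 8 < h < 60 and 2 < w < 60 and 0.1 < w / h < 2.5:
            cv2.rectangle(boxes, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv2.imwrite('characters_boxes.jpg', boxes)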

Calculating sharpness of an image

感情迁移 submitted on 2020-05-23 03:29:26
Question: I found on the internet that the Laplacian method is quite a good technique to compute the sharpness of an image. I was trying to implement it in OpenCV 2.4.10. How can I get the sharpness measure after applying the Laplacian function? Below is the code:

    Mat src_gray, dst;
    int kernel_size = 3;
    int scale = 1;
    int delta = 0;
    int ddepth = CV_16S;

    GaussianBlur( src, src, Size(3,3), 0, 0, BORDER_DEFAULT );
    /// Convert the image to grayscale
    cvtColor( src, src_gray, CV_RGB2GRAY );
    /// Apply Laplace
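The asker's code is C++ for OpenCV 2.4, but the usual way to turn the Laplacian into a single sharpness number is to take the variance of its response. A minimal Python/OpenCV sketch of that idea (not the asker's exact pipeline; the file name is a placeholder):

    import cv2

    img = cv2.imread('test.jpg', cv2.IMREAD_GRAYSCALE)   # hypothetical file name
    blur = cv2.GaussianBlur(img, (3, 3), 0)
    lap = cv2.Laplacian(blur, cv2.CV_64F, ksize=3)
    sharpness = lap.var()            # variance of the Laplacian response
    print(sharpness)                 # higher value = sharper image

What counts as "sharp enough" is scene dependent, so the threshold has to be calibrated on known sharp and blurry samples of the same kind of image.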

Image segmentation using segment seeds watershed in android

旧巷老猫 submitted on 2020-05-20 10:53:08
Question: I am developing a mobile application that performs segmentation of wound images. I would like to use watershed segmentation seeded by a region-of-interest mark made by the user: the user draws around the wound region so that those coordinates can be passed as the watershed seeds. In this link I saw how this is done using mouse clicks; I would like someone to help me do this for Android, because I have no idea how to do it. In the case of my
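The question targets Android, but the seeding logic is the same in any OpenCV binding. A minimal Python sketch of marker-controlled watershed, where the stroke coordinates are a hypothetical input handed over from the drawing UI:

    import cv2
    import numpy as np

    def watershed_from_strokes(img_bgr, wound_points, background_points):
        """Marker-controlled watershed seeded from user strokes.

        wound_points / background_points are lists of (x, y) pixels collected from
        the user's drawing on screen (hypothetical input from the UI layer).
        """
        seeds = np.zeros(img_bgr.shape[:2], dtype=np.uint8)
        for x, y in background_points:
            cv2.circle(seeds, (x, y), 5, 1, -1)       # label 1 = background seed
        for x, y in wound_points:
            cv2.circle(seeds, (x, y), 5, 2, -1)       # label 2 = wound seed
        markers = seeds.astype(np.int32)              # watershed needs 32-bit markers
        cv2.watershed(img_bgr, markers)               # fills labels in place, boundaries become -1
        return (markers == 2).astype(np.uint8) * 255  # binary mask of the wound region

On Android the equivalent calls are available through OpenCV's Java bindings (Imgproc.watershed), with the touch coordinates collected from the custom drawing view.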

Red dot coordinates detection

…衆ロ難τιáo~ submitted on 2020-05-17 07:46:07
Question: I'm trying to follow the movement of a part using red dots. I tried with white dots and thresholding before, but there is too much reflection from the smartphone I'm using. The plan is to recognize a dot as a contour, find its center and fill an array with the coordinates of all contour centers for further calculation. The code is posted below; it recognizes the correct number of dots, but I get a division-by-zero error. Does anyone know what I'm doing wrong? Image: https://imgur.com/a/GLXGCPP
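The division by zero typically comes from computing a contour centroid as m10/m00 when the zeroth moment is zero (a degenerate, zero-area contour); guarding that case is the usual fix. A hedged sketch with assumed HSV thresholds for red (OpenCV 4.x findContours signature assumed):

    import cv2

    def red_dot_centers(img_bgr):
        """Find centers of red dots; skip degenerate contours to avoid dividing by zero."""
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
        # Red wraps around the hue axis, so combine two ranges (thresholds are assumptions).
        mask1 = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255))
        mask2 = cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
        mask = cv2.bitwise_or(mask1, mask2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        centers = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] == 0:        # tiny/degenerate contour: centroid is undefined
                continue
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return centers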

how to detect irregular circles in python with opencv

时间秒杀一切 submitted on 2020-05-17 07:42:55
Question: I want to create a vision system for the detection of a defect in SMD capacitors. The defect is called a "pinhole": small holes in the surface of the chip that are generated at the time of construction. My objective is to create an algorithm that is able to detect these holes and, with this, discard the chips that have this defect. For the moment I have created two codes: the first one converts the original image to a binary image so that I can clear the circles, the code and the
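Not the asker's code, but one common way to continue once the image is binarized: look for small dark blobs on the chip surface and reject the part if any are found. The Otsu threshold and the area bounds below are assumptions to be tuned on real images.

    import cv2

    def find_pinholes(img_bgr, min_area=5, max_area=200):
        """Flag small dark blobs (candidate pinholes) on the chip surface."""
        gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
        # Dark holes on a brighter body: inverted Otsu threshold makes holes white blobs.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        pinholes = [c for c in contours if min_area < cv2.contourArea(c) < max_area]
        return pinholes    # non-empty list -> reject the chip

In practice the search should be restricted to the capacitor body (for example by masking with the chip's own contour first), so that background texture is not counted as a defect.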

how to crop an area of an image inside a rectangle or a square?

梦想与她 submitted on 2020-05-17 07:07:34
Question: First of all I take the picture and then I draw a rectangle over it. Now I just want to crop the image inside the rectangle. I tried drawing contours, but that didn't work out in my case, and I am stuck on it.

    import cv2
    import numpy as np
    img = cv2.imread("C:/Users/hp/Desktop/segmentation/abc.jpg", 0)
    h, w = img.shape[:2]
    kernel = np.ones((15,15),np.uint8)
    e = cv2.erode(img,kernel,iterations = 2)
    d = cv2.dilate(e,kernel,iterations = 1)
    ret, th = cv2.threshold(d, 150, 255, cv2.THRESH_BINARY_INV)
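If the rectangle was drawn explicitly (its corner coordinates are known), the crop is just NumPy slicing; if it has to be recovered from the thresholded image, the bounding box of the largest contour works. A minimal sketch of both, assuming the coordinates or the binary mask come from steps like those above:

    import cv2

    def crop_rect(img, x1, y1, x2, y2):
        """Crop the region inside a known rectangle (corners come from however it was drawn)."""
        x1, x2 = sorted((x1, x2))
        y1, y2 = sorted((y1, y2))
        return img[y1:y2, x1:x2]

    def crop_largest_contour(binary_img, img):
        """Crop using the bounding box of the largest contour found in a binary mask."""
        contours, _ = cv2.findContours(binary_img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        return img[y:y + h, x:x + w]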

Merging perspective corrected image with transparent background template image using PILLOW [PIL, Python]

陌路散爱 submitted on 2020-05-17 03:08:09
Question: Problem: I have multiple book cover images. I made a "book"-like template with a 3D perspective. All I have to do now is take each of the book cover images, correct its perspective (the correction is always constant, because the template never changes) and merge my perspective-corrected image with the template (background/canvas). For easier understanding, here is an example created in Adobe Photoshop: with the red arrows I tried to show the vertex points of the original cover image (before
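A hedged sketch of one way to do this with Pillow: solve for the 8 perspective coefficients that map the template's book-front quadrilateral back to the flat cover, warp the cover with Image.transform(..., Image.PERSPECTIVE, ...), and alpha-composite the layers. The file names and corner coordinates are placeholders, and the compositing order may need to be swapped depending on which layer should sit on top.

    import numpy as np
    from PIL import Image

    def find_coeffs(target_quad, source_corners):
        """Solve for the 8 PIL PERSPECTIVE coefficients (output coords -> input coords)."""
        rows = []
        for (x, y), (X, Y) in zip(target_quad, source_corners):
            rows.append([x, y, 1, 0, 0, 0, -X * x, -X * y])
            rows.append([0, 0, 0, x, y, 1, -Y * x, -Y * y])
        A = np.array(rows, dtype=np.float64)
        b = np.array(source_corners, dtype=np.float64).reshape(8)
        return np.linalg.solve(A, b).tolist()

    template = Image.open("book_template.png").convert("RGBA")   # hypothetical file names
    cover = Image.open("cover.jpg").convert("RGBA")
    w, h = cover.size

    # Corners of the book-front area in the template, in the same order as the
    # cover corners below (top-left, top-right, bottom-right, bottom-left); measured once.
    target_quad = [(120, 60), (430, 95), (420, 520), (110, 480)]
    source_corners = [(0, 0), (w, 0), (w, h), (0, h)]

    coeffs = find_coeffs(target_quad, source_corners)
    warped = cover.transform(template.size, Image.PERSPECTIVE, coeffs, Image.BICUBIC)
    # Warped cover over the template; swap the arguments if the template should overlay the cover.
    result = Image.alpha_composite(template, warped)
    result.save("merged.png")

Because the template never changes, the coefficients only need to be computed once and can then be reused for every cover.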