image-processing

Split an image on the basis of color

巧了我就是萌 · Submitted 2021-01-01 08:30:48
Question: I have obtained an image after applying k-means with clusters = 3. Now I want to obtain 3 separate images based on the colours produced by k-means. For example, consider the attached image: I need one image containing only the blue square, one with the letter v, and one with just the background. Is there any way to do that using OpenCV and Python? Answer 1: The most general and simplest way to do it is to use the three unique gray colors for each region. (Although I…

How to extract account number in cheque/check images

倖福魔咒の · Submitted 2020-12-31 17:53:05
Question: I am working on a task to extract the account number from cheque images. My current approach has two steps: (1) localize the account-number digits (printed digits); (2) perform OCR using an OCR library such as Tesseract OCR. The second step is straightforward, assuming we have properly localized the account-number digits. I tried to localize the digits using OpenCV contour methods and MSER (Maximally Stable Extremal Regions) but didn't get useful results. It's difficult to…

How to map optical flow field (float) to pixel data (char) for image warping?

眉间皱痕 · Submitted 2020-12-30 20:40:44
Question: I've been playing with the optical flow functions in OpenCV and am stuck. I've successfully generated X and Y optical flow fields/maps using the Farneback method, but I don't know how to apply these to the input image coordinates to warp the images. The resulting X and Y fields are of 32-bit float type (0-1.0), but how does this translate to the coordinates of the input and output images? For example, 1.0 of what? The width of the image? The difference between the two? Plus, I'm not sure what…

How to extract the layers from an image (jpg,png,etc)

百般思念 · Submitted 2020-12-30 07:00:46
Question: Given an image such as the CakePHP logo, how can this image be converted back into a PSD with layers? As a human, I can easily work out how to translate it back into a layered PSD: I can tell that the background is a circular shape with star edges, so the circular star part is at the back, the cake image sits on top of it, and the words "CakePHP" are over both. I can use Photoshop/GIMP tools to separate these into three images and fill in the areas in between.

How to find branch point from binary skeletonize image

末鹿安然 · Submitted 2020-12-30 03:49:00
Question: I use Python OpenCV to skeletonize an image like this, and I want to find the branch points of the skeleton, but I have no idea how to do it. Any thoughts? Answer 1: The main idea here is to look at the neighborhood. You can use an 8-connected neighborhood for every pixel ([1 1 1; 1 1 1; 1 1 1], where the center is the pixel whose neighborhood is being explored). At every branch point the degree of the pixel will be > 2, while regular pixels will have a degree of 2 (i.e., connected to 2 pixels in…