image-processing

Difficulty in detecting the outer circle with cv2.HoughCircles

半城伤御伤魂 submitted on 2020-02-24 11:51:18
Question: I am trying to detect the outer boundary of the circular object in the images below. I tried OpenCV's Hough Circle transform, but the code does not work for every image. I also tried adjusting parameters such as minRadius and maxRadius in HoughCircles, but it still fails on some images. The aim is to detect the object in the image and crop it. Expected output: Source code: import imutils import cv2 import numpy as np from matplotlib import pyplot as plt image = cv2.imread("path to the image i have
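A minimal sketch of this kind of detect-and-crop pipeline, assuming a placeholder image path and hand-picked HoughCircles parameters (not the asker's actual values):

# Sketch (assumed path and parameters): detect the outer circle and crop around it.
import cv2
import numpy as np

image = cv2.imread("input.jpg")               # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)                # smoothing usually helps HoughCircles

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=gray.shape[0] // 2,
    param1=100, param2=30,                    # Canny / accumulator thresholds
    minRadius=0, maxRadius=0)                 # 0 lets OpenCV pick the radius range

if circles is not None:
    # take the largest detected circle and crop a square around it
    x, y, r = np.round(circles[0, np.argmax(circles[0, :, 2])]).astype(int)
    x0, y0 = max(x - r, 0), max(y - r, 0)
    crop = image[y0:y + r, x0:x + r]
    cv2.imwrite("cropped.jpg", crop)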

Identify text data in image to read mm/dd, description and amount using opencv python

送分小仙女□ submitted on 2020-02-24 11:15:07
Question: import re import cv2 import pytesseract from pytesseract import Output from PIL import Image from pytesseract import image_to_string img = cv2.imread('/home/cybermakarov/Desktop/1.Chase Bank-page-002.jpg') d = pytesseract.image_to_data(img, output_type=Output.DICT) keys = list(d.keys()) date_pattern = '^(0[1-9]|[12]|[1-9]|3[02])/' Description_pattern='([0-9]+\/[0-9]+)|([0-9]+)|([0-9\,\.]+)' n_boxes = len(d['text']) for i in range(n_boxes): if int(d['conf'][i]) > 60: if re.match(description
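A minimal runnable sketch of the same idea, with an assumed image path and a simplified mm/dd pattern (the asker's regexes and threshold are only fragments above); it draws a box around each recognised word that matches the date pattern:

# Sketch (assumed path and pattern): box the words that look like mm/dd dates.
import re
import cv2
import pytesseract
from pytesseract import Output

img = cv2.imread('statement.jpg')             # placeholder path
d = pytesseract.image_to_data(img, output_type=Output.DICT)

date_pattern = r'^(0[1-9]|1[0-2])/(0[1-9]|[12][0-9]|3[01])$'   # mm/dd

for i in range(len(d['text'])):
    # conf may be a string ('-1' for non-word boxes), so compare as float
    if float(d['conf'][i]) > 60 and re.match(date_pattern, d['text'][i]):
        x, y, w, h = d['left'][i], d['top'][i], d['width'][i], d['height'][i]
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite('boxed.jpg', img)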

How to use apply_ufunc with numpy.digitize for each image along time dimension of xarray.DataArray?

雨燕双飞 submitted on 2020-02-24 05:44:20
Question: I've rephrased my earlier question substantially for clarity. Per Ryan's suggestion on a separate channel, numpy.digitize looks like the right tool for my goal. I have an xarray.DataArray with dimensions x, y, and time. I'm trying to puzzle out what values I should supply to the apply_ufunc function's 'input_core_dims' and 'output_core_dims' arguments in order to apply numpy.digitize to each image in the time series. Intuitively, I want the output dimensions to be ['time', 'x', 'y']. I think the
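A minimal sketch with invented dimensions and bin edges: because numpy.digitize is elementwise, it can be applied through apply_ufunc without declaring any core dimensions, and the output keeps the input's ['time', 'x', 'y'] layout:

# Sketch with made-up data: apply np.digitize to every time slice of a DataArray.
import numpy as np
import xarray as xr

da = xr.DataArray(np.random.rand(3, 4, 5), dims=('time', 'x', 'y'))
bins = np.array([0.25, 0.5, 0.75])

digitized = xr.apply_ufunc(
    np.digitize, da,
    kwargs={'bins': bins})   # elementwise, so no input/output core dims needed

print(digitized.dims)        # ('time', 'x', 'y')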

Remove black header section of image using Python OpenCV

*爱你&永不变心* submitted on 2020-02-22 07:45:08
Question: I need to remove the blackened sections in multiple parts of an image using Python OpenCV. I tried denoising, which doesn't give satisfactory results. E.g., I need to remove the blackened part in the table header (below image) and convert the header background to white with the contents in black. Can anyone help me with choosing the correct library or solution to overcome this? Answer 1: Here's a modified version of @eldesgraciado's approach to filter the dotted pattern using a morphological hit or miss
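The answer above is cut off; a rough sketch of a hit-or-miss based cleanup, with an assumed single-pixel kernel and placeholder file names rather than the answerer's actual kernels:

# Sketch (assumed kernel and paths): knock out an isolated-dot pattern with hit-or-miss.
import cv2
import numpy as np

img = cv2.imread('table.png', cv2.IMREAD_GRAYSCALE)     # placeholder path
_, bw = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Kernel that fires on a lone foreground pixel surrounded by background
# (1 = must be foreground, -1 = must be background, 0 = don't care).
kernel = np.array([[-1, -1, -1],
                   [-1,  1, -1],
                   [-1, -1, -1]], dtype=np.int32)

dots = cv2.morphologyEx(bw, cv2.MORPH_HITMISS, kernel)
cleaned = cv2.subtract(bw, dots)          # remove the detected dots
result = cv2.bitwise_not(cleaned)         # back to black content on white background
cv2.imwrite('cleaned.png', result)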

Why is the structuring element asymmetric in OpenCV?

99封情书 submitted on 2020-02-21 05:22:48
Question: Why is the structuring element asymmetric in OpenCV? cv2.getStructuringElement(cv2.MORPH_ELLIPSE, ksize=(4,4)) returns array([[0, 0, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]], dtype=uint8) Why isn't it array([[0, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]], dtype=uint8) instead? Odd-sized structuring elements are also asymmetric with respect to 90-degree rotations: array([[0, 0, 1, 0, 0], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [1, 1, 1, 1, 1], [0, 0, 1, 0, 0]], dtype=uint8) What's
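If a kernel that is symmetric under 90-degree rotations is what is actually needed, one workaround is to build the disc by hand from a distance test instead of calling getStructuringElement; a small sketch with an invented helper name:

# Sketch: build a rotation-symmetric disc-shaped kernel by hand.
import numpy as np

def symmetric_disc(ksize):
    """uint8 disc of size ksize x ksize, symmetric about its geometric centre."""
    r = (ksize - 1) / 2.0                   # centre may fall between pixels
    y, x = np.ogrid[:ksize, :ksize]
    return ((x - r) ** 2 + (y - r) ** 2 <= r ** 2 + 0.5).astype(np.uint8)

print(symmetric_disc(4))
# [[0 1 1 0]
#  [1 1 1 1]
#  [1 1 1 1]
#  [0 1 1 0]]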

Fingerprint scanner using camera of android device

99封情书 submitted on 2020-02-20 09:16:07
Question: I am trying to capture a finger image, scan it to extract the biometric fingerprint, and then finally send that image to a server. Basically I don't have experience with this kind of image processing, so I tried the Onyx SDK and it solved the problem, but it is a trial version. Now I need to know what processing steps are required to get a biometric image of a finger, such as cropping, inverting, contrasting, etc. Can anyone tell me the steps to undergo for image
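A rough illustration of the preprocessing steps mentioned (cropping, contrast enhancement, inversion/binarisation), with assumed paths and parameters; a real fingerprint pipeline would additionally need ridge enhancement and minutiae extraction:

# Sketch (assumed path/parameters): basic preprocessing of a camera-captured finger image.
import cv2

img = cv2.imread('finger.jpg', cv2.IMREAD_GRAYSCALE)    # placeholder path

# 1. Crop to a hypothetical region of interest around the fingertip.
h, w = img.shape
roi = img[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

# 2. Boost local contrast so the ridges stand out (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(roi)

# 3. Binarise and invert so ridges become white on black.
binary = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, blockSize=15, C=4)

cv2.imwrite('finger_processed.png', binary)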

Tensorflow: Simple 3D Convnet not learning

回眸只為那壹抹淺笑 submitted on 2020-02-19 06:56:45
Question: I am trying to create a simple 3D U-Net for image segmentation, just to learn how to use the layers. Therefore I do a 3D convolution with stride 2 and then a transposed deconvolution to get back to the same image size. I am also overfitting to a small set (the test set) just to see if my network is learning. I created the same net in Keras and it works just fine. Now I want to create it in TensorFlow, but I have been having trouble with it. The cost changes slightly but no matter what I do (reduce learning
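A minimal sketch of the architecture described (a stride-2 3D convolution followed by a transposed convolution back to the input size); the question targets the lower-level TensorFlow API, but tf.keras is used here for brevity, with invented input shapes:

# Sketch (invented shapes): stride-2 Conv3D, then a Conv3DTranspose that restores
# the original spatial size, as described in the question.
import tensorflow as tf

inputs = tf.keras.Input(shape=(32, 32, 32, 1))          # depth, height, width, channels

x = tf.keras.layers.Conv3D(16, kernel_size=3, strides=2,
                           padding='same', activation='relu')(inputs)      # 16x16x16
x = tf.keras.layers.Conv3DTranspose(16, kernel_size=3, strides=2,
                                    padding='same', activation='relu')(x)  # back to 32^3
outputs = tf.keras.layers.Conv3D(1, kernel_size=1, activation='sigmoid')(x)

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='binary_crossentropy')
model.summary()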