image-processing

Unable to use trained TensorFlow model

风格不统一 · Submitted on 2020-02-19 05:07:21

Question: I am new to deep learning and TensorFlow. I retrained a pretrained TensorFlow Inception v3 model, saved as saved_model.pb, to recognize different types of images, but I ran into trouble when I tried to use the file with the code below:

    with tf.Session() as sess:
        with tf.gfile.FastGFile("tensorflow/trained/saved_model.pb", 'rb') as f:
            graph_def = tf.GraphDef()
            tf.Graph.as_graph_def()
            graph_def.ParseFromString(f.read())
        g_in = tf.import_graph_def(graph_def)
        LOGDIR = '/log'
        train_writer = tf.summary.FileWriter(LOGDIR)
        train_writer.add…
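The snippet treats a SavedModel's saved_model.pb as a frozen GraphDef, which is why parsing fails, and the stray tf.Graph.as_graph_def() call does nothing. Below is a hedged sketch of the frozen-graph loading path under TensorFlow 1.x (the API the question uses); the file path and log directory are taken from the question, and the function name is illustrative:

```python
def inspect_frozen_graph(pb_path, logdir="/log"):
    """Load a *frozen* GraphDef .pb and write the graph for TensorBoard.

    Assumes TensorFlow 1.x, as in the question. Note that a SavedModel's
    saved_model.pb is NOT a frozen graph: it must be loaded with
    tf.saved_model.loader.load(sess, tags, export_dir) on the export
    *directory* instead, or ParseFromString will fail.
    """
    import tensorflow as tf  # TF 1.x API assumed

    with tf.Session() as sess:
        with tf.gfile.GFile(pb_path, "rb") as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")
        writer = tf.summary.FileWriter(logdir, sess.graph)
        writer.close()
```

If the model was exported with tf.saved_model (as the filename suggests), the export directory, not the .pb file, is what gets passed to the loader.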

Find the coordinates in an image where a specified colour is detected (Python)

北城以北 · Submitted on 2020-02-16 09:53:31

Question: I'm trying to make a program which takes in an image, scans it for a colour, let's say blue, and outputs the coordinates of every point in the image that has that colour.

Answer 1: In order to do so, you need a few pieces of information, including the height and width of the image in pixels, as well as the colormap of the image. I have done something similar to this before, and I used PIL (Pillow) to extract the color values of each individual pixel. Using this method, you…
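The answer mentions per-pixel access with PIL; a vectorized alternative is a sketch like the one below, assuming the image is already an RGB NumPy array (e.g. from np.array(Image.open(path))). The function name and tolerance parameter are illustrative:

```python
import numpy as np

def find_color(rgb, target, tol=0):
    """Return (row, col) coordinates of pixels matching `target` (R, G, B).

    `tol` allows a per-channel tolerance, since real photographs rarely
    contain one exact colour value.
    """
    diff = np.abs(rgb.astype(int) - np.asarray(target, dtype=int))
    return np.argwhere(diff.max(axis=-1) <= tol)

# Tiny synthetic example: one pure-blue pixel at row 2, column 3.
img = np.zeros((5, 5, 3), dtype=np.uint8)
img[2, 3] = (0, 0, 255)
print(find_color(img, (0, 0, 255)))  # [[2 3]]
```

np.argwhere returns (row, col) pairs, so swap them if (x, y) ordering is needed.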

Detect image orientation angle based on text direction

拜拜、爱过 · Submitted on 2020-02-14 10:20:40

Question: I am working on an OCR task to extract information from multiple ID documents. One challenge is the orientation of the scanned image: I need to fix the orientation of scans of PAN, Aadhaar, driving licence, or any other ID proof. I have already tried all the approaches suggested on Stack Overflow and other forums, such as OpenCV minAreaRect, Hough line transforms, FFT, homography, and Tesseract OSD with psm 0. None of them work. The logic should return the angle of the text direction - 0, 90…

Hough circle detection accuracy very low

筅森魡賤 · Submitted on 2020-02-14 00:42:55

Question: I am trying to detect a circular shape in an image in which the circle appears very well defined. I do realize that part of the circle is missing, but from what I've read about the Hough transform, that should not cause the problem I'm experiencing.

Input: (image) Output: (image)

Code:

    // Read the image
    Mat src = Highgui.imread("input.png");
    // Convert it to gray
    Mat src_gray = new Mat();
    Imgproc.cvtColor(src, src_gray, Imgproc.COLOR_BGR2GRAY);
    // Reduce the noise so we avoid false circle…
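The question's code is Java OpenCV; to show why a missing arc should not break detection, here is a toy pure-NumPy Hough accumulator for a single, known radius (the function name and the 120-step angular sampling are illustrative choices, not OpenCV's implementation). Each edge pixel votes for every centre that could explain it, so three-quarters of a circle still produces a clear peak at the true centre:

```python
import numpy as np

def hough_circle_center(edges, radius):
    """Vote for circle centres at one fixed radius; return the peak (row, col)."""
    acc = np.zeros(edges.shape, dtype=int)
    thetas = np.linspace(0, 2 * np.pi, 120, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cy = np.rint(y - radius * np.sin(thetas)).astype(int)
        cx = np.rint(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return np.unravel_index(acc.argmax(), acc.shape)

# Edge map with only three-quarters of a circle (radius 10, centre (20, 20)).
edges = np.zeros((50, 50))
for t in np.linspace(0, 1.5 * np.pi, 200):
    edges[int(round(20 + 10 * np.sin(t))), int(round(20 + 10 * np.cos(t)))] = 1
print(hough_circle_center(edges, 10))  # close to (20, 20)
```

When HoughCircles misses a clean circle in practice, the usual suspects are the dp/param1/param2 settings and the min/max radius bounds rather than the missing arc.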

Background image cleaning for OCR

和自甴很熟 · Submitted on 2020-02-12 01:55:52

Question: Using tesseract-OCR, I am trying to extract text from images with a red background like the following. I have trouble extracting the text in boxes B and D because of the vertical lines. How can I clean the background like this: input: (image) output: (image) Any ideas? The image without boxes: (image)

Answer 1: Here are two methods to clean the image using Python OpenCV. Method #1: NumPy thresholding. Since the vertical lines, horizontal lines, and the background are all red, we can take advantage of this and use NumPy…
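The NumPy-thresholding idea from the answer can be sketched as follows, assuming an RGB channel order and a synthetic image in place of the question's scan; the threshold values (150/100) are illustrative, not tuned:

```python
import numpy as np

# Synthetic stand-in for the question's image: red background, black "text".
img = np.full((10, 10, 3), (200, 40, 40), dtype=np.uint8)  # red background
img[4:6, 4:6] = 0                                          # black text block

# Pixels where red clearly dominates are background or line pixels,
# so paint them white, leaving the dark text untouched for OCR.
r = img[..., 0].astype(int)
g = img[..., 1].astype(int)
b = img[..., 2].astype(int)
red_mask = (r > 150) & (g < 100) & (b < 100)

clean = img.copy()
clean[red_mask] = 255
print(clean[0, 0], clean[4, 4])  # [255 255 255] [0 0 0]
```

Because the red lines satisfy the same mask as the red background, this removes the vertical lines through boxes B and D in one pass.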

Extract individual field from table image to Excel with OCR

倾然丶 夕夏残阳落幕 · Submitted on 2020-02-11 19:40:31

Question: I have scanned images containing tables, as shown in this image: (image) I am trying to extract each box separately and perform OCR, but when I try to detect the horizontal and vertical lines and then detect the boxes, it returns the following image: (image) And when I apply other transformations to isolate the text (erode and dilate), some remains of the lines still come through with the text, like below: (image) I cannot detect the text alone to perform OCR, and proper bounding boxes aren't being generated, like below: (image) I cannot…
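The usual fix for line remnants is to detect the ruling lines as long runs of ink and subtract them before OCR. A real pipeline would use OpenCV morphology with long thin kernels; the sketch below shows the same idea in plain NumPy on a binarized image (ink = 1), with an illustrative function name and fill threshold:

```python
import numpy as np

def remove_grid_lines(binary, fill=0.9):
    """Zero out rows and columns that are almost entirely ink (ruling lines).

    `binary` is a 2-D array with ink = 1. Rows/columns whose mean ink
    fraction exceeds `fill` are treated as table rules and erased,
    leaving only the sparse text pixels for OCR.
    """
    out = binary.copy()
    out[binary.mean(axis=1) > fill, :] = 0   # horizontal rules
    out[:, binary.mean(axis=0) > fill] = 0   # vertical rules
    return out

# A 20x20 "table": one horizontal rule, one vertical rule, one text pixel.
table = np.zeros((20, 20))
table[5, :] = 1
table[:, 8] = 1
table[10, 12] = 1
cleaned = remove_grid_lines(table)
print(int(cleaned.sum()))  # 1 -- only the text pixel survives
```

For skewed scans the morphological version (cv2.getStructuringElement with a long horizontal or vertical kernel, then cv2.morphologyEx open and subtract) is more robust, since lines are rarely perfectly axis-aligned.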