image-recognition

Use Processing and OpenCV for Android development?

Submitted by 假如想象 on 2019-12-06 07:25:42
Does anyone know if you can use the OpenCV library within the Processing Android template? I want to do some image recognition/comparison for these devices within Processing. What are the means to do so, and does anyone have an example of source code for it? Thanks!

Answer (jesses.co.tt): The OpenCV library that you would normally use with Processing ( http://ubaa.net/shared/processing/opencv/ ) will not work for Android. However, Android has its own port of OpenCV, which is available here: http://opencv.org/platforms/android.html . That said, it is a little tricky to set up vis-a-vis Processing,

Looking for a little Python machine learning advice

Submitted by 大城市里の小女人 on 2019-12-06 06:48:15
Question: I'm interested in having a dabble with Python and machine learning/automatic data entry. However, as my research has progressed I realise there are so many different techniques, each with their own strengths. I've decided I might get further if I learn in the opposite direction, i.e. pick a problem/task and learn by solving/completing it. I occasionally have to process invoices that arrive by fax, and I'm hoping to make a program that can enter these for me once I've scanned them in. The faxes
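A minimal sketch of the OCR half of that task, assuming the scans are saved as image files and that Tesseract plus the pytesseract wrapper are installed (the file name invoice_001.png is just a placeholder):

    import pytesseract
    from PIL import Image

    # Load one scanned fax page (placeholder file name) and convert to grayscale,
    # which usually helps Tesseract on noisy fax scans.
    page = Image.open('invoice_001.png').convert('L')

    # Run OCR and print the raw text; the fields you care about (amounts, dates,
    # invoice numbers) would still have to be parsed out of this string afterwards.
    text = pytesseract.image_to_string(page)
    print(text)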

Image recognition for text in React Native

Submitted by 大兔子大兔子 on 2019-12-06 05:56:10
Question: This may be a crazy question, but I've seen it done in apps. Is there any kind of API that can be used to recognize the text within an image (the way Chase recognizes the numbers on a check)? Or is there an API that can be used to search (let's say Google) for information based on an image? An example would be: if I took a picture of a business logo, Google would search for a business listing that fits that logo. I know it's a crazy question, but I want to know if it can even be done. If it can, can it be

Comparing images programmatically - lib or class [closed]

Submitted by 老子叫甜甜 on 2019-12-06 03:33:40
Question: My objective is to supply 2 image files and get a true/false response as to whether these 2 files could be the same (within an
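One inexpensive way to get a could-be-the-same verdict is to compare the images perceptually rather than byte-for-byte. A rough sketch using OpenCV intensity histograms; the 256x256 size and the 0.9 threshold are arbitrary assumptions to tune, and this only catches fairly similar images, not crops or heavy edits:

    import cv2

    def probably_same(path_a, path_b, threshold=0.9):
        # Read both images as grayscale; resizing to a common size keeps the
        # comparison independent of resolution differences.
        a = cv2.resize(cv2.imread(path_a, cv2.IMREAD_GRAYSCALE), (256, 256))
        b = cv2.resize(cv2.imread(path_b, cv2.IMREAD_GRAYSCALE), (256, 256))

        # Compare normalized intensity histograms with correlation; 1.0 means
        # identical distributions, values near 0 mean very different images.
        hist_a = cv2.calcHist([a], [0], None, [256], [0, 256])
        hist_b = cv2.calcHist([b], [0], None, [256], [0, 256])
        cv2.normalize(hist_a, hist_a)
        cv2.normalize(hist_b, hist_b)
        score = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)
        return score >= threshold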

How do I find the connected components in a binary image?

Submitted by 两盒软妹~` on 2019-12-06 02:14:55
Question: I am looking for an algorithm to find all connected components in my binary image. If we think of the image like a matrix, it looks like:

    [ 0 0 0 0 ...
      0 0 0 0 ...
      0 1 1 1 ...
      0 1 0 1 ...
      0 1 0 0 ...
      ... ]

I would like to find all the ones that are touching (diagonally as well). In this example there is only one component, but there may be hundreds of unique components in an image.

    Image => ALGORITHM => [ [(x,y)], ... ]  # list of lists of coordinates (each shape is a list)

I have
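For reference, a plain-Python flood fill over such a matrix produces exactly the list-of-coordinate-lists output sketched above, treating all 8 neighbours (diagonals included) as connected:

    from collections import deque

    def connected_components(grid):
        rows, cols = len(grid), len(grid[0])
        seen = set()
        components = []
        for sy in range(rows):
            for sx in range(cols):
                if grid[sy][sx] != 1 or (sx, sy) in seen:
                    continue
                # Breadth-first flood fill from this unvisited foreground pixel.
                queue = deque([(sx, sy)])
                seen.add((sx, sy))
                component = []
                while queue:
                    x, y = queue.popleft()
                    component.append((x, y))
                    # 8-connectivity: horizontal, vertical and diagonal neighbours.
                    for dx in (-1, 0, 1):
                        for dy in (-1, 0, 1):
                            nx, ny = x + dx, y + dy
                            if (0 <= nx < cols and 0 <= ny < rows
                                    and grid[ny][nx] == 1 and (nx, ny) not in seen):
                                seen.add((nx, ny))
                                queue.append((nx, ny))
                components.append(component)
        return components

For large images, ready-made routines such as scipy.ndimage.label or OpenCV's connectedComponents do the same job much faster.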

Can I use OpenCV to compare two faces on two different images?

Submitted by 不羁的心 on 2019-12-06 01:32:44
I am very new to OpenCV. I saw that it can find a face and return a rectangle to indicate it. I am wondering whether there is any way for OpenCV to take two images, each containing one face, and return the likelihood that the two people are the same. Thanks.

Answer: OpenCV does not provide a full face recognition engine. You might want to check out this work: The One-Shot Similarity Kernel, which proposes something similar to what you need. It also provides Matlab code.

Source: https://stackoverflow.com/questions/4559332/can-i-use-opencv-to-compare-two-faces-on
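As a rough illustration of what OpenCV alone can do here: it can detect and crop the faces (Haar cascades ship with the opencv-python package), but comparing the crops with something as simple as a pixel difference is only a crude proxy, not a reliable identity check:

    import cv2
    import numpy as np

    # Frontal-face Haar cascade bundled with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

    def first_face(path):
        # Detect the largest frontal face and crop it to a fixed size.
        # Assumes at least one face is detected in the image.
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
        return cv2.resize(gray[y:y + h, x:x + w], (100, 100))

    def face_distance(path_a, path_b):
        # Mean absolute pixel difference between the two aligned crops: small
        # values suggest similar faces, but this is far weaker than a trained
        # face-recognition model and is easily fooled by pose and lighting.
        a, b = first_face(path_a), first_face(path_b)
        return float(np.mean(np.abs(a.astype(np.int16) - b.astype(np.int16))))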

Creating a dataset from an image with Python for face recognition

Submitted by 半城伤御伤魂 on 2019-12-05 22:58:31
I am trying to code a face-recognition program in Python (I am going to apply the k-NN algorithm to classify). First of all, I converted the images into greyscale, and then I created a long column vector (using OpenCV's imagedata function) from the image's pixels (128x128 = 16384 features in total). So I got a dataset like the following (the last column is the class label, and I only show the first 7 features of each row instead of 16384):

    176, 176, 175, 175, 177, 173, 178, 1
    162, 161, 167, 162, 167, 166, 166, 2

But when I apply k-NN to this dataset, I get awkward results. Do I need to apply additional
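A minimal sketch of building that dataset with OpenCV and NumPy, assuming a list of (image path, label) pairs; the file names below are placeholders. Scaling pixels to [0, 1] keeps the Euclidean distances used by k-NN well behaved:

    import cv2
    import numpy as np

    def build_dataset(samples):
        # samples: list of (image_path, class_label) pairs (placeholders).
        rows = []
        for path, label in samples:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            gray = cv2.resize(gray, (128, 128))
            # Flatten 128x128 pixels into one 16384-feature row, scaled to [0, 1]
            # so that no single pixel dominates the distance computation.
            features = gray.astype(np.float32).ravel() / 255.0
            rows.append(np.append(features, label))
        return np.array(rows)

    dataset = build_dataset([('person1_01.jpg', 1), ('person2_01.jpg', 2)])

Raw pixels are a fairly weak feature for faces, so projecting the vectors with PCA (eigenfaces) or switching to histogram-of-oriented-gradients features often improves k-NN results noticeably.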

Finding path obstacles in a 2D image

Submitted by 人盡茶涼 on 2019-12-05 21:40:40
What approach would you recommend for finding obstacles in a 2D image? Here are some key points I have come up with so far:

I doubt I can use object recognition based on a "database of obstacles" search, since I don't know what the obstruction might look like.
I assume color recognition might be problematic if the path does not differ much from the object itself.
Possibly, adding one more camera and computing a 3D image (like a Kinect does) would work, but that would not run as smoothly as I require.

To illustrate the problem: the robot can ride on either the left or the right side of the pavement. In the
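One cheap heuristic that sidesteps both the obstacle database and, to some extent, the color-similarity worry is to assume the strip of pavement directly in front of the robot is clear, learn its color statistics, and flag pixels that deviate strongly from them. A rough sketch; the reference-patch location and the 4-sigma threshold are assumptions to tune per camera setup:

    import cv2
    import numpy as np

    def obstacle_mask(frame_bgr):
        hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
        h, w = hsv.shape[:2]

        # Assume the strip just in front of the robot (bottom-centre of the
        # image) is free pavement and learn its typical colour.
        patch = hsv[int(h * 0.85):h, int(w * 0.3):int(w * 0.7)]
        mean = patch.reshape(-1, 3).mean(0)
        std = patch.reshape(-1, 3).std(0) + 1e-6

        # Pixels far from the pavement colour (measured in standard deviations)
        # become potential obstacles; a morphological opening removes speckles.
        distance = np.abs(hsv.astype(np.float32) - mean) / std
        mask = (distance.max(axis=2) > 4.0).astype(np.uint8) * 255
        return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))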

Image Preprocessing for OCR - Tesseract

Submitted by 旧时模样 on 2019-12-05 18:08:31
Obviously this image is pretty tough, as it is low clarity and is not a real word. However, with this code, I'm detecting nothing close:

    import pytesseract
    from PIL import Image, ImageEnhance, ImageFilter

    image_name = 'NedNoodleArms.jpg'
    im = Image.open(image_name)
    im = im.filter(ImageFilter.MedianFilter())
    enhancer = ImageEnhance.Contrast(im)
    im = enhancer.enhance(2)
    im = im.convert('1')
    im.save(image_name)
    text = pytesseract.image_to_string(Image.open(image_name))
    print(text)

This outputs ", Mdfiaodfiamms". Any ideas here? The image my contrasting function produces looks decent to me. I don't have
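Two things worth noting about that snippet: im.convert('1') produces a dithered 1-bit image, which tends to hurt Tesseract, and im.save(image_name) overwrites the original JPEG, so repeated runs keep degrading the input. A different preprocessing route that often helps on small, low-clarity crops is to upscale and binarize with Otsu's threshold; this is only a sketch, with no guarantee it recovers this particular word:

    import cv2
    import pytesseract

    # Upscale the low-resolution crop, smooth it, then binarize with Otsu's
    # threshold; Tesseract generally copes better with large, clean binary text.
    img = cv2.imread('NedNoodleArms.jpg', cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, None, fx=3, fy=3, interpolation=cv2.INTER_CUBIC)
    img = cv2.GaussianBlur(img, (3, 3), 0)
    _, img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # --psm 7 tells Tesseract to treat the crop as a single line of text.
    print(pytesseract.image_to_string(img, config='--psm 7'))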

Can anyone suggest good algorithms for CBIR?

Submitted by 北城以北 on 2019-12-05 15:54:46
Project: Content-Based Image Retrieval - semi-supervised (manual tagging is done on images while training).

Description: I have 1,000,000 images in the database. The training is manual (supervised) - a title and tags are provided for each image. Example:

    coke.jpg
    Title: Coke
    Tags: Coke, Can

Using the images and tags, I have to train the system. After training, when I give it a new image (already in the database or completely new), the system should output the possible tags the image may belong to and display a few images belonging to each tag. The system may also say no match was found.

Questions: 1) What is mean
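A simple baseline to get the experiments going: compute a fixed-length color-histogram feature per image, index the training set with a nearest-neighbour structure, and propose the tags of the closest matches (returning nothing when everything is far away). The file names, tags and the distance cutoff below are placeholders, and a real CBIR system at this scale would typically move on to bag-of-visual-words or CNN features plus an approximate-nearest-neighbour index:

    import cv2
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def color_histogram(path, bins=8):
        # 8x8x8 HSV colour histogram, L1-normalised, as a 512-d feature vector.
        img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                            [0, 180, 0, 256, 0, 256])
        return (hist / hist.sum()).ravel()

    # Training data: features plus the manually supplied tags for each image.
    train = [('coke.jpg', ['Coke', 'Can']), ('pepsi.jpg', ['Pepsi', 'Can'])]
    features = np.array([color_histogram(p) for p, _ in train])
    index = NearestNeighbors(n_neighbors=2).fit(features)

    def suggest_tags(path, max_distance=0.5):
        # Return tags of the nearest training images, or nothing if all are far.
        dist, idx = index.kneighbors([color_histogram(path)])
        tags = set()
        for d, i in zip(dist[0], idx[0]):
            if d <= max_distance:
                tags.update(train[i][1])
        return sorted(tags)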