object-detection

Queries for Object Detection

泄露秘密 submitted on 2019-12-23 03:45:12
Question: I have started working on object detection. After reading several papers on the topic, I have concluded that the main steps for training and testing are: Training: Image -> Object proposals -> Check each proposal against GT with IoU > 0.5 -> Feature extraction -> Train classifier. Testing: Test image -> Object proposals -> Feature extraction -> Check with trained classifier. IoU: Intersection over Union. GT: Ground truth. Q1: Please correct me if I have a mistake in understanding training and
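The IoU > 0.5 check used to label proposals during training can be sketched as follows. This is a minimal illustration, assuming boxes in corner format (xmin, ymin, xmax, ymax); `label_proposal` is a hypothetical helper name, not from any particular paper's code.

```python
def iou(box_a, box_b):
    # Boxes are (xmin, ymin, xmax, ymax).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_proposal(proposal, gt_boxes, pos_thresh=0.5):
    # A proposal becomes a positive training sample if it overlaps
    # any ground-truth box with IoU above the threshold.
    return any(iou(proposal, gt) > pos_thresh for gt in gt_boxes)
```

Proposals failing the check are typically treated as background samples for the classifier.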

Import Error: Cannot Import name input_reader_pb2

早过忘川 submitted on 2019-12-23 03:26:08
Question: I am using the TensorFlow Object Detection API to train my object detection model. I assembled the dataset and am going through this tutorial. Everything went fine until I tried to train on my dataset. When I run the following line in the terminal, python train.py --logtostderr \ --train_dir=training/ \ --pipeline_config_path=training/ssd_mobilenet_v1_coco.config I get the following error: Traceback (most recent call last): File "legacy/train.py", line 49, in <module> from object_detection
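A common cause of this import error is that the API's `.proto` files were never compiled (protoc generates `input_reader_pb2.py` from `object_detection/protos/input_reader.proto`), or that the `models/research` checkout is not on `PYTHONPATH`. A small diagnostic sketch, where `MODELS_RESEARCH` is a hypothetical placeholder path you would adjust to your own clone:

```python
import importlib
import sys

# Hypothetical path to a TensorFlow models checkout; adjust to your clone.
MODELS_RESEARCH = "/path/to/models/research"

def can_import(module_name):
    """Return True if module_name is importable from the current sys.path."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False

# input_reader_pb2 is generated by protoc; if it is missing, run
#   protoc object_detection/protos/*.proto --python_out=.
# from the models/research directory, and make sure that directory
# (and its slim/ subdirectory) are on sys.path / PYTHONPATH.
if not can_import("object_detection.protos.input_reader_pb2"):
    for path in (MODELS_RESEARCH, MODELS_RESEARCH + "/slim"):
        if path not in sys.path:
            sys.path.insert(0, path)
```

If the generated `_pb2` module still cannot be found after compiling the protos, the path setup is the usual remaining suspect.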

What is the best suited method for detecting images using android device for markerless detection?

你说的曾经没有我的故事 submitted on 2019-12-23 01:53:10
Question: I'm trying to create an Android application that detects objects from the camera using OpenCV. I read the OpenCV reference and found there are many methods for image detection. My purpose is to create an application where: 1) the app can detect any object from a database (the set of objects that can be detected) in real-time camera frames (speed of processing/detection is important); 2) the database of object images will be updated from time to time (the database preferably on an external server). - Does this mean I

Keras: MobileNet to localize image features

*爱你&永不变心* submitted on 2019-12-23 01:36:24
Question: I have a custom image set where I am trying to localize 4 features in each image; those values are (x, y) coordinates. I've run some basic CNNs and those run fine. My goal now is to convert to MobileNet. I had trouble using Keras's built-in MobileNet and code, so I mimicked the structure with the appropriate layers. It seems that the base model is geared towards classification, whereas mine is really just trying to locate the 8 x, y coordinate values. I've done my best to fit the differing output layers
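The usual way to repurpose a classification backbone for this task is to swap the softmax head for a linear regression head and train with a squared-error loss. In Keras terms that would be, roughly, the MobileNet base followed by `GlobalAveragePooling2D` and `Dense(8)` with no activation, compiled with `loss="mse"`; that wiring is an assumption about the setup, not the poster's exact code. The loss itself is just mean squared error over the 8 values, sketched in NumPy:

```python
import numpy as np

def coordinate_mse(y_true, y_pred):
    # y_true, y_pred: (batch, 8) arrays holding 4 flattened (x, y) pairs.
    # Mean squared error replaces the cross-entropy loss used for
    # classification when regressing coordinates.
    return float(np.mean((np.asarray(y_true, dtype=float)
                          - np.asarray(y_pred, dtype=float)) ** 2))
```

A linear output layer matters here: a softmax would force the 8 values to sum to 1, which makes no sense for pixel or normalized coordinates.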

Data Augmentation in Tensorflow Object Detection API

我只是一个虾纸丫 submitted on 2019-12-22 14:48:20
Question: In the config file, we are given the default augmentation option as shown below. data_augmentation_options { random_horizontal_flip { } } But I wondered how it works with the bounding-box (ground-truth box) values given with the training images, so I looked at preprocessor.py; random_horizontal_flip() takes a 'boxes=None' parameter. Since no argument is given in the config file, I assume this flip does not account for bounding boxes when it does the random horizontal flip. My question is what arguments do
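For a horizontal flip to stay consistent with the labels, the x-extents of each normalized box must be mirrored along with the image. A minimal NumPy sketch of that box transformation, assuming the TF Object Detection API's [ymin, xmin, ymax, xmax] box convention with coordinates normalized to [0, 1]:

```python
import numpy as np

def flip_boxes_horizontally(boxes):
    # boxes: (N, 4) array of [ymin, xmin, ymax, xmax], normalized to [0, 1].
    # Mirroring the image about its vertical axis maps x -> 1 - x, so the
    # new xmin comes from the old xmax and vice versa; y is unchanged.
    ymin, xmin, ymax, xmax = np.split(boxes, 4, axis=1)
    return np.concatenate([ymin, 1.0 - xmax, ymax, 1.0 - xmin], axis=1)
```

The point of the sketch is that image and boxes have to be flipped as a pair; an augmentation that touched only the pixels would silently corrupt the ground truth.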

Open CV object detection : ORB_GPU detector and SURF_GPU descriptor extractor

☆樱花仙子☆ submitted on 2019-12-22 12:26:11
Question: I was just running a small experiment to play around with different detector/descriptor combinations. My code uses an ORB_GPU detector to detect features and a SURF_GPU descriptor extractor to compute the descriptors. I use a BruteForceMatcher_GPU to match the descriptors, and I am using the knnMatch method to get the matches. The problem is that I am getting a lot of unwanted matches; the code is literally matching every feature it can find in both images. I am quite confused with this
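A brute-force matcher always returns the nearest descriptor for every query, so some filtering is expected after knnMatch. The standard remedy is Lowe's ratio test: with k=2, keep a match only when the best distance is clearly smaller than the second best. A library-free sketch of the filter, where each entry is the (best, second-best) distance pair that knnMatch would return:

```python
def ratio_test(match_pairs, ratio=0.75):
    # match_pairs: list of (best_dist, second_best_dist) tuples, one per
    # query descriptor, as produced by knnMatch with k=2.
    # A match is kept only if the best neighbor is decisively closer
    # than the runner-up; ambiguous matches are discarded.
    return [i for i, (d1, d2) in enumerate(match_pairs) if d1 < ratio * d2]
```

Applied to the question's setup, this filter typically removes most of the indiscriminate matches. (Separately, mixing an ORB detector with a SURF extractor is unusual; the detector's keypoints may not suit the descriptor, which can also degrade match quality.)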

Converting SSD to frozen graph in tensorflow. Which output node names must be used?

戏子无情 submitted on 2019-12-22 10:57:03
Question: I trained SSD using the TensorFlow Object Detection API as described here. It produces ckpt, meta, and index files. In order to run it on my images I tried the demo code. It requires that the model be converted to a frozen graph. I tried to convert my model to a frozen inference graph as described here. In that program I have to provide output node names, and I could not figure out the names of the nodes in the SSD model which must be used here. Please help. I tried 'num_detections:0',
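For models exported with the Object Detection API, the detection outputs are conventionally the four tensors below (the API's own `export_inference_graph.py` script wires these up for you, which is usually easier than freezing by hand). Note that freeze tools generally want node names without the `:0` tensor suffix. A small sketch of building the comma-separated list such tools expect:

```python
# Conventional detection output nodes in TF Object Detection API graphs.
# Pass the names WITHOUT the ":0" suffix to graph-freezing tools; the
# ":0" form refers to the tensor, not the node.
OUTPUT_NODES = [
    "detection_boxes",
    "detection_scores",
    "detection_classes",
    "num_detections",
]

output_node_names = ",".join(OUTPUT_NODES)
print(output_node_names)
```

If your graph was built differently, inspecting the graph (e.g. listing its operations) is the reliable way to confirm the exact names.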

A good way to identify cars at night in a video

浪子不回头ぞ submitted on 2019-12-22 10:28:37
Question: I'm trying to identify car contours at night in a video (Video Link is the link and you can download it from HERE). I know that object detection based on R-CNN or YOLO can do this job. However, I want something simpler and faster, because all I want is to identify moving cars in real time. (And I don't have a decent GPU.) I can do it pretty well in the daytime using the background subtraction method to find the contours of cars: Because the light condition in the daytime is
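The core of background subtraction can be sketched without OpenCV as simple frame differencing: pixels whose intensity changes sharply between consecutive grayscale frames are marked as moving. This is a minimal illustration of the idea, not the poster's code; in practice one would use an adaptive model such as OpenCV's `cv2.createBackgroundSubtractorMOG2`, which copes better with gradual lighting changes (the very thing that makes night scenes with headlights hard for this approach).

```python
import numpy as np

def moving_mask(prev_gray, cur_gray, thresh=25):
    # prev_gray, cur_gray: consecutive grayscale frames as uint8 arrays.
    # Cast to a signed type before subtracting so differences don't wrap.
    diff = np.abs(cur_gray.astype(np.int16) - prev_gray.astype(np.int16))
    # Binary mask: 255 where motion exceeds the threshold, else 0.
    return (diff > thresh).astype(np.uint8) * 255
```

Contours would then be extracted from the mask. At night, headlights and reflections change pixel intensities without any object motion, which is why this simple scheme degrades after dark.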