object-detection

Threshold values for viola jones object detection

Posted by 北战南征 on 2020-01-03 05:01:08
Question: I am trying to perform the AdaBoost training described by Viola and Jones in their paper on rapid object detection. However, I do not understand how to obtain the threshold values that separate faces from non-faces for each of the ~160k features. Is this a threshold you set manually, or is it derived mathematically? Can someone please explain the maths? Thanks a lot.

Answer 1: IMO, the best way to describe what happens during threshold assignment of the weak classifiers in every…
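
The truncated answer above is describing the standard weak-learner training step: for each feature, the samples are sorted by feature value, and the threshold is the cut that minimizes the weighted classification error, which can be found in a single pass over cumulative weight sums. A minimal sketch of that selection (function and variable names are mine, not from the paper):

```python
import numpy as np

def best_threshold(feature_vals, labels, weights):
    """Pick the threshold and polarity that minimize the weighted
    error for one Haar-like feature (Viola-Jones weak learner)."""
    order = np.argsort(feature_vals)
    f, y, w = feature_vals[order], labels[order], weights[order]

    t_pos = w[y == 1].sum()           # total positive (face) weight
    t_neg = w[y == 0].sum()           # total negative weight
    s_pos = np.cumsum(w * (y == 1))   # positive weight at or below each value
    s_neg = np.cumsum(w * (y == 0))

    # error of "face if f <= theta" vs. "face if f > theta"
    err_le = s_neg + (t_pos - s_pos)
    err_gt = s_pos + (t_neg - s_neg)
    err = np.minimum(err_le, err_gt)

    i = int(np.argmin(err))
    polarity = 1 if err_le[i] <= err_gt[i] else -1  # +1: face when f <= theta
    return f[i], polarity, err[i]
```

Each candidate threshold is one of the observed feature values, so nothing is set by hand: the data and the current AdaBoost weights determine it, and the weights change every boosting round.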

how to do Object detection in opengl Android?

Posted by 别等时光非礼了梦想. on 2020-01-02 05:36:31
Question: I started with OpenGL ES for Android two weeks ago, and after working through the 3D examples I am stuck at object detection: basically, mapping between the x,y coordinates of the screen and the x,y,z coordinates of 3D space, and vice versa. I came across: GLU.gluProject(objX, objY, objZ, model, modelOffset, project, projectOffset, view, viewOffset, win, winOffset); GLU.gluUnProject(winX, winY, winZ, model, modelOffset, project, projectOffset, view, viewOffset, obj, objOffset); but I failed to understand how to use them…
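
gluUnProject is just the inverse of the pipeline gluProject applies: window coordinates are mapped back to normalized device coordinates, then through the inverse of projection × modelview. A language-neutral sketch of the math (NumPy for brevity; the matrix names mirror the GLU arguments, and row-major 4x4 matrices applied to column vectors are assumed):

```python
import numpy as np

def unproject(win, model, proj, viewport):
    """Map window coords (winX, winY, winZ with winZ in [0, 1])
    back to object space -- the math gluUnProject performs."""
    vx, vy, vw, vh = viewport
    # window -> normalized device coordinates in [-1, 1]
    ndc = np.array([(win[0] - vx) / vw * 2 - 1,
                    (win[1] - vy) / vh * 2 - 1,
                    win[2] * 2 - 1,
                    1.0])
    obj = np.linalg.inv(proj @ model) @ ndc   # undo projection * modelview
    return obj[:3] / obj[3]                   # perspective divide

# sanity check: the viewport centre at mid-depth maps to the origin
p = unproject((50, 50, 0.5), np.eye(4), np.eye(4), (0, 0, 100, 100))
print(p)
```

Two practical notes for Android: the float[16] arrays are column-major, so reshape and transpose before treating them as 4x4 matrices, and touch coordinates have Y pointing down, so pass winY = viewportHeight - touchY. Since a touch gives no winZ, the usual trick is to unproject at winZ = 0 and winZ = 1 and intersect the resulting ray with your scene.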

How to continouosly evaluate a tensorflow object detection model in parallel to training with model_main

Posted by 本秂侑毒 on 2020-01-01 17:27:21
Question: I successfully trained an object detection model with custom examples using train.py and eval.py. Running both programs in parallel, I was able to visualize training and evaluation metrics in TensorBoard during training. However, both programs have been moved to the legacy folder, and model_main.py seems to be the preferred way to run training and evaluation (by executing only a single process). However, when I start model_main.py with the following pipeline.config: train_config { batch_size: 1 num…

Vehicle segmentation and tracking

Posted by 匆匆过客 on 2020-01-01 06:08:48
Question: I have been working on a project for some time to detect and track (moving) vehicles in video captured from UAVs. Currently I am using an SVM trained on bag-of-features representations of local features extracted from vehicle and background images. I then use a sliding-window detection approach to localise vehicles in the images, which I would then like to track. The problem is that this approach is far too slow, and my detector isn't as reliable as I would like, so I'm getting quite…
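
For reference, the basic sliding-window loop looks like the sketch below (window size and stride are illustrative). The usual ways to speed it up are increasing the stride, scanning a coarse-to-fine image pyramid instead of growing the window, and putting a cheap rejection test (e.g. a variance or edge-density check) in front of the expensive SVM, cascade-style:

```python
def sliding_windows(img_shape, win=64, stride=16):
    """Yield the top-left corner of every window position.

    Doubling the stride quarters the number of SVM evaluations,
    at the cost of coarser localisation."""
    H, W = img_shape
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            yield y, x

n = sum(1 for _ in sliding_windows((128, 128)))
print(n)  # 25 window positions on a 128x128 image
```

The stride/pyramid parameters dominate the runtime far more than the per-window classifier cost, so they are the first knobs to tune.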

Output score , class and id Extraction using TensorFlow object detection

Posted by ε祈祈猫儿з on 2019-12-31 06:59:45
Question: How can I extract the output scores, object classes and object IDs for objects detected in images by the TensorFlow object detection model? I want to store all these details in individual variables so that they can later be stored in a database. I am using the same code as found in this link: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb Please help me out with a solution to this problem. I've tried print(str(output_dict…
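
In the tutorial notebook, output_dict already holds plain NumPy arrays, so the values can be pulled into ordinary Python variables (and later database rows) by indexing with a score mask. A sketch with made-up numbers standing in for a real output_dict:

```python
import numpy as np

# shapes as in the tutorial's output_dict (values here are fake)
output_dict = {
    'detection_boxes':   np.array([[0.1, 0.1, 0.4, 0.4], [0.5, 0.5, 0.9, 0.9]]),
    'detection_classes': np.array([1, 3]),
    'detection_scores':  np.array([0.92, 0.31]),
}

min_score = 0.5
keep = output_dict['detection_scores'] >= min_score  # drop low-confidence hits

# one plain-Python dict per detection, ready for a DB insert
records = [
    {'class_id': int(c), 'score': float(s), 'box': b.tolist()}
    for c, s, b in zip(output_dict['detection_classes'][keep],
                       output_dict['detection_scores'][keep],
                       output_dict['detection_boxes'][keep])
]
print(records)
```

Converting with int()/float()/tolist() matters for the database step: NumPy scalar types often confuse DB drivers, while native Python types bind cleanly.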

Return coordinates that passes threshold value for bounding boxes Google's Object Detection API

Posted by 陌路散爱 on 2019-12-31 03:55:25
Question: Does anyone know how to get only the bounding-box coordinates that pass a threshold value? I found this answer (here's a link), so I tried using it and did the following: vis_util.visualize_boxes_and_labels_on_image_array( image, np.squeeze(boxes), np.squeeze(classes).astype(np.int32), np.squeeze(scores), category_index, use_normalized_coordinates=True, line_thickness=1, min_score_thresh=0.80) for i,b in enumerate(boxes[0]): ymin = boxes[0][i][0]*height xmin = boxes[0][i][1]*width ymax = boxes…
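
The loop in the question scales every box; the fix is to apply the same min_score_thresh as a mask first, then scale only the survivors. A sketch with toy values (boxes come back as [1, N, 4] in ymin, xmin, ymax, xmax order, normalized to [0, 1]):

```python
import numpy as np

# toy stand-ins for the detector outputs
boxes  = np.array([[[0.1, 0.2, 0.3, 0.4], [0.6, 0.6, 0.9, 0.9]]])  # [1, N, 4]
scores = np.array([[0.95, 0.40]])
height, width = 400, 600
min_score_thresh = 0.80

keep = scores[0] >= min_score_thresh            # same cut the visualizer uses
# scale normalized [ymin, xmin, ymax, xmax] to pixel coordinates
ymin, xmin, ymax, xmax = (boxes[0][keep] * [height, width, height, width]).T
print(list(zip(ymin, xmin, ymax, xmax)))
```

Only the first box survives the 0.80 threshold, so the printed list holds one pixel-space tuple rather than one entry per raw detection.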

OpenCV cvFindContours - how do I separate components of a contour

Posted by 孤街浪徒 on 2019-12-28 11:57:05
Question: I have been playing around with OpenCV and, with a lot of trial and error, have managed to learn how to detect circles (coins) in a photo. Everything works great, except when I place coins directly next to each other (as seen below; ignore the fact that the second image is upside down). It seems that because the coins are so close together, cvFindContours thinks they are the same object. My question is: how can I separate these contours into their separate objects, or get a list of contours that are…
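
The standard fix is to split the merged blob before extracting contours: compute a distance transform of the binary mask, threshold it to get one "sure foreground" marker per coin, and use the markers to seed a watershed (in OpenCV, cv::distanceTransform followed by cv::watershed). The sketch below illustrates just the marker step on a synthetic pair of touching discs, in plain NumPy so the geometry is explicit; the brute-force distance transform and flood fill are stand-ins for the OpenCV calls:

```python
import numpy as np
from collections import deque

# Synthetic binary image: two overlapping "coins" that a contour
# finder would report as a single blob.
H, W = 41, 60
yy, xx = np.mgrid[0:H, 0:W]
mask = (np.hypot(yy - 20, xx - 20) <= 10) | (np.hypot(yy - 20, xx - 38) <= 10)

# Brute-force Euclidean distance transform: for every foreground
# pixel, the distance to the nearest background pixel.
fg = np.argwhere(mask)
bg = np.argwhere(~mask)
d = np.sqrt(((fg[:, None, :] - bg[None, :, :]) ** 2).sum(-1)).min(1)

# "Sure foreground" markers: pixels deep inside a coin.  The neck
# joining the two discs is thinner than the threshold, so it drops out.
markers = np.zeros((H, W), dtype=bool)
markers[fg[:, 0], fg[:, 1]] = d > 6.0

# Count marker components with a BFS flood fill (one seed per coin).
labels = np.zeros((H, W), dtype=int)
n_labels = 0
for sy, sx in np.argwhere(markers):
    if labels[sy, sx]:
        continue
    n_labels += 1
    queue = deque([(sy, sx)])
    labels[sy, sx] = n_labels
    while queue:
        cy, cx = queue.popleft()
        for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
            if 0 <= ny < H and 0 <= nx < W and markers[ny, nx] and not labels[ny, nx]:
                labels[ny, nx] = n_labels
                queue.append((ny, nx))

print(n_labels)  # one marker region per coin
```

Watershed then grows each seed back out to the blob boundary, giving one region (and hence one contour) per coin even where they touch.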

segmentation fault in findContours

Posted by 为君一笑 on 2019-12-25 07:05:26
Question: I am getting a segmentation fault at run time in the code below, which finds contours. I referred to this post on this forum, but it didn't help me much. I learned that there are some known issues with findContours, and this is another findContours issue. Please check both links and help me resolve this error; I don't know why I am getting a segmentation fault. #include "opencv2/objdetect/objdetect.hpp" #include "opencv2/highgui/highgui.hpp" #include "opencv2/imgproc/imgproc.hpp" #include…

Reduce number of objects a pretrained Tensorflow model detects

Posted by 混江龙づ霸主 on 2019-12-25 01:37:31
Question: I am using this code for object detection, and it outputs 100 boxes even though most pictures contain 0-5 objects. Detection takes 5 seconds on a 250x250 image. Would cutting the number of objects to be detected speed up the process, and if yes, is there a way to do it?

Answer 1: Logically it would, though I can't say exactly by how much. You can retrain the model on only the few object classes you are interested in. For training, you also specify the objects of interest in a file with the extension *.pbtxt…
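
The 100 boxes come from the non-max-suppression settings in pipeline.config; for SSD-style models they sit in the model's post_processing block and can be lowered to cap the output, along the lines of the fragment below (values illustrative). Note that most of the runtime usually goes into the backbone network rather than into emitting boxes, so shrinking the input resolution or switching to a lighter model typically helps speed more than capping detections does:

```
post_processing {
  batch_non_max_suppression {
    score_threshold: 0.3
    iou_threshold: 0.6
    max_detections_per_class: 10
    max_total_detections: 10
  }
}
```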

How to output the following result to terminal

Posted by 南楼画角 on 2019-12-25 01:17:49
Question: I am using this code for position calculation. How can I print the result to the terminal? #include <ros/ros.h> #include <std_msgs/Float32MultiArray.h> #include <opencv2/opencv.hpp> #include <QTransform> #include <geometry_msgs/Point.h> #include <std_msgs/Int16.h> #include <find_object_2d/PointObjects.h> #include <find_object_2d/Point_id.h> #define dZ0 450 #define alfa 40 #define h 310 #define d 50 #define PI 3.14159265 void objectsDetectedCallback(const std_msgs::Float32MultiArray& msg…
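
In the C++ callback itself, printing with ROS_INFO (or std::cout) inside objectsDetectedCallback is enough. The sketch below, in Python for brevity, shows the unpacking step that usually precedes the print: find_object_2d publishes a flat Float32MultiArray, and, as far as I recall from its example node (treat the exact layout as an assumption), each detected object occupies 12 floats: id, width, height, then a row-major 3x3 homography whose last row carries the translation:

```python
def format_objects(data):
    """Turn find_object_2d's flat Float32MultiArray data into printable
    lines.  Layout assumed: 12 floats per object (id, w, h, 3x3 homography)."""
    lines = []
    for i in range(0, len(data), 12):
        obj_id, w, h = data[i], data[i + 1], data[i + 2]
        dx, dy = data[i + 9], data[i + 10]  # translation part of the homography
        lines.append(f"object {int(obj_id)}: {w:g}x{h:g} px at ({dx:.1f}, {dy:.1f})")
    return lines

# one fake detection: id 7, 100x50 template, translated to (30, 40)
for line in format_objects([7, 100, 50, 1, 0, 0, 0, 1, 0, 30, 40, 1]):
    print(line)
```

In the C++ node the equivalent would be a loop over msg.data in steps of 12 with ROS_INFO("object %d at (%.1f, %.1f)", ...) in the body.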