object-detection

How to train and evaluate simultaneously in the Object Detection API?

风格不统一 Submitted on 2019-12-22 08:09:10
Question: I want to train and evaluate ssd_mobile_v1_coco on my own dataset at the same time with the Object Detection API. However, when I simply try to do so, GPU memory is nearly full and the evaluation script therefore fails to start. Here are the commands I use for training and then evaluation. The training script is called in one terminal pane like this:

    python3 train.py \
        --logtostderr \
        --train_dir=training_ssd_mobile_caltech \
        --pipeline_config_path=ssd_mobilenet_v1_coco_2017_11_17
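One common workaround (a sketch, not taken from the original post) is to keep the two processes from fighting over the GPU: either hide the GPU from the evaluation process so it runs on the CPU, or cap the training process at a fraction of GPU memory. The fraction and the exact place you apply it are assumptions:

    import os
    import tensorflow as tf

    # Option A: run evaluation on the CPU only. Set this at the very top of
    # the evaluation script, before TensorFlow touches the GPU.
    os.environ["CUDA_VISIBLE_DEVICES"] = ""

    # Option B: let training claim only part of the GPU so evaluation can
    # still start; pass this session config to the training session.
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.7)
    session_config = tf.ConfigProto(gpu_options=gpu_options)

Either option leaves enough GPU memory (or the whole GPU) for the other process; which one is preferable depends on how slow CPU evaluation is for your model.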

TensorFlow object detection training error with TPU

ⅰ亾dé卋堺 Submitted on 2019-12-22 07:51:22
Question: I'm following along with Google's object detection on a TPU post and have hit a wall when it comes to training. Looking at the job logs, I can see that ml-engine runs a ton of pip installs for various packages, provisions a TPU, and then submits the following:

    Running command: python -m object_detection.model_tpu_main --model_dir=gs://{MY_BUCKET}/train --tpu_zone us-central1 --pipeline_config_path=gs://{MY_BUCKET}/data/pipeline.config --job-dir gs://{MY_BUCKET}/train

It then errors with:

How to detect only objects of a specific category in tensorflow object detection

≡放荡痞女 Submitted on 2019-12-21 21:09:09
Question: The object detection notebook demonstrates how models pretrained on the COCO dataset can be used to detect objects in test images. However, the models in the notebook return boxes for detected objects of all categories in the COCO set. How can I use the code to return boxes for objects of only one category? I.e., how can I get boxes only for objects that the model is confident are, for example, persons? Answer 1: I have just implemented the solution myself. Check the def filter_boxes function in the
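A minimal sketch of such a filter, assuming the notebook's output_dict layout (arrays for classes, scores and boxes) and the COCO label map in which 'person' has class id 1; the function name and score threshold are my own choices:

    import numpy as np

    def filter_boxes_by_class(output_dict, keep_class_id=1, min_score=0.5):
        # Keep only detections whose class id matches keep_class_id
        # (1 = 'person' in the COCO label map) and whose score clears min_score.
        classes = np.asarray(output_dict['detection_classes'])
        scores = np.asarray(output_dict['detection_scores'])
        boxes = np.asarray(output_dict['detection_boxes'])
        mask = (classes == keep_class_id) & (scores >= min_score)
        return boxes[mask], scores[mask]

The boxes that survive the mask can then be passed to the notebook's visualization call unchanged.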

How to detect horizontal lines in an image and obtain their y-coordinates using Python and OpenCV?

大兔子大兔子 Submitted on 2019-12-21 21:00:23
Question: I am using the find-contours method and then approximating a line using the fitLine function. Below is the code:

    import cv2
    img = cv2.imread('lines.jpg')
    imgray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, dst = cv2.threshold(imgray, 127, 255, 0)
    im2, cnts, hierarchy = cv2.findContours(dst, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    rows, cols = img.shape[:2]
    [vx, vy, x, y] = cv2.fitLine(cnts[0], cv2.DIST_L2, 0, 0.01, 0.01)
    lefty = int((-x*vy/vx) + y)
    righty = int(((cols-x)*vy/vx) + y)
    cv2.line(img, (cols-1, righty), (0, lefty),
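A sketch of one common alternative (not from the original post): isolate near-horizontal strokes with a wide, flat morphological kernel and read each line's y-coordinate from its bounding box. The file name, kernel width and the OpenCV 4.x findContours return signature are assumptions:

    import cv2

    img = cv2.imread('lines.jpg')
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Invert the threshold so dark lines on a light background become white.
    _, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV)
    # A wide, flat kernel keeps only long horizontal runs of pixels.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (40, 1))
    horizontal = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
    # Each remaining contour is one horizontal line; its bounding-box y is the coordinate.
    contours, _ = cv2.findContours(horizontal, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    y_coords = sorted(cv2.boundingRect(c)[1] for c in contours)
    print(y_coords)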

Updating Tensorflow Object detection model with new images

大城市里の小女人 Submitted on 2019-12-21 11:57:38
Question: I have trained a Faster R-CNN model on a custom dataset using TensorFlow's Object Detection API. Over time I would like to continue to update the model with additional images (collected weekly). The goal is to optimize for accuracy and to weight newer images more heavily over time. Here are a few alternatives (a config sketch for continuing from a checkpoint follows below):

1. Add images to the previous dataset and train a completely new model
2. Add images to the previous dataset and continue training the previous model
3. New dataset with just new images and continue training previous
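For the options that continue from the previous model, the usual mechanism in the Object Detection API is to point the training config at the last checkpoint. A sketch of the relevant train_config fields; the checkpoint path is a placeholder and the rest of the config is omitted:

    train_config {
      fine_tune_checkpoint: "path/to/previous_model/model.ckpt"
      from_detection_checkpoint: true
      # ... the remaining train_config fields stay as in the original pipeline.config
    }

Training then resumes with the previous weights as the starting point rather than from the COCO-pretrained checkpoint.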

Perform multi-scale training (YOLOv2)

末鹿安然 Submitted on 2019-12-21 06:38:11
Question: I am wondering how the multi-scale training in YOLOv2 works. In the paper, it is stated that: The original YOLO uses an input resolution of 448 × 448. With the addition of anchor boxes we changed the resolution to 416 × 416. However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. Instead of fixing the input image size we change the network every few
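In the paper this works by redrawing the input resolution every 10 batches from the multiples of 32 between 320 and 608 and resizing the network to it. A sketch of that training loop; resize_batch and train_step are hypothetical helpers standing in for the framework's own calls:

    import random

    # Multiples of 32 between 320 and 608, as in the YOLOv2 paper.
    SCALES = list(range(320, 608 + 1, 32))

    def train(batches):
        size = 416  # starting resolution
        for step, batch in enumerate(batches):
            if step % 10 == 0:
                size = random.choice(SCALES)      # pick a new input size
            images = resize_batch(batch, size)    # hypothetical helper
            train_step(images)                    # hypothetical helper

Because the network is fully convolutional, the same weights are used at every resolution; only the spatial size of the feature maps changes.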

Tensorflow Object Detection API

我们两清 Submitted on 2019-12-20 19:33:04
Question: I decided to take a dip into ML and, with a lot of trial and error, was able to create a model using TF's Inception. To take this a step further, I want to use their Object Detection API. But their input preparation instructions reference the use of the Pascal VOC 2012 dataset, and I want to do the training on my own dataset. Does this mean I need to set up my datasets in either the Pascal VOC or Oxford-IIIT format? If yes, how do I go about doing this? If no (my instinct says this is the case), what
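In practice the API does not require VOC or Oxford-IIIT on disk; it reads TFRecord files, so custom annotations are usually converted straight to tf.train.Example records. A sketch of one such record, assuming the helpers in object_detection.utils.dataset_util and with every field value a placeholder:

    import tensorflow as tf
    from object_detection.utils import dataset_util

    def create_tf_example(encoded_jpg, width, height, filename,
                          xmins, xmaxs, ymins, ymaxs, class_names, class_ids):
        # Box coordinates are normalized to [0, 1]; all per-object lists are parallel.
        return tf.train.Example(features=tf.train.Features(feature={
            'image/height': dataset_util.int64_feature(height),
            'image/width': dataset_util.int64_feature(width),
            'image/filename': dataset_util.bytes_feature(filename.encode('utf8')),
            'image/encoded': dataset_util.bytes_feature(encoded_jpg),
            'image/format': dataset_util.bytes_feature(b'jpeg'),
            'image/object/bbox/xmin': dataset_util.float_list_feature(xmins),
            'image/object/bbox/xmax': dataset_util.float_list_feature(xmaxs),
            'image/object/bbox/ymin': dataset_util.float_list_feature(ymins),
            'image/object/bbox/ymax': dataset_util.float_list_feature(ymaxs),
            'image/object/class/text': dataset_util.bytes_list_feature(
                [n.encode('utf8') for n in class_names]),
            'image/object/class/label': dataset_util.int64_list_feature(class_ids),
        }))

The examples are then serialized into a .record file with a TFRecord writer and referenced from the pipeline config's input readers.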

SSD anchors in Tensorflow detection API

萝らか妹 Submitted on 2019-12-20 10:56:07
Question: I want to train an SSD detector on a custom dataset of N by N images. So I dug into the Tensorflow object detection API and found a pretrained SSD 300x300 model on COCO based on MobileNet v2. Looking at the config file used for training, the field anchor_generator looks like this (which follows the paper):

    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.2
        max_scale: 0.9
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.33
      }
    }
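For reference, min_scale and max_scale expand into one anchor scale per feature map via the SSD paper's linear rule. A small sketch of that computation (the real ssd_anchor_generator additionally treats the lowest layer specially, which is omitted here):

    # scale_k = min_scale + (max_scale - min_scale) * k / (num_layers - 1)
    min_scale, max_scale, num_layers = 0.2, 0.9, 6
    scales = [min_scale + (max_scale - min_scale) * k / (num_layers - 1)
              for k in range(num_layers)]
    print(scales)  # approximately [0.2, 0.34, 0.48, 0.62, 0.76, 0.9]

Each of the 6 feature maps then gets boxes at its scale for every listed aspect ratio.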