How to run eval.py job for tensorflow object detection models

Submitted by 陌路散爱 on 2019-11-28 22:33:27

Question


I have trained an object detector using tensorflow's object detection API on Google Colab. After researching on the internet for most of the day, I haven't been able to find a tutorial about how to run an evaluation for my model, so I can get metrics like mAP.

I figured out that I have to use eval.py from the models/research/object_detection folder, but I'm not sure which parameters I should pass to the script.

Briefly, what I've done so far: I generated the labels for the test and train images and stored them under the object_detection/images folder. I also generated the train.record and test.record files, and I wrote the labelmap.pbtxt file. I am using the faster_rcnn_inception_v2_coco model from the tensorflow model zoo, so I configured the faster_rcnn_inception_v2_coco.config file and stored it in the object_detection/training folder. The training process ran just fine, and all the checkpoints are also stored in the object_detection/training folder.

Now that I have to evaluate the model, I ran the eval.py script like this:

!python eval.py --logtostderr --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config --checkpoint_dir=training/ --eval_dir=eval/

Is this okay? It started running fine, but when I opened tensorboard there were only two tabs, namely images and graph, and no scalars. Also, I ran tensorboard with logdir=eval.
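For reference, a typical way to launch TensorBoard against the evaluation output is to point --logdir at the same directory passed as --eval_dir above (eval/ here; adjust the path if your setup differs):

```shell
# Point TensorBoard at the directory that eval.py writes its event files to.
# The Scalars tab only appears once scalar summaries have been written,
# which may take a while after the evaluation job finishes.
tensorboard --logdir=eval/ --port=6006
```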

I am new to tensorflow, so any kind of help is welcome. Thank you.


Answer 1:


The setup looks good. I had to wait a long time for the Scalars tab to load and show up alongside the other two, about 10 minutes after the evaluation job finished.

But at the end of the evaluation job, it prints to the console all the scalar metrics that will be displayed in the Scalars tab:

Accumulating evaluation results...
DONE (t=1.57s).
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.434
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.693
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.470
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000

etc.

If you want to use the new model_main.py script instead of legacy/eval.py, you can call it like this:

python model_main.py --alsologtostderr --run_once --checkpoint_dir=/dir/with/checkpoint/at/one/timestamp --model_dir=eval/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config 

Note that this new API requires the optimizer field in train_config, which is probably already in your pipeline config, since you're using the same file for both training and evaluation.
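For illustration, a train_config section with an optimizer block typically looks something like the fragment below (the specific values and checkpoint path are placeholders, not tuned recommendations):

```
train_config {
  batch_size: 1
  optimizer {
    momentum_optimizer {
      learning_rate {
        constant_learning_rate {
          learning_rate: 0.0002
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  num_steps: 200000
}
```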




Answer 2:


For those looking to run the new model_main.py in evaluation mode only: there is a flag you can set that does just that. That flag is checkpoint_dir; if you set it to a folder containing past training checkpoints, the model will run in evaluation-only mode.
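Concretely, an evaluation-only invocation might look like this (the config and directory paths are placeholders matching the earlier examples; substitute your own):

```shell
# Setting --checkpoint_dir switches model_main.py from train-and-eval mode
# into evaluation of existing checkpoints; --run_once evaluates the latest
# checkpoint once instead of polling for new ones.
python model_main.py \
  --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config \
  --model_dir=eval/ \
  --checkpoint_dir=training/ \
  --run_once \
  --alsologtostderr
```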

Hope this helps a few who missed it like I did! Cheers.



Source: https://stackoverflow.com/questions/50951181/how-to-run-eval-py-job-for-tensorflow-object-detection-models
