
Commit 92d076b (parent a40157c): Added README and MTurk html
File tree: 4 files changed, +473 -37 lines

README.md (+50 lines)
Automatic Number (License) Plate Recognition
============================================
##### mturk.html:
Defines the web interface that will be used by the MTurk workers to label the images.
Modified from the [original](https://github.com/kyamagu/bbox-annotator).
Use this HTML/JS code with Amazon Mechanical Turk. Instructions [here](https://blog.mturk.com/tutorial-annotating-images-with-bounding-boxes-using-amazon-mechanical-turk-42ab71e5068a).
##### genImageListForAWS.py:
Use this module to generate a CSV file that can be uploaded to MTurk. You will need the CSV file when you
publish a batch of images for processing.
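The catalog step boils down to listing image files and writing one URL per row. A minimal sketch, assuming an `image_url` column name and an illustrative base URL (the real genImageListForAWS.py takes `--image_dir` and `--output_file` arguments and may differ in detail):

```python
import csv
import os

def write_image_catalog(image_dir, output_file, base_url):
    """List image files in image_dir and write one URL per CSV row.

    The "image_url" column name and base_url are illustrative assumptions,
    not taken from the repository's actual script.
    """
    exts = {".jpg", ".jpeg", ".png"}
    names = sorted(f for f in os.listdir(image_dir)
                   if os.path.splitext(f)[1].lower() in exts)
    with open(output_file, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["image_url"])  # MTurk reads the header row as the variable name
        for name in names:
            writer.writerow([base_url.rstrip("/") + "/" + name])
    return len(names)
```

MTurk substitutes each row into the HIT template wherever the column name appears as a template variable.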
##### inspectHITs.py:
Once the batch has been completed by the workers, you will need to download the results in CSV file format
and approve or reject each HIT. This application reads the HIT results and overlays the bounding boxes
and labels onto the images. A text box is provided for accepting or rejecting each HIT. Once complete, your
accept/reject responses are added to the downloaded CSV file, and the new CSV file can be uploaded to MTurk.
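Each HIT answer carries the annotations as JSON emitted by the bbox-annotator interface. A rough sketch of parsing one answer, assuming bbox-annotator's `left`/`top`/`width`/`height`/`label` fields (not verified against inspectHITs.py):

```python
import json

def parse_bbox_answer(answer_json):
    """Convert bbox-annotator style entries into (label, xmin, ymin, xmax, ymax).

    The field names (left/top/width/height/label) are assumptions based on the
    bbox-annotator project, not verified against the repository's code.
    """
    boxes = []
    for entry in json.loads(answer_json):
        xmin = int(entry["left"])
        ymin = int(entry["top"])
        boxes.append((entry["label"], xmin, ymin,
                      xmin + int(entry["width"]),
                      ymin + int(entry["height"])))
    return boxes
```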
##### csvToPascalXml.py:
Reads the CSV file generated by inspectHITs.py and generates PASCAL VOC style XML annotation files,
one XML file per image.
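The VOC conversion can be sketched with the standard library. `voc_annotation` is a hypothetical helper, and the tags shown (`filename`, `size`, `object/name`, `bndbox`) are only the core of the VOC schema, not everything the real script may emit:

```python
import xml.etree.ElementTree as ET

def voc_annotation(filename, width, height, boxes):
    """Build a minimal PASCAL VOC annotation; boxes is a list of
    (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.Element("annotation")
    ET.SubElement(root, "filename").text = filename
    size = ET.SubElement(root, "size")
    for tag, val in (("width", width), ("height", height), ("depth", 3)):
        ET.SubElement(size, tag).text = str(val)
    for label, xmin, ymin, xmax, ymax in boxes:
        obj = ET.SubElement(root, "object")
        ET.SubElement(obj, "name").text = label
        bb = ET.SubElement(obj, "bndbox")
        for tag, val in (("xmin", xmin), ("ymin", ymin),
                         ("xmax", xmax), ("ymax", ymax)):
            ET.SubElement(bb, tag).text = str(val)
    return ET.tostring(root, encoding="unicode")
```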
##### build_anpr_records.py:
Reads a group of PASCAL VOC style XML annotation files and combines them with the associated images to build a
TFRecord dataset. Requires a predefined label map file that maps labels to integers.
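The label map is a text-format protobuf, as used by the TensorFlow Object Detection API: one `item` per class, with IDs starting at 1 (0 is reserved for background). A minimal illustrative fragment; the class names here are assumptions, not the repository's actual labels:

```
item {
  id: 1
  name: 'plate'
}
item {
  id: 2
  name: 'A'
}
```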
##### Train the object_detection model
Now you can use tensorflow/models/research/object_detection to train the model.
It goes something like this, assuming a Python virtualenv called tensorflow,
a single GPU for training, and CPU for eval:

```
cd tensorflow/models/research/object_detection
```
###### Training

```
workon tensorflow
python train.py --logtostderr --pipeline_config_path ../anpr/experiment_faster_rcnn/training/faster_rcnn_anpr.config --train_dir ../anpr/experiment_faster_rcnn/training
```
###### Eval
If you are running the eval on CPU, limit the number of images to evaluate by modifying your config file:

```
eval_config: {
  num_examples: 5
}
```

In a new terminal:

```
workon tensorflow
export CUDA_VISIBLE_DEVICES=""
python eval.py --logtostderr --checkpoint_dir ../anpr/experiment_faster_rcnn/training --pipeline_config_path ../anpr/experiment_faster_rcnn/training/faster_rcnn_anpr.config --eval_dir ../anpr/experiment_faster_rcnn/evaluation
```

To monitor training and evaluation with TensorBoard:

```
cd tensorflow/models/research
workon tensorflow
tensorboard --logdir anpr/experiment_faster_rcnn
```

genImageListForAWS.py (+2, -1 lines)

```python
# usage
# python genImageListForAWS.py --image_dir SJ7STAR_images/2018_03_02 --output_file image_catalog.csv
from imutils import paths
import argparse
import os
```
