# Fast R-CNN (ResNet50)

This document has instructions for how to run Fast R-CNN for the
following modes/platforms:
* [FP32 inference](#fp32-inference-instructions)

Benchmarking instructions and scripts for Fast R-CNN ResNet50 model training and inference
on other platforms are coming later.

## FP32 Inference Instructions

1. Clone the `tensorflow/models` and `cocoapi` repositories:

```
$ git clone git@github.com:tensorflow/models.git
$ cd models
$ git clone https://github.com/cocodataset/cocoapi.git
```

The TensorFlow models repo will be used for running inference as well as
converting the coco dataset to the TF records format.

2. Download the 2017 validation
[COCO dataset](http://cocodataset.org/#home) and annotations:

```
$ mkdir val
$ cd val
$ wget http://images.cocodataset.org/zips/val2017.zip
$ unzip val2017.zip
$ cd ..

$ mkdir annotations
$ cd annotations
$ wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
$ unzip annotations_trainval2017.zip
$ cd ..
```

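As a quick sanity check after unzipping, you can count the extracted images; COCO val2017 should contain 5,000 JPEG files. The sketch below is not part of the original instructions, and the directory path is an assumption based on the layout above:

```python
import glob
import os

def count_jpegs(directory):
    """Count .jpg files directly inside a directory (non-recursive)."""
    return len(glob.glob(os.path.join(directory, "*.jpg")))

# For the layout above, COCO val2017 is expected to hold 5000 images:
# count_jpegs("val/val2017")
```
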
Since we are only using the validation dataset in this example, we will
create an empty directory and an empty annotations JSON file to pass as the
train and test directories in the next step.

```
$ mkdir empty_dir

$ cd annotations
$ echo "{ \"images\": {}, \"categories\": {}}" > empty.json
$ cd ..
```

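The same placeholder annotations file can be produced in Python. This is a sketch equivalent to the shell step above (assuming the `annotations` directory from step 2 exists relative to the working directory), with a round-trip check that the placeholder parses as JSON:

```python
import json
import os

# Minimal placeholder annotations: the conversion script only needs the
# unused train/test splits to point at parseable JSON.
placeholder = {"images": {}, "categories": {}}

os.makedirs("annotations", exist_ok=True)
with open("annotations/empty.json", "w") as f:
    json.dump(placeholder, f)

# Sanity check: the file round-trips as valid JSON.
with open("annotations/empty.json") as f:
    assert json.load(f) == placeholder
```
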
3. Now that you have the raw COCO dataset, we need to convert it to the
TF records format in order to use it with the inference script. We will
do this by running the `create_coco_tf_record.py` script from the TensorFlow
models repo.

Follow the steps below to navigate to the proper directory and point the
script to the raw COCO dataset files that you downloaded in step 2.
The `--output_dir` is the location where the TF record files will be
written after the script has completed.

```
# Check out the git commit below to use an older version of the conversion script
$ cd models
$ git checkout 7a9934df2afdf95be9405b4e9f1f2480d748dc40

$ cd research/object_detection/dataset_tools/
$ python create_coco_tf_record.py --logtostderr \
      --train_image_dir="/home/myuser/coco/empty_dir" \
      --val_image_dir="/home/myuser/coco/val/val2017" \
      --test_image_dir="/home/myuser/coco/empty_dir" \
      --train_annotations_file="/home/myuser/coco/annotations/empty.json" \
      --val_annotations_file="/home/myuser/coco/annotations/instances_val2017.json" \
      --testdev_annotations_file="/home/myuser/coco/annotations/empty.json" \
      --output_dir="/home/myuser/coco/output"

$ ll /home/myuser/coco/output
total 1598276
-rw-rw-r--. 1 myuser myuser         0 Nov  2 21:46 coco_testdev.record
-rw-rw-r--. 1 myuser myuser         0 Nov  2 21:46 coco_train.record
-rw-rw-r--. 1 myuser myuser 818336740 Nov  2 21:46 coco_val.record

# Go back to the main models directory and check out the master branch
$ cd /home/myuser/models
$ git checkout master
```

The `coco_val.record` file is what we will use in this inference example.

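If you want an extra sanity check that `coco_val.record` actually contains records, the sketch below (not part of the original instructions) walks the TFRecord framing, where each record is a little-endian uint64 length, a 4-byte length CRC, the payload, and a 4-byte payload CRC. It counts records without validating checksums:

```python
import struct

def count_tfrecords(path):
    """Count records in a TFRecord file by walking its length-prefixed framing.

    Each record is: uint64 length, uint32 length-CRC, payload, uint32 payload-CRC.
    CRCs are skipped, not validated.
    """
    count = 0
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break  # end of file
            (length,) = struct.unpack("<Q", header)
            f.seek(4 + length + 4, 1)  # skip length-CRC, payload, payload-CRC
            count += 1
    return count
```

For example, `count_tfrecords("/home/myuser/coco/output/coco_val.record")` should return a nonzero count if the conversion succeeded.
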
4. Download the pre-trained model `fast_rcnn_resnet50_fp32_coco_pretrained_model.tar.gz`.
The pre-trained model includes the checkpoint files and the Fast R-CNN ResNet50 model `pipeline.config`.
Extract it and verify its contents as shown:
```
$ wget https://storage.cloud.google.com/intel-optimized-tensorflow/models/fast_rcnn_resnet50_fp32_coco_pretrained_model.tar.gz
$ tar -xzvf fast_rcnn_resnet50_fp32_coco_pretrained_model.tar.gz
$ ls -l fast_rcnn_resnet50_fp32_coco
total 374848
-rw-r--r-- 1 myuser myuser        77 Nov 12 22:33 checkpoint
-rw-r--r-- 1 myuser myuser 176914228 Nov 12 22:33 model.ckpt.data-00000-of-00001
-rw-r--r-- 1 myuser myuser     14460 Nov 12 22:33 model.ckpt.index
-rw-r--r-- 1 myuser myuser   5675175 Nov 12 22:33 model.ckpt.meta
-rwxr--r-- 1 myuser myuser      5056 Nov 12 22:33 mscoco_label_map.pbtxt
-rwxr-xr-x 1 myuser myuser      3244 Nov 12 22:33 pipeline.config
drwxr-xr-x 4 myuser myuser       128 Nov 12 22:30 saved_model
```
Make sure that the `eval_input_reader` section in the `pipeline.config` file points to the
mounted `coco_val.record` data and the pre-trained model's `mscoco_label_map.pbtxt` location.

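For reference, the `eval_input_reader` section of `pipeline.config` should look roughly like the fragment below. The exact paths here are assumptions based on the container mount points (`/dataset`, `/checkpoints`) that appear in the benchmark log in step 7; adjust them to your setup:

```
eval_input_reader: {
  tf_record_input_reader {
    input_path: "/dataset/coco_val.record"
  }
  label_map_path: "/checkpoints/mscoco_label_map.pbtxt"
  shuffle: false
}
```
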
5. Clone the [intelai/models](https://github.com/intelai/models) repo.
This repo has the launch script for running benchmarking.

```
$ git clone git@github.com:IntelAI/models.git
Cloning into 'models'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 11 (delta 3), reused 4 (delta 0), pack-reused 0
Receiving objects: 100% (11/11), done.
Resolving deltas: 100% (3/3), done.
```

6. Run the `launch_benchmark.py` script from the intelai/models repo
with the appropriate parameters, including: the
`coco_val.record` data location (from step 3), the pre-trained model
`pipeline.config` file and checkpoint location (from step 4), and the
location of your `tensorflow/models` clone (from step 1).

```
$ cd /home/myuser/models/benchmarks

$ python launch_benchmark.py \
    --data-location /home/myuser/coco/output/ \
    --model-source-dir /home/myuser/tensorflow/models \
    --model-name fastrcnn \
    --framework tensorflow \
    --platform fp32 \
    --mode inference \
    --checkpoint /home/myuser/fast_rcnn_resnet50_fp32_coco \
    --docker-image intelaipg/intel-optimized-tensorflow:latest-devel-mkl \
    -- config-file=pipeline.config
```

7. The log file is saved to:
`models/benchmarks/common/tensorflow/logs/benchmark_fastrcnn_inference.log`

The tail of the log output when the benchmarking completes should look
something like this:

```
Time spent : 172.880 seconds.
Time spent per BATCH: 0.173 seconds.
Received these standard args: Namespace(batch_size=-1, checkpoint='/checkpoints', config='/checkpoints/pipeline.config', data_location='/dataset', inference_only=True, num_cores=-1, num_inter_threads=1, num_intra_threads=28, single_socket=True, socket_id=0, verbose=True)
Received these custom args: []
Initialize here.
Run model here. numactl --cpunodebind=0 --membind=0 python object_detection/eval.py --num_inter_threads 1 --num_intra_threads 28 --pipeline_config_path /checkpoints/pipeline.config --checkpoint_dir /checkpoints --eval_dir /tensorflow-models/research/object_detection/log/eval
```

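If you want to extract the timing programmatically, a small parser along these lines works on the log tail shown above. This is a sketch, not part of the original instructions:

```python
import re

def seconds_per_batch(log_text):
    """Pull the 'Time spent per BATCH' value out of benchmark log output."""
    match = re.search(r"Time spent per BATCH:\s*([0-9.]+)\s*seconds", log_text)
    return float(match.group(1)) if match else None

log_tail = "Time spent : 172.880 seconds.\nTime spent per BATCH: 0.173 seconds."
print(seconds_per_batch(log_tail))  # -> 0.173
```
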