@@ -20,17 +20,17 @@ modes/precisions:
   ```

3. Clone the [tf_unet](https://github.com/jakeret/tf_unet) repository,
-   and then get [PR #202](https://github.com/jakeret/tf_unet/pull/202)
+   and then get [PR #276](https://github.com/jakeret/tf_unet/pull/276)
   to get cpu optimizations:

   ```
   $ git clone git@github.com:jakeret/tf_unet.git

   $ cd tf_unet/

-   $ git fetch origin pull/202/head:cpu_optimized
+   $ git fetch origin pull/276/head:cpu_optimized
   From github.com:jakeret/tf_unet
-    * [new ref]         refs/pull/202/head -> cpu_optimized
+    * [new ref]         refs/pull/276/head -> cpu_optimized

   $ git checkout cpu_optimized
   Switched to branch 'cpu_optimized'
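Once the PR branch is checked out, a quick sanity check can confirm that the working tree is on `cpu_optimized` and contains the fetched commits. This is a minimal sketch, not part of the patched README; the commit hash and subject are placeholders:

```
$ git rev-parse --abbrev-ref HEAD   # prints the current branch name
cpu_optimized

$ git log --oneline -1              # most recent commit fetched from the PR
<commit-hash> <subject of the latest commit from PR #276>
```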
@@ -60,7 +60,7 @@ modes/precisions:
   --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
   --checkpoint /home/<user>/unet_trained \
   --model-source-dir /home/<user>/tf_unet \
-   -- checkpoint_name=model.cpkt
+   -- checkpoint_name=model.ckpt
   ```

   Note that the `--verbose` or `--output-dir` flag can be added to the above
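The note above refers to optional flags on the benchmark command; a minimal sketch of how they might be appended is shown below, assuming the invocation is the model zoo's `launch_benchmark.py` script. The python command and the flags ahead of `--docker-image` sit outside this hunk and are elided, and the output directory path is only an example:

```
# Sketch only: launch_benchmark.py, the elided flags ("..."), and the
# /home/<user>/unet_output path are assumptions; --verbose and --output-dir
# are the optional flags mentioned in the note above.
$ python launch_benchmark.py \
    ... \
    --docker-image gcr.io/deeplearning-platform-release/tf-cpu.1-14 \
    --checkpoint /home/<user>/unet_trained \
    --model-source-dir /home/<user>/tf_unet \
    --verbose \
    --output-dir /home/<user>/unet_output \
    -- checkpoint_name=model.ckpt
```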
@@ -75,4 +75,4 @@ modes/precisions:
   Total samples/sec: 905.5344 samples/s
   Ran inference with batch size 1
   Log location outside container: {--output-dir value}/benchmark_unet_inference_fp32_20190201_205601.log
-   ```
+   ```