
Commit 56789bd

Authored by WafaaT, Mahathi-Vatsal, sramakintel, Tyler Titsworth, and sharvil10
Sync with r2.12.1 (#1530)
* update driver version (#1429)
* p0 ipex rn50 ATS-M (#1426)
* add ipex stable diffusion
* change base image P0 ITEX rn50 (#1431)
* MaskRCNN ATS-M container (#1417)
* p0 ipex stable diffusion (#1424)
* yolov5 p0 ipex ATS-M (#1425)
* itex atsm stable diffusion (#1418)
* P0 ITEX Efficientnet B0,B3 (#1411)
* EOLing docker builder files for workload containers (#1437)
* removing dockerfiles directory
* removed docker builder spec, partials
* change precision to lowercase (#1456)
* Update IPEX cpu baremetal instructions (#1451)
* clean up ipex baremetal instructions
* update horovod version in docs (#1458)
* Remove all software.intel.com links (#1381)
* Corrected software.intel.com
* Removed dev catalog pages for EOL models
* Added and updated baremetal README for P0 GPU models (#1447)
* updated the GPU readme
* PYT SPR BERT Large (#1472)
* add avx-fp32
* Adapt newer BKC
* remove idsid
* update base image
* updated tpp files for 2.12.1 release (#1479)
* updated tpp files
* added yolo5
* another update to TPPs (#1503)
* resolve merge conflicts
* Bump mlflow in /datasets/cloud_data_connector/samples/interoperability (#1492)
  Bumps [mlflow](https://github.com/mlflow/mlflow) from 2.5.0 to 2.6.0.
* Bump mlflow in /datasets/cloud_data_connector/samples/azure (#1491)
  Bumps [mlflow](https://github.com/mlflow/mlflow) from 2.5.0 to 2.6.0.
* fix issues with resolving conflicts
* P0 models list (#1500)
* sync with r2.12.1

---------

Co-authored-by: mahathis <[email protected]>
Co-authored-by: Srikanth Ramakrishna <[email protected]>
Co-authored-by: Tyler Titsworth <[email protected]>
Co-authored-by: Sharvil Shah <[email protected]>
Co-authored-by: Jitendra Patil <[email protected]>
1 parent 20a2fbb commit 56789bd

517 files changed, +12035 -22284 lines changed


README.md (+6 -1)

```diff
@@ -2,7 +2,7 @@
 
 This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors and Intel® Data Center GPUs.
 
-Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://software.intel.com/containers).
+Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html).
 
 ## Purpose of the Model Zoo
 
@@ -176,6 +176,11 @@ For best performance on Intel® Data Center GPU Flex and Max Series, please chec
 | [SSD-MobileNet*](https://arxiv.org/pdf/1704.04861.pdf)| TensorFlow | Inference | Flex Series| [Int8](/quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/README.md) |
 | [SSD-MobileNet](https://arxiv.org/pdf/1704.04861.pdf)| PyTorch | Inference | Flex Series | [Int8](/quickstart/object_detection/pytorch/ssd-mobilenet/inference/gpu/README.md) |
 | [Yolo V4](https://arxiv.org/pdf/1704.04861.pdf)| PyTorch | Inference | Flex Series | [Int8](/quickstart/object_detection/pytorch/yolov4/inference/gpu/README.md) |
+| [EfficientNet](https://arxiv.org/pdf/1905.11946.pdf) | TensorFlow | Inference | Flex Series | [FP16](/quickstart/image_recognition/tensorflow/efficientnet/inference/gpu/README.md) |
+| [MaskRCNN](https://arxiv.org/pdf/1703.06870.pdf) | TensorFlow | Inference | Flex Series | [FP16](/quickstart/image_segmentation/tensorflow/maskrcnn/inference/gpu/README.md) |
+| [Stable Diffusion](https://arxiv.org/pdf/2112.10752.pdf) | TensorFlow | Inference | Flex Series | [FP16 FP32](/quickstart/generative-ai/tensorflow/stable_diffusion/inference/gpu/README.md) |
+| [Stable Diffusion](https://arxiv.org/pdf/2112.10752.pdf) | PyTorch | Inference | Flex Series | [FP16 FP32](/quickstart/generative-ai/pytorch/stable_diffusion/inference/gpu/README.md) |
+| [Yolo V5](https://arxiv.org/pdf/2108.11539.pdf) | PyTorch | Inference | Flex Series | [FP16](/quickstart/object_detection/pytorch/yolov5/inference/gpu/README.md) |
 | [ResNet 50v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | Inference | Max Series | [Int8 FP32 FP16](/quickstart/image_recognition/tensorflow/resnet50v1_5/inference/gpu/README_Max_Series.md) |
 | [ResNet 50 v1.5](https://github.com/tensorflow/models/tree/v2.11.0/official/legacy/image_classification/resnet) | TensorFlow | Training | Max Series | [BFloat16](/quickstart/image_recognition/tensorflow/resnet50v1_5/training/gpu/README.md) |
 | [ResNet 50 v1.5](https://arxiv.org/pdf/1512.03385.pdf) | PyTorch | Inference | Max Series |[Int8](/quickstart/image_recognition/pytorch/resnet50v1_5/inference/gpu/README_Max_Series.md) |
```

benchmarks/README.md (+28 -26)

Large diff not rendered by default.

benchmarks/common/tensorflow/start.sh (+251 -4)

```diff
@@ -562,9 +562,6 @@ function bert_options() {
   if [[ -n "${OPTIMIZED_SOFTMAX}" && ${OPTIMIZED_SOFTMAX} != "" ]]; then
     CMD=" ${CMD} --optimized-softmax=${OPTIMIZED_SOFTMAX}"
   fi
-  if [[ -n "${AMP}" && ${AMP} != "" ]]; then
-    CMD=" ${CMD} --amp=${AMP}"
-  fi
 
   if [[ -n "${MPI_WORKERS_SYNC_GRADIENTS}" && ${MPI_WORKERS_SYNC_GRADIENTS} != "" ]]; then
     CMD=" ${CMD} --mpi_workers_sync_gradients=${MPI_WORKERS_SYNC_GRADIENTS}"
@@ -1418,6 +1415,38 @@ function transformer_mlperf() {
   fi
 }
 
+# GPT-J base model
+function gpt_j() {
+  if [ ${MODE} == "inference" ]; then
+    if [[ (${PRECISION} == "bfloat16") || ( ${PRECISION} == "fp32") || ( ${PRECISION} == "fp16") ]]; then
+      if [[ -z "${CHECKPOINT_DIRECTORY}" ]]; then
+        echo "Checkpoint directory not found. The script will download the model."
+      else
+        export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
+        export HF_HOME=${CHECKPOINT_DIRECTORY}
+        export HUGGINGFACE_HUB_CACHE=${CHECKPOINT_DIRECTORY}
+        export TRANSFORMERS_CACHE=${CHECKPOINT_DIRECTORY}
+      fi
+
+      if [ ${BENCHMARK_ONLY} == "True" ]; then
+        CMD=" ${CMD} --max_output_tokens=${MAX_OUTPUT_TOKENS}"
+        CMD=" ${CMD} --input_tokens=${INPUT_TOKENS}"
+        if [[ -z "${SKIP_ROWS}" ]]; then
+          SKIP_ROWS=0
+        fi
+        CMD=" ${CMD} --skip_rows=${SKIP_ROWS}"
+      fi
+      CMD=${CMD} run_model
+    else
+      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME}."
+      exit 1
+    fi
+  else
+    echo "Only inference use-case is supported for now."
+    exit 1
+  fi
+}
+
 # Wavenet model
 function wavenet() {
   if [ ${PRECISION} == "fp32" ]; then
@@ -1563,6 +1592,189 @@ function distilbert_base() {
   fi
 }
 
+function gpt_j_6B() {
+  if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "fp16" ] ||
+     [ ${PRECISION} == "bfloat16" ]; then
+
+    if [[ ${INSTALL_TRANSFORMER_FIX} != "True" ]]; then
+      echo "Information: Installing transformers from Hugging Face...!"
+      echo "python3 -m pip install git+https://github.com/intel-tensorflow/transformers@gptj_add_padding"
+      python3 -m pip install git+https://github.com/intel-tensorflow/transformers@gptj_add_padding
+    fi
+
+    export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
+    CMD="${CMD} $(add_arg "--warmup-steps" ${WARMUP_STEPS})"
+    CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+
+    if [[ ${MODE} == "training" ]]; then
+      if [[ -z "${TRAIN_OPTION}" ]]; then
+        echo "Error: Please specify a train option (GLUE, Lambada)"
+        exit 1
+      fi
+
+      CMD=" ${CMD} --train-option=${TRAIN_OPTION}"
+    fi
+
+    if [[ -z "${CACHE_DIR}" ]]; then
+      echo "Checkpoint directory not found. The script will download the model."
+    else
+      export HF_HOME=${CACHE_DIR}
+      export HUGGINGFACE_HUB_CACHE=${CACHE_DIR}
+      export TRANSFORMERS_CACHE=${CACHE_DIR}
+    fi
+
+    if [ ${NUM_INTER_THREADS} != "None" ]; then
+      CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
+    fi
+
+    if [ ${NUM_INTRA_THREADS} != "None" ]; then
+      CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
+    fi
+
+    if [[ -n "${NUM_TRAIN_EPOCHS}" && ${NUM_TRAIN_EPOCHS} != "" ]]; then
+      CMD=" ${CMD} --num-train-epochs=${NUM_TRAIN_EPOCHS}"
+    fi
+
+    if [[ -n "${LEARNING_RATE}" && ${LEARNING_RATE} != "" ]]; then
+      CMD=" ${CMD} --learning-rate=${LEARNING_RATE}"
+    fi
+
+    if [[ -n "${NUM_TRAIN_STEPS}" && ${NUM_TRAIN_STEPS} != "" ]]; then
+      CMD=" ${CMD} --num-train-steps=${NUM_TRAIN_STEPS}"
+    fi
+
+    if [[ -n "${DO_TRAIN}" && ${DO_TRAIN} != "" ]]; then
+      CMD=" ${CMD} --do-train=${DO_TRAIN}"
+    fi
+
+    if [[ -n "${DO_EVAL}" && ${DO_EVAL} != "" ]]; then
+      CMD=" ${CMD} --do-eval=${DO_EVAL}"
+    fi
+
+    if [[ -n "${TASK_NAME}" && ${TASK_NAME} != "" ]]; then
+      CMD=" ${CMD} --task-name=${TASK_NAME}"
+    fi
+
+    if [[ -n "${CACHE_DIR}" && ${CACHE_DIR} != "" ]]; then
+      CMD=" ${CMD} --cache-dir=${CACHE_DIR}"
+    fi
+
+    if [[ -n "${PROFILE}" && ${PROFILE} != "" ]]; then
+      CMD=" ${CMD} --profile=${PROFILE}"
+    fi
+
+    if [ -z ${STEPS} ]; then
+      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+    fi
+
+    if [ -z $MAX_SEQ_LENGTH ]; then
+      CMD="${CMD} $(add_arg "--max-seq-length" ${MAX_SEQ_LENGTH})"
+    fi
+    CMD=${CMD} run_model
+  else
+    echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
+    exit 1
+  fi
+}
+
+
+# vision-transformer base model
+function vision_transformer() {
+
+  if [ ${MODE} == "training" ]; then
+    CMD="${CMD} $(add_arg "--init-checkpoint" ${INIT_CHECKPOINT})"
+  fi
+
+  if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] ||
+     [ ${PRECISION} == "fp16" ]; then
+    export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
+    CMD="${CMD} $(add_arg "--warmup-steps" ${WARMUP_STEPS})"
+    CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+
+    if [ ${NUM_INTER_THREADS} != "None" ]; then
+      CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
+    fi
+
+    if [ ${NUM_INTRA_THREADS} != "None" ]; then
+      CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
+    fi
+
+    if [ -z ${STEPS} ]; then
+      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+    fi
+    CMD=${CMD} run_model
+  else
+    echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
+    exit 1
+  fi
+}
+
+# mmoe base model
+function mmoe() {
+  if [ ${MODE} == "inference" ]; then
+    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
+      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
+      CMD="${CMD} $(add_arg "--warmup-steps" ${WARMUP_STEPS})"
+      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+
+      if [ ${NUM_INTER_THREADS} != "None" ]; then
+        CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
+      fi
+
+      if [ ${NUM_INTRA_THREADS} != "None" ]; then
+        CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
+      fi
+
+      if [ -z ${STEPS} ]; then
+        CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+      fi
+
+      CMD=${CMD} run_model
+    else
+      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
+      exit 1
+    fi
+  elif [ ${MODE} == "training" ]; then
+    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
+      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
+      CMD="${CMD} $(add_arg "--train-epochs" ${TRAIN_EPOCHS})"
+      CMD="${CMD} $(add_arg "--model_dir" ${CHECKPOINT_DIRECTORY})"
+      CMD=${CMD} run_model
+    else
+      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
+      exit 1
+    fi
+  fi
+}
+
+# rgat base model
+function rgat() {
+  if [ ${MODE} == "inference" ]; then
+    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
+      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
+
+      # Installing tensorflow_gnn from its main branch
+      python3 -m pip install git+https://github.com/tensorflow/gnn.git@main
+
+      if [ ${NUM_INTER_THREADS} != "None" ]; then
+        CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
+      fi
+
+      if [ ${NUM_INTRA_THREADS} != "None" ]; then
+        CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
+      fi
+
+      CMD="${CMD} $(add_arg "--graph-schema-path" ${GRAPH_SCHEMA_PATH})"
+      CMD="${CMD} $(add_arg "--pretrained-model" ${PRETRAINED_MODEL})"
+      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+      CMD=${CMD} run_model
+    else
+      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
+      exit 1
+    fi
+  fi
+}
+
 # Wide & Deep model
 function wide_deep() {
   if [ ${PRECISION} == "fp32" ]; then
@@ -1643,6 +1855,29 @@ function wide_deep_large_ds() {
   fi
 }
 
+function graphsage() {
+  if [ ${MODE} == "inference" ]; then
+    if [ ${PRECISION} == "fp32" ] || [ ${PRECISION} == "bfloat16" ] || [ ${PRECISION} == "fp16" ]; then
+      export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
+
+      if [ ${NUM_INTER_THREADS} != "None" ]; then
+        CMD="${CMD} $(add_arg "--num-inter-threads" ${NUM_INTER_THREADS})"
+      fi
+
+      if [ ${NUM_INTRA_THREADS} != "None" ]; then
+        CMD="${CMD} $(add_arg "--num-intra-threads" ${NUM_INTRA_THREADS})"
+      fi
+
+      CMD="${CMD} $(add_arg "--pretrained-model" ${PRETRAINED_MODEL})"
+      CMD="${CMD} $(add_arg "--steps" ${STEPS})"
+      CMD=${CMD} run_model
+    else
+      echo "PRECISION=${PRECISION} not supported for ${MODEL_NAME} in this repo."
+      exit 1
+    fi
+  fi
+}
+
 LOGFILE=${OUTPUT_DIR}/${LOG_FILENAME}
 
 MODEL_NAME=$(echo ${MODEL_NAME} | tr 'A-Z' 'a-z')
@@ -1707,7 +1942,19 @@ elif [ ${MODEL_NAME} == "bert_large" ]; then
 elif [ ${MODEL_NAME} == "dien" ]; then
   dien
 elif [ ${MODEL_NAME} == "distilbert_base" ]; then
-  distilbert_base
+  distilbert_base
+elif [ ${MODEL_NAME} == "vision_transformer" ]; then
+  vision_transformer
+elif [ ${MODEL_NAME} == "gpt_j_6b" ]; then
+  gpt_j_6B
+elif [ ${MODEL_NAME} == "mmoe" ]; then
+  mmoe
+elif [ ${MODEL_NAME} == "graphsage" ]; then
+  graphsage
+elif [ ${MODEL_NAME} == "gpt_j" ]; then
+  gpt_j
+elif [ ${MODEL_NAME} == "rgat" ]; then
+  rgat
 else
   echo "Unsupported model: ${MODEL_NAME}"
   exit 1
```
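
The new functions above are configured entirely through environment variables and dispatched on the lower-cased `MODEL_NAME` (final hunk). As a minimal sketch, here is the environment the new `gpt_j` inference path reads; every value shown is an illustrative placeholder rather than a setting from this commit, and in the repo `launch_benchmark.py` normally assembles this environment (including `OUTPUT_DIR` and `LOG_FILENAME`, which `start.sh` uses for its log file) before `start.sh` runs. The `add_arg` helper referenced throughout is defined elsewhere in `start.sh` and is not part of this diff.

```bash
# Hypothetical environment for the gpt_j path added above; all values are
# placeholders. start.sh lower-cases MODEL_NAME before dispatching on it.
export MODEL_NAME=gpt_j
export MODE=inference          # gpt_j only supports inference and exits otherwise
export PRECISION=bfloat16      # bfloat16, fp32, and fp16 are accepted
export BENCHMARK_ONLY=True     # enables the token flags below
export MAX_OUTPUT_TOKENS=32    # forwarded as --max_output_tokens
export INPUT_TOKENS=1024       # forwarded as --input_tokens
# SKIP_ROWS is optional; gpt_j defaults it to 0 when unset.

# If CHECKPOINT_DIRECTORY is set, gpt_j reuses it as the Hugging Face cache
# (HF_HOME, HUGGINGFACE_HUB_CACHE, TRANSFORMERS_CACHE); if it is unset, the
# model is downloaded instead.
export CHECKPOINT_DIRECTORY=/tmp/gptj_cache
```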

benchmarks/image_recognition/tensorflow/densenet169/inference/README.md (-4)

```diff
@@ -123,7 +123,3 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
 ## Additional Resources
 
 * To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [<int8 precision>](<int8 advanced readme link>) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
-* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
-workload container:<br />
-[https://www.intel.com/content/www/us/en/developer/articles/machine-learning-model/densenet169-fp32-inference-tensorflow-model.html](https://www.intel.com/content/www/us/en/developer/articles/machine-learning-model/densenet169-fp32-inference-tensorflow-model.html).
-
```

benchmarks/image_recognition/tensorflow/inceptionv3/inference/README.md (+2 -2)

```diff
@@ -137,7 +137,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
 ## Additional Resources
 
 * To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
-* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
+* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
 workload container:<br />
-[https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html).
+[https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv3-fp32-inference-tensorflow-container.html).
 
```

benchmarks/image_recognition/tensorflow/inceptionv4/inference/README.md (+2 -2)

```diff
@@ -128,7 +128,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
 ## Additional Resources
 
 * To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
-* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
+* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
 workload container:<br />
-[https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html).
+[https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/inceptionv4-fp32-inference-tensorflow-container.html).
 
```

benchmarks/image_recognition/tensorflow/mobilenet_v1/inference/README.md (+2 -2)

```diff
@@ -135,7 +135,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
 ## Additional Resources
 
 * To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [BFloat16](bfloat16/Advanced.md) for calling the `launch_benchmark.py` script directly.
-* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
+* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
 workload container:<br />
-[https://software.intel.com/content/www/us/en/develop/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html).
+[https://www.intel.com/content/www/us/en/developer/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/mobilenetv1-fp32-inference-tensorflow-container.html).
 
```

benchmarks/image_recognition/tensorflow/resnet101/inference/README.md (+2 -2)

```diff
@@ -135,7 +135,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
 ## Additional Resources
 
 * To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
-* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
+* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
 workload container:<br />
-[https://software.intel.com/content/www/us/en/develop/articles/containers/resnet101-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/resnet101-fp32-inference-tensorflow-container.html).
+[https://www.intel.com/content/www/us/en/developer/articles/containers/resnet101-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet101-fp32-inference-tensorflow-container.html).
 
```

benchmarks/image_recognition/tensorflow/resnet50/inference/README.md (+2 -2)

```diff
@@ -135,7 +135,7 @@ As an example, if the dataset location on Windows is `D:\user\ImageNet`, convert
 ## Additional Resources
 
 * To run more advanced use cases, see the instructions for the available precisions [FP32](fp32/Advanced.md) [Int8](int8/Advanced.md) [<bfloat16 precision>](<bfloat16 advanced readme link>) for calling the `launch_benchmark.py` script directly.
-* To run the model using docker, please see the [Intel® Developer Catalog](http://software.intel.com/containers)
+* To run the model using docker, please see the [Intel® Developer Catalog](https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html)
 workload container:<br />
-[https://software.intel.com/content/www/us/en/develop/articles/containers/resnet50-fp32-inference-tensorflow-container.html](https://software.intel.com/content/www/us/en/develop/articles/containers/resnet50-fp32-inference-tensorflow-container.html).
+[https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50-fp32-inference-tensorflow-container.html](https://www.intel.com/content/www/us/en/developer/articles/containers/resnet50-fp32-inference-tensorflow-container.html).
 
```
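
The README hunks above all apply the same mechanical rewrite: the Developer Catalog landing link moves from software.intel.com/containers to its www.intel.com equivalent, and the per-model container articles move from `/develop/articles/containers/` to `/developer/articles/containers/` (the densenet169 hunk instead drops its catalog links entirely, since the dev catalog pages for EOL models were removed). As a hedged sketch, not a script from this commit, a sweep like that could be expressed with GNU sed across the benchmark READMEs:

```bash
# Illustrative bulk rewrite of the two link patterns visible in these diffs.
# Assumes GNU sed (in-place -i without a suffix) and that no other URL
# variants exist; the actual commit edited the files individually.
find benchmarks -name 'README.md' -print0 | xargs -0 sed -i \
  -e 's#http://software.intel.com/containers#https://www.intel.com/content/www/us/en/developer/tools/software-catalog/containers.html#g' \
  -e 's#https://software.intel.com/content/www/us/en/develop/articles/containers/#https://www.intel.com/content/www/us/en/developer/articles/containers/#g'
```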
