Commit dbacad2

[Enhancement] Update user api and docs with tools/xxx (#2603)

1 parent 57a16a0 commit dbacad2

88 files changed: +263, -268 lines
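Across the files shown here, the documented commands are updated to call the user-facing scripts from their new locations: train.py, val.py, predict.py, and export.py are now invoked from tools/, and the dataset utilities (convert_cityscapes.py, voc_augment.py, split_dataset_list.py, create_dataset_list.py, gray2pseudo_color.py, convert_cocostuff.py, convert_voc2010.py) from tools/data/. A minimal before/after sketch of the pattern, restating commands that appear in the diffs below (the concrete config path is just the example used there):

```shell
# Before this commit: scripts were invoked from the repository root
python train.py --config configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml

# After this commit: the same scripts are invoked from tools/
python tools/train.py --config configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml

# Data-preparation helpers follow the same pattern, now under tools/data/
python tools/data/convert_cityscapes.py --cityscapes_path data/cityscapes --num_workers 8
```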


configs/pp_liteseg/README.md (+2, -2)

@@ -54,7 +54,7 @@ export model=pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k # test resol
 # export model=pp_liteseg_stdc2_cityscapes_1024x512_scale1.0_160k
 # export model=pp_liteseg_stdc1_camvid_960x720_10k
 # export model=pp_liteseg_stdc2_camvid_960x720_10k
-python -m paddle.distributed.launch train.py \
+python -m paddle.distributed.launch tools/train.py \
 --config configs/pp_liteseg/${model}.yml \
 --save_dir output/${model} \
 --save_interval 1000 \
@@ -77,7 +77,7 @@ Refer to [doc](../../docs/evaluation/evaluate/evaluate.md) for the detailed usag
 export CUDA_VISIBLE_DEVICES=0
 export model=pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k
 # export other model
-python val.py \
+python tools/val.py \
 --config configs/pp_liteseg/${model}.yml \
 --model_path output/${model}/best_model/model.pdparams \
 --num_workers 3

configs/pssl/README.md (+2, -3)

@@ -36,7 +36,6 @@ Make sure that the datasets have structures as follows:
 
 ```
 PaddleSeg
-│ train.py
 │ ...
 
 └───data
@@ -72,7 +71,7 @@ Having installed PaddlePaddle and PaddleSeg and prepared datasets (ImageNet and
 export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
 
 path_save="work_dirs_stdc2_pssl"
-python -m paddle.distributed.launch --log_dir $path_save train.py \
+python -m paddle.distributed.launch --log_dir $path_save tools/train.py \
 --config configs/pssl/stdc2_seg_pssl.yml \
 --log_iters 200 \
 --num_workers 12 \
@@ -87,7 +86,7 @@ python -m paddle.distributed.launch --log_dir $path_save train.py \
 export CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7
 
 path_save="work_dirs_pp_liteseg_stdc2_pssl"
-python -m paddle.distributed.launch --log_dir $path_save train.py \
+python -m paddle.distributed.launch --log_dir $path_save tools/train.py \
 --config configs/pssl/pp_liteseg_stdc2_pssl.yml \
 --log_iters 100 \
 --num_workers 12 \

configs/smrt/README.md (+3, -3)

@@ -119,7 +119,7 @@ export CUDA_VISIBLE_DEVICES=0 # Set 1 available GPU on Linux
 
 cd PaddleSeg
 
-python train.py \
+python tools/train.py \
 --config configs/smrt/pp_liteseg_stdc2.yml \
 --do_eval \
 --use_vdl \
@@ -142,7 +142,7 @@ python train.py \
 ```
 export CUDA_VISIBLE_DEVICES=0,1,2,3 # Set 4 available GPUs
 
-python -m paddle.distributed.launch train.py \
+python -m paddle.distributed.launch tools/train.py \
 --config configs/smrt/pp_liteseg_stdc2.yml \
 --do_eval \
 --use_vdl \
@@ -155,7 +155,7 @@ python -m paddle.distributed.launch train.py \
 After training a model whose accuracy meets expectations, you can export an inference model for deployment. For details on model export, see the [documentation](../../docs/model_export_cn.md).
 
 ```
-python export.py \
+python tools/export.py \
 --config configs/smrt/pp_liteseg_stdc2.yml \
 --model_path output/pp_liteseg_stdc2/best_model/model.pdparams \
 --save_dir output/pp_liteseg_stdc2/infer_models

contrib/CityscapesSOTA/README.md (+1, -1)

@@ -47,7 +47,7 @@ Firstly please download 3 files from [Cityscapes dataset](https://www.cityscapes
 Run the following commands to do the label conversion:
 ```shell
 pip install cityscapesscripts
-python ../../tools/convert_cityscapes.py --cityscapes_path data/cityscapes --num_workers 8
+python ../../tools/data/convert_cityscapes.py --cityscapes_path data/cityscapes --num_workers 8
 ```
 Where 'cityscapes_path' should be adjusted according to the actual dataset path. 'num_workers' determines the number of processes started and the size can be adjusted according to the actual situation.

contrib/PP-HumanSeg/README.md (+4, -4)

@@ -354,7 +354,7 @@ Run the following command to start finetuning. You should change the details, su
 ```bash
 export CUDA_VISIBLE_DEVICES=0 # Set GPU on Linux
 # set CUDA_VISIBLE_DEVICES=0 # Set GPU on Windows
-python ../../train.py \
+python ../../tools/train.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --save_dir output/human_pp_humansegv2_lite \
 --save_interval 100 --do_eval --use_vdl
@@ -365,7 +365,7 @@ python ../../train.py \
 Load model and trained weights and start model evaluation. The full usage of model evaluation in [url](../../docs/evaluation/evaluate/evaluate.md).
 
 ```bash
-python ../../val.py \
+python ../../tools/val.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --model_path pretrained_models/human_pp_humansegv2_lite_192x192_pretrained/model.pdparams
 ```
@@ -375,7 +375,7 @@ python ../../val.py \
 Load model and trained weights and start model prediction. The result are saved in `./data/images_result/added_prediction` and `./data/images_result/pseudo_color_prediction`
 
 ```bash
-python ../../predict.py \
+python ../../tools/predict.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --model_path pretrained_models/human_pp_humansegv2_lite_192x192_pretrained/model.pdparams \
 --image_path data/images/human.jpg \
@@ -387,7 +387,7 @@ python ../../predict.py \
 Load model and trained weights and export inference model. The full usage of model exporting in [url](../../docs/model_export.md).
 
 ```shell
-python ../../export.py \
+python ../../tools/export.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --model_path pretrained_models/human_pp_humansegv2_lite_192x192_pretrained/model.pdparams \
 --save_dir output/human_pp_humansegv2_lite \

contrib/PP-HumanSeg/README_cn.md (+4, -4)

@@ -349,7 +349,7 @@ configs
 ```bash
 export CUDA_VISIBLE_DEVICES=0 # Set 1 available GPU on Linux
 # set CUDA_VISIBLE_DEVICES=0 # Set 1 available GPU on Windows
-python ../../train.py \
+python ../../tools/train.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --save_dir output/human_pp_humansegv2_lite \
 --save_interval 100 --do_eval --use_vdl
@@ -360,7 +360,7 @@ python ../../train.py \
 Run the following command to load the model and trained weights, evaluate the model, and report the accuracy on the validation set. For detailed documentation on model evaluation, see the [link](../../docs/evaluation/evaluate/evaluate_cn.md).
 
 ```bash
-python ../../val.py \
+python ../../tools/val.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --model_path pretrained_models/human_pp_humansegv2_lite_192x192_pretrained/model.pdparams
 ```
@@ -370,7 +370,7 @@ python ../../val.py \
 Run the following command to load the model and trained weights and predict a single image. The results are saved in the `added_prediction` and `pseudo_color_prediction` folders under `./data/images_result`.
 
 ```bash
-python ../../predict.py \
+python ../../tools/predict.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --model_path pretrained_models/human_pp_humansegv2_lite_192x192_pretrained/model.pdparams \
 --image_path data/images/human.jpg \
@@ -382,7 +382,7 @@ python ../../predict.py \
 Run the following command to load the model and trained weights and export the inference model. For detailed documentation on model export, see the [link](../../docs/model_export_cn.md).
 
 ```shell
-python ../../export.py \
+python ../../tools/export.py \
 --config configs/human_pp_humansegv2_lite.yml \
 --model_path pretrained_models/human_pp_humansegv2_lite_192x192_pretrained/model.pdparams \
 --save_dir output/human_pp_humansegv2_lite \

contrib/PaddleLabel/doc/CN/training/PdLabel_PdSeg.md (+1, -1)

@@ -125,7 +125,7 @@ export CUDA_VISIBLE_DEVICES=0
 # --config specifies which config file to use
 # --do_eval runs evaluation during training
 # --save_interval saves a model checkpoint every 100 iters
-python PaddleSeg/train.py \
+python PaddleSeg/tools/train.py \
 --config PaddleSeg/configs/mynet.yml \
 --do_eval \
 --use_vdl \

docs/apis/datasets.md (+1, -1)

@@ -74,7 +74,7 @@ dataset = Dataset(transforms = transforms,
 > CLASS paddleseg.datasets.PascalVOC(transforms, dataset_root=None, mode='train', edge=False)
 
 PascalVOC2012 dataset `http://host.robots.ox.ac.uk/pascal/VOC/`.
-If you want to augment the dataset, please run the voc_augment.py in tools.
+If you want to augment the dataset, please run the voc_augment.py in tools/data.
 
 > > Args
 > > > - **transforms** (list): Transforms for image.

docs/apis/datasets/datasets.md (+1, -1)

@@ -78,7 +78,7 @@ class paddleseg.datasets.Cityscapes(transforms, dataset_root, mode='train', edge
 class paddleseg.datasets.PascalVOC(transforms, dataset_root=None, mode='train', edge=False)
 ```
 PascalVOC2012 dataset `http://host.robots.ox.ac.uk/pascal/VOC/`.
-If you want to augment the dataset, please run the voc_augment.py in tools.
+If you want to augment the dataset, please run the voc_augment.py in tools/data.
 
 ### Args
 * **transforms** (list): Transforms for image.

docs/apis/datasets/datasets_cn.md (+1, -1)

@@ -77,7 +77,7 @@ class paddleseg.datasets.Cityscapes(transforms, dataset_root, mode='train', edge
 class paddleseg.datasets.PascalVOC(transforms, dataset_root=None, mode='train', edge=False)
 ```
 PascalVOC2012 dataset `http://host.robots.ox.ac.uk/pascal/VOC/`.
-If you want to augment the dataset, run voc_augment.py in tools.
+If you want to augment the dataset, run voc_augment.py in tools/data.
 
 ### Args
 * **transforms** (list): Transforms applied to the image.

docs/data/custom/data_prepare.md (+4, -4)

@@ -50,7 +50,7 @@ If your dataset is not organized as the aforementioned structure, we suggest tha
 
 The commands used are as follows, which supports enabling specific functions through different Flags.
 ```
-python tools/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
+python tools/data/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
 ```
 Parameters:
 - dataset_root: Dataset root directory
@@ -74,18 +74,18 @@ After running, `train.txt`, `val.txt`, `test.txt` and `labels.txt` will be gener
 
 #### Example
 ```
-python tools/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
+python tools/data/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
 ```
 
 ### 1.2 Generate txt files
 If you only have a divided dataset, you can generate a file list by executing the following script:
 ```
 # Generate a file list, the separator is a space, and the data format of the picture and the label set is png
-python tools/create_dataset_list.py <your/dataset/dir> --separator " " --format png png
+python tools/data/create_dataset_list.py <your/dataset/dir> --separator " " --format png png
 ```
 ```
 # Generate a list of files. The folders for pictures and tag sets are named img and gt, and the folders for training and validation sets are named training and validation. No test set list is generated.
-python tools/create_dataset_list.py <your/dataset/dir> \
+python tools/data/create_dataset_list.py <your/dataset/dir> \
 --folder img gt --second_folder training validation
 ```
 **Note:** A custom dataset directory must be specified, and FLAG can be set as needed. There is no need to specify `--type`.
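For orientation, a minimal sketch that chains the relocated data-preparation and training scripts end to end; the dataset directory data/custom_dataset and the config path below are illustrative placeholders, not paths taken from this commit:

```shell
# Split an unpartitioned custom dataset and generate train/val/test lists
# (data/custom_dataset, images, and annotations are placeholder names)
python tools/data/split_dataset_list.py data/custom_dataset images annotations \
    --split 0.6 0.2 0.2 --format jpg png

# Train with a config that points at the generated file lists
# (path/to/your_config.yml is a placeholder)
python tools/train.py \
    --config path/to/your_config.yml \
    --save_dir output/custom_model \
    --do_eval --save_interval 1000
```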

docs/data/custom/data_prepare_cn.md (+4, -4)

@@ -50,7 +50,7 @@
 
 The command is as follows; specific functions can be enabled through different Flags.
 ```
-python tools/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
+python tools/data/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
 ```
 Parameters:
 - dataset_root: dataset root directory
@@ -73,7 +73,7 @@ FLAGS:
 
 Example:
 ```
-python tools/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
+python tools/data/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
 ```
 
 
@@ -82,11 +82,11 @@ python tools/split_dataset_list.py <dataset_root> images annotations --split 0.6
 If you only have a pre-split dataset, you can generate the file lists by running the following script:
 ```
 # Generate file lists; the separator is a space, and both images and labels are in png format
-python tools/create_dataset_list.py <your/dataset/dir> --separator " " --format png png
+python tools/data/create_dataset_list.py <your/dataset/dir> --separator " " --format png png
 ```
 ```
 # Generate file lists; the image and label folders are named img and gt, the training and validation folders are named training and validation, and no test set list is generated
-python tools/create_dataset_list.py <your/dataset/dir> \
+python tools/data/create_dataset_list.py <your/dataset/dir> \
 --folder img gt --second_folder training validation
 ```

docs/data/marker/marker.md (+4, -4)

@@ -28,7 +28,7 @@ At the same time, PaddleSeg is also compatible with gray-scale icon annotations.
 If users need to convert to pseudo-color annotation maps, they can use our conversion tool. Applies to the following two common situations:
 * If you want to convert all grayscale annotation images in a specified directory to pseudo-color annotation images, execute the following command to specify the directory where the grayscale annotations are located.
 ```buildoutcfg
-python tools/gray2pseudo_color.py <dir_or_file> <output_dir>
+python tools/data/gray2pseudo_color.py <dir_or_file> <output_dir>
 ```
 
 |Parameter|Effection|
@@ -38,7 +38,7 @@ python tools/gray2pseudo_color.py <dir_or_file> <output_dir>
 
 * If you only want to convert part of the gray scale annotated image in the specified dataset to pseudo-color annotated image, execute the following command, you need an existing file list, and read the specified image according to the list.
 ```buildoutcfg
-python tools/gray2pseudo_color.py <dir_or_file> <output_dir> --dataset_dir <dataset directory> --file_separator <file list separator>
+python tools/data/gray2pseudo_color.py <dir_or_file> <output_dir> --dataset_dir <dataset directory> --file_separator <file list separator>
 ```
 |Parameter|Effection|
 |-|-|
@@ -84,7 +84,7 @@ For all data that is not divided into training set, validation set, and test set
 The following commands support enabling specific functions through different Flags.
 
 ```
-python tools/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
+python tools/data/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
 ```
 
 Parameters:
@@ -103,7 +103,7 @@ FLAGS:
 
 The example of usage:
 ```
-python tools/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
+python tools/data/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
 ```
 
 After running, `train.txt`, `val.txt`, `test.txt` and `labels.txt` will be generated in the root directory of the dataset.

docs/data/marker/marker_cn.md (+4, -4)

@@ -30,7 +30,7 @@ PaddleSeg supports both grayscale and pseudo-color annotation images.
 * To convert all grayscale annotation images in a specified directory to pseudo-color annotation images, run the following command.
 
 ```buildoutcfg
-python tools/gray2pseudo_color.py <dir_or_file> <output_dir>
+python tools/data/gray2pseudo_color.py <dir_or_file> <output_dir>
 ```
 
 |Parameter|Purpose|
@@ -41,7 +41,7 @@ python tools/gray2pseudo_color.py <dir_or_file> <output_dir>
 * To convert only some of the grayscale annotation images in a specified dataset to pseudo-color annotation images, run the following command.
 
 ```buildoutcfg
-python tools/gray2pseudo_color.py <dir_or_file> <output_dir> --dataset_dir <dataset directory> --file_separator <file list separator>
+python tools/data/gray2pseudo_color.py <dir_or_file> <output_dir> --dataset_dir <dataset directory> --file_separator <file list separator>
 ```
 |Parameter|Purpose|
 |-|-|
@@ -85,7 +85,7 @@ custom_dataset
 PaddleSeg provides a script that splits the data and generates the file lists.
 
 ```
-python tools/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
+python tools/data/split_dataset_list.py <dataset_root> <images_dir_name> <labels_dir_name> ${FLAGS}
 ```
 
 Parameters:
@@ -104,7 +104,7 @@ FLAGS:
 
 Example:
 ```
-python tools/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
+python tools/data/split_dataset_list.py <dataset_root> images annotations --split 0.6 0.2 0.2 --format jpg png
 ```
 
 After running, `train.txt`, `val.txt`, and `test.txt` will be generated in the dataset root directory, as follows.

docs/data/pre_data.md (+4, -4)

@@ -62,7 +62,7 @@ We recommend that you store dataset in `PaddleSeg/data` for full compatibility w
 Run the following command to convert labels:
 ```shell
 pip install cityscapesscripts
-python tools/convert_cityscapes.py --cityscapes_path data/cityscapes --num_workers 8
+python tools/data/convert_cityscapes.py --cityscapes_path data/cityscapes --num_workers 8
 ```
 where `cityscapes_path` should be adjusted according to the actual dataset path. `num_workers` determines the number of processes to be started. The value can be adjusted as required.
 
@@ -74,7 +74,7 @@ Generally, we will use [SBD(Semantic Boundaries Dataset)](http://home.bharathh.i
 Run the following commands to download the SBD dataset and use it to expand:
 ```shell
 cd PaddleSeg
-python tools/voc_augment.py --voc_path data/VOCdevkit --num_workers 8
+python tools/data/voc_augment.py --voc_path data/VOCdevkit --num_workers 8
 ```
 where `voc_path` should be adjusted according to the actual dataset path.
 
@@ -106,7 +106,7 @@ We recommend that you store dataset in `PaddleSeg/data` for full compatibility w
 Run the following command to convert labels:
 
 ```shell
-python tools/convert_cocostuff.py --annotation_path /PATH/TO/ANNOTATIONS --save_path /PATH/TO/CONVERT_ANNOTATIONS
+python tools/data/convert_cocostuff.py --annotation_path /PATH/TO/ANNOTATIONS --save_path /PATH/TO/CONVERT_ANNOTATIONS
 ```
 where `annotation_path` should be filled according to the `cocostuff/annotations` actual path. `save_path` determines the location of the converted label.
 
@@ -139,7 +139,7 @@ We recommend that you store dataset in `PaddleSeg/data` for full compatibility w
 Run the following command to convert labels:
 
 ```shell
-python tools/convert_voc2010.py --voc_path /PATH/TO/VOC ----annotation_path /PATH/TO/JSON
+python tools/data/convert_voc2010.py --voc_path /PATH/TO/VOC ----annotation_path /PATH/TO/JSON
 ```
 where `voc_path` should be filled according to the voc2010 actual path. `annotation_path` is the trainval_merged.json saved path.

docs/data/pre_data_cn.md (+4, -4)

@@ -63,7 +63,7 @@ Cityscapes is an image dataset for semantic understanding of urban street scenes. It mainly
 
 ```shell
 pip install cityscapesscripts
-python tools/convert_cityscapes.py --cityscapes_path data/cityscapes --num_workers 8
+python tools/data/convert_cityscapes.py --cityscapes_path data/cityscapes --num_workers 8
 ```
 
 ### ADE20K Dataset
@@ -84,7 +84,7 @@ The Pascal VOC 2012 dataset focuses on object segmentation and contains 20 categories plus a background class,
 
 ```shell
 cd PaddleSeg
-python tools/voc_augment.py --voc_path data/VOCdevkit --num_workers 8
+python tools/data/voc_augment.py --voc_path data/VOCdevkit --num_workers 8
 ```
 
 ### Coco Stuff Dataset
@@ -111,7 +111,7 @@ Coco Stuff is a pixel-level semantic segmentation dataset based on the Coco dataset. It mainly
 Run the following command to convert the labels, where `annotation_path` should be set to the actual path of the downloaded cocostuff/annotations folder and `save_path` determines where the converted labels are stored.
 
 ```shell
-python tools/convert_cocostuff.py --annotation_path /PATH/TO/ANNOTATIONS --save_path /PATH/TO/CONVERT_ANNOTATIONS
+python tools/data/convert_cocostuff.py --annotation_path /PATH/TO/ANNOTATIONS --save_path /PATH/TO/CONVERT_ANNOTATIONS
 ```
 
 
@@ -142,7 +142,7 @@ Pascal Context is a pixel-level semantic segmentation dataset with additional annotations on top of the PASCAL VOC 2010 dataset
 Run the following command to convert the labels:
 
 ```shell
-python tools/convert_voc2010.py --voc_path /PATH/TO/VOC ----annotation_path /PATH/TO/JSON
+python tools/data/convert_voc2010.py --voc_path /PATH/TO/VOC ----annotation_path /PATH/TO/JSON
 ```
 where `voc_path` should be set to the actual path of the downloaded VOC2010 folder. `annotation_path` determines where the downloaded trainval_merged.json is stored.
