This repository was archived by the owner on Feb 21, 2025. It is now read-only.

Commit f99725a: all demo use python-3 (dmlc#555)
1 parent 605b518

File tree: 17 files changed, +45 -45 lines changed
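The change itself is mechanical: every demo invocation switches from the ambiguous `python` to an explicit `python3`. A quick sanity check (a minimal sketch; which interpreter plain `python` resolves to is system-dependent) shows why pinning the major version matters:

```shell
# `python` may resolve to Python 2, or be missing entirely, depending on
# the system; `python3` is unambiguous. Print the major version invoked:
python3 -c 'import sys; print(sys.version_info.major)'
# prints: 3
```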

examples/mxnet/gat/README.md (+1 -1)

@@ -19,5 +19,5 @@ pip install requests
 
 ### Usage (make sure that DGLBACKEND is changed into mxnet)
 ```bash
-DGLBACKEND=mxnet python gat_batch.py --dataset cora --gpu 0 --num-heads 8
+DGLBACKEND=mxnet python3 gat_batch.py --dataset cora --gpu 0 --num-heads 8
 ```
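The MXNet demos above set `DGLBACKEND` inline. As a reminder of the shell mechanics (not part of the commit): a `VAR=value cmd` prefix exports the variable only for that one command, which is why each demo line carries its own `DGLBACKEND=mxnet`:

```shell
# The VAR=value prefix scopes the variable to a single command (POSIX shell):
DGLBACKEND=mxnet python3 -c 'import os; print(os.environ.get("DGLBACKEND"))'
# prints: mxnet

# It does not leak into the surrounding shell
# (assuming DGLBACKEND is not otherwise set):
python3 -c 'import os; print(os.environ.get("DGLBACKEND"))'
```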

examples/mxnet/rgcn/README.md (+3 -3)

@@ -22,15 +22,15 @@ Example code was tested with rdflib 4.2.2 and pandas 0.23.4
 ### Entity Classification
 AIFB: accuracy 97.22% (DGL), 95.83% (paper)
 ```
-DGLBACKEND=mxnet python entity_classify.py -d aifb --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d aifb --testing --gpu 0
 ```
 
 MUTAG: accuracy 76.47% (DGL), 73.23% (paper)
 ```
-DGLBACKEND=mxnet python entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
+DGLBACKEND=mxnet python3 entity_classify.py -d mutag --l2norm 5e-4 --n-bases 40 --testing --gpu 0
 ```
 
 BGS: accuracy 79.31% (DGL, n-basese=20, OOM when >20), 83.10% (paper)
 ```
-DGLBACKEND=mxnet python entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
+DGLBACKEND=mxnet python3 entity_classify.py -d bgs --l2norm 5e-4 --n-bases 20 --testing --gpu 0 --relabel
 ```

examples/mxnet/sampling/README.md (+9 -9)

@@ -15,44 +15,44 @@ pip install mxnet --pre
 ### Neighbor Sampling & Skip Connection
 cora: test accuracy ~83% with `--num-neighbors 2`, ~84% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset cora --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
 ```
 
 citeseer: test accuracy ~69% with `--num-neighbors 2`, ~70% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000 --test-batch-size 5000
 ```
 
 pubmed: test accuracy ~78% with `--num-neighbors 3`, ~77% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000
 ```
 
 reddit: test accuracy ~91% with `--num-neighbors 3` and `--batch-size 1000`, ~93% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_ns --dataset reddit-self-loop --num-neighbors 3 --batch-size 1000 --test-batch-size 5000 --n-hidden 64
 ```
 
 
 ### Control Variate & Skip Connection
 cora: test accuracy ~84% with `--num-neighbors 1`, ~84% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
 ```
 
 citeseer: test accuracy ~69% with `--num-neighbors 1`, ~70% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
 ```
 
 pubmed: test accuracy ~79% with `--num-neighbors 1`, ~77% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000
 ```
 
 reddit: test accuracy ~93% with `--num-neighbors 1` and `--batch-size 1000`, ~93% by training on the full graph
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model gcn_cv --dataset reddit-self-loop --num-neighbors 1 --batch-size 10000 --test-batch-size 5000 --n-hidden 64
 ```
 
 ### Control Variate & GraphSAGE-mean

@@ -61,7 +61,7 @@ Following [Control Variate](https://arxiv.org/abs/1710.10568), we use the mean p
 
 reddit: test accuracy 96.1% with `--num-neighbors 1` and `--batch-size 1000`, ~96.2% in [Control Variate](https://arxiv.org/abs/1710.10568) with `--num-neighbors 2` and `--batch-size 1000`
 ```
-DGLBACKEND=mxnet python examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
+DGLBACKEND=mxnet python3 examples/mxnet/sampling/train.py --model graphsage_cv --batch-size 1000 --test-batch-size 5000 --n-epochs 50 --dataset reddit --num-neighbors 1 --n-hidden 128 --dropout 0.2 --weight-decay 0
 ```
 
 ### Run multi-processing training

examples/mxnet/tree_lstm/README.md (+1 -1)

@@ -21,7 +21,7 @@ The script will download the [SST dataset] (http://nlp.stanford.edu/sentiment/in
 
 ## Usage
 ```
-python train.py --gpu 0
+python3 train.py --gpu 0
 ```
 
 ## Speed Test

examples/pytorch/appnp/README.md (+1 -1)

@@ -22,7 +22,7 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python train.py --dataset cora --gpu 0
+python3 train.py --dataset cora --gpu 0
 ```
 
 * cora: 0.8370 (paper: 0.850)

examples/pytorch/capsule/README.md (+2 -2)

@@ -17,7 +17,7 @@ Training & Evaluation
 ----------------------
 ```bash
 # Run with default config
-python main.py
+python3 main.py
 # Run with train and test batch size 128, and for 50 epochs
-python main.py --batch-size 128 --test-batch-size 128 --epochs 50
+python3 main.py --batch-size 128 --test-batch-size 128 --epochs 50
 ```

examples/pytorch/dgi/README.md (+3 -3)

@@ -20,15 +20,15 @@ How to run
 Run with following:
 
 ```bash
-python train.py --dataset=cora --gpu=0 --self-loop
+python3 train.py --dataset=cora --gpu=0 --self-loop
 ```
 
 ```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
 ```
 
 ```bash
-python train.py --dataset=pubmed --gpu=0
+python3 train.py --dataset=pubmed --gpu=0
 ```
 
 Results

examples/pytorch/dgmg/README.md (+2 -2)

@@ -10,8 +10,8 @@ Yujia Li, Oriol Vinyals, Chris Dyer, Razvan Pascanu, Peter Battaglia.
 
 ## Usage
 
-- Train with batch size 1: `python main.py`
-- Train with batch size larger than 1: `python main_batch.py`.
+- Train with batch size 1: `python3 main.py`
+- Train with batch size larger than 1: `python3 main_batch.py`.
 
 ## Performance
 
examples/pytorch/gat/README.md (+4 -4)

@@ -23,19 +23,19 @@ How to run
 Run with following:
 
 ```bash
-python train.py --dataset=cora --gpu=0
+python3 train.py --dataset=cora --gpu=0
 ```
 
 ```bash
-python train.py --dataset=citeseer --gpu=0
+python3 train.py --dataset=citeseer --gpu=0
 ```
 
 ```bash
-python train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
+python3 train.py --dataset=pubmed --gpu=0 --num-out-heads=8 --weight-decay=0.001
 ```
 
 ```bash
-python train_ppi.py --gpu=0
+python3 train_ppi.py --gpu=0
 ```
 
 Results

examples/pytorch/gcn/README.md (+1 -1)

@@ -28,7 +28,7 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python train.py --dataset cora --gpu 0 --self-loop
+python3 train.py --dataset cora --gpu 0 --self-loop
 ```
 
 * cora: ~0.810 (0.79-0.83) (paper: 0.815)

examples/pytorch/gin/README.md (+3 -3)

@@ -20,12 +20,12 @@ How to run
 An experiment on the GIN in default settings can be run with
 
 ```bash
-python main.py
+python3 main.py
 ```
 
 An experiment on the GIN in customized settings can be run with
 ```bash
-python main.py [--device 0 | --disable-cuda] --dataset COLLAB \
+python3 main.py [--device 0 | --disable-cuda] --dataset COLLAB \
 --graph_pooling_type max --neighbor_pooling_type sum
 ```
 

@@ -35,7 +35,7 @@ Results
 Run with following with the double SUM pooling way:
 (tested dataset: "MUTAG"(default), "COLLAB", "IMDBBINARY", "IMDBMULTI")
 ```bash
-python train.py --dataset MUTAB --device 0 \
+python3 train.py --dataset MUTAB --device 0 \
 --graph_pooling_type sum --neighbor_pooling_type sum
 ```
 

examples/pytorch/graphsage/README.md (+1 -1)

@@ -19,7 +19,7 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python graphsage.py --dataset cora --gpu 0
+python3 graphsage.py --dataset cora --gpu 0
 ```
 
 * cora: ~0.8470

examples/pytorch/line_graph/README.md (+2 -2)

@@ -22,12 +22,12 @@ How to run
 An experiment on the Stochastic Block Model in default settings can be run with
 
 ```bash
-python train.py
+python3 train.py
 ```
 
 An experiment on the Stochastic Block Model in customized settings can be run with
 ```bash
-python train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
+python3 train.py --batch-size BATCH_SIZE --gpu GPU --n-communities N_COMMUNITIES \
 --n-features N_FEATURES --n-graphs N_GRAPH --n-iterations N_ITERATIONS \
 --n-layers N_LAYER --n-nodes N_NODE --model-path MODEL_PATH --radius RADIUS
 ```

examples/pytorch/sampling/README.md (+6 -6)

@@ -16,32 +16,32 @@ pip install torch requests
 ### Neighbor Sampling & Skip Connection
 cora: test accuracy ~83% with --num-neighbors 2, ~84% by training on the full graph
 ```
-python gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset cora --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 citeseer: test accuracy ~69% with --num-neighbors 2, ~70% by training on the full graph
 ```
-python gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset citeseer --self-loop --num-neighbors 2 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 pubmed: test accuracy ~76% with --num-neighbors 3, ~77% by training on the full graph
 ```
-python gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_ns_sc.py --dataset pubmed --self-loop --num-neighbors 3 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 ### Control Variate & Skip Connection
 cora: test accuracy ~84% with --num-neighbors 1, ~84% by training on the full graph
 ```
-python gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset cora --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 citeseer: test accuracy ~69% with --num-neighbors 1, ~70% by training on the full graph
 ```
-python gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset citeseer --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
 pubmed: test accuracy ~77% with --num-neighbors 1, ~77% by training on the full graph
 ```
-python gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
+python3 gcn_cv_sc.py --dataset pubmed --self-loop --num-neighbors 1 --batch-size 1000000 --test-batch-size 1000000 --gpu 0
 ```
 
examples/pytorch/sgc/README.md (+3 -3)

@@ -22,9 +22,9 @@ Results
 
 Run with following (available dataset: "cora", "citeseer", "pubmed")
 ```bash
-python sgc.py --dataset cora --gpu 0
-python sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
-python sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
+python3 sgc.py --dataset cora --gpu 0
+python3 sgc.py --dataset citeseer --weight-decay 5e-5 --n-epochs 150 --bias --gpu 0
+python3 sgc.py --dataset pubmed --weight-decay 5e-5 --bias --gpu 0
 ```
 
 On NVIDIA V100

examples/pytorch/transformer/README.md (+2 -2)

@@ -15,13 +15,13 @@ The folder contains training module and inferencing module (beam decoder) for Tr
 - For training:
 
 ```
-python translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
+python3 translation_train.py [--gpus id1,id2,...] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--universal]
 ```
 
 - For evaluating BLEU score on test set(by enabling `--print` to see translated text):
 
 ```
-python translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
+python3 translation_test.py [--gpu id] [--N #layers] [--dataset DATASET] [--batch BATCHSIZE] [--checkpoint CHECKPOINT] [--print] [--universal]
 ```
 
 Available datasets: `copy`, `sort`, `wmt14`, `multi30k`(default).

examples/pytorch/tree_lstm/README.md (+1 -1)

@@ -24,7 +24,7 @@ pip install torch requests nltk
 
 ## Usage
 ```
-python train.py --gpu 0
+python3 train.py --gpu 0
 ```
 
 ## Speed
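A sweep of this shape (45 one-word substitutions across 17 READMEs) is commonly scripted rather than edited by hand. A hypothetical sketch, assuming GNU sed and that no command in the tree legitimately needs Python 2 (there is no indication the commit author actually did it this way):

```shell
# Hypothetical: rewrite `python ` to `python3 ` in every example README.
# \b leaves `python3 ` and `ipython ` untouched; GNU sed is assumed for -i.
grep -rl --include='README.md' 'python ' examples/ \
  | xargs sed -i 's/\bpython /python3 /g'
```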
