@@ -49,7 +49,7 @@ A local training job using `xception_65` can be run with the following command:
# From tensorflow/models/research/
python deeplab/train.py \
--logtostderr \
- --training_number_of_steps=50000 \
+ --training_number_of_steps=90000 \
--train_split="train" \
--model_variant="xception_65" \
--atrous_rates=6 \
@@ -60,21 +60,16 @@ python deeplab/train.py \
--train_crop_size=513 \
--train_crop_size=513 \
--train_batch_size=4 \
- --min_resize_value=350 \
- --max_resize_value=500 \
+ --min_resize_value=513 \
+ --max_resize_value=513 \
--resize_factor=16 \
- --fine_tune_batch_norm=False \
--dataset="ade20k" \
- --initialize_last_layer=False \
- --last_layers_contain_logits_only=True \
--tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT} \
--train_logdir=${PATH_TO_TRAIN_DIR} \
--dataset_dir=${PATH_TO_DATASET}
```
where ${PATH\_TO\_INITIAL\_CHECKPOINT} is the path to the initial checkpoint.
- For example, if you are using the deeplabv3\_pascal\_train\_aug checkpoint, you
- will set this to `/path/to/deeplabv3_pascal_train_aug/model.ckpt`.
${PATH\_TO\_TRAIN\_DIR} is the directory to which training checkpoints and
events will be written (it is recommended to set it to the
`train_on_train_set/train` above), and ${PATH\_TO\_DATASET} is the directory in
@@ -98,8 +93,6 @@ which the ADE20K dataset resides (the `tfrecord` above)
4. Users can skip the flag `decoder_output_stride` if they do not want to use
the decoder structure.
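As a concrete sketch of how the placeholder variables in the training command above might be set before launching `deeplab/train.py`: all paths below are hypothetical examples, not locations the repository defines, and should be replaced with your own.

```shell
# All paths here are hypothetical examples -- substitute your own locations.
PATH_TO_INITIAL_CHECKPOINT="/tmp/deeplab_sketch/init_models/deeplabv3_pascal_train_aug/model.ckpt"
PATH_TO_TRAIN_DIR="/tmp/deeplab_sketch/exp/train_on_train_set/train"
PATH_TO_DATASET="/tmp/deeplab_sketch/ade20k/tfrecord"

# Create the training directory so checkpoints and events can be written to it.
mkdir -p "${PATH_TO_TRAIN_DIR}"

# Print the resolved flags to double-check them before starting a long run.
echo "--tf_initial_checkpoint=${PATH_TO_INITIAL_CHECKPOINT}"
echo "--train_logdir=${PATH_TO_TRAIN_DIR}"
echo "--dataset_dir=${PATH_TO_DATASET}"
```

Echoing the flags first is optional; it simply makes a wrong path visible before the job spends time restoring the checkpoint.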
- Currently there are no fine-tuned checkpoints for the ADE20K dataset.
-
## Running Tensorboard
Progress for training and evaluation jobs can be inspected using Tensorboard. If