* [Download and preprocess datasets](#download-and-preprocess-datasets)
* [Model training and evaluation](#model-training-and-evaluation)
* [Translate using the model](#translate-using-the-model)

The model also applies embeddings on the input and output tokens, and adds a constant positional encoding.

Below are the commands for running the Transformer model. See the [Detailed instructions](#detailed-instructions) for more details on running the model.

```
cd /path/to/models/official/transformer

# Ensure that PYTHONPATH is correctly defined as described in
# https://github.com/tensorflow/models/tree/master/official#running-the-models
```

Currently, both big and base parameter sets run on a single GPU. The measurements below are reported from running the model on a P100 GPU.

Param Set | batches/sec | batches per epoch | time per epoch
--- | --- | --- | ---
base | 4.8 | 83244 | 4 hr
big | 1.1 | 41365 | 10 hr

### Evaluation results

Below are the case-insensitive BLEU scores after 10 epochs.

Param Set | Score
--- | ---
base | 27.7
big | 28.9

## Detailed instructions

0. ### Environment preparation

   #### Add models repo to PYTHONPATH
   Follow the instructions described in the [Running the models](https://github.com/tensorflow/models/tree/master/official#running-the-models) section to add the models folder to the Python path.
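
   If you prefer to run the export manually, a minimal sketch looks like this (the clone location `/path/to/models` is a placeholder):

   ```
   # Assumes the tensorflow/models repository was cloned to /path/to/models
   export PYTHONPATH="$PYTHONPATH:/path/to/models"
   ```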

   #### Export variables (optional)

   Export the following variables, or modify the values in each of the snippets below:
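
   For example, a minimal set of exports might look like this (all values are illustrative):

   ```
   PARAM_SET=big
   DATA_DIR=$HOME/transformer/data
   MODEL_DIR=$HOME/transformer/model_$PARAM_SET
   ```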

* `--data_dir`: This should be set to the same directory given to the `data_download`'s `data_dir` argument.
* `--model_dir`: Directory to save Transformer model training checkpoints.
* `--param_set`: Parameter set to use when creating and training the model. Options are `base` and `big` (default).
* Use the `--help` or `-h` flag to get a full list of possible arguments. A sample invocation follows this list.
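
Putting the flags together, a training run might look like this sketch (the script name `transformer_main.py` is an assumption; the variables are those exported above):

```
python transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
    --param_set=$PARAM_SET
```
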
#### Customizing training schedule

By default, the model will train for 10 epochs and evaluate after every epoch. The training schedule may be defined through the following flags (an example command follows this list):
* Training with epochs (default):
  * `--train_epochs`: The total number of complete passes to make through the dataset.
  * `--epochs_between_evals`: The number of epochs to train between evaluations.
* Training with steps:
  * `--train_steps`: Sets the total number of training steps to run.
  * `--steps_between_evals`: Number of training steps to run between evaluations.

Only one of `train_epochs` or `train_steps` may be set. Since the default option is to evaluate the model after training for an epoch, it may take 4 or more hours between model evaluations. To get more frequent evaluations, use the flags `--train_steps=250000 --steps_between_evals=1000`.
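
For instance, a step-based schedule with frequent evaluations could be launched as follows (script name assumed, as above):

```
python transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
    --param_set=$PARAM_SET --train_steps=250000 --steps_between_evals=1000
```
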
Note: At the beginning of each training session, the training dataset is reloaded and shuffled. Stopping the training before completing an epoch may result in worse model quality, due to the chance that some examples may be seen more often than others. Therefore, it is recommended to use epochs when model quality is important.

Use these flags to compute the BLEU score when the model evaluates (a sample command is shown below):
* `--bleu_source`: Path to file containing text to translate.
* `--bleu_ref`: Path to file containing the reference translation.
* `--stop_threshold`: Train until the BLEU score reaches this lower bound. This setting overrides the `--train_steps` and `--train_epochs` flags.

The test source and reference files located in the `test_data` directory are extracted from the preprocessed dataset used in the [NMT Seq2Seq tutorial](https://google.github.io/seq2seq/nmt/#download-data).
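
As a sketch, BLEU computation against those bundled files might be wired up like this (the file names under `test_data` and the script name are assumptions):

```
python transformer_main.py --data_dir=$DATA_DIR --model_dir=$MODEL_DIR \
    --param_set=$PARAM_SET --bleu_source=test_data/newstest2014.en \
    --bleu_ref=test_data/newstest2014.de
```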